INFORMS Nashville – 2016


2 - Detecting Anomalous Patterns Of Care Using Health Insurance Claims

Satya Venkata Somanchi, University of Notre Dame, 344 Mendoza College of Business, Notre Dame, IN, 46556, United States, somanchi.1@nd.edu, Edward McFowland

In this work, we propose methods for detecting heterogeneous treatment effects in observational data. We apply these methods to patient health insurance claims to improve clinical practice by analyzing patterns across patients and providing actionable insights. Our goal is to analyze this complex patient care data to identify patterns of care that have led to anomalous health outcomes. Specifically, we detect treatments in outpatient care that have significantly deviated from the regular treatment process and have affected health outcomes either negatively or positively. This can further help improve patient health and reduce health care costs.
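
The abstract does not give implementation details; as a minimal, hypothetical illustration of flagging treatment patterns with anomalous outcomes (a simple two-proportion z-test screen on toy data, not the authors' method):

```python
# Minimal sketch: flag treatment patterns whose outcome rate deviates
# significantly from the overall rate. Hypothetical data layout; the
# authors' actual method is more sophisticated than this z-test screen.
import math
from collections import defaultdict

# (treatment_pattern, bad_outcome) pairs extracted from claims -- toy data
claims = [("A", o) for o in [0, 1, 0, 0, 0, 0, 0, 0]] + \
         [("B", o) for o in [1, 1, 1, 0, 1, 1, 1, 1]]

by_pattern = defaultdict(list)
for pattern, outcome in claims:
    by_pattern[pattern].append(outcome)

n_total = len(claims)
p_overall = sum(o for _, o in claims) / n_total

for pattern, outcomes in by_pattern.items():
    n = len(outcomes)
    p_hat = sum(outcomes) / n
    # Two-sided z-statistic for this pattern's outcome rate vs. overall
    se = math.sqrt(p_overall * (1 - p_overall) / n)
    z = (p_hat - p_overall) / se
    if abs(z) > 1.96:  # flag patterns deviating at roughly the 5% level
        print(f"pattern {pattern}: rate {p_hat:.2f} vs {p_overall:.2f} (z={z:.2f})")
```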

3 - Fast Memory-efficient Anomaly Detection In Streaming Heterogeneous Graphs

Emaad Ahmed Manzoor, Carnegie Mellon University, Pittsburgh, PA, 19106, United States, emaad@cmu.edu

We present StreamSpot, a method to continuously track anomalous graphs as they evolve from a stream of edges with node and edge types. We introduce a graph similarity function that captures both node/edge types and temporal order, and a constant-space sketch that can be maintained incrementally to approximate this similarity in constant time. We show that when applied to detect malicious software from a stream of system call traces of executing processes, StreamSpot scales to over 10,000 edges/second and demonstrates an average precision of over 95%. Project website: http://bit.ly/streamspot
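
To make the constant-space, incrementally updated sketch idea concrete, here is a simplified illustration (StreamSpot's actual shingling-and-hashing scheme, described in the paper, differs; the bucket hashing below is a stand-in):

```python
# Simplified illustration of a constant-space, incrementally updated
# sketch for typed edge streams, in the spirit of StreamSpot. The sketch
# dimension is fixed, so memory stays constant as the stream grows and
# each edge is processed in O(1) time.
import math

SKETCH_DIM = 64

def update_sketch(sketch, prev_edge_type, edge_type):
    """Hash consecutive edge-type pairs (capturing temporal order) into a
    fixed-size count vector."""
    bucket = hash((prev_edge_type, edge_type)) % SKETCH_DIM
    sketch[bucket] += 1

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def sketch_of(stream):
    s = [0] * SKETCH_DIM
    for prev, cur in zip(stream, stream[1:]):
        update_sketch(s, prev, cur)
    return s

# Toy streams of edge types (e.g., system-call categories)
normal = ["open", "read", "read", "close"] * 50
suspect = ["open", "exec", "connect", "write"] * 50

print(cosine(sketch_of(normal), sketch_of(normal[: len(normal) // 2])))  # high
print(cosine(sketch_of(normal), sketch_of(suspect)))                     # low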

4 - Sequential Goodness Of Fit Testing With False Discovery Rate Control

Sangwon (Justin) Hyun, Carnegie Mellon, Pittsburgh, PA, United States, robohyun66@gmail.com, Max G’Sell

We consider sequential goodness-of-fit testing for sequential model selection procedures. This leads to a multiple hypothesis testing setting where the hypotheses are ordered and one is only permitted to reject an initial contiguous block, H_1, ..., H_k, of hypotheses. A rejection rule in this setting amounts to a procedure for choosing the stopping point k, corresponding to a particular selected model. We will discuss a multiple hypothesis testing procedure for FDR control in this setting. We will also introduce recent results for goodness-of-fit testing for clustering and the graphical lasso as an illustration of this approach.
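
One concrete stopping rule from this ordered-testing literature is ForwardStop (G’Sell et al.), which rejects the longest initial block whose transformed p-values average below the target level; a minimal sketch (the talk's own procedure may differ):

```python
# ForwardStop: given p-values for ordered hypotheses H_1, ..., H_K, reject
# the initial block H_1, ..., H_khat where
#   khat = max{ k : (1/k) * sum_{i<=k} -log(1 - p_i) <= alpha },
# which controls FDR at level alpha under independence.
import math

def forward_stop(pvalues, alpha=0.1):
    khat = 0
    running = 0.0
    for k, p in enumerate(pvalues, start=1):
        running += -math.log(1.0 - p)
        if running / k <= alpha:
            khat = k
    return khat  # reject H_1, ..., H_khat (0 means reject nothing)

# Small p-values early (model still improving), large ones later
pvals = [0.001, 0.004, 0.02, 0.5, 0.65, 0.8]
print(forward_stop(pvals, alpha=0.1))  # -> 3: reject H_1, H_2, H_3
```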

WB16

105A-MCC

Optimal Learning and Optimization via Simulation

Sponsored: Optimization, Optimization Under Uncertainty

Sponsored Session

Chair: Peter Frazier, Cornell University, 410 Thurston Ave, Ithaca, NY, United States, pf98@cornell.edu

1 - Optimal Learning With Discrete Priors

Weidong Han, Princeton University, whan@princeton.edu

We consider an optimal learning problem with a random parameter following a discrete prior distribution. We formulate the problem as a dynamic program and propose several applicable policies. We consider both finite-horizon and infinite-horizon problems, provide a lower bound on a well-known heuristic policy in the former case, and prove asymptotic convergence properties of the proposed policies in the latter case. We also present comparisons against an optimal learning policy for selected problem instances. Empirical experiments show that the proposed policies achieve near-optimal performance.
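
At the core of such a dynamic program is a belief state: a discrete distribution over candidate parameter values updated by Bayes' rule after each measurement. A minimal sketch, assuming Gaussian observation noise (the talk does not specify the observation model):

```python
# Belief update for a discrete prior: prior[i] = P(theta = candidates[i]),
# updated after each noisy observation. Gaussian noise is an assumption here.
import math

def posterior(prior, candidates, observation, noise_sd):
    """Return the posterior over candidates after one noisy observation."""
    likes = [math.exp(-0.5 * ((observation - c) / noise_sd) ** 2)
             for c in candidates]
    weights = [p * l for p, l in zip(prior, likes)]
    total = sum(weights)
    return [w / total for w in weights]

candidates = [0.0, 1.0, 2.0]      # possible values of the unknown parameter
belief = [1 / 3, 1 / 3, 1 / 3]    # uniform discrete prior
for y in [0.9, 1.2, 1.1]:         # noisy measurements
    belief = posterior(belief, candidates, y, noise_sd=0.5)
print(belief)                      # mass concentrates near theta = 1.0
```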

2 - Parallel Knowledge Gradient For Global Optimization Of Expensive Functions

Jian Wu, Cornell University, Ithaca, NY, 14850, United States, jw926@cornell.edu, Peter Frazier

In this talk, we will introduce the parallel knowledge gradient (pKG) algorithm for batch Bayesian optimization. The method chooses to evaluate the one-step Bayes-optimal batch of points. We demonstrate empirically that the method finds global minima significantly faster than previous batch Bayesian optimization methods on both synthetic functions and the tuning of hyperparameters of complex machine learning algorithms. In particular, the method provides the most value in noisy settings.
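
A rough sketch of the Monte Carlo idea behind valuing a batch one step ahead: sample the joint posterior of the objective at a candidate batch and average the resulting minimum. The pKG acquisition itself (and its gradient-based optimization) is more involved; the means and covariances below are stand-in values:

```python
# Monte Carlo estimate of the expected posterior minimum after observing a
# candidate batch, under a (hypothetical) joint Gaussian posterior.
import numpy as np

rng = np.random.default_rng(0)

def batch_value(mean, cov, current_best, n_samples=10_000):
    """Expected minimum over {batch samples, current best} (MC estimate)."""
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    return np.mean(np.minimum(samples.min(axis=1), current_best))

# Posterior at two candidate batches of q = 2 points (hypothetical numbers)
batch_a = (np.array([0.3, 0.5]), np.array([[0.20, 0.05], [0.05, 0.20]]))
batch_b = (np.array([0.4, 0.4]), np.array([[0.02, 0.01], [0.01, 0.02]]))

best_so_far = 0.45
for name, (m, c) in [("A", batch_a), ("B", batch_b)]:
    print(name, batch_value(m, c, best_so_far))
# A lower expected minimum means a more valuable batch under this
# one-step criterion.
```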

3 - Secant Tangents-averaged Stochastic Approximation (STAR-SA)

Marie Chau, Virginia Commonwealth University, mchau@vcu.edu

We present new theoretical results for Secant Tangents-AveRaged stochastic approximation (STAR-SA) and extend it to higher dimensions using simultaneous perturbation stochastic approximation. In a setting where both direct and indirect gradients are available, STAR-SA and STAR-SPSA (for the multidimensional case) combine direct and indirect gradients using a convex weight. We derive deterministic weights that minimize the MSE of the gradient and of the estimate, both of which are lower than those of the classical Robbins-Monro and Kiefer-Wolfowitz SA methods. We prove convergence, show asymptotic normality, and investigate their empirical performance against well-known SA algorithms.
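
A minimal one-dimensional sketch of the core idea: take a convex combination of a direct (noisy derivative) estimate and an indirect secant (finite-difference) estimate, then step as in Robbins-Monro. The deterministic MSE-minimizing weights derived in the talk are replaced by a fixed weight here for illustration:

```python
# Convex combination of direct and secant gradient estimates in a
# Robbins-Monro iteration, on the toy objective f(x) = x^2.
import random

def noisy_f(x):       # simulation output: f(x) plus noise
    return x * x + random.gauss(0, 0.1)

def noisy_grad(x):    # direct gradient estimate: f'(x) = 2x plus noise
    return 2 * x + random.gauss(0, 0.1)

x, w, c = 3.0, 0.5, 0.1   # start point, convex weight, FD half-width
for n in range(1, 201):
    direct = noisy_grad(x)
    secant = (noisy_f(x + c) - noisy_f(x - c)) / (2 * c)  # indirect estimate
    g = w * direct + (1 - w) * secant                      # convex combination
    x -= (0.5 / n) * g                                     # step size a_n = 0.5/n
print(x)  # approaches the minimizer x* = 0
```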

4 - Optimal Learning For Nonlinear Parametric Belief Models With Continuous Alternatives

Xinyu He, Princeton University, Princeton, NJ, United States, xinyuhe@princeton.edu, Warren B Powell

We consider the optimal learning problem for nonlinear parametric belief models, where our goal is to find the optimal alternative through a series of noisy and expensive measurements. This problem has been studied when the number of alternatives is finite. Our work presents an algorithm that extends the optimal learning framework to the case of continuous alternatives. Experiments show that our algorithm exhibits significant performance improvements in the continuous setting, especially in higher dimensions.

WB17

105B-MCC

Distributed Consensus Optimization

Sponsored: Optimization, Nonlinear Programming

Sponsored Session

Chair: Necdet Serhat Aybat, The Pennsylvania State University, 310 Leonhard Building, University Park, PA, 16802, United States, nsa10@psu.edu

1 - DQM: Decentralized Quadratically Approximated Alternating Direction Method Of Multipliers

Aryan Mokhtari, University of Pennsylvania, 200 South 33rd Street, Room 203 Moore Building, Philadelphia, PA, 19104, United States, aryanm@seas.upenn.edu, Wei Shi, Qing Ling, Alejandro Ribeiro

The decentralized alternating direction method of multipliers (DADMM) is a well-established iterative method for solving consensus optimization problems; however, implementing DADMM requires solving an optimization subproblem at each iteration for each node, which can be computationally costly. We propose a decentralized quadratic approximation of ADMM (DQM) that reduces the computational complexity of DADMM by minimizing a quadratic approximation of the objective function. We show that DQM converges to the optimal argument at a linear rate identical to the convergence rate of DADMM. Moreover, as time passes, the coefficient of linear convergence for DQM approaches the one for DADMM.
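
The computational saving comes from replacing each node's subproblem with a quadratic model it can minimize in closed form. A generic sketch of that substitution (not the exact DQM recursion, which also tracks dual variables and neighbor terms):

```python
# Replace the per-node ADMM subproblem
#   argmin_x f_i(x) + (rho/2) * ||x - v||^2
# with the same problem where f_i is approximated quadratically at the
# current iterate x_k, giving a closed-form solution: one linear solve
# instead of an inner iterative solver.
import numpy as np

def quadratic_approx_step(grad_f, hess_f, x_k, v, rho):
    """Minimize g'(x - x_k) + 0.5 (x - x_k)' H (x - x_k) + (rho/2)||x - v||^2."""
    H = hess_f(x_k)
    g = grad_f(x_k)
    lhs = H + rho * np.eye(len(x_k))
    rhs = H @ x_k - g + rho * v
    return np.linalg.solve(lhs, rhs)

# Toy node objective f_i(x) = 0.5 ||A x - b||^2 (hypothetical data)
A = np.array([[2.0, 0.0], [1.0, 1.0]])
b = np.array([1.0, 0.0])
grad = lambda x: A.T @ (A @ x - b)
hess = lambda x: A.T @ A
x_new = quadratic_approx_step(grad, hess, x_k=np.zeros(2), v=np.zeros(2), rho=1.0)
print(x_new)
```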

2 - A Geometrically Convergent Method For Distributed Optimization Over Time-varying And Directed Graphs

Wei Shi, Postdoctoral Research Associate, Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL, 61801, United States, wilburs@illinois.edu, Angelia Nedich, Alex Olshevsky

In this presentation, we introduce a class of first-order algorithms, referred to as DIGing and its variants, for distributed optimization over time-varying and/or directed graphs. The algorithms employ fixed step-sizes and yet drive all the agents’ iterates to a global and consensual minimizer. Under the strong convexity assumption, we show that the algorithms all converge at explicit global R-linear (geometric) rates as long as the step-sizes are no greater than some explicit bounds. We also show that the rate for DIGing scales polynomially in the number of agents. Numerical experiments demonstrate the efficacy of the introduced algorithms and validate our theoretical findings.
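
A minimal sketch of the gradient-tracking template DIGing follows, here with a static doubly stochastic mixing matrix W for simplicity (the talk's setting also covers time-varying and directed graphs via appropriate weight sequences):

```python
# DIGing-style gradient tracking with a fixed step size:
#   x_{k+1} = W x_k - alpha * y_k
#   y_{k+1} = W y_k + grad(x_{k+1}) - grad(x_k),  y_0 = grad(x_0),
# where y_k tracks the average gradient across agents.
import numpy as np

n_agents, alpha = 3, 0.1
targets = np.array([0.0, 2.0, 4.0])   # f_i(x) = 0.5 * (x - t_i)^2
grad = lambda x: x - targets           # stacked per-agent gradients

# Doubly stochastic mixing matrix for a 3-agent cycle
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

x = np.zeros(n_agents)
y = grad(x)
for _ in range(200):
    x_next = W @ x - alpha * y
    y = W @ y + grad(x_next) - grad(x)
    x = x_next
print(x)  # all agents approach the consensual minimizer (0 + 2 + 4)/3 = 2
```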

3 - Distributed ADMM-like Methods For Cooperative Multi-agent Optimization Over Conic Constraints

Erfan Yazdandoost Hamedani, Penn State University, Department of Industrial Engineering, 310 Leonhard Bldg., University Park, State College, PA, United States, evy5047@psu.edu, Necdet Serhat Aybat

We consider the sharing problem over an undirected (dynamic) network of agents, where only those agents connected by an edge can directly communicate. The objective is to minimize the sum of agent-specific composite convex functions subject to a conic constraint coupling all agents’ local decisions. An ADMM-like primal-dual distributed algorithm is proposed. We examine the convergence rate in terms of suboptimality and infeasibility, study the effect of the underlying network topology on the rates, and show how to extend this method to handle communication networks with time-varying topology.
