
INFORMS Philadelphia – 2015

We present a partition-and-bound method for multistage adaptive mixed integer optimization problems that extends previous work on finite adaptability. The method analyzes the solution to a non-adaptive problem to determine how the uncertainty set is restricting the objective, and uses this to partition the set. A lower bound is calculated, and the partitioning is repeated until a desired gap is reached. We demonstrate in computational experiments that the method produces good solutions quickly.
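A rough sketch of the partitioned (finitely adaptive) problem, in generic two-stage notation that is ours rather than the authors': given a partition U = U_1 ∪ … ∪ U_K of the uncertainty set, one recourse decision is assigned per piece,

\[
\min_{x,\; y_1, \dots, y_K} \; \max_{k \le K} \; \max_{u \in U_k} \; c^\top x + d^\top y_k
\quad \text{s.t.} \quad A(u)\, x + B\, y_k \ge b(u) \quad \forall\, u \in U_k, \; k = 1, \dots, K .
\]

Refining the partition can only improve this objective, and comparing it against a lower bound gives the gap that drives further partitioning.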

2 - The Role of Robustness in Modern Statistical Regression

Martin Copenhaver, MIT, 77 Massachusetts Avenue, E40-135, Cambridge, MA, 02139, United States of America, mcopen@mit.edu, Dimitris Bertsimas

Sparsity is a key driver in modern statistical estimation problems, yet reliably obtaining sparse solutions remains elusive. Despite this, many regularization methods perform well in the face of noise in the data. In the domains of linear, median, and matrix regression, we characterize precisely when regularization problems correspond to simple robust optimization problems. In doing so, we contend that it is robustness, not sparsity, which is critical to the success of modern statistical methods.
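One well-known instance of the regularization-robustness correspondence alluded to here, written in generic notation as an illustration rather than as the authors' exact characterization: if the design matrix is perturbed within an induced-norm ball, the worst-case residual equals a norm-penalized fit,

\[
\min_{\beta} \; \max_{\|\Delta\|_{(p,q)} \le \lambda} \; \bigl\| y - (X + \Delta)\beta \bigr\|_q
\;=\;
\min_{\beta} \; \bigl\| y - X\beta \bigr\|_q + \lambda \, \|\beta\|_p ,
\]

where \(\|\Delta\|_{(p,q)} = \max_{\|v\|_p \le 1} \|\Delta v\|_q\); Lasso-type penalties arise as special cases.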

3 - Robust Ambulance Deployment

Yee Sian Ng, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA, 02139, United States of America, yeesian@mit.edu, Dimitris Bertsimas

In Emergency Medical Systems (EMS), operators deploy a fleet of ambulances to a set of locations before dispatching them in response to emergency calls. The goal is to minimize the fraction of calls with late response times, which we formulate as a two-stage Robust Optimization model with integer recourse. We propose a practical algorithm for solving the model, and demonstrate the benefits of this approach in a realistic case study of the EMS in Washington DC.
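In generic notation (an illustrative sketch, not the authors' exact model), a two-stage robust formulation of this type reads

\[
\min_{x \in X} \; \max_{u \in U} \; \min_{y \in Y(x, u)} \; f(y),
\]

where the first-stage decision x places ambulances at locations, u ranges over an uncertainty set of call scenarios, the integer recourse y encodes dispatch decisions, and f(y) measures the fraction of calls answered late.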

SC15
15-Franklin 5, Marriott
Nonlinear Optimization Methods with Stochastic and Second Order Information
Sponsor: Optimization/Nonlinear Programming
Sponsored Session
Chair: Katya Scheinberg, Lehigh University, 200 W. Packer Ave., Bethlehem, PA, 18015, United States of America, katyas@lehigh.edu

1 - Stochastic Optimization using a Trust-region Method and Random Models
Matt Menickelly, Lehigh University, 200 W. Packer Ave., Bethlehem, PA, 18015, United States of America, mjm412@lehigh.edu, Katya Scheinberg, Ruobing Chen

We propose and analyze a trust-region model-based method for solving unconstrained stochastic optimization problems. The convergence analysis relies only on requirements that random models and point estimates are sufficiently accurate with sufficiently high, but fixed, probability. We present examples of generating sufficiently accurate random models under different noise assumptions and compare with existing approaches based on sample averaging or stochastic gradients.
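As a toy illustration of this kind of scheme, the sketch below runs a trust-region loop with models built from noisy samples. The objective, the finite-difference model, the Cauchy-type step, and the fixed sample size are simplifying assumptions of ours; this is not the authors' algorithm or code.

    import numpy as np

    rng = np.random.default_rng(0)

    def noisy_f(x, n_samples=30):
        # Stochastic oracle: a toy smooth objective observed with averaged Gaussian noise.
        return float(np.sum(x**2) + rng.normal(0.0, 0.1, size=n_samples).mean())

    def sampled_gradient(x, h=1e-2):
        # A "random model" slope built from noisy central finite differences.
        g = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x)
            e[i] = h
            g[i] = (noisy_f(x + e) - noisy_f(x - e)) / (2.0 * h)
        return g

    x, delta = np.array([2.0, -1.5]), 1.0                 # iterate and trust-region radius
    for k in range(50):
        g = sampled_gradient(x)
        step = -delta * g / (np.linalg.norm(g) + 1e-12)   # Cauchy-type step on the linear model
        predicted = float(-g @ step)                      # decrease predicted by the model
        actual = noisy_f(x) - noisy_f(x + step)           # decrease estimated from noisy samples
        if actual / max(predicted, 1e-12) > 0.1:          # good agreement: accept step, expand radius
            x, delta = x + step, min(2.0 * delta, 10.0)
        else:                                             # poor agreement: reject step, shrink radius
            delta *= 0.5
    print("approximate minimizer:", x)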

2 - Accelerated Proximal Quasi-Newton Algorithm with Global Complexity Analysis
Hiva Ghanbari, Lehigh University, Harold S. Mohler Laboratory, 200 W. Packer Ave., Bethlehem, PA, 18015, United States of America, hig213@lehigh.edu, Katya Scheinberg

We propose a proximal quasi-Newton algorithm in the spirit of Nesterov's accelerated scheme for solving convex composite optimization problems. We present an analysis of the convergence properties of the proposed algorithm, which preserves the sublinear rate of convergence under some mild assumptions. Furthermore, we show that, with careful use of second order information, this method attains the same rate of convergence as the Accelerated Proximal Gradient algorithm.
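A generic accelerated proximal step of this kind, for F(x) = f(x) + g(x) with f smooth and g convex, replaces the usual gradient step with a scaled proximal mapping built from a quasi-Newton matrix H_k (illustrative notation; the authors' exact update may differ):

\[
x_{k+1} = \arg\min_{x} \Big\{ \nabla f(y_k)^\top (x - y_k) + \tfrac{1}{2}(x - y_k)^\top H_k (x - y_k) + g(x) \Big\},
\qquad
y_{k+1} = x_{k+1} + \frac{t_k - 1}{t_{k+1}} (x_{k+1} - x_k),
\]

with extrapolation weights \( t_{k+1} = \bigl(1 + \sqrt{1 + 4 t_k^2}\bigr)/2 \); taking \( H_k = L I \) recovers the accelerated proximal gradient method.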

3 - Selective Linearization Algorithm for Multi-block Convex Optimization
Yu Du, Rutgers University, 100 Rockafeller Road, Rutgers Business School, Piscataway, NJ, 08854, United States of America, duyu197@gmail.com, Andrzej Ruszczynski, Xiaodong Lin

We consider the problem of minimizing multi-block structured convex nonsmooth functions. We introduce the Selective Linearization (SLIN) algorithm, which iteratively solves a series of subproblems obtained by linearizing some of the blocks and approaches the optimal solution. We establish global convergence and prove that the SLIN algorithm converges at a rate of O(1/k). We apply the SLIN algorithm to structured regularization applications, showing fast running times on large-scale problems.
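In rough outline (our notation; not necessarily the authors' exact subproblem), for F(x) = \(\sum_{i=1}^N f_i(x)\) with convex nonsmooth blocks, one iteration keeps a selected block j exact, linearizes the remaining blocks at the current iterate, and adds a proximal term:

\[
x^{k+1} \in \arg\min_{x} \; f_j(x) + \sum_{i \ne j} \Bigl( f_i(x^k) + \langle g_i^k, x - x^k \rangle \Bigr) + \frac{\rho}{2}\, \| x - x^k \|^2,
\qquad g_i^k \in \partial f_i(x^k).
\]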

4 - Quasi-Newton Methods for Nonsmooth Optimization
Nitish Shirish Keskar, Northwestern University, 2145 Sheridan Road, Room C210, Evanston, IL, 60208, United States of America, keskar.nitish@gmail.com, Jorge Nocedal, Andreas Waechter

We study the behavior of quasi-Newton methods on large-scale nonsmooth optimization problems. We concentrate specifically on a modified BFGS approach and its limited memory implementation. We contrast this approach with other popular methods including Limited Memory Bundle Methods and Gradient Sampling methods and describe detailed numerical results from both academic and practical test problems. Finally, we briefly provide some new theoretical guarantees for this class of methods.
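A minimal illustration of the type of experiment described: the sketch below runs an off-the-shelf limited-memory quasi-Newton method (SciPy's L-BFGS-B) on a simple nonsmooth convex test function, supplying a subgradient where a gradient is expected. The test function and setup are our own choices, and this is not the modified BFGS method of the talk.

    import numpy as np
    from scipy.optimize import minimize

    def f(x):
        # Nonsmooth convex test function: max_i x_i plus a smooth quadratic term.
        return float(np.max(x) + 0.5 * np.dot(x, x))

    def subgrad(x):
        # One valid subgradient: gradient of the quadratic plus an indicator of the max entry.
        g = x.copy()
        g[int(np.argmax(x))] += 1.0
        return g

    x0 = np.arange(1.0, 6.0)                            # starting point in R^5
    res = minimize(f, x0, jac=subgrad, method="L-BFGS-B")
    print(res.x, res.fun)                               # observe how the method behaves despite nonsmoothness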

SC16
16-Franklin 6, Marriott
Efficient Algorithms for Large-Scale Convex Games
Sponsor: Optimization/Linear and Conic Optimization
Sponsored Session
Chair: Fatma Kilinc Karzan, Assistant Professor of Operations Research, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA, 15213, United States of America, fkilinc@andrew.cmu.edu
Co-Chair: Christian Kroer, PhD Student, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA, 15213, United States of America, ckroer@cs.cmu.edu

1 - Games People (could not) Play
Swati Gupta, MIT, 77 Massachusetts Avenue, E40-149, Cambridge, MA, 02139, United States of America, swatig@mit.edu

Every two-player zero-sum game has an optimal strategy that can be found by solving an LP. But this approach is not polynomial time when the number of pure strategies for each player is exponential in the input, e.g., when each player plays spanning trees of a graph. We present fast algorithms to compute Nash equilibria for these games for bilinear payoff functions, using ideas from convex and combinatorial optimization and machine learning.
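For reference, the standard LP the abstract refers to: with payoff matrix A whose rows index player 1's pure strategies, player 1's optimal mixed strategy x and the game value v solve

\[
\max_{x,\, v} \; v \quad \text{s.t.} \quad A^\top x \ge v\,\mathbf{1}, \qquad \mathbf{1}^\top x = 1, \qquad x \ge 0,
\]

which has one variable and one constraint per pure strategy, and hence becomes impractical when the strategy sets (e.g., all spanning trees of a graph) are exponentially large.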

2 - Maximal Cooperation in Repeated Games on Social Networks
Catherine Moon, Duke University, 213 Social Sciences, Box 90097, Durham, NC, 27705, United States of America, csm17@duke.edu, Vincent Conitzer

Standard results on repeated games assume that defections are instantly observable. In reality, it may take some time for agents in a network to learn that a defection has occurred. How does this affect the structure of equilibria and algorithms for computing them? In games of cooperation and defection, we prove the existence of a unique maximal set of forever-cooperating agents in equilibrium, give an efficient algorithm for computing it, and provide a condition for when the equilibrium found is credible.

3 - Faster First-order Methods for Extensive-form Game Solving
Christian Kroer, PhD Student, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA, 15213, United States of America, ckroer@cs.cmu.edu, Kevin Waugh, Fatma Kilinc Karzan, Tuomas Sandholm

We investigate first-order methods for Nash equilibrium computation in zero-sum sequential games. We introduce the dilated entropy function as a distance-generating function, and develop strong-convexity bounds on this function over convex polytopes that generalize sequential-game strategy spaces. In terms of game solving, this improves the convergence rate of several first-order methods. We instantiate these methods with our results, and show that our algorithms outperform prior algorithms.
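In sequence form, a dilated distance-generating function of the kind described has the general shape (our notation, with information-set weights \(\beta_j\); the exact weighting used by the authors may differ):

\[
d(x) \;=\; \sum_{j \in \mathcal{J}} \beta_j \sum_{a \in A_j} x_a \log \frac{x_a}{x_{p_j}},
\]

where \(\mathcal{J}\) indexes the information sets, \(A_j\) the actions available at information set j, and \(x_{p_j}\) the mass on the parent sequence leading to j; strong convexity of d over the strategy polytope is what drives the improved convergence rates.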

4 - Simultaneous Abstraction and Equilibrium Finding in Games
Noam Brown, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA, 15213, United States of America, noamb@andrew.cmu.edu, Tuomas Sandholm

The leading approach to solving infeasibly large imperfect-information extensive-form games is to solve a smaller abstract version of the game and then map the solution back to the original. However, it is difficult to know which actions should be included in the abstraction without first solving the game. We introduce a method that combines abstraction with equilibrium finding by enabling actions to be added to the abstraction at run time at points that the current abstraction deems important.
