
INFORMS Philadelphia – 2015


SB14
14-Franklin 4, Marriott
Sampling and Learning Methods in Stochastic Optimization
Sponsor: Optimization/Optimization Under Uncertainty
Sponsored Session

Chair: Peter Frazier, Assistant Professor, Cornell University, 232 Rhodes Hall, Ithaca, NY, 14850, United States of America, pf98@cornell.edu

1 - Variance Reduction Techniques for Sequential Sampling Methods in Stochastic Programming
Jangho Park, PhD Student, The Ohio State University, 1971 Neil Avenue, Columbus, OH, 43210, United States of America, park.1814@osu.edu, Guzin Bayraksan, Rebecca Stockbridge

We present two variance reduction techniques, antithetic variates and Latin hypercube sampling, for use with sequential sampling procedures in stochastic programming. We discuss their theoretical justification and the subtle implementation changes that the theory requires. We empirically investigate both techniques on a number of test problems from the literature.
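To illustrate the two techniques named in the abstract, here is a minimal one-dimensional Monte Carlo sketch in Python; the integrand, sample size, and replication count are illustrative choices, not taken from the talk.

import numpy as np

rng = np.random.default_rng(0)

def crude_mc(f, n):
    # Plain Monte Carlo estimate of E[f(U)] for U ~ Uniform(0, 1).
    return f(rng.random(n)).mean()

def antithetic_mc(f, n):
    # Antithetic variates: pair each draw u with 1 - u; the negative
    # correlation between f(u) and f(1 - u) reduces variance when f
    # is monotone. Total budget is still n function evaluations.
    u = rng.random(n // 2)
    return (0.5 * (f(u) + f(1.0 - u))).mean()

def lhs_mc(f, n):
    # Latin hypercube sampling in one dimension: stratify (0, 1) into
    # n equal cells and draw exactly one uniform point per cell.
    cells = (np.arange(n) + rng.random(n)) / n
    return f(cells).mean()

f = lambda u: np.exp(u)  # E[f(U)] = e - 1, a monotone test integrand
for name, est in (("crude", crude_mc), ("antithetic", antithetic_mc), ("LHS", lhs_mc)):
    vals = [est(f, 200) for _ in range(500)]
    print(name, "mean:", round(np.mean(vals), 4), "variance:", np.var(vals))

For a monotone integrand like this one, both estimators show markedly smaller variance than crude Monte Carlo at the same sampling budget.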

2 - Multi-step Bayesian Optimization for One-dimensional Feasibility Determination
Massey Cashore, University of Waterloo, 77 Crossovers St, Toronto, ON, M4E 3X3, Canada, jmcashor@uwaterloo.ca, Peter Frazier

Bayesian optimization methods allocate limited sampling budgets to maximize expensive-to-evaluate functions. One-step-lookahead policies are often used, but computing optimal multi-step-lookahead policies remains a challenge. We consider a specialized Bayesian optimization problem: finding the superlevel set of an expensive one-dimensional function, with a Markov process prior. We compute the Bayes-optimal sampling policy efficiently and characterize the suboptimality of one-step lookahead.
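To make the problem setting concrete, here is a minimal sketch assuming a Brownian motion prior (a Markov process, so the conditional law between observed points is a Brownian bridge) and a simple one-step heuristic that samples where superlevel-set membership is most ambiguous. This is a plain baseline for the problem class, not the Bayes-optimal policy computed in the talk.

import numpy as np
from scipy.stats import norm

def bridge_moments(x, xl, xr, fl, fr, sigma):
    # Posterior mean/std of f(x) given f(xl)=fl, f(xr)=fr under a
    # Brownian motion prior with scale sigma (Brownian bridge).
    w = (x - xl) / (xr - xl)
    mean = fl + w * (fr - fl)
    var = sigma**2 * (x - xl) * (xr - x) / (xr - xl)
    return mean, np.sqrt(var)

def next_point(xs, fs, threshold, sigma, grid=200):
    # Sample where P(f(x) > threshold) is closest to 1/2, i.e. where
    # membership in the superlevel set is most uncertain.
    best_x, best_score = None, np.inf
    for xl, xr, fl, fr in zip(xs[:-1], xs[1:], fs[:-1], fs[1:]):
        for x in np.linspace(xl, xr, grid)[1:-1]:
            m, s = bridge_moments(x, xl, xr, fl, fr, sigma)
            score = abs(norm.sf(threshold, loc=m, scale=s) - 0.5)
            if score < best_score:
                best_x, best_score = x, score
    return best_x

# Toy run: estimate {x : f(x) > 0} for a hidden function on [0, 1].
f = lambda x: np.sin(6 * x) - 0.2
xs, fs = np.array([0.0, 1.0]), np.array([f(0.0), f(1.0)])
for _ in range(10):
    x = next_point(xs, fs, threshold=0.0, sigma=1.0)
    i = np.searchsorted(xs, x)
    xs, fs = np.insert(xs, i, x), np.insert(fs, i, f(x))
print(np.round(xs, 3))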

3 - The Knowledge Gradient Policy with Regularized Trees
Yan Li, Princeton University, Sherrerd Hall, Charlton Street, Princeton, NJ, United States of America, yanli@princeton.edu, Han Liu, Warren Powell

We propose a sequential learning policy for noisy discrete global optimization with regularized trees. Besides selecting the best alternative before the budget is exhausted, we also aim to learn the important features and the low-order interactions. We derive a knowledge gradient policy for binary decision trees. Experimental work shows that the method scales to high dimensions and efficiently learns the underlying low-dimensional structure.

4 - The Knowledge Gradient with Logistic Belief Models for Binary Classification
Yingfei Wang, Princeton University, Computer Science Dept., Princeton, NJ, 08540, United States of America, yingfei@princeton.edu, Warren Powell

We consider sequential decision problems in which the learner takes an active role, repeatedly selecting alternatives and receiving the binary label of each selected alternative. The goal is to find the alternative with the highest response. We use logistic regression to model the response of each alternative. By formulating the problem within a dynamic programming framework, we develop a knowledge gradient policy that guides the experiment by maximizing the expected value of information.
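Here is a minimal sketch of the one-step value-of-information idea with a logistic belief model, using a maximum-likelihood refit in place of the talk's Bayesian update; the toy alternatives and the true response curve are my assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def kg_score(X_obs, y_obs, alternatives, x, model):
    # One-step lookahead: average over the two possible labels of x,
    # weighted by the current model's predictive probability, and score
    # x by the best predicted success probability after each refit.
    p = model.predict_proba(x.reshape(1, -1))[0, 1]
    value = 0.0
    for label, weight in ((1, p), (0, 1.0 - p)):
        m = LogisticRegression().fit(np.vstack([X_obs, x]),
                                     np.append(y_obs, label))
        value += weight * m.predict_proba(alternatives)[:, 1].max()
    return value

# Toy setup: alternatives on a line; the true success probability is
# increasing in x, so the best alternative is the rightmost one.
alternatives = np.linspace(-2, 2, 21).reshape(-1, 1)
true_p = 1.0 / (1.0 + np.exp(-2.0 * alternatives.ravel()))

X_obs = np.array([[-1.5], [1.5]])
y_obs = np.array([0, 1])  # one seed observation of each class
for _ in range(10):
    model = LogisticRegression().fit(X_obs, y_obs)
    scores = [kg_score(X_obs, y_obs, alternatives, x, model) for x in alternatives]
    k = int(np.argmax(scores))
    X_obs = np.vstack([X_obs, alternatives[k]])
    y_obs = np.append(y_obs, int(rng.random() < true_p[k]))

model = LogisticRegression().fit(X_obs, y_obs)
print("estimated best:", alternatives[model.predict_proba(alternatives)[:, 1].argmax()])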

SB15
15-Franklin 5, Marriott
Recent Advances on First-Order Methods
Sponsor: Optimization/Nonlinear Programming
Sponsored Session

Chair: Fatma Kilinc Karzan, Assistant Professor of Operations Research, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA, 15213, United States of America, fkilinc@andrew.cmu.edu

1 - Random Block-Coordinate Gradient Projection Algorithms
Angelia Nedich, University of Illinois, Urbana, IL, United States of America, angelia@illinois.edu, Chandramani Singh, R. Srikant

We discuss gradient projection algorithms based on random partial updates of the decision variables. We analyze these algorithms with and without assuming strong convexity of the objective functions. We also present an accelerated version of the algorithm based on Nesterov's two-step gradient method. We show that the randomized algorithms exhibit convergence rates similar to those of their full-gradient deterministic counterparts.
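A minimal sketch of random block-coordinate gradient projection on an assumed toy instance (box-constrained least squares); the block partition, step size, and iteration count are illustrative choices, not the talk's.

import numpy as np

rng = np.random.default_rng(0)
m, n, n_blocks = 40, 20, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
blocks = np.array_split(np.arange(n), n_blocks)
L = 2.0 * np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the full gradient
x = np.zeros(n)

for _ in range(5000):
    B = blocks[rng.integers(n_blocks)]           # pick one random block
    grad_B = 2.0 * A[:, B].T @ (A @ x - b)       # partial gradient on that block
    x[B] = np.clip(x[B] - grad_B / L, 0.0, 1.0)  # step, then project onto the box

print("objective:", np.sum((A @ x - b) ** 2))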

2 - An Extended Frank-Wolfe Method, with Applications to Low-rank Matrix Completion
Paul Grigas, MIT Operations Research Center, 77 Massachusetts Ave., Cambridge, MA, 02139, pgrigas@mit.edu, Rahul Mazumder, Robert Freund

We present an extension of the Frank-Wolfe method that is designed to induce near-optimal solutions on low-dimensional faces of the feasible region. We present computational guarantees for the method that trade off efficiency in computing near-optimal solutions with upper bounds on the dimension of minimal faces of iterates. We present computational results for large-scale matrix completion problems that demonstrate significant speed-ups in computing low-rank near-optimal solutions.
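For context, a minimal sketch of the classic Frank-Wolfe method that the talk extends, applied to matrix completion over a nuclear-norm ball; the data, radius, and step-size rule are standard illustrative choices. The linear minimization oracle is a rank-one update from the top singular pair of the gradient, which is why iterates stay low-rank.

import numpy as np

# min_X 0.5 * ||P_Omega(X - Y)||_F^2  subject to  ||X||_* <= delta
rng = np.random.default_rng(0)
m, n, delta = 30, 20, 10.0
Y = rng.standard_normal((m, 3)) @ rng.standard_normal((3, n))  # low-rank truth
mask = rng.random((m, n)) < 0.5                                # observed entries

X = np.zeros((m, n))
for k in range(200):
    G = np.where(mask, X - Y, 0.0)          # gradient of the least-squares loss
    U, s, Vt = np.linalg.svd(G)
    S = -delta * np.outer(U[:, 0], Vt[0])   # LMO: argmin_{||S||_* <= delta} <G, S>
    gamma = 2.0 / (k + 2.0)                 # standard Frank-Wolfe step size
    X = (1 - gamma) * X + gamma * S

print("observed-entry RMSE:", np.sqrt(np.mean((X - Y)[mask] ** 2)))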

3 - First-Order Methods for Robust Convex Optimization
Nam Ho-Nguyen, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA, hnh@andrew.cmu.edu, Fatma Kilinc Karzan

Robust optimization is a framework to model parameter uncertainty in optimization problems. Inspired by recent developments, we present several efficient first-order methods to approximately solve robust convex optimization problems. We also introduce the notion of weighted regret online learning and the online saddle-point problem, which form key building blocks for our methods. Finally, we discuss some proximal-type algorithms for these problems.
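A minimal saddle-point sketch in this spirit, solving an assumed toy robust least-squares problem by projected gradient descent-ascent; this simple scheme stands in for, and is much cruder than, the weighted-regret and online saddle-point machinery of the talk.

import numpy as np

# min_x max_{||u||_F <= rho} 0.5 * ||(A + u) x - b||^2
rng = np.random.default_rng(0)
m, n, rho, eta = 20, 10, 0.1, 1e-2
A, b = rng.standard_normal((m, n)), rng.standard_normal(m)
x, u = np.zeros(n), np.zeros((m, n))

for _ in range(5000):
    r = (A + u) @ x - b
    x -= eta * (A + u).T @ r     # descent step on the decision variable
    u += eta * np.outer(r, x)    # ascent step on the uncertainty
    norm = np.linalg.norm(u)
    if norm > rho:               # project u back onto the Frobenius ball
        u *= rho / norm

print("robust objective:", 0.5 * np.sum(((A + u) @ x - b) ** 2))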

SB16
16-Franklin 6, Marriott
Recent Advances in Linear and Conic Optimization
Sponsor: Optimization/Linear and Conic Optimization
Sponsored Session

Chair: Hande Benson, Associate Professor, Drexel University, LeBow College of Business, Philadelphia, PA, 19104, United States of America, hvb22@drexel.edu

1 - A Generalization of Chubanov's Projection Algorithm
Negar Soheili, Assistant Professor of Business Analytics, University of Illinois at Chicago, 601 S Morgan Street, University Hall 2416, Chicago, IL, 60607, United States of America, nazad@uic.edu

Chubanov's algorithm is a projection algorithm for solving homogeneous linear feasibility problems that leverages periodic rescaling. We propose an extension of this algorithm for solving more general conic feasibility problems, with an iteration bound based on a condition measure associated with the geometry of the problem.

2 - Polynomial Time Methods for Optimal Design of Experiments for Regression using Conic Optimization
David Papp, North Carolina State University, 3222 SAS Hall, Campus Box 8205, Raleigh, NC, 27695-8205, United States of America, dpapp@ncsu.edu

Computing optimal experimental designs for regression addresses a fundamental problem regarding the collection of experimental data. Natural formulations of this problem are either semi-infinite convex programs or finite-dimensional nonconvex programs. We present a novel approach that reduces the solution of the nonconvex formulation to the solution of two coupled semidefinite programs. This leads to the first polynomial-time algorithm for the computation of such globally optimal designs.
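As background, a minimal sketch of the standard finite-candidate convex relaxation of D-optimal design, solved with cvxpy; this is the textbook formulation, not the talk's pair of coupled semidefinite programs, and the candidate points are randomly generated.

import cvxpy as cp
import numpy as np

# Choose weights w over candidate regression vectors x_1..x_N to
# maximize the D-optimality criterion log det( sum_i w_i x_i x_i^T ).
rng = np.random.default_rng(0)
N, p = 50, 4
X = rng.standard_normal((N, p))  # candidate design points

w = cp.Variable(N, nonneg=True)
M = X.T @ cp.diag(w) @ X         # information matrix of the design
prob = cp.Problem(cp.Maximize(cp.log_det(M)), [cp.sum(w) == 1])
prob.solve()

support = np.where(w.value > 1e-4)[0]
print("design supported on", len(support), "of", N, "candidate points")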

3 - The Parametric Simplex Method and its Application in Machine Learning
Haotian Pang, Princeton University, Department of Electrical Engineering, Olden Street, Princeton, NJ, 08540, United States of America, hpang@princeton.edu, Han Liu, Robert Vanderbei, Tuo Zhao

We formulate a series of machine learning problems, including the Dantzig selector for sparse linear regression, CLIME for sparse precision matrix estimation, and the LPD rule for sparse linear discriminant analysis, as parametric linear programs, and show that the parametric simplex method solves them quickly and efficiently. For the Dantzig selector model, we also show that, under the irrepresentable condition, each dual iterate is guaranteed to be sparse.
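A minimal sketch of the Dantzig selector as a linear program for one fixed lambda, solved here with scipy's HiGHS backend rather than the parametric simplex method the talk uses to trace the whole lambda path; the data and lambda are illustrative.

import numpy as np
from scipy.optimize import linprog

# min ||beta||_1  subject to  ||X^T (y - X beta)||_inf <= lam,
# split as variables (beta, u) with -u <= beta <= u, minimizing sum(u).
rng = np.random.default_rng(0)
n, p, lam = 50, 20, 1.0
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]  # sparse ground truth
y = X @ beta_true + 0.1 * rng.standard_normal(n)

G, I, Z = X.T @ X, np.eye(p), np.zeros((p, p))
c = np.concatenate([np.zeros(p), np.ones(p)])  # objective: sum of u
A_ub = np.block([[ I, -I],                     #  beta - u <= 0
                 [-I, -I],                     # -beta - u <= 0
                 [ G,  Z],                     #  X^T X beta <= lam + X^T y
                 [-G,  Z]])                    # -X^T X beta <= lam - X^T y
b_ub = np.concatenate([np.zeros(2 * p), lam + X.T @ y, lam - X.T @ y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * p + [(0, None)] * p, method="highs")
print("nonzero coefficients:", np.flatnonzero(np.abs(res.x[:p]) > 1e-6))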

4 - Warmstarts for Mixed-Integer Second-Order Cone Programming
Hande Benson, Associate Professor, Drexel University, LeBow College of Business, Philadelphia, PA, 19104, United States of America, hvb22@drexel.edu

In this talk, we will discuss warmstarting strategies after different types of cuts within an algorithm for solving mixed-integer second-order cone programming problems. The underlying SOCPs are solved using an interior-point method.
