INFORMS Nashville – 2016

4 - New Formulation-based Methods In Distance Geometry

Leo Liberti, CNRS, Palaiseau, 91128, France, liberti@lix.polytechnique.fr

The fundamental problem of distance geometry asks to find a realization of a finite, but partially specified, metric space in a Euclidean space of given dimension. Unless some structure of the instance is known, these problems are notoriously difficult to solve computationally, and most methods either do not scale up to useful sizes or are unlikely to identify good solutions. We propose a new heuristic algorithm based on a semidefinite programming formulation, a diagonally dominant inner approximation of Ahmadi and Hall’s, a randomized rank-reduction method of Barvinok’s, and a call to a local nonlinear programming solver.
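
For readers unfamiliar with the underlying model, the standard Gram-matrix SDP formulation of the distance geometry problem is the following; this is common background, not necessarily the talk’s exact formulation:

    \text{find } X \in \mathbb{S}^n \quad \text{s.t.} \quad X_{uu} + X_{vv} - 2 X_{uv} = d_{uv}^2 \;\; \forall \{u,v\} \in E, \qquad X \succeq 0.

Restricting X to the diagonally dominant matrices (X_{ii} \ge \sum_{j \neq i} |X_{ij}|) gives an LP-representable inner approximation of the semidefinite cone in the spirit of Ahmadi and Hall; a low-rank factor of the resulting X can then be extracted by a Barvinok-style randomized projection and refined with a local NLP solver, matching the pipeline described above.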

MA14

104D-MCC

Data Analytics in Revenue Management Applications

Sponsored: Analytics

Sponsored Session

Chair: Jian Wang, The RainMaker Group, 4550 North Point Parkway, Alpharetta, GA, 30022, United States, jwang@LetItRain.com

1 - Discrete Choice Models In Airline Revenue Management Applications: Challenges And Opportunities

Emmanuel Carrier, Delta Airlines, emmanuel.carrier@delta.com, Laurie A. Garrow

Although discrete choice models are appealing because they reproduce the passenger’s decision process, they have failed to gain traction in airline RM applications. We examine the reasons for this lack of success despite significant academic research and successful applications in other industries. Several trends, however, are converging to make choice models more attractive. As RM techniques evolve in response to emerging data sources such as competitor fare data, the development of ancillary products, and shifts in the distribution landscape, we discuss the relevance of choice models for new airline RM applications, from dynamic pricing to ancillary products.
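
As standard background (not stated in the abstract), the workhorse discrete choice model in this setting is the multinomial logit: the probability that a passenger chooses fare product j from offer set S, given product utilities v_j and a no-purchase utility v_0, is

    P(j \mid S) = \frac{e^{v_j}}{e^{v_0} + \sum_{k \in S} e^{v_k}}.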

2 - New Developments In Airline Capacity Sharing Model With Movable Curtain

Ang Li, PROS, Inc., Houston, TX, United States, ali@pros.com, Darius Walczak

We present new results on the single-leg capacity sharing model with a movable curtain. In particular, these include structural properties of the model and an algorithm, enabled by these properties, for solving the associated dynamic program. We also present a more general formulation in which the curtain is allowed at only a few positions on the aircraft.

3 - Forecasting Hotel Room Sales By Utilizing Online User Reviews

Jian Wang, VP, R&D, The RainMaker Group, 4550 North Point Parkway, Alpharetta, GA, 30022, United States, jwang@LetItRain.com, Sneha Bishnoi, Han Wu

In traditional hotel revenue management (RM), room sales are forecast mostly from transaction data. With the availability of “big data” such as online social reputation, hotel RM practitioners have become increasingly interested in improving room sales forecasts with these external data. In this talk, we present empirical results on forecasting hotel room sales using online reviews and scores from TrustYou and TripAdvisor along with transaction data. We then compare the results with traditional forecasting models that use transaction data only.
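
A minimal sketch of the kind of comparison the abstract describes, in Python; the column names, feature set, and model choice here are assumptions for illustration, not details from the talk:

    # Compare a transactions-only forecast with one that adds online-review features.
    # Columns (rooms_sold, bookings_on_hand, adr, lead_time, review_score, review_volume)
    # are hypothetical; the talk does not specify its data schema or model.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("hotel_daily.csv")   # one row per hotel-date (assumed, numeric features)
    target = "rooms_sold"
    txn_features = ["bookings_on_hand", "adr", "lead_time"]
    review_features = txn_features + ["review_score", "review_volume"]

    train, test = train_test_split(df, test_size=0.25, shuffle=False)  # hold out latest dates

    for name, cols in [("transactions only", txn_features),
                       ("transactions + reviews", review_features)]:
        model = GradientBoostingRegressor().fit(train[cols], train[target])
        mae = mean_absolute_error(test[target], model.predict(test[cols]))
        print(f"{name}: MAE = {mae:.2f}")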

MA15

104E-MCC

Recent Advances in Optimization for Structured High-Dimension Problems

Invited: Modeling and Methodologies in Big Data

Invited Session

Chair: Mingyi Hong, Iowa State University, 118 Pearson Hall, Ames, IA, 50021, United States, mingyi@iastate.edu

1 - Improved Sampling Complexity For Non-convex Optimization And Learning

Guanghui Lan, Georgia Institute of Technology, Atlanta, GA, United States, george.lan@isye.gatech.edu

We present an improved complexity result on the number of samples required to find an approximate global solution to a general nonconvex stochastic optimization problem. We discuss the consequences of this result for machine learning based on nonconvex optimization models, e.g., some of those arising from deep learning.

2 - SMART: The Stochastic Monotone Aggregated Root-finding Algorithm

Damek Davis, UCLA, damek@math.ucla.edu

We introduce the Stochastic Monotone Aggregated Root-Finding (SMART) algorithm, a new randomized operator-splitting scheme for finding roots of finite sums of operators. SMART is similar to the class of incremental aggregated gradient algorithms, which minimize finite sums of functions; the difference is that we replace gradients of functions with objects called operators. By replacing gradients with operators, we increase our modeling power, and we simplify the application and analysis of the resulting algorithms. Among other features, SMART incorporates block-coordinate and asynchronous parallel updates.
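
A minimal sketch of the incremental aggregated idea in Python, written for a finite sum of operators; this is an illustrative SAGA-style update under simplified assumptions (fixed step, uniform sampling, no asynchrony), not the exact SMART scheme:

    import numpy as np

    def incremental_aggregated_root(ops, x0, step=0.1, iters=1000, seed=0):
        """Seek x with (1/n) * sum_i ops[i](x) ~ 0, sampling one operator per
        iteration and keeping a memory of the last evaluation of every operator."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        memory = [op(x) for op in ops]      # last seen value of each operator
        avg = np.mean(memory, axis=0)       # running aggregate of the memory
        n = len(ops)
        for _ in range(iters):
            i = rng.integers(n)
            fresh = ops[i](x)
            # unbiased direction: fresh evaluation minus the stale one,
            # corrected by the aggregate of all stored evaluations
            direction = fresh - memory[i] + avg
            avg = avg + (fresh - memory[i]) / n
            memory[i] = fresh
            x = x - step * direction
        return x

    # Usage: roots of sum_i (x - a_i), i.e. x* = mean(a) = 3.0
    a = [1.0, 2.0, 6.0]
    ops = [lambda x, ai=ai: x - ai for ai in a]
    print(incremental_aggregated_root(ops, x0=0.0))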

3 - A First Order Free Lunch For The SQRT-Lasso Optimization: Linear Convergence Without A Price

Tuo Zhao, Johns Hopkins University, tour@cs.jhu.edu

Many machine learning techniques sacrifice convenient computational structures to gain estimation robustness and modeling flexibility. Here we study this fundamental tradeoff through SQRT-Lasso for sparse linear regression. We explain how novel optimization techniques address the resulting computational challenges. In particular, we propose a pathwise smoothing proximal algorithm for solving the SQRT-Lasso optimization problem. Theoretically, we provide a novel model-based perspective for analyzing the smoothing optimization framework, which allows us to establish an R-linear convergence guarantee for our algorithm. This implies that solving SQRT-Lasso is almost as easy as solving Lasso.
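
For reference (standard background rather than a formula from the abstract), the SQRT-Lasso estimator solves

    \min_{\beta} \; \| y - X\beta \|_2 + \lambda \|\beta\|_1,

whose loss term is convex but nonsmooth where the residual vanishes. One natural smoothing (an assumption here; the talk may use a different construction) replaces \|y - X\beta\|_2 by \sqrt{\|y - X\beta\|_2^2 + \mu^2} with a small \mu > 0 that is decreased along the regularization path, which is the kind of surrogate a pathwise smoothing proximal algorithm can exploit.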

4 - Fast Stochastic Methods For Nonconvex Optimization

Sashank Reddi, CMU, sjakkamr@cs.cmu.edu

We study stochastic methods for nonconvex finite-sum problems. Stochastic gradient descent (SGD) is the de facto algorithm for solving optimization problems of this form. However, SGD suffers from slow convergence due to the inherent variance of the stochastic gradients. To tackle this issue, we develop fast stochastic algorithms for the nonconvex finite-sum problem and show that they are provably faster than both SGD and batch gradient descent. We also analyze a subclass of nonconvex problems on which these methods attain linear convergence to the global optimum. Our methods are based on variance reduction techniques, which have recently surged into prominence for convex optimization.
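
A minimal sketch of the variance-reduction idea in Python (an SVRG-style loop; the talk’s algorithms, step sizes, and nonconvex analysis are not reproduced here):

    import numpy as np

    def svrg(grad_i, n, x0, step=0.05, epochs=50, seed=0):
        """SVRG on f(x) = (1/n) * sum_i f_i(x); grad_i(i, x) returns grad f_i(x)."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        for _ in range(epochs):
            snapshot = x.copy()
            full_grad = np.mean([grad_i(i, snapshot) for i in range(n)], axis=0)
            for _ in range(n):
                i = rng.integers(n)
                # control variate: per-sample gradient corrected by the snapshot
                g = grad_i(i, x) - grad_i(i, snapshot) + full_grad
                x = x - step * g
        return x

    # Usage on a tiny least-squares finite sum (convex, so the answer is checkable):
    A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
    b = np.array([1.0, 2.0, 3.0])
    grad_i = lambda i, x: 2.0 * A[i] * (A[i] @ x - b[i])
    print(svrg(grad_i, n=3, x0=np.zeros(2)))   # approaches the least-squares solution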

MA16

105A-MCC

Data Driven Optimization

General Session

Chair: Andrew Lim, National University of Singapore, 15 Kent Ridge Drive, Singapore, 119245, Singapore, andrewlim@nus.edu.sg

Co-Chair: Michael Jong Kim, University of Toronto, 5 King’s College Road, Toronto, M5S3G8, Canada, mikekim@mie.utoronto.ca

1 - Solving The Dual Problems Of Dynamic Programs Via Regression

Enlu Zhou, Georgia Institute of Technology, enlu.zhou@isye.gatech.edu, Helin Zhu, Fan Ye

We develop a regression-based framework for approximating the optimal dual penalties in the dual problems of general dynamic programs, by exploiting the structure of the function space consisting of all feasible dual penalties. The resulting approximations remain feasible dual penalties and thus yield valid dual bounds on the optimal value function. We further show that the proposed framework is computationally efficient and that the resulting dual penalties lead to numerically tractable dual problems.
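
For context, this line of work builds on the information-relaxation duality for dynamic programs (in the spirit of Brown, Smith, and Rogers). In rough form, for a maximization problem with stage rewards r_t(s_t, a_t) and any feasible penalty process z_t (one whose expected total is nonpositive under every non-anticipative policy), the optimal value satisfies

    V_0(s_0) \;\le\; \mathbb{E}\Big[ \max_{a_0, \dots, a_T} \sum_{t=0}^{T} \big( r_t(s_t, a_t) - z_t \big) \Big],

so every feasible penalty yields a valid upper bound, and the regression framework above searches over such penalties for tight ones. The exact feasibility conditions used in the talk may differ from this sketch.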

2 - The Coherent Loss Function For Classification

Huan Xu, National University of Singapore, Singapore, isexuh@nus.edu.sg, Wenzhuo Yang, Melvyn Sim

Binary classification involves minimizing the 0-1 loss, which is intractable. To address this, previous methods minimize the cumulative loss, i.e., the sum of convex surrogates of the 0-1 loss over individual samples. We revisit this paradigm and instead develop an axiomatic framework: we propose a set of salient properties that a loss function for binary classification should satisfy, and then propose the coherent loss approach, a tractable upper bound on the empirical classification error over the entire sample set. We show that the proposed approach yields a strictly tighter approximation while preserving the convexity of the underlying optimization problem, and we link this approach to robust SVM.
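
As background (standard notation, not from the abstract): with labels y_i in {-1, +1} and classifier f, the empirical 0-1 loss and its usual cumulative convex surrogate satisfy

    \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}\{ y_i f(x_i) \le 0 \} \;\le\; \frac{1}{n} \sum_{i=1}^{n} \varphi\big( y_i f(x_i) \big),

where \varphi is any convex function dominating the 0-1 indicator, e.g., the hinge loss \varphi(t) = \max(0, 1 - t). The coherent loss of the abstract is instead an upper bound constructed jointly over the whole sample rather than one sample at a time.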

3 - Inverse Optimization Of Convex Risk Functions

Jonathan Yu-Meng Li, University of Ottawa, Ottawa, ON, Canada, Jonathan.Li@telfer.uottawa.ca

The theory of convex risk functions is now well established as the basis for identifying the family of risk functions that should be used in risk-averse optimization problems. Despite its theoretical appeal, implementing a convex risk function remains difficult, as there is little guidance on how to choose one so that it also represents one’s own risk preferences well. In this work, we present an inverse optimization framework that imputes a convex risk function from solution data of risk-averse optimization problems. Unlike classical inverse optimization, no parametric assumption is made about the risk function.
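
In generic form (a sketch of the setup described above, with notation assumed here rather than taken from the talk), given observed decisions x^{(1)}, ..., x^{(m)} from risk-averse problems with loss L(x, \xi), the inverse problem is to find a convex risk function \rho such that

    x^{(k)} \in \arg\min_{x \in \mathcal{X}_k} \; \rho\big( L(x, \xi) \big), \qquad k = 1, \dots, m,

or, when exact optimality cannot be achieved, a \rho that minimizes the aggregate suboptimality of the observed decisions, with \rho ranging over the whole nonparametric class of convex risk functions.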
