INFORMS Annual Meeting Phoenix 2018


SA03

3 - An MIP Approach to Finding Optimum Search Schemes for Approximate String Matching
Haochen Luo, Texas A&M University, 3131 TAMU, College Station, TX, 77843-3131, United States, Kiavash Kianfar, Christopher Pockrandt, Bahman Torkamandi, Knut Reinert
Finding approximate occurrences of a pattern in a text using a full-text index is a central problem in bioinformatics. The use of search schemes (partitioning the pattern and searching the pieces in certain orders with given bounds on errors) can yield significant speed-ups. However, finding optimal search schemes is a difficult combinatorial optimization problem. Here, for the first time, we propose a mixed integer program (MIP) capable of solving this optimization problem for Hamming distance with a given number of pieces. Our experiments show that the optimal search schemes found by our MIP improve the performance of index-based search significantly (by up to 35 times) over previous ad hoc solutions.
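Search schemes generalize the classic pigeonhole filter: if a pattern occurs with at most k mismatches, then splitting it into k+1 pieces guarantees that at least one piece occurs exactly. The MIP in the abstract above optimizes far richer schemes (piece orders and per-piece error bounds) over a full-text index; as a point of reference only, here is a minimal Python sketch of the baseline pigeonhole filter for Hamming distance. All names are illustrative, and plain string search stands in for an FM-index.

```python
# Pigeonhole filter for approximate matching under Hamming distance:
# split the pattern into k+1 pieces, find exact piece occurrences,
# then verify each implied full-pattern position against the k-mismatch bound.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def approx_find(text, pattern, k):
    """Return start positions where pattern occurs with <= k mismatches."""
    p = len(pattern)
    # Split the pattern into k+1 roughly equal pieces.
    bounds = [round(i * p / (k + 1)) for i in range(k + 2)]
    hits = set()
    for i in range(k + 1):
        lo, hi = bounds[i], bounds[i + 1]
        piece = pattern[lo:hi]
        # Exact search for the piece (an FM-index would be used in practice).
        start = text.find(piece)
        while start != -1:
            cand = start - lo  # implied start of the full pattern
            if 0 <= cand <= len(text) - p and hamming(text[cand:cand + p], pattern) <= k:
                hits.add(cand)
            start = text.find(piece, start + 1)
    return sorted(hits)

print(approx_find("the quick brown fox", "quack", 2))  # [4] ("quick" vs "quack": 1 mismatch)
```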
SA05
North Bldg 122B
Joint Session OPT/Practice Curated: Sparse Semidefinite Programs with Machine Learning Applications
Sponsored: Optimization/Linear and Conic Optimization
Sponsored Session
Chair: Somayeh Sojoudi, University of California, Berkeley, Berkeley, CA, 94703, United States
Co-Chair: Richard Zhang, UC Berkeley, Berkeley, CA, 94709, United States
1 - Empirical Bayes Estimation of Parametric Gaussian Priors
Martin Skovgaard Andersen, Technical University of Denmark, Asmussens Allé, Kgs. Lyngby, 2800, Denmark
In Gaussian linear models, a maximum a posteriori estimate of a set of unknown parameters depends on a prior distribution that is typically unknown. Assuming that this prior is Gaussian with a structured or sparse precision matrix, we investigate the use of empirical Bayes estimation to estimate the prior directly from data. We also propose a method to solve the resulting estimation problem, which is a nonlinear semidefinite optimization problem, and demonstrate the usefulness of the approach with some numerical examples.
2 - SDP Formulations for Fairness in Unsupervised Learning
Mahbod Olfat, PhD Student, University of California-Berkeley, 1433 Dwight Way, Unit C, Berkeley, CA, 94702, United States, Anil Aswani
Though there is a growing body of literature on fairness for supervised learning, the problem of incorporating fairness into unsupervised learning has been less well studied. This paper studies fairness applied to PCA. We first present a definition of fairness for dimensionality reduction, which can be interpreted as saying that a reduction is fair if information about a protected class cannot be inferred from the dimensionality-reduced data points. Next, we develop convex optimization formulations that can improve the fairness (with respect to our definition) of PCA and kernel PCA. These formulations are SDPs, and we demonstrate their effectiveness using several datasets.
3 - Linear-time Algorithm for Learning Large-scale Sparse Graphical Models
Richard Zhang, UC Berkeley, 621 Sutardja Dai Hall, University of California, Berkeley, CA, 94709, United States, Salar Fattahi, Somayeh Sojoudi
The sparse inverse covariance estimation problem is commonly solved using an l1-regularized Gaussian maximum likelihood estimator known as “graphical lasso”. This talk describes a Newton-CG algorithm to efficiently solve graphical lasso for large datasets. Assuming that the thresholded sample covariance matrix is sparse with a sparse Cholesky factorization, we prove that the algorithm converges to an ε-accurate solution in O(n log(1/ε)) time and O(n) memory. The algorithm is highly efficient in practice: we solve graphical lasso problems with as many as 200,000 variables to 7-9 digits of accuracy in less than an hour on a standard laptop computer running MATLAB.
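For context, graphical lasso minimizes -log det(Θ) + tr(SΘ) + α‖Θ‖₁ over positive-definite Θ. The talk's Newton-CG solver is not reproduced here; the following minimal sketch sets up a synthetic sparse-precision problem and solves it with scikit-learn's reference implementation (assuming scikit-learn is available; the chain structure and α value are illustrative choices).

```python
# Minimal graphical lasso sketch using scikit-learn's reference solver,
# not the Newton-CG algorithm from the talk.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
# Sample data from a chain-structured Gaussian, so the true precision is sparse.
n_samples, n_vars = 500, 20
prec = np.eye(n_vars) + 0.4 * (np.eye(n_vars, k=1) + np.eye(n_vars, k=-1))
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(n_vars), cov, size=n_samples)

# Fit the l1-regularized maximum likelihood estimator.
model = GraphicalLasso(alpha=0.05).fit(X)
est_prec = model.precision_
print("nonzeros in estimated precision:", np.count_nonzero(np.abs(est_prec) > 1e-4))
```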

4 - Control-theoretic Analysis of Smoothness for Stability-certified Reinforcement Learning
Ming Jin, UC Berkeley, Cory 406, Berkeley, CA, 94720, United States
It is critical to obtain a stability certificate before deploying RL in real-world mission-critical systems. This work justifies smoothness as an important property for stability-certified RL from a control-theoretic perspective. The smoothness margin can be obtained by solving an SDP-based feasibility problem for both linear and nonlinear dynamical systems, and it does not require access to the exact parameters of the learned controllers. Numerical evaluation on nonlinear and decentralized frequency control for large-scale power grids is demonstrated using (deep) neural-network policies. The study opens up new opportunities for robust Lipschitz-continuous policy learning.
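The smoothness-margin SDP from the talk is not reproduced here. As a much simpler illustration of SDP-based stability certification, the sketch below checks a fixed linear feedback gain against the standard Lyapunov LMI using cvxpy; the system matrices and the gain K are hypothetical.

```python
# A flavor of SDP-based stability certification: for a closed-loop matrix
# A_cl = A + B K (K playing the role of a fixed learned policy), search for
# a Lyapunov matrix P > 0 with A_cl' P + P A_cl < 0. The talk's smoothness-
# margin formulation for nonlinear/learned policies is more involved.
import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
K = np.array([[-1.0, -1.0]])  # hypothetical fixed feedback gain
A_cl = A + B @ K

P = cp.Variable((2, 2), symmetric=True)
eps = 1e-4
constraints = [P >> eps * np.eye(2),
               A_cl.T @ P + P @ A_cl << -eps * np.eye(2)]
prob = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
prob.solve(solver=cp.SCS)
print("certified stable" if prob.status == cp.OPTIMAL else "no certificate found")
```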
SA06
North Bldg 122C
Grey Box Optimization
Sponsored: Optimization/Computational Optimization and Software
Sponsored Session
Chair: Fani Boukouvala, Georgia Institute of Technology, Atlanta, GA, 30312, United States
1 - RBF Surrogate Global Optimization in Serial and Parallel that Dramatically Outperforms Gaussian Process (EGO) Methods in Dimensions Over 9
Christine A. Shoemaker, Distinguished Professor, National University of Singapore, 107 Clementi Road, Block F. 09-01, Singapore, 129790, Singapore
We show comparisons between Gaussian process (GP) and radial basis function (RBF) based surrogate global optimization on a large number of problems (including deep learning). The comparisons include multiple versions of GP-EI (EGO). The RBF methods include the DYCORS-MRS acquisition function. With a fixed number of objective function evaluations, multiple trials, and a large number of problems, the GP-EI solutions are much worse than RBF-DYCORS on average for dimensions 10, 20, 30, and 40. The reasons for these differences are quantitatively assessed and explained. We also show RBF parallel results and the beneficial impact of dynamically replacing the global surrogate with multiple local RBF surrogates.
2 - Integrating Sensitivity Analysis in Computationally Expensive Black-box Optimization
Juliane Mueller, Lawrence Berkeley National Lab, 2453 Bonar Street, Berkeley, CA, 94702, United States
Computationally expensive simulations are used in many fields of science to study complex natural phenomena. These simulations often contain a large number of parameters that must be optimized efficiently because the simulation's computational expense limits the number of parameter combinations that can be tried. The effect of many of these parameters on the simulation's output may not be well understood. In this talk, we present a method that integrates sensitivity analysis (SA) into the iterative sampling procedure of an adaptive surrogate model optimization algorithm. We compare the approach with a method that does not use the SA information and with a two-stage approach.
3 - Combining Surrogate-based Global Search and Trust Region Local Search for Improved Global Optimization
Limeng Liu, PhD Student, National University of Singapore, 21 Lower Kent Ridge Rd, Singapore, 138600, Singapore, Christine A. Shoemaker
We design a new surrogate-based algorithm for black-box continuous function optimization. Since local minima of the global surrogate are promising regions for the global optimum, our algorithm intelligently selects among local minima of the surrogate as starting points for derivative-free trust-region-based local search (ORBIT); a toy version of this global-plus-local pattern is sketched after this session listing. Results show that our algorithm performs well, with high accuracy, especially on high-dimensional problems. There is a convergence proof for our algorithm. In addition to high accuracy, this framework has the advantage of easy parallelization.
4 - Spatial Branch-and-Bound Data-Driven Optimization Using Surrogate Approximations
Jianyuan Zhai, Georgia Institute of Technology, Atlanta, GA, United States
Abstract not available.
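Several talks in this session share the global-surrogate-plus-local-refinement pattern. The sketch below is a toy illustration only (assuming SciPy 1.7+ for RBFInterpolator): it fits an RBF surrogate to a few samples of a stand-in objective, takes the surrogate's best candidate point, and refines it with a local derivative-free search. DYCORS-MRS and ORBIT manage sampling and trust regions far more carefully and are not reproduced; all names below are illustrative.

```python
# Toy surrogate-based optimization: fit RBF surrogate, do a cheap global
# search on the surrogate, then refine the true objective locally.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def expensive_f(x):  # stand-in for a costly black-box objective
    return np.sum((x - 0.3) ** 2) + 0.1 * np.sin(10 * x).sum()

rng = np.random.default_rng(1)
dim, n_init = 4, 40
X = rng.uniform(-1, 1, size=(n_init, dim))       # initial space-filling design
y = np.array([expensive_f(x) for x in X])

surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline")

# Cheap global search on the surrogate: score many random candidates.
cand = rng.uniform(-1, 1, size=(5000, dim))
x0 = cand[np.argmin(surrogate(cand))]

# Local derivative-free refinement of the true function from the surrogate's best point.
res = minimize(expensive_f, x0, method="Nelder-Mead")
print("best point:", np.round(res.x, 3), "value:", round(res.fun, 4))
```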

