INFORMS Annual Meeting Phoenix 2018


SC04

5 - A Model of Learning and Doing in Innovation Contests
Sanjiv Erat, University of California-San Diego, Rady School of Mgmt., Otterson Hall, 9500 Gilman Drive MC 0553, La Jolla, CA, 92093-0553, United States, Lakshminarayana Nittala
In the setting of an innovation contest, we conceptualize solvers who exert effort along two orthogonal dimensions: (i) effort geared toward “exploration,” that is, learning more about the solution space, and (ii) effort geared toward “exploitation,” that is, using the newly created knowledge to deliver a tangible and usable solution. Using this richer multidimensional conceptualization of “learning and doing” in innovation contests, this talk discusses the levers available to the contest-organizing firm to influence outcomes, and how the focal firm can optimally manage contests.
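As a purely illustrative sketch (hypothetical notation, not the authors' model), the two effort dimensions can be written as separate arguments of a solver's payoff in a winner-take-all contest:

\[
  q_i = f(e_i)\, g(d_i) + \varepsilon_i, \qquad
  \pi_i = A \cdot \Pr\!\big( q_i > \max_{j \ne i} q_j \big) - c_e(e_i) - c_d(d_i),
\]

where \(e_i\) is solver \(i\)'s exploration (“learning”) effort, \(d_i\) is the exploitation (“doing”) effort, \(q_i\) is the quality of the delivered solution, \(A\) is the prize, and \(c_e, c_d\) are increasing effort-cost functions. The levers discussed in the talk (for example, prize size, number of solvers, feedback) would enter such a model through \(A\), the set of competitors, and the information available when \(e_i\) and \(d_i\) are chosen.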

■ SC05 North Bldg 122B
Semidefinite Optimization II
Sponsored: Optimization/Linear and Conic Optimization
Sponsored Session
Chair: Nathan Krislock, Northern Illinois University, DeKalb, IL, 60115-2888, United States

1 - Low-Rank Matrix Completion (LRMC) using Nuclear Norm (NN) with Facial Reduction (FR); Applications to Robust Principal Component Analysis
Henry Wolkowicz, University of Waterloo, Combinatorics & Optimization, Waterloo, ON, N2L 3G1, Canada
Minimization of the NN is often used as a convex relaxation for solving LRMC problems; the relaxation can then be solved using semidefinite programming (SDP). The SDP and its dual are regular. FR has been successful in regularizing degenerate problems. Here we take advantage of the structure at optimality for the NN minimization and show that, even though strict feasibility holds, the FR framework can be successfully applied to obtain a proper face that contains the optimal set and even improves on the NN relaxation. We include numerical tests for both exact and noisy cases. (A standard formulation of this relaxation and its SDP reformulation is sketched after the SC06 listing below.)

2 - Finding the Relative Interior of Spectrahedra with Applications to Facial Reduction and Matrix Completion Problems
Stefan Sremac, University of Waterloo, Waterloo, ON, Canada
A spectrahedron is the intersection of an affine manifold with the cone of positive semidefinite matrices. In this talk we present an approach to finding a matrix in the relative interior of a spectrahedron. We construct a parametric curve terminating at a matrix in the relative interior of the spectrahedron and present sufficient conditions for this limit point to be the analytic centre. We implement a path-following algorithm and discuss numerical performance on test spectrahedra having various singularity degrees. Finally, we conclude by discussing implications of the approach for positive definite Toeplitz completion problems.

3 - BiqCrunch: Solving Binary Quadratic Problems Efficiently using Semidefinite Optimization
Nathan Krislock, Northern Illinois University, Dept. of Mathematical Sciences, DeKalb, IL, 60115-2888, United States, Jérôme Malick, Frédéric Roupin
BiqCrunch is a branch-and-bound solver that uses semidefinite optimization to compute high-quality bounds for binary quadratic problems, such as MaxCut, Max-k-Cluster, Maximum-Independent-Set, Exact Quadratic Knapsack, and the Quadratic Assignment Problem. BiqCrunch does not use an interior-point method for computing its bounds. Instead, an eigenvalue solver and a gradient-based method are used to compute tight bounds. We will discuss our bounding procedure and give an update on the new features and performance enhancements of the latest version of BiqCrunch.

4 - Solving the Semidefinite Programming (SDP) Relaxation in BiqCrunch using an Augmented Lagrangian Method
Ahmed Al-Jilawi, Northern Illinois University, DeKalb, IL, United States
We present our recent work on solving semidefinite relaxations of binary quadratic optimization problems in the BiqCrunch solver. The semidefinite relaxation gives a high-quality bound on the optimal value of the binary quadratic optimization problem. We use an augmented Lagrangian method instead of the penalty method in BiqCrunch, giving a bounding procedure that converges faster.

■ SC06 North Bldg 122C
Computational Optimization
Sponsored: Optimization/Computational Optimization and Software
Sponsored Session
Chair: Hande Benson, Drexel University, Philadelphia, PA, 19104, United States
Co-Chair: Robert J. Vanderbei, Princeton University, Princeton, NJ, 08544, United States

1 - Augmented Lagrangian - Fast Projected Gradient Method for Convex Quadratic Problems
Igor Griva, George Mason University, Fairfax, VA, United States
We consider an augmented Lagrangian - fast projected gradient method for convex quadratic problems with linear constraints and simple bounds, and discuss its convergence properties.
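The fourth SC05 talk and the first SC06 talk both combine an augmented Lagrangian outer loop with (projected) gradient inner iterations. The following is a minimal illustrative sketch, not the authors' implementations: it solves a convex quadratic problem with equality constraints and simple bounds using a plain projected-gradient inner solver (the AL-FPGM in the talk uses an accelerated, Nesterov-type projected gradient; the function name and parameters below are assumptions).

import numpy as np

def al_projected_gradient(Q, c, A, b, lo, hi, rho=10.0, outer=50, inner=200):
    """Illustrative sketch: minimize 0.5*x'Qx + c'x  s.t.  Ax = b, lo <= x <= hi,
    with Q symmetric positive semidefinite, via an augmented Lagrangian outer
    loop and plain (non-accelerated) projected-gradient inner iterations."""
    x = np.clip(np.zeros(Q.shape[0]), lo, hi)    # start inside the box
    lam = np.zeros(A.shape[0])                   # multipliers for Ax = b
    # Lipschitz constant of the x-gradient of the augmented Lagrangian
    L = np.linalg.norm(Q + rho * A.T @ A, 2)
    for _ in range(outer):
        for _ in range(inner):
            grad = Q @ x + c + A.T @ (lam + rho * (A @ x - b))
            x = np.clip(x - grad / L, lo, hi)    # projected gradient step onto the box
        lam = lam + rho * (A @ x - b)            # first-order multiplier update
    return x, lam

For the first SC05 talk, the nuclear-norm relaxation of low-rank matrix completion and its standard SDP reformulation (well known; shown here only for context) read

\[
  \min_{X}\ \|X\|_* \quad \text{s.t.} \quad X_{ij} = M_{ij}, \ (i,j) \in \Omega,
  \qquad
  \|X\|_* \;=\; \min_{W_1, W_2}\ \tfrac{1}{2}\big(\operatorname{tr} W_1 + \operatorname{tr} W_2\big)
  \ \ \text{s.t.}\ \
  \begin{pmatrix} W_1 & X \\ X^{\top} & W_2 \end{pmatrix} \succeq 0,
\]

where \(\Omega\) is the set of observed entries of the data matrix \(M\); this is the SDP referred to in that abstract.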

■ SC04 North Bldg 122A

Joint Session Integer Programming/OR Frontiers: Machine Learning and Discrete Optimization II
Sponsored: Optimization/Integer and Discrete Optimization
Sponsored Session
Chair: Thiago Serra, Carnegie Mellon University, Pittsburgh, PA, 15206, United States

1 - Mathematics of Neural Networks
Anirbit Mukherjee
In this talk I will give a brief overview of some of the mathematical questions about neural networks that Amitabh Basu and I (with other collaborators) have been exploring. We will start with the results in our ICLR 2018 paper on neurally representable functions (https://eccc.weizmann.ac.il/report/2017/098/). We will particularly focus on our recent work (1) proving lower bounds on the size of neural circuits representing certain Boolean functions (https://eccc.weizmann.ac.il/report/2017/190/) and (2) our results in the ISIT 2018 paper (also a NIPS 2017 workshop) formalizing the connection between autoencoders and sparse coding (https://arxiv.org/abs/1708.03735).

2 - Combinatorial Attacks Against Binarized Neural Networks
Elias B. Khalil, Georgia Tech, Atlanta, GA, 30318, United States, Amrita Gupta, Bistra Dilkina
Binarized Neural Networks (BNNs), with weights in {-1,1} and the sign function as the non-linearity, have recently attracted attention due to their small computational overhead. Concurrently, it has been shown that neural nets may be overly sensitive to tiny adversarial changes in the input, which may be detrimental to their use in safety-critical domains. The non-differentiable nature of BNNs poses a challenge to gradient-based robust training methods. We show how to attack a trained BNN, a task that is crucial to robust learning, by solving a mixed-integer program or with a structured decomposition approach. Our attacks are substantially more effective than standard gradient-based attacks on MNIST.

3 - A Temporal Architecture for Branch and Bound
Giulia Zarpellon, Polytechnique Montréal, Montréal, QC, Canada, Jason Jo, Andrea Lodi, Yoshua Bengio
The heuristic character of the Branch and Bound (B&B) framework for Integer Programming (IP) naturally provides an appealing ground for machine learning. At the same time, addressing one of the key heuristic decisions B&B relies on - namely, variable selection - poses many challenges for machine learning models. We propose a novel way to model the temporal complexity of B&B and the role of its diverse though interconnected components. We experiment on a baseline architecture and highlight its flexibility to accommodate tools and insights from both the IP and Deep Learning domains.

4 - Bounding and Counting Linear Regions of Deep Neural Networks
Thiago Serra, Carnegie Mellon University, Pittsburgh, PA, United States, Christian Tjandraatmadja, Srikumar Ramalingam
We study the number of linear regions that piecewise linear functions represented by neural networks can attain, both theoretically and empirically. We present (i) tighter bounds for the maximum number of linear regions of rectifier networks, which are exact for inputs of dimension one; (ii) a first upper bound for multi-layer maxout networks; and (iii) a first method for exact enumeration of the linear regions by modeling the DNN as a mixed-integer program. These bounds come from leveraging the dimension of the space defining each linear region, and they indicate that, for the same number of units, the dimension of the input determines whether shallow or deep networks have more linear regions.
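Talks 2 and 4 above both encode a trained network inside a mixed-integer program. For context, a standard big-M encoding of a single ReLU unit (well known in this literature; the pre-activation bounds below are assumed finite) is

\[
  a = w^{\top} x + b, \qquad L \le a \le U, \ \ L < 0 < U, \qquad
  y \ge a, \quad y \ge 0, \quad y \le a - L(1 - z), \quad y \le U z, \quad z \in \{0, 1\},
\]

so that \(z = 1\) forces \(y = a\) (the unit is active) and \(z = 0\) forces \(y = 0\) and \(a \le 0\). Stacking one such block per unit yields a MIP model of the whole network that can be used to enumerate linear regions or to search for adversarial inputs. (The BNN attack in talk 2 uses a different encoding adapted to sign activations; the sketch above covers only the generic ReLU case.)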

