
INFORMS Philadelphia – 2015


4 - Social Structure Optimization in Nurse Scheduling Problem
Alireza Farasat, Graduate Research Assistant, University at Buffalo (SUNY), 327 Bell Hall, Department of Industrial and Systems Eng, Amherst, NY, 14260, United States of America, afarasat@buffalo.edu, Alexander Nikolaev

This paper presents a mathematical framework for treating the Nurse Scheduling Problem (NSP) that explicitly incorporates Social Structure (NSP-SS). While traditional approaches generate a configuration of individual schedules, the presented framework introduces models that assign nurses to working shifts to achieve an optimal structure of individual attributes and social relations within the teams. For an NP-Hard instance of NSP-SS, an integer program is presented, followed by an LK-NSP heuristic.
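As a rough illustration of the kind of swap-based local search an LK-style heuristic builds on, the toy sketch below assigns nurses to shifts and improves the total within-team social compatibility by pairwise swaps. The instance, compatibility matrix, and scoring are invented for illustration; this is not the authors' LK-NSP heuristic.

```python
import itertools

# Toy instance: 4 nurses, 2 shifts, 2 nurses per shift.
# compat[i][j] is a made-up pairwise compatibility score.
compat = [
    [0, 3, 1, 0],
    [3, 0, 0, 2],
    [1, 0, 0, 4],
    [0, 2, 4, 0],
]

def team_score(team):
    """Sum of pairwise compatibilities within one shift's team."""
    return sum(compat[i][j] for i, j in itertools.combinations(team, 2))

def schedule_score(shifts):
    return sum(team_score(t) for t in shifts)

def improve_by_swaps(shifts):
    """Greedy pairwise swaps between shifts until no swap helps."""
    improved = True
    while improved:
        improved = False
        for a in range(len(shifts)):
            for b in range(a + 1, len(shifts)):
                for i, ni in enumerate(shifts[a]):
                    for j, nj in enumerate(shifts[b]):
                        before = team_score(shifts[a]) + team_score(shifts[b])
                        shifts[a][i], shifts[b][j] = nj, ni
                        after = team_score(shifts[a]) + team_score(shifts[b])
                        if after > before:
                            improved = True
                            break          # re-evaluate teams after an accepted swap
                        shifts[a][i], shifts[b][j] = ni, nj   # revert
    return shifts

shifts = [[0, 2], [1, 3]]          # initial assignment
best = improve_by_swaps(shifts)
print(best, schedule_score(best))  # → [[3, 2], [1, 0]] 7
```

A real NSP-SS model would add coverage, workload, and legal constraints on top of the social-structure objective; here only the swap mechanics are shown.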

SA18
18-Franklin 8, Marriott
Recent Advances on Support Vector Machines Research
Cluster: Modeling and Methodologies in Big Data
Invited Session
Chair: Shouyi Wang, Assistant Professor, University of Texas at Arlington, 3105 Birch Ave, Grapevine, TX, 76051, United States of America, shouyiw@uta.edu

1 - Fast Scalable Support Vector Machines for Big Biomedical Data Analytics
Talayeh Razzaghi, Postdoctoral Research Fellow, Clemson University, 221 McAdams Hall, Clemson University, Clemson, United States of America, trazzag@clemson.edu, Ilya Safro, Mark Wess

Solving the optimization model of support vector machines is often an expensive computational task for very large biomedical training sets. We propose an efficient, effective, multilevel algorithmic framework that scales to very large data sets. Our multilevel framework substantially improves the computational time without losing the quality of classifiers for balanced and imbalanced datasets.
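The multilevel idea — solve cheaply on a coarsened version of the data, then refine the solution on the full data — can be sketched on a toy 1-D classifier. The coarsening (centroid aggregation) and the "solver" (a midpoint threshold rule plus local refinement) below are deliberately crude stand-ins, not the authors' framework:

```python
import random

random.seed(0)

# Toy 1-D, two-class data: class -1 centered at 0, class +1 at 4.
neg = [random.gauss(0, 0.8) for _ in range(400)]
pos = [random.gauss(4, 0.8) for _ in range(400)]

def coarsen(xs, factor=8):
    """Aggregate sorted points into centroids, shrinking the set by
    `factor` (a crude stand-in for multilevel coarsening)."""
    xs = sorted(xs)
    return [sum(xs[i:i + factor]) / len(xs[i:i + factor])
            for i in range(0, len(xs), factor)]

def train_threshold(neg, pos):
    """'Solve' the classifier on (possibly coarse) data: midpoint rule."""
    return (sum(neg) / len(neg) + sum(pos) / len(pos)) / 2

def errors(t, neg, pos):
    return sum(x >= t for x in neg) + sum(x < t for x in pos)

# Coarse level: solve cheaply on the aggregated data ...
t = train_threshold(coarsen(neg), coarsen(pos))
# Fine level: ... then refine locally on the full data.
t = min((t + d * 0.05 for d in range(-20, 21)),
        key=lambda c: errors(c, neg, pos))
print(round(t, 2), errors(t, neg, pos))
```

The point of the hierarchy is that the expensive optimization touches the full data only during local refinement, which is what makes the approach scale.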

2 - Value-at-Risk Support Vector Machine (VaR-SVM): MIP Representation and Equivalence of Formulations
Victoria Zdanovskaya, Research and Teaching Assistant, Industrial and Systems Engineering Department, University of Florida, 303 Weil Hall, Gainesville, FL, 32611, United States of America, ladyvi@ufl.edu, Konstantin Pavlikov

The SVM is a widely used data classification technique. The class of VaR-SVMs is known to be robust to outliers in the training dataset. Unfortunately, VaR-SVM is a nonconvex optimization problem. We consider MIP representations of VaR-SVM that can be solved by a standard branch-and-bound algorithm. We also consider different techniques that help to dramatically improve the computational performance of such formulations.
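For illustration, one standard big-M way to encode a VaR-type constraint in a MIP (the notation is ours; the talk's exact formulation may differ) is to minimize a threshold ζ that bounds the hinge loss of all but an α-fraction of the samples, with binaries z_i marking the exempted samples:

```latex
\min_{w,\,b,\,\zeta,\,z}\quad \tfrac{1}{2}\|w\|^2 + C\,\zeta
\quad\text{s.t.}\quad
1 - y_i\,(w^\top x_i + b) \le \zeta + M z_i,\quad i = 1,\dots,n,
\qquad \sum_{i=1}^{n} z_i \le \lfloor \alpha n \rfloor,
\qquad z_i \in \{0,1\}.
```

Because up to ⌊αn⌋ of the largest losses may be switched off via z_i, the optimal ζ is the (1−α)-quantile (VaR) of the loss distribution, which is what makes the classifier insensitive to a bounded fraction of outliers.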

3 - A Comparison of Constraint Relaxation and Bagging Policies in Support Vector Classification
Petros Xanthopoulos, University of Central Florida, 12800 Pegasus Dr., Orlando, FL, 32816, United States of America, petrosx@ucf.edu, Onur Seref, Talayeh Razzaghi

In classification, when data are available in uneven proportions, the problem becomes imbalanced and the performance of standard methods deteriorates. Imbalanced classification becomes even more challenging in the presence of outliers. In this presentation, we study several algorithmic modifications of support vector machines for such problems. We show that the combined use of cost-sensitive learning with constraint relaxation performs better compared to approaches that involve bagging.
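The cost-sensitive ingredient can be sketched with a class-weighted hinge loss: errors on the minority class are charged more, which pulls the decision boundary toward the majority class. The toy 1-D data, weights, and plain subgradient solver below are illustrative only, not the methods compared in the talk:

```python
import random

random.seed(1)

# Imbalanced toy data in 1-D: 180 majority (-1), 20 minority (+1).
data = [(random.gauss(0, 1.0), -1) for _ in range(180)] + \
       [(random.gauss(3, 1.0), +1) for _ in range(20)]

def train(data, w_pos, w_neg, lr=0.01, lam=0.01, epochs=200):
    """Subgradient descent on a class-weighted hinge loss; unequal
    class weights are what makes the learning cost-sensitive."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            c = w_pos if y > 0 else w_neg
            if y * (w * x + b) < 1:              # margin violated
                w += lr * (c * y * x - lam * w)
                b += lr * c * y
            else:
                w -= lr * lam * w                # regularization only
    return w, b

def recall_pos(w, b, data):
    pos = [(x, y) for x, y in data if y > 0]
    return sum((w * x + b) > 0 for x, _ in pos) / len(pos)

plain = train(data, 1.0, 1.0)
weighted = train(data, 9.0, 1.0)   # weight roughly inverse to class frequency
print(recall_pos(*plain, data), recall_pos(*weighted, data))
```

Weighting the minority class by (roughly) the inverse class frequency is a common default; the talk's point is how such weighting interacts with constraint relaxation versus bagging.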

4 - Semi-supervised Proximal Support Vector Machine with Sparse Representation Regularization
Jiaxing Pi, University of Florida, 3800 SW 34th St. Apt. P138, Gainesville, FL, 32608, United States of America, jiaxing@ufl.edu, Panos Pardalos

The Proximal Support Vector Machine (PSVM) is an efficient technique for generating classifiers. Sparse representation can detect the neighborhood of a signal by reconstructing it from the linear span of other data points. We apply sparse representation to build a regularization term that enforces semi-supervised assumptions on unlabeled data. Experiments on standard datasets are performed to compare the proposed framework with PSVM with manifold regularization.
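Schematically (in our notation, which need not match the authors'), such a regularizer penalizes a classifier f whose outputs disagree with the sparse reconstruction of each point from the other data:

```latex
\min_{w,\,b}\quad
\underbrace{\|w\|^2 + C \sum_{i \in L} \ell\big(y_i,\, f(x_i)\big)}_{\text{PSVM objective on labeled data } L}
\;+\;
\lambda \sum_{i} \Big( f(x_i) - \sum_{j \ne i} s_{ij}\, f(x_j) \Big)^{2},
```

where each sparse coefficient vector (s_{ij}) is computed beforehand by sparse coding, so that x_i ≈ Σ_j s_{ij} x_j over the remaining labeled and unlabeled points. The second term plays the role that the graph Laplacian plays in manifold regularization, which is why the two are natural baselines for each other.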

5 - Extending Relaxed Support Vector Machines
Orestis Panagopoulos, University of Central Florida, 12800 Pegasus Dr., Orlando, FL, 32816, United States of America, opanagopoulos@knights.ucf.edu, Onur Seref, Talayeh Razzaghi, Petros Xanthopoulos

In this work, we propose Relaxed Support Vector Regression (RSVR) and One-Class Relaxed Support Vector Machines (ORSVM). The methods constitute extensions of Relaxed Support Vector Machines (RSVM). They are formulated using both linear and quadratic loss functions and are solved with sequential minimal optimization. Numerical experiments on public datasets and computational comparisons with other popular classifiers depict the behavior of our proposed methods.

SA19
19-Franklin 9, Marriott
High-performance Computation for Optimization
Sponsor: Computing Society
Sponsored Session
Chair: Suresh Bolusani, Lehigh University, 524 Montclair Avenue, Bethlehem, United States of America, sub214@lehigh.edu

1 - Distributed Integer Programming
Ezgi Karabulut, Georgia Institute of Technology, 755 Ferst Drive, NW, Atlanta, GA, 30332-0205, United States of America, ezgi.karabulut@gatech.edu, George L. Nemhauser, Shabbir Ahmed

We want to find distributed solution algorithms for integer programming problems that allow only minimal interaction between the solvers.

2 - Scalable Communication in Parallel Optimization
Oleg Shylo, University of Tennessee, 851 Neyland Drive, 523 John Tickle Building, Knoxville, TN, United States of America, oshylo@utk.edu

We establish theoretical models of algorithm portfolios to optimize communication patterns in algorithms. The models closely match the empirical behavior of communicative algorithm portfolios and predict computational performance for new and untested configurations.

3 - Solving Bilevel Linear Optimization Problems in Parallel
Suresh Bolusani, Lehigh University, 524 Montclair Avenue, Bethlehem, PA, United States of America, sub214@lehigh.edu, Ted Ralphs

Many real-world applications involve multiple, independent decision makers with multiple, possibly conflicting objectives. Bilevel linear optimization provides a framework for modeling such problems. With the growing number of applications, faster solution algorithms for bilevel optimization problems are needed. In this work, we present a parallel approach to solving bilevel optimization problems. Computational results will be presented.
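For reference, a generic bilevel linear optimization problem has the following shape (our notation): the leader chooses x anticipating that the follower responds with a y that is optimal for the follower's own problem:

```latex
\min_{x,\,y}\quad c_1^\top x + d_1^\top y
\quad\text{s.t.}\quad A_1 x \le b_1,
\qquad
y \in \arg\min_{y'} \big\{\, d_2^\top y' \;:\; A_2 x + G_2\, y' \le b_2 \,\big\}.
```

The nested arg-min is what makes even the linear case hard (it is NP-hard in general) and what motivates parallel branch-and-bound style approaches.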

SA20
20-Franklin 10, Marriott
Big Data in the Clouds
Cluster: Cloud Computing
Invited Session
Chair: Lydia Chen, IBM Zurich, yic@zurich.ibm.com

1 - Declarative Cloud Performance Analytics
Boon Thau Loo, Associate Professor, University of Pennsylvania, Philadelphia, PA, 19104, United States of America, boonloo@cis.upenn.edu

This talk presents Scalanytics, a declarative platform that supports high-performance cloud application performance monitoring. Scalanytics uses stateful network packet processing techniques for extracting application-layer data from network packets, a declarative rule-based language for compactly specifying analysis pipelines, and a parallel architecture for processing network packets at high throughput. I will next describe the commercialization of Scalanytics as Gencore (gencore.io).
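The declarative-pipeline idea — specify the analysis as a compact list of named stages and let a generic engine apply it to packet records in parallel — can be sketched as follows. The stage names, record fields, and engine are invented for illustration; this is not Scalanytics' actual rule language or API:

```python
from concurrent.futures import ThreadPoolExecutor

# The analysis is declared as data: an ordered list of named stages.
# A stage returning None drops the record from the pipeline.
PIPELINE = [
    ("parse",   lambda r: {**r, "app": r["payload"].split("/")[0]}),
    ("filter",  lambda r: r if r["app"] == "http" else None),
    ("project", lambda r: {"src": r["src"], "app": r["app"]}),
]

def run(record):
    """Generic engine: thread one record through every declared stage."""
    for _, stage in PIPELINE:
        record = stage(record)
        if record is None:
            return None
    return record

packets = [
    {"src": "10.0.0.1", "payload": "http/GET /index"},
    {"src": "10.0.0.2", "payload": "dns/query"},
    {"src": "10.0.0.3", "payload": "http/POST /api"},
]

# Threads stand in for the parallel packet-processing architecture.
with ThreadPoolExecutor() as pool:
    results = [r for r in pool.map(run, packets) if r is not None]
print(results)
```

Keeping the pipeline as a declarative specification, separate from the engine that executes it, is what lets such a platform compile the same analysis onto a parallel architecture for throughput.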