Applied Math And Analysis Seminar
Friday, February 19, 2016, 12:00pm, Physics 130
Mert Gurbuzbalaban (MIT)
Analyzing Complex Systems and Networks: Incremental Optimization and Robustness
Abstract:
Many emerging technologies and systems, including infrastructure systems (communication, transportation, and energy) and decision networks (sensor and robotic networks), require rapid processing of large data sets and involve dynamic interactions that demand robustness to small errors, disturbances, and outliers.

Motivated by large-scale data processing in such systems, we first consider additive-cost convex optimization problems (where each component function of the sum represents the loss associated with a data block) and propose and analyze novel incremental gradient algorithms that process the component functions sequentially, one at a time, thus avoiding the costly computation of a full gradient step. We focus on two randomized incremental methods, Stochastic Gradient Descent (SGD) and Random Reshuffling (RR), which have been among the most widely used optimization methods in machine learning practice since the fifties. The only difference between the two is that RR samples the component functions without replacement, whereas SGD samples with replacement. Ample empirical evidence suggests that RR is faster than SGD, yet for a long time no successful attempt was made to explain and quantify this discrepancy. We provide the first theoretical convergence rate result for RR, O(1/k^{2s}) for any s in (1/2,1) (and O(1/k^2) for a novel bias-removed variant) with probability one, demonstrating its improvement over the Ω(1/k) rate of SGD and highlighting the mechanism behind this improvement. Our result relies on a detailed analysis of deterministic incremental methods and a careful study of random gradient errors. We then consider deterministic incremental gradient methods with memory and show that they can achieve a much-improved linear rate using a delayed dynamical system analysis.
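The with- versus without-replacement distinction between SGD and RR can be sketched on a toy problem. The code below is an illustrative sketch, not the speaker's implementation: it minimizes a sum of one-dimensional quadratic components f_i(x) = (x - a_i)^2 / 2 (minimizer: the mean of the a_i) with diminishing stepsize 1/k; all function names and the stepsize choice are illustrative assumptions.

```python
import random

# Toy additive-cost problem: minimize sum_i f_i(x) with f_i(x) = (x - a_i)^2 / 2.
# The gradient of component i is (x - a_i); the overall minimizer is mean(a).

def sgd(a, x0=0.0, epochs=50, seed=0):
    """SGD: sample one component per step WITH replacement."""
    rng = random.Random(seed)
    n, x, step = len(a), x0, 0
    for _ in range(epochs):
        for _ in range(n):
            step += 1
            i = rng.randrange(n)             # with-replacement draw
            x -= (1.0 / step) * (x - a[i])   # incremental gradient step
    return x

def random_reshuffling(a, x0=0.0, epochs=50, seed=0):
    """RR: reshuffle once per epoch, then sweep WITHOUT replacement."""
    rng = random.Random(seed)
    n, x, step = len(a), x0, 0
    order = list(range(n))
    for _ in range(epochs):
        rng.shuffle(order)                   # fresh permutation each epoch
        for i in order:                      # each component visited exactly once
            step += 1
            x -= (1.0 / step) * (x - a[i])
    return x

data = [1.0, 2.0, 3.0, 4.0]
mean = sum(data) / len(data)
print(abs(sgd(data) - mean), abs(random_reshuffling(data) - mean))
```

With stepsize 1/k the iterate is exactly the running average of the sampled components, so after whole epochs RR recovers the true mean while SGD carries sampling noise from repeated and missed components, a small-scale illustration of the gap the talk quantifies.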

In the second part, we focus on large-scale continuous-time and discrete-time linear dynamical systems that model various interactions over complex networks and systems. A number of metrics can be used to quantify the robustness of such dynamical systems with respect to input disturbances, noise, or errors; key among them are the H-infinity norm and the stability radius of the transfer matrix associated with the system. Algorithms to compute these metrics exist, but they are impractical for large-scale complex networks and systems, where the dimension is large, and they do not exploit the sparsity patterns of the network structure. We develop and analyze the convergence of a novel scalable algorithm that approximates both metrics for large-scale sparse networks. We then illustrate the performance of our method on numerical examples and discuss applications to designing optimal control policies for dynamics over complex networks and systems.
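For a stable continuous-time system dx/dt = Ax + Bu, y = Cx + Du, the H-infinity norm is the peak over frequency of the largest singular value of the transfer matrix G(jw) = C(jwI - A)^{-1}B + D. The sketch below approximates it by a dense frequency-grid search; this is a baseline assumption for illustration, not the scalable sparse algorithm of the talk, and the function name and grid are invented for the example.

```python
import numpy as np

def hinf_norm_grid(A, B, C, D, w_grid):
    """Approximate the H-infinity norm by maximizing
    sigma_max(C (jwI - A)^{-1} B + D) over a frequency grid."""
    n = A.shape[0]
    I = np.eye(n)
    best = 0.0
    for w in w_grid:
        # Transfer matrix at frequency w: solve (jwI - A) X = B, then G = C X + D.
        G = C @ np.linalg.solve(1j * w * I - A, B) + D
        best = max(best, np.linalg.svd(G, compute_uv=False)[0])
    return best

# Example: G(s) = 1/(s + 1); its H-infinity norm is 1, attained at w = 0.
A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]);  D = np.array([[0.0]])
w = np.linspace(0.0, 10.0, 2001)
print(hinf_norm_grid(A, B, C, D, w))  # ≈ 1.0
```

The grid search requires a dense linear solve per frequency, which is exactly what becomes prohibitive when the dimension is large and sparse structure goes unexploited, motivating the scalable method described in the abstract.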
