BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Centre for Mathematical Sciences - ECPv5.4.0.2//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Centre for Mathematical Sciences
X-ORIGINAL-URL:https://math-sciences.org
X-WR-CALDESC:Events for Centre for Mathematical Sciences
BEGIN:VTIMEZONE
TZID:UTC
BEGIN:STANDARD
TZOFFSETFROM:+0000
TZOFFSETTO:+0000
TZNAME:UTC
DTSTART:20170101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=UTC:20170531T140000
DTEND;TZID=UTC:20170531T150000
DTSTAMP:20230921T201533Z
CREATED:20170419T141616Z
LAST-MODIFIED:20170601T183320Z
UID:2459-1496239200-1496242800@math-sciences.org
SUMMARY:Peter Richtarik (Edinburgh)
DESCRIPTION:Joint Theoretical Physics and Applied Mathematics Seminar \nTitle: Stochastic reformulations of linear systems and efficient randomized algorithms \nAbstract: \nWe propose a new paradigm for solving linear systems with a very large number of equations. In our paradigm\, the system is first reformulated into a stochastic problem\, and then solved with a suitable (typically randomized) algorithm. Our stochastic reformulation is flexible as it depends on a user-defined parameter in the form of a distribution defining an ensemble of random matrices. The choice of the distribution directly influences the “condition number” of the reformulation\, which leads to the novel concept of “randomized reconditioning”. We give necessary and sufficient conditions for the reformulation to be exact\, i.e.\, for the solution set of the stochastic problem to be identical to the solution set of the linear system. We also show that the reformulation can be equivalently seen as a stochastic optimization problem\, a stochastically preconditioned linear system\, a stochastic fixed-point problem\, and a probabilistic intersection problem. For instance\, the condition number of the reformulation is equal to the condition number of the stochastically preconditioned linear system\, and to the condition number associated with the Hessian of the objective function appearing in the stochastic optimization reformulation. Further\, we propose and analyze basic\, parallel and accelerated stochastic algorithms for solving the reformulated problem\, with linear convergence rates. The methods have natural and sometimes surprising interpretations from the viewpoint of each of the four reformulations. For instance\, the methods can be interpreted as basic\, parallel and accelerated variants of stochastic gradient descent\, stochastic Newton descent\, the stochastic projection method and the stochastic fixed-point method.
The complexity of the basic variants scales linearly with the condition number of the reformulation\, while the accelerated variants scale with the square root of the condition number. Moreover\, all our methods lend themselves to a natural dual interpretation as “stochastic subspace ascent” methods\, a novel class of optimization algorithms not analyzed before. Stochastic dual coordinate ascent and stochastic dual Newton ascent arise in special cases. We prove global linear convergence of all our algorithms. Further\, we highlight a close connection to recent algorithmic developments in machine learning through casting the problem as an instance of the Empirical Risk Minimization problem in a new regime not studied before. \nThe above development can be extended to matrix inversion. In particular\, we develop and analyze a broad family of stochastic/randomized algorithms for inverting a matrix\, with specialized variants maintaining symmetry and/or positive definiteness of the iterates. All methods in the family converge globally and linearly\, with explicit rates. In special cases\, we obtain stochastic block variants of several quasi-Newton updates\, including bad Broyden (BB)\, good Broyden (GB)\, Powell-symmetric-Broyden (PSB)\, Davidon-Fletcher-Powell (DFP) and Broyden-Fletcher-Goldfarb-Shanno (BFGS). Ours are the first stochastic versions of these updates shown to converge to an inverse of a fixed matrix. Through a dual viewpoint we uncover a fundamental link between quasi-Newton updates and approximate inverse preconditioning. Further\, we develop an adaptive variant of randomized block BFGS\, where we modify the distribution underlying the stochasticity of the method throughout the iterative process to achieve faster convergence. Finally\, for rectangular and non-invertible matrices\, variants of our methods can be shown to converge to the Moore-Penrose pseudoinverse.
URL:https://math-sciences.org/event/peter-richtarik-edinburgh/
CATEGORIES:Applied Mathematics,Seminars,Theoretical Physics
END:VEVENT
END:VCALENDAR