Joint CIAM and Optimization and Systems Theory Seminar
Friday, November 17, 2006, 11.00-12.00, Room 3721, Lindstedtsvägen 25


Stephen P. Boyd
Stanford University, USA
E-mail: boyd@stanford.edu

Convex Optimization of Graph Laplacian Eigenvalues

Joint work with Persi Diaconis, Arpita Ghosh, Seung-jean Kim, Sanjay Lall, Pablo Parrilo, Amin Saberi, Jun Sun, Lin Xiao

We consider the problem of choosing the edge weights of an undirected graph so as to maximize or minimize some function of the eigenvalues of the associated Laplacian matrix, subject to some constraints on the weights, such as nonnegativity, or a given total value. In many interesting cases this problem is convex, i.e., it involves minimizing a convex function (or maximizing a concave function) over a convex set. This allows us to give simple necessary and sufficient optimality conditions, derive interesting dual problems, find analytical solutions in some cases, and efficiently compute numerical solutions in all cases.
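As a concrete illustration (the notation below is ours, not taken from the abstract), a generic instance of the problem can be written in terms of edge weights w on a graph with n nodes and m edges, where a_l denotes the incidence vector of edge l:

\[
\begin{array}{ll}
\mbox{minimize} & \phi\bigl(\lambda_1(L(w)),\ldots,\lambda_n(L(w))\bigr)\\
\mbox{subject to} & w \geq 0, \qquad \mathbf{1}^T w = 1,
\end{array}
\qquad
L(w) = \sum_{l=1}^{m} w_l a_l a_l^T .
\]

Since L(w) is affine in w, the objective is convex in w whenever \phi is a convex function of the eigenvalues that is symmetric under permutation of its arguments; this is the source of the convexity exploited above.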

In this overview we briefly describe several specific cases of this general problem that have been addressed in a series of recent papers:
°   Fastest mixing Markov chain. Find edge transition probabilities that give the fastest mixing (symmetric, discrete-time) Markov chain on the graph.
°   Fastest mixing Markov process. Find the edge transition rates that give the fastest mixing (symmetric, continuous-time) Markov process on the graph.
°   Absolute algebraic connectivity. Find edge weights that maximize the algebraic connectivity of the graph (i.e., the smallest positive eigenvalue of its Laplacian matrix). The optimal value is called the absolute algebraic connectivity by Fiedler. (A small numerical sketch of this case follows the list.)
°   Minimum total effective resistance. Find edge weights that minimize the total effective resistance of the graph. This is the same as minimizing the average commute time from any node to any other in the associated Markov chain.
°   Fastest linear averaging. Find weights in a distributed averaging network that yield fastest convergence.
°   Least steady-state mean-square deviation. Find weights in a distributed averaging network, driven by random noise, that minimize the steady-state mean-square deviation of the node values.
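As a rough illustration of how such problems can be solved numerically, the following sketch (not part of the talk; the toy path graph, variable names, and CVXPY modeling choices are our own assumptions) sets up the absolute algebraic connectivity problem as a semidefinite program. It uses the standard fact that, since L(w)1 = 0, the condition lambda_2(L(w)) >= s is equivalent to L(w) - s(I - (1/n)11^T) being positive semidefinite.

import cvxpy as cp
import numpy as np

# Toy example graph (an assumption for illustration): a path on 4 nodes.
edges = [(0, 1), (1, 2), (2, 3)]
n, m = 4, len(edges)

# Incidence vectors a_l, so that L(w) = sum_l w_l a_l a_l^T.
A = np.zeros((n, m))
for l, (i, j) in enumerate(edges):
    A[i, l], A[j, l] = 1.0, -1.0

w = cp.Variable(m, nonneg=True)     # edge weights
s = cp.Variable()                   # lower bound on lambda_2(L(w))
L = A @ cp.diag(w) @ A.T            # weighted Laplacian, affine in w

# Since L(w)1 = 0, lambda_2(L(w)) >= s  iff  L(w) - s*(I - (1/n)11^T) is PSD.
J = np.eye(n) - np.ones((n, n)) / n
constraints = [L - s * J >> 0, cp.sum(w) == 1]

prob = cp.Problem(cp.Maximize(s), constraints)
prob.solve()
print("absolute algebraic connectivity:", s.value)
print("optimal edge weights:", w.value)

The other problems in the list lead to similar convex programs once the relevant eigenvalue or resistance function is expressed in a modeling framework's atoms.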

