Communication-efficient Algorithms for Distributed Optimization

Book Synopsis Communication-efficient Algorithms for Distributed Optimization by : João F. C. Mota

Download or read book Communication-efficient Algorithms for Distributed Optimization, written by João F. C. Mota. This book was released in 2013. Available in PDF, EPUB and Kindle.

Distributed Optimization: Advances in Theories, Methods, and Applications

Publisher : Springer Nature
ISBN 13 : 9811561095
Total Pages : 243 pages
Book Rating : 4.8/5 (115 download)

Book Synopsis Distributed Optimization: Advances in Theories, Methods, and Applications by : Huaqing Li

Download or read book Distributed Optimization: Advances in Theories, Methods, and Applications written by Huaqing Li and published by Springer Nature. This book was released on 2020-08-04 with a total of 243 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book offers a valuable reference guide for researchers in distributed optimization and for senior undergraduate and graduate students alike. Focusing on the nature and functions of agents, communication networks and algorithms in the context of distributed optimization for networked control systems, this book introduces readers to the background of distributed optimization; recent developments in distributed algorithms for various types of underlying communication networks; the implementation of computation-efficient and communication-efficient strategies in the execution of distributed algorithms; and the frameworks of convergence analysis and performance evaluation. On this basis, the book then thoroughly studies 1) distributed constrained optimization and the random sleep scheme, from an agent perspective; 2) asynchronous broadcast-based algorithms, event-triggered communication, quantized communication, unbalanced directed networks, and time-varying networks, from a communication network perspective; and 3) accelerated algorithms and stochastic gradient algorithms, from an algorithm perspective. Finally, the applications of distributed optimization to large-scale statistical learning, to wireless sensor networks, and to optimal energy management in smart grids are discussed.
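The quantized-communication strategies surveyed in this book can be illustrated with a generic QSGD-style stochastic quantizer. This is only a minimal NumPy sketch of the general idea, not code from the book; the number of levels and all names are illustrative assumptions.

```python
import numpy as np

def stochastic_quantize(v, levels=4, rng=None):
    """Unbiased stochastic quantization: the norm and signs are kept exactly,
    magnitudes are rounded randomly to a grid so E[quantized v] = v."""
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return np.zeros_like(v)
    scaled = np.abs(v) / norm * levels          # each entry now lies in [0, levels]
    lower = np.floor(scaled)
    round_up = rng.random(v.shape) < (scaled - lower)
    return np.sign(v) * (lower + round_up) * norm / levels

# An agent would transmit one float (the norm), d signs, and d small integers
# instead of d full-precision coordinates.
g = np.array([0.30, -1.20, 0.05, 0.80])
print(stochastic_quantize(g, levels=4))
```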

Communication-Efficient and Private Distributed Learning

Book Synopsis Communication-Efficient and Private Distributed Learning by : Antonious Mamdouh Girgis Bebawy

Download or read book Communication-Efficient and Private Distributed Learning written by Antonious Mamdouh Girgis Bebawy. This book was released in 2023. Available in PDF, EPUB and Kindle. Book excerpt: We are currently facing a rapid growth of data content originating from edge devices. These data resources offer significant potential for learning and extracting complex patterns in a range of distributed learning applications, such as healthcare, recommendation systems, and financial markets. However, collecting and processing such extensive datasets through centralized learning procedures poses serious challenges. As a result, there is a need for the development of distributed learning algorithms. This, in turn, raises two principal challenges within the realm of distributed learning. The first challenge is to provide privacy guarantees for clients' data, as it may contain sensitive information that can potentially be mishandled. The second challenge involves addressing communication constraints, particularly in cases where clients are connected to a coordinator through wireless/band-limited networks. In this thesis, our objective is to develop fundamental information-theoretic bounds and devise distributed learning algorithms that meet privacy and communication requirements while maintaining overall utility. We consider three different adversary models for differential privacy: (1) the central model, where a trusted server applies a private mechanism after collecting the raw data; (2) the local model, where each client randomizes her own data before making it public; and (3) the shuffled model, where a trusted shuffler randomly permutes the randomized data before publishing them. The contributions of this thesis can be summarized as follows: * We propose communication-efficient algorithms for estimating the mean of bounded $\ell_p$-norm vectors under privacy constraints in the local and shuffled models for $p\in[1,\infty]$. We also provide information-theoretic lower bounds showing that our algorithms have order-optimal privacy-communication-performance trade-offs. In addition, we present a generic algorithm for distributed mean estimation under user-level privacy constraints when each client has more than one data point. * We propose a distributed optimization algorithm to solve the empirical risk minimization (ERM) problem with communication and privacy guarantees and analyze its communication-privacy-convergence trade-offs. We extend our distributed algorithm to a client self-sampling scheme that fits federated learning frameworks, where each client independently decides to contribute at each round based on tossing a biased coin. We also propose a user-level private algorithm for personalized federated learning. * We characterize the Rényi differential privacy (RDP) of the shuffled model by proposing closed-form upper and lower bounds for general local randomized mechanisms. RDP is a useful privacy notion that enables a much tighter composition for interactive mechanisms. Furthermore, we characterize the RDP of the subsampled shuffled model, which combines privacy amplification via shuffling with amplification by subsampling. * We propose differentially private algorithms for the problem of stochastic linear bandits in the central, local, and shuffled models.
Our algorithms achieve almost the same regret as the optimal non-private algorithms in the central and shuffled models, which means we get privacy for free. * We study successive refinement of privacy by providing hierarchical access to the raw data with different privacy levels. We provide (order-wise) tight characterizations of privacy-utility-randomness trade-offs in several cases of discrete distribution estimation.
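To make the shuffled adversary model concrete, here is a minimal NumPy sketch for one-bit data: each client applies local randomized response and a trusted shuffler permutes the reports before release. The epsilon value, the debiasing step, and all names are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def randomized_response(bit, eps):
    """Local randomizer: keep the bit with prob. e^eps/(1+e^eps), flip otherwise."""
    p_keep = np.exp(eps) / (1.0 + np.exp(eps))
    return bit if rng.random() < p_keep else 1 - bit

def shuffled_reports(bits, eps):
    """Each client randomizes locally; a trusted shuffler then permutes the
    reports so the server cannot tell who sent what (illustrative only)."""
    reports = np.array([randomized_response(b, eps) for b in bits])
    return rng.permutation(reports)

def debias_mean(reports, eps):
    """Unbiased estimate of the true mean of the bits from the noisy reports."""
    p = np.exp(eps) / (1.0 + np.exp(eps))
    return (reports.mean() - (1 - p)) / (2 * p - 1)

bits = rng.integers(0, 2, size=10_000)
est = debias_mean(shuffled_reports(bits, eps=1.0), eps=1.0)
print(bits.mean(), est)   # the two means should be close
```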

Distributed Machine Learning with Communication Constraints

Book Synopsis Distributed Machine Learning with Communication Constraints by : Yuchen Zhang

Download or read book Distributed Machine Learning with Communication Constraints written by Yuchen Zhang. This book was released in 2016 with a total of 250 pages. Available in PDF, EPUB and Kindle. Book excerpt: Distributed machine learning bridges the traditional fields of distributed systems and machine learning, nurturing a rich family of research problems. Classical machine learning algorithms process the data in a single-threaded procedure, but as the scale of the dataset and the complexity of the models grow rapidly, it becomes prohibitively slow to process them on a single machine. The usage of distributed computing involves several fundamental trade-offs. On one hand, the computation time is reduced by allocating the data to multiple computing nodes. But since the algorithm is parallelized, there are compromises in terms of accuracy and communication cost. Such trade-offs put our interests at the intersection of multiple areas, including statistical theory, communication complexity theory, information theory and optimization theory. In this thesis, we explore theoretical foundations of distributed machine learning under communication constraints. We study the trade-off between communication and computation, as well as the trade-off between communication and learning accuracy. In particular settings, we are able to design algorithms that don't compromise on either side. We also establish fundamental limits that apply to all distributed algorithms. In more detail, this thesis makes the following contributions: * We propose communication-efficient algorithms for statistical optimization. These algorithms achieve the best possible statistical accuracy and incur the least possible computation overhead. * We extend the same algorithmic idea to non-parametric regression, proposing an algorithm which also guarantees the optimal statistical rate and superlinearly reduces the computation time. * In the general setting of regularized empirical risk minimization, we propose a distributed optimization algorithm whose communication cost is independent of the data size, and is only weakly dependent on the number of machines. * We establish lower bounds on the communication complexity of statistical estimation and linear algebraic operations. These lower bounds characterize the fundamental limits of any distributed algorithm. * We design and implement a general framework for parallelizing sequential algorithms. The framework consists of a programming interface and an execution engine. The programming interface allows machine learning experts to implement the algorithm without worrying about the details of the distributed system. The execution engine automatically parallelizes the algorithm in a communication-efficient manner.
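A rough illustration of the divide-and-conquer idea behind communication-efficient statistical optimization: each machine solves its own local problem and the results are averaged in a single communication round. This is a generic sketch on synthetic data; the thesis's actual estimators include refinements (e.g., bias corrections) not shown here, and all names are my own.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_solve(X, y, lam=1e-3):
    """Each machine solves its own ridge-regularized least-squares problem."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def one_shot_average(machines):
    """Single round of communication: average the local solutions."""
    return np.mean([local_solve(X, y) for X, y in machines], axis=0)

# Synthetic example: m machines, n samples each, shared true parameter w_star.
d, m, n = 5, 8, 200
w_star = rng.standard_normal(d)
machines = []
for _ in range(m):
    X = rng.standard_normal((n, d))
    y = X @ w_star + 0.1 * rng.standard_normal(n)
    machines.append((X, y))

print(np.linalg.norm(one_shot_average(machines) - w_star))  # small estimation error
```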

Communication-efficient and Fault-tolerant Algorithms for Distributed Machine Learning

Book Synopsis Communication-efficient and Fault-tolerant Algorithms for Distributed Machine Learning by : Farzin Haddadpour

Download or read book Communication-efficient and Fault-tolerant Algorithms for Distributed Machine Learning written by Farzin Haddadpour. This book was released in 2021. Available in PDF, EPUB and Kindle. Book excerpt: Distributed computing over multiple nodes has been emerging in practical systems. Compared to classical single-node computation, distributed computing offers higher computing speeds over large data. However, the computation delay of the overall distributed system is controlled by its slowest nodes, i.e., straggler nodes. Furthermore, if we want to run iterative algorithms such as gradient-descent-based algorithms, communication cost becomes a bottleneck. Therefore, it is important to design coded strategies that are robust to straggler nodes and, at the same time, communication-efficient. Recent work has developed coding-theoretic approaches that add redundancy to distributed matrix-vector multiplications with the goal of speeding up the computation by mitigating the straggler effect in distributed computing. First, we consider the case where the matrix comes from a small (e.g., binary) alphabet, where a variant of a popular method called the "Four-Russians method" is known to have significantly lower computational complexity than the usual matrix-vector multiplication algorithm. We develop novel code constructions that are applicable to binary matrix-vector multiplication via a variant of the Four-Russians method called the Mailman algorithm. Specifically, in our constructions, the encoded matrices have a low alphabet, which ensures lower computational complexity as well as good straggler tolerance. We also present a trade-off between the communication and computation cost of distributed coded matrix-vector multiplication for general, possibly non-binary, matrices. Second, we provide novel coded computation strategies, called MatDot, for distributed matrix-matrix products that outperform the recent "Polynomial code" constructions in recovery threshold, i.e., the required number of successful workers, at the cost of higher computation cost per worker and higher communication cost from each worker to the fusion node. We also demonstrate a novel coding technique for multiplying $n$ matrices ($n \geq 3$) using ideas from MatDot codes. Third, we introduce the idea of cross-iteration coded computing, an approach to reducing communication costs for a large class of distributed iterative algorithms involving linear operations, including gradient descent and accelerated gradient descent for quadratic loss functions. The state-of-the-art approach for these iterative algorithms performs one iteration of the algorithm per round of communication among the nodes. In contrast, our approach performs multiple iterations of the underlying algorithm in a single round of communication by incorporating some redundant storage and computation. Our algorithm works in the master-worker setting, with the workers storing carefully constructed linear transformations of the input matrices and using these matrices in an iterative algorithm, and with the master node inverting the effect of these linear transformations. In addition to reduced communication costs, a trivial generalization of our algorithm also provides resilience to stragglers and failures as well as Byzantine worker nodes. We also show a special case of our algorithm that trades off between communication and computation.
The degree of redundancy of our algorithm can be tuned based on the amount of communication and straggler resilience required. Moreover, we also describe a variant of our algorithm that can flexibly recover the results based on the degree of straggling in the worker nodes. The variant allows the performance to degrade gracefully as the number of successful (non-straggling) workers is lowered. Communication overhead is one of the key challenges that hinder the scalability of distributed optimization algorithms for training large neural networks. In recent years, there has been a great deal of research on alleviating communication cost by compressing the gradient vector or using local updates and periodic model averaging. The next direction in this thesis is to advocate the use of redundancy towards communication-efficient distributed stochastic algorithms for non-convex optimization. In particular, we show, both theoretically and practically, that by properly infusing redundancy into the training data together with model averaging, it is possible to significantly reduce the number of communication rounds. To be more precise, we show that redundancy reduces the residual error in local averaging, thereby reaching the same level of accuracy with fewer rounds of communication compared with previous algorithms. Empirical studies on the CIFAR10, CIFAR100 and ImageNet datasets in a distributed environment complement our theoretical results; they show that our algorithms have additional beneficial aspects, including tolerance to failures as well as greater gradient diversity. Next, we study local distributed SGD, where data is partitioned among computation nodes, and the computation nodes perform local updates, periodically exchanging the model among the workers for averaging. While local SGD is empirically shown to provide promising results, a theoretical understanding of its performance remains open. We strengthen the convergence analysis for local SGD and show that local SGD can be far less expensive and applied far more generally than current theory suggests. Specifically, we show that for loss functions that satisfy the Polyak-Łojasiewicz (PL) condition, $O((pT)^{1/3})$ rounds of communication suffice to achieve a linear speed-up, that is, an error of $O(1/pT)$, where $T$ is the total number of model updates at each worker. This is in contrast with previous work, which required a higher number of communication rounds and was limited to strongly convex loss functions for a similar asymptotic performance. We also develop an adaptive synchronization scheme that provides a general condition for linear speed-up, and we validate the theory with experimental results running on AWS EC2 clouds and an internal GPU cluster. In the final section, we focus on federated learning, where communication cost is often a critical bottleneck in scaling up distributed optimization algorithms to collaboratively learn a model from millions of devices with potentially unreliable or limited communication and heterogeneous data distributions. Two notable trends for dealing with the communication overhead of federated algorithms are gradient compression and local computation with periodic communication. Despite many attempts, characterizing the relationship between these two approaches has proven elusive. We address this by proposing a set of algorithms with periodic compressed (quantized or sparsified) communication and analyzing their convergence properties in both homogeneous and heterogeneous local data distribution settings.
For the homogeneous setting, our analysis improves existing bounds by providing tighter convergence rates for both strongly convex and non-convex objective functions. To mitigate data heterogeneity, we introduce a local gradient tracking scheme and obtain sharp convergence rates that match the best-known communication complexities without compression for convex, strongly convex, and nonconvex settings. We complement our theoretical results by demonstrating the effectiveness of our proposed methods on real-world datasets.
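As a concrete picture of local updates with periodic model averaging, here is a minimal local-SGD loop on a toy quadratic. The step size, the number of local steps, the noise model, and all names are illustrative assumptions, not the thesis's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(grad_fn, w0, workers, lr=0.1, local_steps=10, rounds=20):
    """Local SGD sketch: each worker runs `local_steps` SGD updates on its own
    data, then the models are averaged once per communication round."""
    models = [w0.copy() for _ in workers]
    for _ in range(rounds):
        for i, data in enumerate(workers):
            for _ in range(local_steps):
                models[i] -= lr * grad_fn(models[i], data, rng)
        avg = np.mean(models, axis=0)          # the only communication per round
        models = [avg.copy() for _ in workers]
    return models[0]

# Toy problem: worker i holds target c_i, local loss 0.5 * ||w - c_i||^2.
def grad_fn(w, c, rng):
    return (w - c) + 0.01 * rng.standard_normal(w.shape)

targets = [rng.standard_normal(3) for _ in range(4)]
w = local_sgd(grad_fn, np.zeros(3), targets)
print(w, np.mean(targets, axis=0))             # the two should be close
```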

Distributed Optimization in Networked Systems

Publisher : Springer Nature
ISBN 13 : 9811985596
Total Pages : 282 pages
Book Rating : 4.8/5 (119 download)

Book Synopsis Distributed Optimization in Networked Systems by : Qingguo Lü

Download or read book Distributed Optimization in Networked Systems written by Qingguo Lü and published by Springer Nature. This book was released on 2023-02-08 with a total of 282 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book focuses on improving the performance (convergence rate, communication efficiency, computational efficiency, etc.) of algorithms in the context of distributed optimization in networked systems, and on their successful use in real-world applications (smart grids and online learning). Readers may be particularly interested in the sections on consensus protocols, optimization techniques, accelerated mechanisms, event-triggered strategies, variance-reduction communication techniques, etc., in connection with distributed optimization in various networked systems. This book offers a valuable reference guide for researchers in distributed optimization and for senior undergraduate and graduate students alike.

Optimization Algorithms for Distributed Machine Learning

Publisher : Springer Nature
ISBN 13 : 303119067X
Total Pages : 137 pages
Book Rating : 4.0/5 (311 download)

Book Synopsis Optimization Algorithms for Distributed Machine Learning by : Gauri Joshi

Download or read book Optimization Algorithms for Distributed Machine Learning written by Gauri Joshi and published by Springer Nature. This book was released on 2022-11-25 with a total of 137 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book discusses state-of-the-art stochastic optimization algorithms for distributed machine learning and analyzes their convergence speed. The book first introduces stochastic gradient descent (SGD) and its distributed version, synchronous SGD, where the task of computing gradients is divided across several worker nodes. The author discusses several algorithms that improve the scalability and communication efficiency of synchronous SGD, such as asynchronous SGD, local-update SGD, quantized and sparsified SGD, and decentralized SGD. For each of these algorithms, the book analyzes its error-versus-iterations convergence and the runtime spent per iteration. The author shows that each of these strategies for reducing communication or synchronization delays encounters a fundamental trade-off between error and runtime.
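The sparsified-SGD variants discussed in the book can be sketched, for instance, with top-k gradient sparsification inside one synchronous step. This is a generic illustration; the helper names and the choice of top-k are assumptions, not the book's code.

```python
import numpy as np

def top_k(v, k):
    """Keep only the k largest-magnitude coordinates (sparsified communication)."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def synchronous_sparsified_step(w, worker_grads, lr=0.1, k=2):
    """One synchronous-SGD step where each worker ships only a top-k
    sparsified gradient to the server before averaging."""
    aggregated = np.mean([top_k(g, k) for g in worker_grads], axis=0)
    return w - lr * aggregated

w = np.zeros(6)
grads = [np.random.default_rng(i).standard_normal(6) for i in range(4)]
print(synchronous_sparsified_step(w, grads))
```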

Communication Efficient Distributed Optimization

Book Synopsis Communication Efficient Distributed Optimization by : Lin Jin

Download or read book Communication Efficient Distributed Optimization, written by Lin Jin. This book was released in 2015 with a total of 39 pages. Available in PDF, EPUB and Kindle.

Efficient Algorithms for Distributed Learning, Optimization and Belief Systems Over Networks

Book Synopsis Efficient Algorithms for Distributed Learning, Optimization and Belief Systems Over Networks by : Cesar Augusto Uribe Meneses

Download or read book Efficient Algorithms for Distributed Learning, Optimization and Belief Systems Over Networks, written by Cesar Augusto Uribe Meneses. This book was released in 2018. Available in PDF, EPUB and Kindle.

Communication-Efficient Algorithms for Statistical Optimization

Book Synopsis Communication-Efficient Algorithms for Statistical Optimization by : Yuchen Zhang

Download or read book Communication-Efficient Algorithms for Statistical Optimization, written by Yuchen Zhang. This book was released in 2013 with a total of 53 pages. Available in PDF, EPUB and Kindle.

Communication-efficient Algorithms for Tracking Distributed Data Streams

Book Synopsis Communication-efficient Algorithms for Tracking Distributed Data Streams by : Qin Zhang

Download or read book Communication-efficient Algorithms for Tracking Distributed Data Streams, written by Qin Zhang. This book was released in 2010 with a total of 62 pages. Available in PDF, EPUB and Kindle.

Distributed Optimization and Learning

Publisher : Elsevier
ISBN 13 : 0443216371
Total Pages : 288 pages
Book Rating : 4.4/5 (432 download)

Book Synopsis Distributed Optimization and Learning by : Zhongguo Li

Download or read book Distributed Optimization and Learning written by Zhongguo Li and published by Elsevier. This book was released on 2024-08-06 with a total of 288 pages. Available in PDF, EPUB and Kindle. Book excerpt: Distributed Optimization and Learning: A Control-Theoretic Perspective illustrates the underlying principles of distributed optimization and learning. The book presents a systematic and self-contained description of distributed optimization and learning algorithms from a control-theoretic perspective. It focuses on exploring control-theoretic approaches and how those approaches can be utilized to solve distributed optimization and learning problems over network-connected, multi-agent systems. As there are strong links between optimization and learning, this book provides a unified platform for understanding distributed optimization and learning algorithms for different purposes. The book provides a series of the latest results, including but not limited to distributed cooperative and competitive optimization, machine learning, and optimal resource allocation; presents the most recent advances in the theory and applications of distributed optimization and machine learning, including insightful connections to traditional control techniques; and offers numerical and simulation results in each chapter to reflect engineering practice and demonstrate the main focus of the developed analysis and synthesis approaches.
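As a small illustration of the consensus-plus-gradient structure that underlies many algorithms for network-connected multi-agent systems, here is one decentralized gradient descent step over a four-agent ring. The mixing matrix, local losses, and step size are invented for the example; this is not code from the book.

```python
import numpy as np

def decentralized_gradient_step(models, grads, W, lr=0.05):
    """One step of consensus-based gradient descent: each agent mixes its
    neighbors' models via the doubly stochastic matrix W, then takes a local
    gradient step."""
    mixed = W @ np.asarray(models)     # consensus / mixing over the network
    return mixed - lr * np.asarray(grads)

# Ring of 4 agents; agent i holds the quadratic loss 0.5 * ||x - c_i||^2.
W = np.array([[.50, .25, .00, .25],
              [.25, .50, .25, .00],
              [.00, .25, .50, .25],
              [.25, .00, .25, .50]])
c = np.array([[1.], [2.], [3.], [4.]])
x = np.zeros((4, 1))
for _ in range(200):
    x = decentralized_gradient_step(x, x - c, W)

# Each agent ends up near the network-wide average 2.5, up to the O(lr) bias
# that is typical of constant-stepsize decentralized gradient descent.
print(x.ravel())
```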

First-order and Stochastic Optimization Methods for Machine Learning

Publisher : Springer Nature
ISBN 13 : 3030395685
Total Pages : 591 pages
Book Rating : 4.0/5 (33 download)

Book Synopsis First-order and Stochastic Optimization Methods for Machine Learning by : Guanghui Lan

Download or read book First-order and Stochastic Optimization Methods for Machine Learning written by Guanghui Lan and published by Springer Nature. This book was released on 2020-05-15 with a total of 591 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book covers not only foundational material but also the most recent progress made during the past few years in the area of machine learning algorithms. In spite of the intensive research and development in this area, there does not exist a systematic treatment that introduces the fundamental concepts and recent progress in machine learning algorithms, especially those based on stochastic optimization methods, randomized algorithms, nonconvex optimization, distributed and online learning, and projection-free methods. This book will benefit a broad audience in the machine learning, artificial intelligence and mathematical programming communities by presenting these recent developments in a tutorial style, starting from the basic building blocks and proceeding to the most carefully designed and complicated algorithms for machine learning.

First-Order Algorithms for Communication Efficient Distributed Learning

ISBN 13 : 9789180401340
Book Rating : 4.4/5 (13 download)

Book Synopsis First-Order Algorithms for Communication Efficient Distributed Learning by : Sarit Khirirat

Download or read book First-Order Algorithms for Communication Efficient Distributed Learning, written by Sarit Khirirat. This book was released in 2022. Available in PDF, EPUB and Kindle.

Efficient Distributed Algorithms

Book Rating : 4.8/5 (417 download)

Book Synopsis Efficient Distributed Algorithms by : Yao Li

Download or read book Efficient Distributed Algorithms written by Yao Li. This book was released in 2022. Available in PDF, EPUB and Kindle. Book excerpt: Large-scale machine learning models are often trained by distributed algorithms over either centralized or decentralized networks. The former uses a central server to aggregate the information of local computing agents and broadcast the averaged parameters in a master-slave architecture. The latter considers a connected network formed by all agents; information can only be exchanged with accessible neighbors, with a mixing matrix of communication weights encoding the network's topology. Compared with centralized optimization, decentralization facilitates data privacy and reduces the communication burden that model synchronization places on the single central agent, but the connectivity of the communication network weakens the theoretical convergence complexity of decentralized algorithms. Therefore, there are still gaps between decentralized and centralized algorithms in terms of convergence conditions and rates. In the first part of this dissertation, we consider two decentralized algorithms, EXTRA and NIDS, which both converge linearly with strongly convex objective functions, and answer two questions regarding them: What are the optimal upper bounds for their stepsizes? Do decentralized algorithms require more properties of the objective functions for linear convergence than centralized ones? More specifically, we relax the required conditions for linear convergence of both algorithms. For EXTRA, we show that the stepsize is comparable to that of centralized algorithms. For NIDS, the upper bound of the stepsize is shown to be exactly the same as the centralized ones. In addition, we relax the requirements on the objective functions and the mixing matrices, and we provide linear convergence results for both algorithms under the weakest conditions. As the number of computing agents and the dimension of the model increase, the communication cost of parameter synchronization becomes the major obstacle to efficient learning. Communication compression techniques have exhibited great potential as an antidote to accelerate distributed machine learning by mitigating the communication bottleneck. In the rest of the dissertation, we propose compressed residual communication frameworks for both centralized and decentralized optimization and design different algorithms to achieve efficient communication. For centralized optimization, we propose DORE, a modified parallel stochastic gradient descent method with bidirectional residual compression, to reduce over 95% of the overall communication. Our theoretical analysis demonstrates that the proposed strategy has superior convergence properties for both strongly convex and nonconvex objective functions. Existing works mainly focus on smooth problems and on compressing DGD-type algorithms for decentralized optimization. The restriction to smooth objective functions and the sublinear convergence rate under relatively strong assumptions limit these algorithms' applicability and practical performance. Motivated by primal-dual algorithms, we propose Prox-LEAD, a linearly convergent decentralized algorithm with compression, to tackle strongly convex problems with a nonsmooth regularizer. Our theory describes the coupled dynamics of the inexact primal and dual updates as well as the compression error without assuming bounded gradients.
The superiority of the proposed algorithm is demonstrated through comparison with state-of-the-art algorithms in terms of convergence complexities and numerical experiments. Our algorithmic framework also sheds light more generally on compressed communication for other primal-dual algorithms by reducing the impact of inexact iterations.
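The residual-compression idea behind DORE can be caricatured, on the worker-to-server side, as a compress-with-error-feedback loop: compress the gradient plus the leftover residual, and carry the compression error forward. The top-1 compressor, the dictionary-based state, and all names are stand-ins chosen for brevity, not the algorithm's actual operators.

```python
import numpy as np

def compress(v, k=1):
    """Toy top-k compressor standing in for a generic compression operator."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def residual_compressed_round(grad, state, k=1):
    """One worker-to-server message with residual (error-feedback) compression:
    transmit compress(grad + residual) and keep the uncompressed remainder."""
    residual = state.get("residual", np.zeros_like(grad))
    message = compress(grad + residual, k)      # what actually gets transmitted
    state["residual"] = grad + residual - message
    return message, state

g = np.array([0.9, -0.1, 0.3])
state = {}
for _ in range(3):
    msg, state = residual_compressed_round(g, state)
    print(msg, state["residual"])   # the residual re-enters later messages
```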

Large-Scale and Distributed Optimization

Publisher : Springer
ISBN 13 : 3319974785
Total Pages : 412 pages
Book Rating : 4.3/5 (199 download)

Book Synopsis Large-Scale and Distributed Optimization by : Pontus Giselsson

Download or read book Large-Scale and Distributed Optimization written by Pontus Giselsson and published by Springer. This book was released on 2018-11-11 with a total of 412 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book presents tools and methods for large-scale and distributed optimization. Since many methods in "Big Data" fields rely on solving large-scale optimization problems, often in distributed fashion, this topic has emerged over the last decade as a very important one. As well as providing specific coverage of this active research field, the book serves as a powerful source of information for practitioners as well as theoreticians. Large-Scale and Distributed Optimization is a unique combination of contributions from leading experts in the field, who were speakers at the LCCC Focus Period on Large-Scale and Distributed Optimization, held in Lund, 14th–16th June 2017. A source of information and innovative ideas for current and future research, this book will appeal to researchers, academics, and students who are interested in large-scale optimization.

Provably Efficient Algorithms for Decentralized Optimization

Book Synopsis Provably Efficient Algorithms for Decentralized Optimization by : Changxin Liu

Download or read book Provably Efficient Algorithms for Decentralized Optimization written by Changxin Liu. This book was released in 2021. Available in PDF, EPUB and Kindle. Book excerpt: Decentralized multi-agent optimization has emerged as a powerful paradigm that finds broad applications in engineering design, including federated machine learning and control of networked systems. In these setups, a group of agents are connected via a network with general topology. Under the communication constraint, they aim to solve a global optimization problem that is characterized collectively by their individual interests. Of particular importance are the computation and communication efficiency of decentralized optimization algorithms. Due to the heterogeneity of local objective functions, fostering cooperation across the agents over a possibly time-varying network is challenging yet necessary to achieve fast convergence to the global optimum. Furthermore, real-world communication networks are subject to congestion and bandwidth limits. To alleviate this difficulty, it is highly desirable to design communication-efficient algorithms that proactively reduce the utilization of network resources. This dissertation tackles four concrete settings in decentralized optimization and develops four provably efficient algorithms for solving them, respectively. Chapter 1 presents an overview of decentralized optimization, where some preliminaries, problem settings, and the state-of-the-art algorithms are introduced. Chapter 2 introduces the notation and reviews some key concepts that are useful throughout this dissertation. In Chapter 3, we investigate non-smooth cost-coupled decentralized optimization and a special instance, namely the dual form of constraint-coupled decentralized optimization. We develop a decentralized subgradient method with double averaging that guarantees last-iterate convergence, which is crucial to solving decentralized dual Lagrangian problems with convergence rate guarantees. Chapter 4 studies composite cost-coupled decentralized optimization in stochastic networks, for which existing algorithms do not guarantee linear convergence. We propose a new decentralized dual averaging (DDA) algorithm to solve this problem. Under a rather mild condition on stochastic networks, we show that the proposed DDA attains an $\mathcal{O}(1/t)$ rate of convergence in the general case and a global linear rate of convergence if each local objective function is strongly convex. Chapter 5 tackles the smooth cost-coupled decentralized constrained optimization problem. We leverage the extrapolation technique and the average consensus protocol to develop an accelerated DDA algorithm. The rate of convergence is proved to be $\mathcal{O}\left( \frac{1}{t^2}+ \frac{1}{t(1-\beta)^2} \right)$, where $\beta$ denotes the second largest singular value of the mixing matrix. To proactively reduce the utilization of network resources, a communication-efficient decentralized primal-dual algorithm is developed based on an event-triggered broadcasting strategy in Chapter 6. In this algorithm, each agent locally determines whether to generate network transmissions by comparing a pre-defined threshold with the deviation between its current iterate and the one it last broadcast. Provided that the threshold sequence is summable over time, we prove an $\mathcal{O}(1/t)$ rate of convergence for convex composite objectives.
For strongly convex and smooth problems, linear convergence is guaranteed if the threshold sequence is diminishing geometrically. Finally, Chapter 7 provides some concluding remarks and research directions for future study.
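A minimal sketch of the event-triggering idea described for Chapter 6, with a summable threshold sequence: an agent broadcasts only when its iterate has drifted sufficiently far from the value it last sent. The iterate trajectory, constants, and names are invented for illustration and are not the thesis's actual primal-dual update.

```python
import numpy as np

def event_triggered_broadcast(x_new, x_last_sent, threshold):
    """Broadcast only when the local iterate has drifted from the last
    broadcast value by more than the current threshold; otherwise stay silent.
    Returns (message or None, updated reference, whether we transmitted)."""
    if np.linalg.norm(x_new - x_last_sent) > threshold:
        return x_new, x_new, True        # transmit and update the reference
    return None, x_last_sent, False      # skip this round's transmission

x_sent = np.zeros(2)
transmissions = 0
for t in range(1, 50):
    x_new = np.array([1.0, -1.0]) * (1 - 1.0 / t)   # some converging iterate
    msg, x_sent, sent = event_triggered_broadcast(x_new, x_sent, threshold=0.5 / t**2)
    transmissions += sent

print("broadcast in", transmissions, "of 49 rounds")  # far fewer than every round
```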