Now a part of SIAM's Classics series, these volumes contain a large number of concrete, interesting examples of boundary value problems for partial differential equations that cover a variety of applications that are still relevant today. For example, there is substantial treatment of the Helmholtz equation and scattering theory, subjects that play a central role in contemporary inverse problems in acoustics and electromagnetic theory.
Suitable for advanced undergraduates and graduate students in mathematics, this introductory treatment is largely self-contained. Topics include Fourier series, sufficient conditions, the Laplace transform, results of Doetsch and Kober-Erdelyi, Gaussian sums, and Euler's formulas and functional equations. Additional subjects include partial fractions, mock theta functions, Hermite's method, convergence proof, elementary functional relations, multidimensional Poisson summation formula, the modular transformation, and many other areas.
This book discusses gradient mappings and minimization, contractions and the continuation property, and the degree of a mapping. It also elaborates on general iterative and minimization methods, rates of convergence, and one-step stationary and multistep methods. The text likewise covers contractions and nonlinear majorants, convergence under partial ordering, and convergence of minimization methods.
This publication is a good reference for specialists and readers with an extensive functional analysis background.
This book comprises six chapters and begins with an overview of a few simple facts about feedback systems, together with simple examples of nonlinear systems that illustrate the important distinctions among the questions of existence, uniqueness, continuous dependence, and boundedness with respect to bounded input and output. The next chapter describes a number of useful properties of norms, induced norms, and normed spaces. Several theorems are then presented, along with the main results concerning linear systems. These results are used to illustrate applications of the small gain theorem to different classes of systems. The final chapter outlines the framework necessary to discuss passivity and demonstrates applications of the passivity theorem.
This monograph will be a useful resource for mathematically inclined engineers interested in feedback systems, as well as undergraduate engineering students.
An introductory chapter covers the Lanczos algorithm, orthogonal polynomials, and determinantal identities. Succeeding chapters examine norms, bounds, and convergence; localization theorems and other inequalities; and methods of solving systems of linear equations. The final chapters illustrate the mathematical principles underlying linear equations and their interrelationships. Topics include methods of successive approximation, direct methods of inversion, normalization and reduction of the matrix, and proper values and vectors. Each chapter concludes with a helpful set of references and problems.
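As a rough illustration of the "methods of successive approximation" the book treats, here is a minimal Jacobi iteration sketch in Python; the function name, tolerance, and test system are illustrative assumptions, not taken from the text:

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=500):
    """Solve Ax = b by Jacobi iteration (A should be diagonally dominant)."""
    D = np.diag(A)            # diagonal entries of A
    R = A - np.diagflat(D)    # off-diagonal part of A
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D   # update all components simultaneously
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

# Diagonally dominant test system, so the iteration converges
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
x = jacobi(A, b)
```

Each sweep applies the same fixed-point map, so the error contracts geometrically whenever the spectral radius of the iteration matrix is below one.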
On the one hand, it is intended to be a working textbook for advanced courses in Numerical Analysis, as typically taught in graduate courses in American and French universities. For example, it is the author’s experience that a one-semester course (on a three-hour-per-week basis) can be taught from Chapters 1, 2, and 3 (with the exception of Section 3.3), while another one-semester course can be taught from Chapters 4 and 6.
On the other hand, it is hoped that this book will prove to be useful for researchers interested in advanced aspects of the numerical analysis of the finite element method. In this respect, Section 3.3, Chapters 5, 7 and 8, and the sections on “Additional Bibliography and Comments” should provide many suggestions for conducting seminars.
Matrix computations lie at the heart of most scientific computational tasks. For any scientist or engineer doing large-scale simulations, an understanding of the topic is essential. Fundamentals of Matrix Computations, Second Edition explains matrix computations and the accompanying theory clearly and in detail, along with useful insights.
This Second Edition of a popular text has now been revised and improved to appeal to the needs of practicing scientists and graduate and advanced undergraduate students. New to this edition is the use of MATLAB for many of the exercises and examples, although the Fortran exercises in the First Edition have been kept for those who want to use them. This new edition includes:
* Numerous examples and exercises on applications including electrical circuits, elasticity (mass-spring systems), and simple partial differential equations
* Early introduction of the singular value decomposition
* A new chapter on iterative methods, including the powerful preconditioned conjugate-gradient method for solving symmetric, positive definite systems
* An introduction to new methods for solving large, sparse eigenvalue problems, including the popular implicitly restarted Arnoldi and Jacobi-Davidson methods
With in-depth discussions of such other topics as modern componentwise error analysis, reorthogonalization, and rank-one updates of the QR decomposition, Fundamentals of Matrix Computations, Second Edition will prove to be a versatile companion to novice and practicing mathematicians who seek mastery of matrix computation.
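As a hedged sketch of one method the blurb highlights, the preconditioned conjugate gradient method for symmetric positive definite systems might look like the following in Python; the book's exercises use MATLAB and Fortran, and all names here, along with the choice of a Jacobi (diagonal) preconditioner, are illustrative assumptions:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients for SPD A; M_inv applies the preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    z = M_inv(r)           # preconditioned residual
    p = z.copy()           # first search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)      # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        beta = rz_new / rz         # makes directions A-conjugate
        p = z + beta * p
        rz = rz_new
    return x

# Small SPD test system with a Jacobi (diagonal) preconditioner
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)
```

A good preconditioner clusters the eigenvalues of the preconditioned operator, which is what drives the method's fast convergence on large sparse systems.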
This revision is a cover-to-cover expansion and renovation of the third edition. It now includes an introduction to tensor computations and brand new sections on:
* fast transforms
* parallel LU
* discrete Poisson solvers
* pseudospectra
* structured linear equation problems
* structured eigenvalue problems
* large-scale SVD methods
* polynomial eigenvalue problems
Matrix Computations is packed with challenging problems, insightful derivations, and pointers to the literature—everything needed to become a matrix-savvy developer of numerical methods and software.
New to this edition is a chapter devoted to Conic Linear Programming, a powerful generalization of Linear Programming. Indeed, many conic structures are possible and useful in a variety of applications. It must be recognized, however, that conic linear programming is an advanced topic requiring special study. Another important topic is an accelerated steepest descent method that exhibits superior convergence properties and has for this reason become quite popular. The proofs of convergence for both the standard and accelerated steepest descent methods are presented in Chapter 8. As in previous editions, end-of-chapter exercises appear for all chapters.
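The accelerated steepest descent method mentioned above can be sketched in Python under the assumption that a Nesterov-style momentum scheme is meant (the book's precise variant may differ); the function name, step size, and quadratic test problem are illustrative:

```python
import numpy as np

def accelerated_descent(grad, x0, lr, iters=500):
    """Nesterov-style accelerated gradient descent (a common acceleration scheme)."""
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(iters):
        x_new = y - lr * grad(y)                     # gradient step from lookahead point
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + (t - 1) / t_new * (x_new - x)    # momentum extrapolation
        x, t = x_new, t_new
    return x

# Minimize f(x) = 0.5 x^T A x - b^T x; gradient is A x - b, minimizer is A^{-1} b
A = np.array([[3.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 1.0])
x = accelerated_descent(lambda v: A @ v - b, np.zeros(2), lr=1/3)
```

Plain steepest descent contracts the error at a rate governed by the condition number, while the extrapolation step improves this to its square root, which is the "superior convergence" the blurb refers to.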
From the reviews of the Third Edition:
“... this very well-written book is a classic textbook in Optimization. It should be present in the bookcase of each student, researcher, and specialist from the host of disciplines from which practical optimization applications are drawn.” (Jean-Jacques Strodiot, Zentralblatt MATH, Vol. 1207, 2011)