A study of sequential nonparametric methods emphasizing the unified martingale approach to the theory, with detailed explanations of major applications, including problems arising in clinical trials, life-testing experiments, survival analysis, classical sequential analysis, and other areas of applied statistics and biostatistics.
Studies the use of scientific computation as a tool in attacking a number of mathematical problems and conjectures. In this case, scientific computation refers primarily to computations that are carried out with a large number of significant digits, for calculations associated with a variety of numerical techniques such as the (second) Remez algorithm in polynomial and rational approximation theory, Richardson extrapolation of sequences of numbers, the accurate finding of zeros of polynomials of large degree, and the numerical approximation of integrals by quadrature techniques. The goal of this book is not to delve into the specialized field dealing with the creation of robust and reliable software needed to implement these high-precision calculations, but rather to emphasize the enormous power that existing software brings to the mathematician's arsenal of weapons for attacking mathematical problems and conjectures. Scientific Computation on Mathematical Problems and Conjectures includes studies of the Bernstein Conjecture of 1913 in polynomial approximation theory, the "1/9" Conjecture of 1977 in rational approximation theory, the famous Riemann Hypothesis of 1859, and the Polya Conjecture of 1927. The emphasis of this monograph rests strongly on the interplay between hard analysis and high-precision calculations.
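The flavor of such high-precision computation can be suggested with Python's standard decimal module (a minimal sketch, not one of the book's algorithms): summing the Taylor series for e while carrying 60 significant digits.

```python
from decimal import Decimal, getcontext

# Sum the Taylor series e = sum 1/k!, carrying 60 significant digits.
getcontext().prec = 60

e = Decimal(1)
term = Decimal(1)
k = 0
while term > Decimal(10) ** -60:   # stop once terms fall below working precision
    k += 1
    term /= k
    e += term

print(str(e)[:38])   # 2.718281828459045235360287471352662497
```

Fifty-odd terms suffice; each extra digit of precision costs essentially nothing, which is what makes such experiments so cheap a weapon.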
Presents the elements of a unified approach to optimization based on "nonsmooth analysis," a term introduced in the 1970s by the author, who is a pioneer in the field. Based on a series of lectures given at a conference at Emory University in 1986, this volume presents its subjects in a self-contained and accessible manner. The topics treated here have been in an active state of development, and this work therefore incorporates more recent results than those presented in 1986. Focuses mainly on deterministic optimal control, the calculus of variations, and mathematical programming. In addition, it features a tutorial in nonsmooth analysis and geometry and demonstrates that the method of value function analysis via proximal normals is a powerful tool in the study of necessary conditions, sufficient conditions, controllability, and sensitivity analysis. The distinction between inductive and deductive methods, the use of Hamiltonians, the verification technique, and penalization are also emphasized.
This monograph examines in detail certain concepts that are useful for the modeling of curves and surfaces and emphasizes the mathematical theory that underlies these ideas. The two principal themes of the text are the use of piecewise polynomial representation (this theme appears in one form or another in every chapter), and iterative refinement, also called subdivision. Here, simple iterative geometric algorithms produce, in the limit, curves with complex analytic structure. In the first three chapters, the de Casteljau subdivision for Bernstein-Bezier curves is used to introduce matrix subdivision, and the Lane-Riesenfeld algorithm for computing cardinal splines is tied into stationary subdivision. This ultimately leads to the construction of prewavelets of compact support. The remainder of the book deals with concepts of "visual smoothness" of curves, along with the intriguing idea of generating smooth multivariate piecewise polynomials as volumes of "slices" of polyhedra. The final chapter deals with the evaluation of polynomials by finite recursive algorithms. Each chapter contains introductory material as well as more advanced results.
Probabilistic Expert Systems emphasizes the basic computational principles that make probabilistic reasoning feasible in expert systems. The key to computation in these systems is the modularity of the probabilistic model. Shafer describes and compares the principal architectures for exploiting this modularity in the computation of prior and posterior probabilities. He also indicates how these similar yet different architectures apply to a wide variety of other problems of recursive computation in applied mathematics and operations research. The field of probabilistic expert systems has continued to flourish since the author delivered his lectures on the topic in June 1992, but the understanding of join-tree architectures has remained missing from the literature. This monograph fills this void by providing an analysis of join-tree methods for the computation of prior and posterior probabilities in belief nets. These methods, pioneered in the mid to late 1980s, continue to be central to the theory and practice of probabilistic expert systems. In addition to purely probabilistic expert systems, join-tree methods are also used in expert systems based on Dempster-Shafer belief functions or on possibility measures. Variations are also used for computation in relational databases, in linear optimization, and in constraint satisfaction. This book describes probabilistic expert systems in a more rigorous and focused way than existing literature, and provides an annotated bibliography that includes pointers to conferences and software. Also included are exercises that will help the reader begin to explore the problem of generalizing from probability to broader domains of recursive computation.
Here is an in-depth, up-to-date analysis of wave interactions for general systems of hyperbolic and viscous conservation laws. This self-contained study of shock waves explains the new wave phenomena from both a physical and a mathematical standpoint. The analysis is useful for the study of various physical situations, including nonlinear elasticity, magnetohydrodynamics, multiphase flows, combustion, and classical gas dynamics shocks. The central issue throughout the book is the understanding of nonlinear wave interactions.
A systematic, self-contained treatment of the theory of stochastic differential equations in infinite dimensional spaces. Included is a discussion of Schwartz spaces of distributions in relation to probability theory and infinite dimensional stochastic analysis, as well as the random variables and stochastic processes that take values in infinite dimensional spaces.
This monograph is based on a series of lectures presented at the 1999 NSF-CBMS Regional Research Conference on Mathematical Analysis of Viscoelastic Flows. It begins with an introduction to phenomena observed in viscoelastic flows, the formulation of mathematical equations to model such flows, and the behavior of various models in simple flows. It also discusses the asymptotics of the high Weissenberg limit, the analysis of flow instabilities, the equations of viscoelastic flows, jets and filaments and their breakup, as well as several other topics.
The application of the theory of optimal control of distributed parameter systems is an extremely wide field and, although a large number of questions remain open, the whole subject continues to expand very rapidly. The author does not attempt to cover the field but does discuss a number of the more interesting areas of application.
This monograph presents new and elegant proofs of classical results and makes difficult results accessible. The integer programming models known as set packing and set covering have a wide range of applications. Sometimes, owing to the special structure of the constraint matrix, the natural linear programming relaxation yields an optimal solution that is integral, thus solving the problem. Sometimes, both the linear programming relaxation and its dual have integral optimal solutions. Under what conditions do such integrality properties hold? This question is of both theoretical and practical interest. Min-max theorems, polyhedral combinatorics, and graph theory all come together in this rich area of discrete mathematics. This monograph presents several of these beautiful results as it introduces mathematicians to this active area of research.
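König's theorem for bipartite graphs is a classic instance of the min-max phenomenon just described: the linear programming relaxations of maximum matching and minimum vertex cover are dual to each other, and on bipartite graphs both have integral optima, so the two optimal values coincide. A brute-force sketch in Python on a small hypothetical graph (not an example from the monograph):

```python
from itertools import combinations

# Hypothetical bipartite graph: left vertices {0, 1, 2}, right vertices {'a', 'b', 'c'}.
edges = [(0, 'a'), (0, 'b'), (1, 'b'), (2, 'b'), (2, 'c')]
vertices = {v for e in edges for v in e}

def is_matching(es):
    # a matching uses each vertex at most once
    used = [v for e in es for v in e]
    return len(used) == len(set(used))

def is_cover(vs):
    # a vertex cover touches every edge
    return all(u in vs or v in vs for u, v in edges)

max_matching = max(k for k in range(len(edges) + 1)
                   if any(is_matching(c) for c in combinations(edges, k)))
min_cover = min(k for k in range(len(vertices) + 1)
                if any(is_cover(set(c)) for c in combinations(vertices, k)))

print(max_matching, min_cover)   # equal, as König's theorem guarantees
```

On this graph both values are 3: the matching {0a, 1b, 2c} certifies the lower bound and the cover {0, b, 2} certifies the upper bound.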
Provides a common setting for various methods of bounding the eigenvalues of a self-adjoint linear operator and emphasizes their relationships. A mapping principle is presented to connect many of the methods. The eigenvalue problems studied are linear, and linearization is shown to give important information about nonlinear problems. Linear vector spaces and their properties are used to uniformly describe the eigenvalue problems presented that involve matrices, ordinary or partial differential operators, and integro-differential operators.
Finite elasticity is a theory of elastic materials that are capable of undergoing large deformations. This theory is inherently nonlinear and is mathematically quite complex. This monograph presents a derivation of the basic equations of the theory, a discussion of the general boundary-value problems, and a treatment of several interesting and important special topics such as simple shear, uniqueness, the tensile deformations of a cube, and antiplane shear. The monograph is intended for engineers, physicists, and mathematicians.
Population processes are stochastic models for systems involving a number of similar particles. Examples include models for chemical reactions and for epidemics. The model may involve a finite number of attributes, or even a continuum. This monograph considers approximations that are possible when the number of particles is large. The models considered will involve a finite number of different types of particles.
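The large-population limit can be suggested with a minimal sketch (a hypothetical decay reaction, not a model from the monograph): particles of one type convert to another, each at unit rate, and as the particle count N grows the scaled stochastic path approaches the deterministic solution.

```python
import random, math

# Hypothetical model: particles of type A decay to type B, each at rate 1,
# simulated by Gillespie's algorithm.  As N grows, the scaled count X(t)/N
# approaches the deterministic limit x(t) = exp(-t).
random.seed(2)

def surviving_fraction(N, t_end=1.0):
    x, t = N, 0.0
    while x > 0:
        t += random.expovariate(x)   # waiting time to the next decay event
        if t > t_end:
            break
        x -= 1
    return x / N

frac = surviving_fraction(100_000)
print(frac, math.exp(-1))   # the two values nearly agree for large N
```

With N = 100,000 the random fluctuation around exp(-1) is already of order 1/sqrt(N).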
Surveys the enormous literature on the numerical approximation of solutions of elliptic boundary-value problems by means of variational and finite element methods, a task that requires almost constant application of results and techniques from functional analysis and approximation theory to the field of numerical analysis.
Results and problems in the modern theory of best approximation, in which the methods of functional analysis are applied in a systematic manner. This modern theory constitutes both a unified foundation for the classical theory of best approximation and a powerful tool for obtaining new results.
A concise survey of the current state of knowledge in 1972 about solving elliptic boundary-value eigenvalue problems with the help of a computer. This volume provides a case study in scientific computing: the art of utilizing physical intuition, mathematical theorems and algorithms, and modern computer technology to construct and explore realistic models of problems arising in the natural sciences and engineering.
This second edition provides much-needed updates to the original volume. Like the first edition, it emphasizes the ideas behind the algorithms as well as their theoretical foundations and properties, rather than focusing strictly on computational details; at the same time, this new version is now largely self-contained and includes essential proofs. Additions have been made to almost every chapter, including an introduction to the theory of inexact Newton methods, a basic theory of continuation methods in the setting of differentiable manifolds, and an expanded discussion of minimization methods. New information on parametrized equations and continuation incorporates research since the first edition.
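The classical Newton iteration at the heart of these methods can be sketched in a few lines (a minimal scalar illustration, not the book's inexact or continuation variants):

```python
# Classical Newton iteration for a scalar equation f(x) = 0:
# repeatedly replace x by x - f(x)/f'(x) until the step is tiny.
def newton(f, fprime, x0, tol=1e-12, maxit=50):
    x = x0
    for _ in range(maxit):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Solve x^2 - 2 = 0 starting from x0 = 1
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
print(root)   # ~1.41421356...
```

The inexact variants treated in the book solve the linear Newton step only approximately, which matters when the Jacobian system is large.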
A treatment of the convergence of probability measures from the foundations to applications in limit theory for dependent random variables. Mapping theorems are proved via Skorokhod's representation theorem; Prokhorov's theorem is proved by construction of a content. The limit theorems at the conclusion are proved under a new set of conditions that apply fairly broadly, but at the same time make possible relatively simple proofs.
Presents a coherent body of theory for the derivation of the sampling distributions of a wide range of test statistics. Emphasis is on the development of practical techniques. A unified treatment of the theory is attempted; for example, the author relates the derivations for tests on the circle and for the two-sample problem to the basic theory for the one-sample problem on the line. The Markovian nature of the sample distribution function is stressed, as it accounts for the elegance of many of the results achieved, as well as for the close relation with parts of the theory of stochastic processes.
Here is a brief, well-organized, and easy-to-follow introduction and overview of robust statistics. Huber focuses primarily on the important and clearly understood case of distribution robustness, where the shape of the true underlying distribution deviates slightly from the assumed model (usually the Gaussian law). An additional chapter on recent developments in robustness has been added and the reference list has been expanded and updated from the 1977 edition.
Explores modern topics in graph theory and its applications to problems in transportation, genetics, pollution, perturbed ecosystems, urban services, and social inequalities. The author presents both traditional and relatively atypical graph-theoretical topics to best illustrate applications.
A study of how complexity questions in computing interact with classical mathematics in the numerical analysis of issues in algorithm design. Algorithm designers concerned with linear and nonlinear combinatorial optimization will find this volume especially useful. Two algorithms are studied in detail: the ellipsoid method and the simultaneous diophantine approximation method. Although both were developed to study, on a theoretical level, the feasibility of computing some specialized problems in polynomial time, they appear to have practical applications. The book first describes use of the simultaneous diophantine method to develop sophisticated rounding procedures. Then a model is described to compute upper and lower bounds on various measures of convex bodies. Use of the two algorithms is brought together by the author in a study of polyhedra with rational vertices. The book closes with some applications of the results to combinatorial optimization.
This update of the 1987 title of the same name is an examination of what is currently known about the probabilistic method, written by one of its principal developers. Based on the notes from Spencer's 1986 series of ten lectures, this new edition contains an additional lecture: The Janson inequalities. These inequalities allow accurate approximation of extremely small probabilities. A new algorithmic approach to the Lovasz Local Lemma, attributed to Jozsef Beck, has been added to Lecture 8, as well. Throughout the monograph, Spencer retains the informal style of his original lecture notes and emphasizes the methodology, shunning the more technical "best possible" results in favor of clearer exposition. The book is not encyclopedic--it contains only those examples that clearly display the methodology. The probabilistic method is a powerful tool in graph theory, combinatorics, and theoretical computer science. It allows one to prove the existence of objects with certain properties (e.g., colorings) by showing that an appropriately defined random object has positive probability of having those properties.
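The method's basic first-moment argument can be made concrete (a minimal sketch, not an example from the lectures): the expected number of monochromatic K_4's in a random 2-coloring of the edges of K_6 is C(6,4) * 2^(1 - C(4,2)) = 15/32 < 1, so some coloring has none, and random sampling quickly finds one.

```python
import random
from itertools import combinations
from math import comb

n, k = 6, 4
# First-moment bound: each of the C(6,4) potential K_4's is monochromatic
# with probability 2^(1 - C(4,2)) = 2^(-5) * 2, so the expectation is 15/32.
expected = comb(n, k) * 2 ** (1 - comb(k, 2))
assert expected < 1   # existence follows: some coloring has zero monochromatic K_4's

def mono_count(coloring):
    # number of k-subsets whose C(k,2) edges all share one color
    return sum(
        len({coloring[frozenset(e)] for e in combinations(s, 2)}) == 1
        for s in combinations(range(n), k)
    )

random.seed(0)
while True:   # each trial succeeds with probability at least 1 - 15/32
    coloring = {frozenset(e): random.randint(0, 1)
                for e in combinations(range(n), 2)}
    if mono_count(coloring) == 0:
        break
print("found a 2-coloring of K6 with no monochromatic K4")
```

This is the existence argument in miniature: no coloring is constructed explicitly; positive probability alone guarantees one exists.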
This monograph provides an introduction to the state of the art of the probability theory that is most directly applicable to combinatorial optimization. The questions that receive the most attention are those that deal with discrete optimization problems for points in Euclidean space, such as the minimum spanning tree, the traveling-salesman tour, and minimal-length matchings. Still, there are several nongeometric optimization problems that receive full treatment, and these include the problems of the longest common subsequence and the longest increasing subsequence. The philosophy that guides the exposition is that analysis of concrete problems is the most effective way to explain even the most general methods or abstract principles. There are three fundamental probabilistic themes that are examined through our concrete investigations. First, there is a systematic exploitation of martingales. The second theme that is explored is the systematic use of subadditivity of several flavors, ranging from the naïve subadditivity of real sequences to the subtler subadditivity of stochastic processes. The third and deepest theme developed here concerns the application of Talagrand's isoperimetric theory of concentration inequalities.
Focuses on finding the minimum number of arithmetic operations needed to perform the computation and on finding a better algorithm when improvement is possible. The author concentrates on that class of problems concerned with computing a system of bilinear forms. Results that lead to applications in the area of signal processing are emphasized, since (1) even a modest reduction in the execution time of signal processing problems could have practical significance; (2) results in this area are relatively new and are scattered in journal articles; and (3) this emphasis indicates the flavor of complexity of computation.
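A classic instance of this problem is Karatsuba's scheme: the product of two linear polynomials is a system of three bilinear forms in the coefficients, and it can be computed with 3 multiplications instead of the naive 4 (a minimal sketch, not drawn from the book):

```python
# The coefficients of (a0 + a1*x)(b0 + b1*x) are three bilinear forms in
# (a0, a1) and (b0, b1).  Karatsuba's scheme uses 3 multiplications
# where the naive scheme uses 4.
def karatsuba_2term(a0, a1, b0, b1):
    p = a0 * b0                    # multiplication 1
    q = a1 * b1                    # multiplication 2
    r = (a0 + a1) * (b0 + b1)      # multiplication 3
    return p, r - p - q, q         # coefficients of 1, x, x^2

def naive_2term(a0, a1, b0, b1):
    return a0 * b0, a0 * b1 + a1 * b0, a1 * b1   # 4 multiplications

print(karatsuba_2term(3, 5, 7, 2))   # (21, 41, 10), same as the naive scheme
```

The saving of a single multiplication, applied recursively, is what drives fast convolution and related signal processing algorithms.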
An introduction to aspects of the theory of dynamical systems based on extensions of Liapunov's direct method. The main ideas and structure of the theory are presented for difference equations, along with the analogous theory for ordinary differential equations and retarded functional differential equations. The latest results on invariance properties for nonautonomous time-varying processes are presented for difference and differential equations.
A study of those statistical ideas that use a probability distribution over parameter space. The first part describes the axiomatic basis in the concept of coherence and the implications of this for sampling theory statistics. The second part discusses the use of Bayesian ideas in many branches of statistics.
This book presents a theory of indexing capable of ranking index terms, or subject identifiers in decreasing order of importance. This leads to the choice of good document representations, and also accounts for the role of phrases and of thesaurus classes in the indexing process. This study is typical of theoretical work in automatic information organization and retrieval, in that concepts are used from mathematics, computer science, and linguistics. A complete theory of information retrieval may emerge from an appropriate combination of these three disciplines.
Originally presented as lectures, the theme of this volume is that one studies orthogonal polynomials and special functions not for their own sake, but to be able to use them to solve problems. The author presents problems suggested by the isometric embedding of projective spaces in other projective spaces, by the desire to construct large classes of univalent functions, by applications to quadrature problems, and by theorems on the location of zeros of trigonometric polynomials. There are also applications to combinatorial problems, statistics, and physical problems.
Provides a relatively brief introduction to conjugate duality in both finite- and infinite-dimensional problems. An emphasis is placed on the fundamental importance of the concepts of Lagrangian function, saddle-point, and saddle-value. General examples are drawn from nonlinear programming, approximation, stochastic programming, the calculus of variations, and optimal control.
Tremendous progress has taken place in the related areas of uniform pseudorandom number generation and quasi-Monte Carlo methods in the last five years. This volume contains recent important work in these two areas and stresses the interplay between them. Some developments contained here have never before appeared in book form. Topics include an integrated treatment of pseudorandom numbers and quasi-Monte Carlo methods; the systematic development of the theory of lattice rules and the theory of nets and (t,s)-sequences; the construction of new and better low-discrepancy point sets and sequences; nonlinear congruential methods; the initiation of a systematic study of methods for pseudorandom vector generation; and shift-register pseudorandom numbers. Based on a series of 10 lectures presented by the author at a CBMS-NSF Regional Conference at the University of Alaska at Fairbanks in 1990 to a selected group of researchers, this volume includes background material to make the information more accessible to nonspecialists.
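The flavor of a low-discrepancy sequence can be shown with the base-2 van der Corput sequence, the simplest one-dimensional example (a minimal sketch; the book's nets, (t,s)-sequences, and lattice rules generalize this idea):

```python
# Base-2 van der Corput sequence: reverse the binary digits of i
# about the radix point.  Its points fill [0, 1) very evenly.
def van_der_corput(i, base=2):
    x, denom = 0.0, 1.0
    while i:
        i, digit = divmod(i, base)
        denom *= base
        x += digit / denom
    return x

pts = [van_der_corput(i) for i in range(1, 9)]
print(pts)   # [0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875, 0.0625]

# Quasi-Monte Carlo estimate of the integral of x^2 over [0, 1] (exact: 1/3)
n = 4096
qmc = sum(van_der_corput(i) ** 2 for i in range(1, n + 1)) / n
print(abs(qmc - 1 / 3))   # small: low discrepancy gives fast convergence
```

Unlike pseudorandom points, each new van der Corput point lands in the largest gap left by its predecessors, which is exactly the low-discrepancy property these methods exploit.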
This book serves well as an introduction to the more theoretical aspects of the use of spline models. It develops a theory and practice for the estimation of functions from noisy data on functionals. The simplest example is the estimation of a smooth curve, given noisy observations on a finite number of its values. The estimate is a polynomial smoothing spline. By placing this smoothing problem in the setting of reproducing kernel Hilbert spaces, a theory is developed which includes univariate smoothing splines, thin-plate splines in d dimensions, splines on the sphere, additive splines, and interaction splines in a single framework. A straightforward generalization allows the theory to encompass the very important area of (Tikhonov) regularization methods for ill-posed inverse problems. Convergence properties, data-based smoothing parameter selection, confidence intervals, and numerical methods are established which are appropriate to a wide variety of problems that fall within this framework. Methods for including side conditions and other prior information in solving ill-posed inverse problems are included. Data involving samples of random variables with Gaussian, Poisson, binomial, and other distributions are treated in a unified optimization context. Experimental design questions, i.e., which functionals should be observed, are studied in a general context. Extensions to distributed parameter system identification problems are made by considering implicitly defined functionals.
There has been an explosive growth in the field of combinatorial algorithms. These algorithms depend not only on results in combinatorics and especially in graph theory, but also on the development of new data structures and new techniques for analyzing algorithms. Four classical problems in network optimization are covered in detail, including a development of the data structures they use and an analysis of their running time. Data Structures and Network Algorithms attempts to provide the reader with both a practical understanding of the algorithms, described to facilitate their easy implementation, and an appreciation of the depth and beauty of the field of graph algorithms.
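The interplay of data structures and network algorithms can be suggested with Dijkstra's shortest-path algorithm driven by a binary heap (a minimal sketch with a hypothetical network; the structures developed in the book are more refined):

```python
import heapq

# Dijkstra's shortest-path algorithm using Python's binary heap (heapq).
# The heap supplies the "extract cheapest tentative vertex" operation
# whose cost dominates the algorithm's running time.
def dijkstra(graph, source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip it
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Hypothetical example network: adjacency lists of (neighbor, weight) pairs
g = {"s": [("a", 2), ("b", 5)], "a": [("b", 1), ("t", 6)], "b": [("t", 2)]}
print(dijkstra(g, "s"))   # {'s': 0, 'a': 2, 'b': 3, 't': 5}
```

Swapping in a more sophisticated heap changes only the data structure, not the algorithm, which is precisely the separation of concerns the book emphasizes.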
The jackknife and the bootstrap are nonparametric methods for assessing the errors in a statistical estimation problem. They provide several advantages over the traditional parametric approach: the methods are easy to describe and they apply to arbitrarily complicated situations; distribution assumptions, such as normality, are never made. This monograph connects the jackknife, the bootstrap, and many other related ideas such as cross-validation, random subsampling, and balanced repeated replications into a unified exposition. The theoretical development is at an easy mathematical level and is supplemented by a large number of numerical examples. The methods described in this monograph form a useful set of tools for the applied statistician. They are particularly useful in problem areas where complicated data structures are common, for example, in censoring, missing data, and highly multivariate situations.
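The bootstrap idea fits in a few lines (a minimal sketch with hypothetical data, not an example from the monograph): resample the data with replacement many times, recompute the statistic on each resample, and take the spread of those recomputed values as the standard error.

```python
import random, statistics

# Bootstrap estimate of the standard error of the sample median.
# The data values and the number of resamples B are hypothetical.
random.seed(1)
data = [2.1, 3.4, 1.9, 5.6, 4.2, 3.3, 2.8, 4.9, 3.7, 2.5]

B = 2000
medians = [
    statistics.median(random.choices(data, k=len(data)))  # resample with replacement
    for _ in range(B)
]
se = statistics.stdev(medians)
print(round(se, 3))
```

No normality assumption enters anywhere; replacing `statistics.median` with any other statistic, however complicated, leaves the procedure unchanged, which is the method's chief appeal.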
Wavelets are a mathematical development that may revolutionize the world of information storage and retrieval according to many experts. They are a fairly simple mathematical tool now being applied to the compression of data--such as fingerprints, weather satellite photographs, and medical x-rays--that were previously thought to be impossible to condense without losing crucial details. This monograph contains 10 lectures presented by Dr. Daubechies as the principal speaker at the 1990 CBMS-NSF Conference on Wavelets and Applications. The author has worked on several aspects of the wavelet transform and has developed a collection of wavelets that are remarkably efficient.
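The simplest wavelet, the Haar wavelet, already shows the compression idea (a minimal sketch; Daubechies' wavelets are smoother and far more efficient): locally constant data transforms into averages plus detail coefficients that vanish, so only a few numbers need to be stored.

```python
# One level of the Haar wavelet transform: split a signal into pairwise
# averages and pairwise half-differences (the "detail" coefficients).
def haar(signal):
    avgs = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    details = [(a - b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    return avgs, details

def inverse_haar(avgs, details):
    out = []
    for s, d in zip(avgs, details):
        out += [s + d, s - d]
    return out

signal = [4.0, 4.0, 8.0, 8.0, 6.0, 6.0, 2.0, 2.0]
avgs, details = haar(signal)
print(avgs, details)   # details are all zero: the signal is locally constant
assert inverse_haar(avgs, details) == signal   # the transform loses nothing
```

Real compressors iterate the transform on the averages and discard the near-zero details, which is why smooth regions of fingerprints or x-rays condense so well.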