Handbook of Computational Statistics: Concepts and Methods, Edition 2

Springer Science & Business Media

The Handbook of Computational Statistics - Concepts and Methods (second edition) is a revision of the first edition, published in 2004. It contains additional comments and updated information on the existing chapters, as well as three new chapters addressing recent work in the field of computational statistics. Like the first edition, this new edition is divided into four parts. It begins with "How Computational Statistics became the backbone of modern data science" (Ch. 1): an overview of the field of computational statistics, how it emerged as a separate discipline, and how its development has mirrored that of hardware and software, including a discussion of current active research. The second part (Chs. 2-15) presents several topics in the supporting field of statistical computing. Emphasis is placed on the need for fast and accurate numerical algorithms, and some of the basic methodologies for transformation, database handling, high-dimensional data, and graphics treatment are discussed. The third part (Chs. 16-33) focuses on statistical methodology. Special attention is given to smoothing, iterative procedures, simulation, and visualization of multivariate data. Lastly, a set of selected applications (Chs. 34-38), such as bioinformatics, medical imaging, finance, econometrics, and network intrusion detection, highlights the usefulness of computational statistics in real-world applications.

About the author

James E. Gentle is a Professor of Computational Statistics at George Mason University. His research interests include Monte Carlo methods and computational finance. He is an elected member of the ISI and a Fellow of the American Statistical Association.

Wolfgang Karl Härdle is a Professor of Statistics at the Humboldt-Universität zu Berlin and the Director of CASE – the Centre for Applied Statistics and Economics. He teaches quantitative finance and semi-parametric statistical methods. His research focuses on dynamic factor models, multivariate statistics in finance and computational statistics. He is an elected member of the ISI and an advisor to the Guanghua School of Management, Peking University and to National Central University, Taiwan.

Yuichi Mori is a Professor of Statistics and Informatics at Okayama University of Science. His research interests include efficient computing in multivariate methods, dimension reduction and variable selection, and statistics education. He is an elected member of the ISI and served as a council member of the IASC from 2003 to 2007.


Additional Information

Publisher
Springer Science & Business Media
Published on
Jul 6, 2012
Pages
1192
ISBN
9783642215513
Language
English
Genres
Computers / Mathematical & Statistical Software
Mathematics / Probability & Statistics / General
Mathematics / Probability & Statistics / Stochastic Processes
Content Protection
This content is DRM protected.

Matrix algebra is one of the most important areas of mathematics for data analysis and for statistical theory. The first part of this book presents the relevant aspects of the theory of matrix algebra for applications in statistics. This part begins with the fundamental concepts of vectors and vector spaces, next covers the basic algebraic properties of matrices, then describes the analytic properties of vectors and matrices in the multivariate calculus, and finally discusses operations on matrices in solutions of linear systems and in eigenanalysis. This part is essentially self-contained.

The second part of the book begins with a consideration of various types of matrices encountered in statistics, such as projection matrices and positive definite matrices, and describes the special properties of those matrices. The second part also describes some of the many applications of matrix theory in statistics, including linear models, multivariate analysis, and stochastic processes. The brief coverage in this part illustrates the matrix theory developed in the first part of the book. The first two parts of the book can be used as the text for a course in matrix algebra for statistics students, or as a supplementary text for various courses in linear models or multivariate statistics.

The third part of this book covers numerical linear algebra. It begins with a discussion of the basics of numerical computations, and then describes accurate and efficient algorithms for factoring matrices, solving linear systems of equations, and extracting eigenvalues and eigenvectors. Although the book is not tied to any particular software system, it describes and gives examples of the use of modern computer software for numerical linear algebra. This part is essentially self-contained, although it assumes some ability to program in Fortran or C and/or the ability to use R/S-Plus or Matlab. This part of the book can be used as the text for a course in statistical computing, or as a supplementary text for various courses that emphasize computations.
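
To give a flavor of the algorithms this part treats, here is a minimal pure-Python sketch, not taken from the book, of solving a linear system by Gaussian elimination with partial pivoting. A real implementation would use a LAPACK-backed library; this version only illustrates the idea.

```python
def lu_solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    # Work on copies so the caller's data is untouched.
    A = [row[:] for row in A]
    x = b[:]
    for k in range(n):
        # Pivot: swap in the row with the largest entry in column k
        # to keep the elimination numerically stable.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        x[k], x[p] = x[p], x[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            x[i] -= m * x[k]
    # Back substitution on the resulting upper-triangular system.
    for i in range(n - 1, -1, -1):
        x[i] = (x[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

print(lu_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))  # [0.8, 1.4]
```

The pivoting step is exactly the kind of accuracy concern, as opposed to pure linear-algebra theory, that distinguishes the numerical third part of the book from the first two.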

The book includes a large number of exercises with some solutions provided in an appendix.

An Introduction to Statistical Learning provides an accessible overview of the field of statistical learning, an essential toolset for making sense of the vast and complex data sets that have emerged in fields ranging from biology to finance to marketing to astrophysics in the past twenty years. This book presents some of the most important modeling and prediction techniques, along with relevant applications. Topics include linear regression, classification, resampling methods, shrinkage approaches, tree-based methods, support vector machines, clustering, and more. Color graphics and real-world examples are used to illustrate the methods presented. Since the goal of this textbook is to facilitate the use of these statistical learning techniques by practitioners in science, industry, and other fields, each chapter contains a tutorial on implementing the analyses and methods presented in R, an extremely popular open source statistical software platform.
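
As an illustrative sketch (not from the book, whose labs are written in R), the opening topic of the list above, simple linear regression, can be fitted with the closed-form least-squares solution:

```python
def fit_simple_ols(x, y):
    """Fit y = beta0 + beta1 * x by ordinary least squares."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    # Slope = Cov(x, y) / Var(x); the intercept makes the fitted
    # line pass through the point of means (xbar, ybar).
    beta1 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    beta0 = ybar - beta1 * xbar
    return beta0, beta1

b0, b1 = fit_simple_ols([1, 2, 3, 4], [3, 5, 7, 9])
print(b0, b1)  # 1.0 2.0 -- the data lie exactly on y = 1 + 2x
```

Note that the text itself assumes no matrix algebra, which is why the scalar covariance/variance form shown here, rather than the normal equations in matrix form, matches its level of presentation.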

Two of the authors co-wrote The Elements of Statistical Learning (Hastie, Tibshirani and Friedman, 2nd edition 2009), a popular reference book for statistics and machine learning researchers. An Introduction to Statistical Learning covers many of the same topics, but at a level accessible to a much broader audience. This book is targeted at statisticians and non-statisticians alike who wish to use cutting-edge statistical learning techniques to analyze their data. The text assumes only a previous course in linear regression and no knowledge of matrix algebra.

Numerical linear algebra is one of the most important subjects in the field of statistical computing. Statistical methods in many areas of application require computations with vectors and matrices. This book describes accurate and efficient computer algorithms for factoring matrices, solving linear systems of equations, and extracting eigenvalues and eigenvectors. Although the book is not tied to any particular software system, it describes and gives examples of the use of modern computer software for numerical linear algebra. An understanding of numerical linear algebra requires basic knowledge both of linear algebra and of how numerical data are stored and manipulated in the computer. The book begins with a discussion of the basics of numerical computations, and then describes the relevant properties of matrix inverses, matrix factorizations, matrix and vector norms, and other topics in linear algebra; hence, the book is essentially self-contained. The topics addressed in this book constitute the most important material for an introductory course in statistical computing, and should be covered in every such course. The book includes exercises and can be used as a text for a first course in statistical computing or as supplementary text for various courses that emphasize computations.

James Gentle is University Professor of Computational Statistics at George Mason University. During a thirteen-year hiatus from academic work before joining George Mason, he was director of research and design at the world's largest independent producer of Fortran and C general-purpose scientific software libraries. These libraries implement many algorithms for numerical linear algebra. He is a Fellow of the American Statistical Association and a member of the International Statistical Institute. He has held several national
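
The eigenvalue extraction mentioned in the description can be illustrated, as a hedged sketch rather than any algorithm from the book, by power iteration, which finds the dominant eigenvalue of a matrix by repeatedly applying it to a vector:

```python
def power_iteration(A, iters=100):
    """Estimate the dominant eigenvalue of A by power iteration."""
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        # Apply A, then rescale so the iterate does not over/underflow.
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(c) for c in w)
        v = [c / norm for c in w]
    # Rayleigh quotient v'Av / v'v as the eigenvalue estimate.
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    num = sum(Av[i] * v[i] for i in range(n))
    den = sum(v[i] * v[i] for i in range(n))
    return num / den

# Eigenvalues of [[2, 1], [1, 2]] are 3 and 1; the dominant one is found.
print(power_iteration([[2.0, 1.0], [1.0, 2.0]]))  # 3.0
```

Production libraries instead use factorization-based methods (such as the QR algorithm) for full eigendecompositions, but the rescaling step above already shows the kind of floating-point care the subject demands.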
In recent years developments in statistics have to a great extent gone hand in hand with developments in computing. Indeed, many of the recent advances in statistics have been dependent on advances in computer science and technology. Many of the currently interesting statistical methods are computationally intensive, either because they require very large numbers of numerical computations or because they depend on visualization of many projections of the data. The class of statistical methods characterized by computational intensity and the supporting theory for such methods constitute a discipline called "computational statistics". (Here, I am following Wegman, 1988, and distinguishing "computational statistics" from "statistical computing", which we take to mean "computational methods, including numerical analysis, for statisticians".) The computationally intensive methods of modern statistics rely heavily on the developments in statistical computing and numerical analysis generally.

Computational statistics shares two hallmarks with other "computational" sciences, such as computational physics, computational biology, and so on. One is a characteristic of the methodology: it is computationally intensive. The other is the nature of the tools of discovery. The tools of the scientific method have generally been logical deduction (theory) and observation (experimentation). The computer, used to explore large numbers of scenarios, constitutes a new type of tool. Use of the computer to simulate alternatives and to present the research worker with information about these alternatives is a characteristic of the computational sciences. In some ways this usage is akin to experimentation. The observations, however, are generated from an assumed model, and those simulated data are used to evaluate and study the model.
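
A small instance of this "computer as a tool of discovery" idea, constructed here as an illustration rather than drawn from the text: data are simulated from an assumed model (a standard normal) and used to study a statistical procedure, namely the variability of the sample mean versus the sample median as estimators of the center.

```python
import random
import statistics

def estimator_spread(n=25, reps=2000, seed=42):
    """Simulate N(0, 1) samples and compare two estimators of the center."""
    rng = random.Random(seed)
    means, medians = [], []
    for _ in range(reps):
        sample = [rng.gauss(0.0, 1.0) for _ in range(n)]
        means.append(statistics.fmean(sample))
        medians.append(statistics.median(sample))
    # Spread of each estimator across the simulated replications.
    return statistics.stdev(means), statistics.stdev(medians)

sd_mean, sd_median = estimator_spread()
print(sd_mean < sd_median)  # the mean is the more efficient estimator here
```

The "observations" are generated from the assumed normal model, exactly as described above, and the simulation recovers a known theoretical fact: under normality the sample median is roughly 25% more variable than the sample mean.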
