Principal Component Analysis: Edition 2

Springer Science & Business Media

Principal component analysis is central to the study of multivariate data. Although one of the earliest multivariate techniques, it continues to be the subject of much research, ranging from new model-based approaches to algorithmic ideas from neural networks. It is extremely versatile, with applications in many disciplines. The first edition of this book was the first comprehensive text written solely on principal component analysis. The second edition updates and substantially expands the original version, and is once again the definitive text on the subject. It includes core material, current research and a wide range of applications, and its length is nearly double that of the first edition. Researchers in statistics, or in other fields that use principal component analysis, will find that the book gives an authoritative yet accessible account of the subject. It is also a valuable resource for graduate courses in multivariate analysis. The book requires some knowledge of matrix algebra.

Ian Jolliffe is Professor of Statistics at the University of Aberdeen. He is author or co-author of over 60 research papers and three other books. His research interests are broad, but aspects of principal component analysis have fascinated him and kept him busy for over 30 years.
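As a purely illustrative sketch (not taken from the book), the core computation behind principal component analysis is an eigendecomposition of the sample covariance matrix; the variable names and simulated data below are our own:

```python
import numpy as np

# Illustrative sketch: PCA via eigendecomposition of the sample covariance.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) @ np.array([[2.0, 0.0, 0.0],
                                          [0.5, 1.0, 0.0],
                                          [0.0, 0.0, 0.1]])
Xc = X - X.mean(axis=0)                  # centre each variable
cov = np.cov(Xc, rowvar=False)           # 3x3 sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]        # re-sort to descending variance
components = eigvecs[:, order]           # columns are the principal axes
scores = Xc @ components                 # data projected onto the axes
explained = eigvals[order] / eigvals.sum()  # proportion of variance per PC
```

The first few columns of `scores` then give the low-dimensional summary of the data that the book studies in depth.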

Additional Information

Published on: May 9, 2006

Categories:
* Mathematics / Probability & Statistics / General
* Mathematics / Probability & Statistics / Stochastic Processes

Content Protection: This content is DRM protected.


This book provides an accessible presentation of concepts from probability theory, statistical methods, the design of experiments and statistical quality control. It is shaped by the two authors' experience of teaching statistical methods and concepts to engineering students over more than a decade. Practical examples and end-of-chapter exercises are the highlights of the text, as they are purposely selected from different fields. The statistical principles discussed in the book are relevant in many disciplines, including economics, commerce, engineering, medicine, health care, agriculture, biochemistry and textiles. A large number of students with varied disciplinary backgrounds need an introductory course in the basics of statistics, the design of experiments and statistical quality control to pursue their discipline of interest. No previous knowledge of probability or statistics is assumed, but an understanding of calculus is a prerequisite. The whole book serves as a master's-level introductory course in all three topics, as required in textile engineering or industrial engineering.

Organised into 10 chapters, the book discusses three different courses namely statistics, the design of experiments and quality control. Chapter 1 is the introductory chapter which describes the importance of statistical methods, the design of experiments and statistical quality control. Chapters 2–6 deal with statistical methods including basic concepts of probability theory, descriptive statistics, statistical inference, statistical test of hypothesis and analysis of correlation and regression. Chapters 7–9 deal with the design of experiments including factorial designs and response surface methodology, and Chap. 10 deals with statistical quality control.

An incomparably useful examination of statistical methods for comparison
The nature of doing science, be it natural or social, inevitably calls for comparison. Statistical methods are at the heart of such comparison, for they not only help us gain understanding of the world around us but often define how our research is to be carried out. The need to compare between groups is best exemplified by experiments, which have clearly defined statistical methods. However, true experiments are not always possible. What complicates matters further is the great diversity of factors that are not independent of the outcome.
Statistical Group Comparison brings together a broad range of statistical methods for comparison developed over recent years. The book covers a wide spectrum of topics from the simplest comparison of two means or rates to more recently developed statistics including double generalized linear models and Bayesian as well as hierarchical methods. Coverage includes:
* Testing parameter equality in linear regression and other generalized linear models (GLMs), in order of increasing complexity
* Likelihood ratio, Wald, and Lagrange multiplier statistics examined where applicable
* Group comparisons involving latent variables in structural equation modeling
* Models of comparison for categorical latent variables
Examples are drawn from the social, political, economic, and biomedical sciences; many can be implemented using widely available software. Because of the range and generality of the statistical methods covered, researchers across many disciplines, well beyond the social, political, economic, and biomedical sciences, will find the book a convenient reference for research situations where comparisons come naturally.
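As an illustrative sketch (not taken from the book), the simplest comparison it starts from, testing the equality of two group means, can be written as a Wald-type statistic in plain Python; the simulated data and names below are our own:

```python
import math
import random

# Illustrative sketch: a Wald-type statistic for H0: mu_a == mu_b,
# i.e. the two groups share a common mean. Data are simulated.
random.seed(1)
group_a = [random.gauss(10.0, 2.0) for _ in range(50)]
group_b = [random.gauss(11.0, 2.0) for _ in range(50)]

def mean(xs):
    return sum(xs) / len(xs)

def sample_var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# standard error of the difference in means, then the test statistic
se = math.sqrt(sample_var(group_a) / len(group_a)
               + sample_var(group_b) / len(group_b))
t_stat = (mean(group_a) - mean(group_b)) / se
reject_h0 = abs(t_stat) > 1.96  # approximate two-sided 5% critical value
```

The book's likelihood ratio, Wald, and Lagrange multiplier statistics generalize this idea to parameter equality in regression and other GLMs.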
Modern Mathematical Statistics with Applications, Second Edition strikes a balance between mathematical foundations and statistical practice. In keeping with the recommendation that every math student should study statistics and probability with an emphasis on data analysis, accomplished authors Jay Devore and Kenneth Berk make statistical concepts and methods clear and relevant through careful explanations and a broad range of applications involving real data.

The main focus of the book is on presenting and illustrating methods of inferential statistics that are useful in research. It begins with a chapter on descriptive statistics that immediately exposes the reader to real data. The next six chapters develop the probability material that bridges the gap between descriptive and inferential statistics. Point estimation, inferences based on statistical intervals, and hypothesis testing are then introduced in the next three chapters. The remainder of the book explores the use of this methodology in a variety of more complex settings.

This edition includes a plethora of new exercises, a number of which are similar to what would be encountered on the actuarial exams that cover probability and statistics. Representative applications include investigating whether the average tip percentage in a particular restaurant exceeds the standard 15%, considering whether the flavor and aroma of Champagne are affected by bottle temperature or type of pour, modeling the relationship between college graduation rate and average SAT score, and assessing the likelihood of O-ring failure in space shuttle launches as related to launch temperature.
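One of those applications, deciding whether the average tip percentage exceeds the standard 15%, can be sketched as a one-sided one-sample t test; the tip data below are invented for the demonstration, not taken from the book:

```python
import math

# Illustrative sketch: one-sided one-sample t test of
# H0: mean tip percentage = 15 against H1: mean > 15. Data are made up.
tips = [16.2, 14.8, 17.1, 15.9, 16.5, 15.2, 17.8, 16.0, 15.5, 16.9]
n = len(tips)
mean = sum(tips) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in tips) / (n - 1))
t_stat = (mean - 15.0) / (sd / math.sqrt(n))
# compare with the t critical value for n-1 = 9 df at the 5% level (~1.833)
exceeds_standard = t_stat > 1.833
```

Here the sample mean of about 16.2% is far enough above 15% relative to its standard error that the null hypothesis is rejected.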

An Introduction to Statistical Learning provides an accessible overview of the field of statistical learning, an essential toolset for making sense of the vast and complex data sets that have emerged in fields ranging from biology to finance to marketing to astrophysics in the past twenty years. This book presents some of the most important modeling and prediction techniques, along with relevant applications. Topics include linear regression, classification, resampling methods, shrinkage approaches, tree-based methods, support vector machines, clustering, and more. Color graphics and real-world examples are used to illustrate the methods presented. Since the goal of this textbook is to facilitate the use of these statistical learning techniques by practitioners in science, industry, and other fields, each chapter contains a tutorial on implementing the analyses and methods presented in R, an extremely popular open source statistical software platform.

Two of the authors co-wrote The Elements of Statistical Learning (Hastie, Tibshirani and Friedman, 2nd edition 2009), a popular reference book for statistics and machine learning researchers. An Introduction to Statistical Learning covers many of the same topics, but at a level accessible to a much broader audience. This book is targeted at statisticians and non-statisticians alike who wish to use cutting-edge statistical learning techniques to analyze their data. The text assumes only a previous course in linear regression and no knowledge of matrix algebra.

Data Science gets thrown around in the press like it's magic. Major retailers are predicting everything from when their customers are pregnant to when they want a new pair of Chuck Taylors. It's a brave new world where seemingly meaningless data can be transformed into valuable insight to drive smart business decisions.

But how does one exactly do data science? Do you have to hire one of these priests of the dark arts, the "data scientist," to extract this gold from your data? Nope.

Data science is little more than using straight-forward steps to process raw data into actionable insight. And in Data Smart, author and data scientist John Foreman will show you how that's done within the familiar environment of a spreadsheet.

Why a spreadsheet? It's comfortable! You get to look at the data every step of the way, building confidence as you learn the tricks of the trade. Plus, spreadsheets are a vendor-neutral place to learn data science without the hype.

But don't let the Excel sheets fool you. This is a book for those serious about learning the analytic techniques, the math and the magic, behind big data.

Each chapter will cover a different technique in a spreadsheet so you can follow along:

* Mathematical optimization, including non-linear programming and genetic algorithms
* Clustering via k-means, spherical k-means, and graph modularity
* Data mining in graphs, such as outlier detection
* Supervised AI through logistic regression, ensemble models, and bag-of-words models
* Forecasting, seasonal adjustments, and prediction intervals through Monte Carlo simulation
* Moving from spreadsheets into the R programming language
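To give a flavour of one item on that list, here is a purely illustrative k-means sketch in plain Python (the book itself works in spreadsheets and R; the data and function names below are our own):

```python
def dist2(p, q):
    """Squared Euclidean distance between two 2-D points."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def kmeans(points, k=2, iters=10):
    # farthest-point initialisation keeps this tiny demo deterministic
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(dist2(p, c) for c in centers)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assignment step: nearest center wins
            clusters[min(range(k), key=lambda j: dist2(p, centers[j]))].append(p)
        for j, c in enumerate(clusters):  # update step: move to cluster mean
            if c:
                centers[j] = (sum(p[0] for p in c) / len(c),
                              sum(p[1] for p in c) / len(c))
    return centers

# two well-separated blobs of points
pts = [(0.0, 0.0), (0.1, 0.2), (-0.1, 0.1), (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
centers = sorted(kmeans(pts, 2))
```

On this toy data the two centers converge to roughly (0.0, 0.1) and (5.0, 5.03), one per blob.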

You get your hands dirty as you work alongside John through each technique. But never fear, the topics are readily applicable and the author laces humor throughout. You'll even learn what a dead squirrel has to do with optimization modeling, which you no doubt are dying to know.
