The author compares the more important methods in terms of their theoretical inter-relationships and their practical merits. He also considers two other general forecasting topics that have been somewhat neglected in the literature: the computation of prediction intervals and the effect of model uncertainty on forecast accuracy.
Although the search for a "best" method continues, it is now well established that no single method will outperform all others in all situations; the context is crucial. Time-Series Forecasting provides an outstanding reference source for the more generally applicable methods, and is particularly useful to researchers and practitioners in forecasting in the areas of economics, government, industry, and commerce.
For those who slept through Stats 101, this book is a lifesaver. Wheelan strips away the arcane and technical details and focuses on the underlying intuition that drives statistical analysis. He clarifies key concepts such as inference, correlation, and regression analysis, reveals how biased or careless parties can manipulate or misrepresent data, and shows us how brilliant and creative researchers are exploiting the valuable data from natural experiments to tackle thorny questions.
And in Wheelan’s trademark style, there’s not a dull page in sight. You’ll encounter clever Schlitz Beer marketers leveraging basic probability, an International Sausage Festival illuminating the tenets of the central limit theorem, and a head-scratching choice from the famous game show Let’s Make a Deal—and you’ll come away with insights each time. With the wit, accessibility, and sheer fun that turned Naked Economics into a bestseller, Wheelan defies the odds yet again by bringing another essential, formerly unglamorous discipline to life.
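The game-show choice mentioned above is the classic Monty Hall problem from Let's Make a Deal. As a quick illustration (mine, not Wheelan's), a short simulation shows why switching doors wins about two-thirds of the time while staying wins only one-third:

```python
import random

def monty_hall(trials=100_000, switch=True):
    """Simulate Let's Make a Deal: one car and two goats behind three doors."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's first choice
        # Host opens a goat door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(monty_hall(switch=True))   # close to 2/3
print(monty_hall(switch=False))  # close to 1/3
```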
“[Taleb is] Wall Street’s principal dissident. . . . [Fooled By Randomness] is to conventional Wall Street wisdom approximately what Martin Luther’s ninety-five theses were to the Catholic Church.”
–Malcolm Gladwell, The New Yorker
Finally in paperback, the word-of-mouth sensation that will change the way you think about the markets and the world. This book is about luck: more precisely, about how we perceive luck in our personal and professional experiences.
Set against the backdrop of the most conspicuous forum in which luck is mistaken for skill–the world of business–Fooled by Randomness is an irreverent, iconoclastic, eye-opening, and endlessly entertaining exploration of one of the least understood forces in all of our lives.
This textbook provides a comprehensive introduction to forecasting methods and presents enough information about each method for readers to use them sensibly.
"It is, as far as I'm concerned, among the best books in math ever written....if you are a mathematician and want to have the top reference in probability, this is it." (Amazon.com, January 2006)
A complete and comprehensive classic in probability and measure theory
Probability and Measure, Anniversary Edition by Patrick Billingsley celebrates the achievements and advancements that have made this book a classic in its field for the past 35 years. Now re-issued in a new style and format, but with the reliable content that the third edition was revered for, this Anniversary Edition builds on its strong foundation of measure theory and probability with Billingsley's unique writing style. In recognition of 35 years of publication, impacting tens of thousands of readers, this Anniversary Edition has been completely redesigned in a new, open and user-friendly way in order to appeal to university-level students.
This book adds a new foreword by Steve Lalley of the Statistics Department at the University of Chicago in order to underscore the many years of successful publication and worldwide popularity and to emphasize the educational value of this book. The Anniversary Edition contains features including:
- An improved treatment of Brownian motion
- Replacement of queuing theory with ergodic theory
- Theory and applications used to illustrate real-life situations
- Over 300 problems with corresponding, intensive notes and solutions
- Updated bibliography
- An extensive supplement of additional notes on the problems and chapter commentaries
Patrick Billingsley was a first-class, world-renowned authority in probability and measure theory at a leading U.S. institution of higher education. He continued to be an influential probability theorist until his death in 2011. Billingsley earned his bachelor's degree in engineering from the U.S. Naval Academy, where he served as an officer, and went on to receive his master's degree and doctorate in mathematics from Princeton University. Among his many professional awards was the Mathematical Association of America's Lester R. Ford Award for mathematical exposition. His achievements throughout his long and esteemed career have solidified Patrick Billingsley's place as a leading authority in the field and are a large reason his books are regarded as classics.
This Anniversary Edition of Probability and Measure offers advanced students, scientists, and engineers an integrated introduction to measure theory and probability. Like the previous editions, this Anniversary Edition is a key resource for students of mathematics, statistics, economics, and a wide variety of disciplines that require a solid understanding of probability theory.
“The book follows faithfully the style of the original edition. The approach is heavily motivated by real-world time series, and by developing a complete approach to model building, estimation, forecasting and control.”
- Mathematical Reviews
Bridging classical models and modern topics, the Fifth Edition of Time Series Analysis: Forecasting and Control maintains a balanced presentation of the tools for modeling and analyzing time series. Also describing the latest developments that have occurred in the field over the past decade through applications from areas such as business, finance, and engineering, the Fifth Edition continues to serve as one of the most influential and prominent works on the subject.
Time Series Analysis: Forecasting and Control, Fifth Edition provides a clearly written exploration of the key methods for building, classifying, testing, and analyzing stochastic models for time series and describes their use in five important areas of application: forecasting; determining the transfer function of a system; modeling the effects of intervention events; developing multivariate dynamic models; and designing simple control schemes. Along with these classical uses, the new edition covers modern topics with new features that include:
- A redesigned chapter on multivariate time series analysis with an expanded treatment of Vector Autoregressive (VAR) models, along with a discussion of the analytical tools needed for modeling vector time series
- An expanded chapter on special topics covering unit root testing, time-varying volatility models such as ARCH and GARCH, nonlinear time series models, and long memory models
- Numerous examples drawn from finance, economics, engineering, and other related fields
- The use of the publicly available R software for graphical illustrations and numerical calculations, along with scripts that demonstrate the use of R for model building and forecasting
- Updates to literature references throughout and new end-of-chapter exercises
- Streamlined chapter introductions and revisions that update and enhance the exposition

Time Series Analysis: Forecasting and Control, Fifth Edition is a valuable real-world reference for researchers and practitioners in time series analysis, econometrics, finance, and related fields. The book is also an excellent textbook for beginning graduate-level courses in advanced statistics, mathematics, economics, finance, engineering, and physics.
Based on an MBA course Provost has taught at New York University over the past ten years, Data Science for Business provides examples of real-world business problems to illustrate these principles. You’ll not only learn how to improve communication between business stakeholders and data scientists, but also how to participate intelligently in your company’s data science projects. You’ll also discover how to think data-analytically, and fully appreciate how data science methods can support business decision-making.
- Understand how data science fits in your organization—and how you can use it for competitive advantage
- Treat data as a business asset that requires careful investment if you’re to gain real value
- Approach business problems data-analytically, using the data-mining process to gather good data in the most appropriate way
- Learn general concepts for actually extracting knowledge from data
- Apply data science principles when interviewing data science job candidates
New York Times Bestseller
A former Wall Street quant sounds an alarm on the mathematical models that pervade modern life — and threaten to rip apart our social fabric
We live in the age of the algorithm. Increasingly, the decisions that affect our lives—where we go to school, whether we get a car loan, how much we pay for health insurance—are being made not by humans, but by mathematical models. In theory, this should lead to greater fairness: Everyone is judged according to the same rules, and bias is eliminated.
But as Cathy O’Neil reveals in this urgent and necessary book, the opposite is true. The models being used today are opaque, unregulated, and uncontestable, even when they’re wrong. Most troubling, they reinforce discrimination: If a poor student can’t get a loan because a lending model deems him too risky (by virtue of his zip code), he’s then cut off from the kind of education that could pull him out of poverty, and a vicious spiral ensues. Models are propping up the lucky and punishing the downtrodden, creating a “toxic cocktail for democracy.” Welcome to the dark side of Big Data.
Tracing the arc of a person’s life, O’Neil exposes the black box models that shape our future, both as individuals and as a society. These “weapons of math destruction” score teachers and students, sort résumés, grant (or deny) loans, evaluate workers, target voters, set parole, and monitor our health.
O’Neil calls on modelers to take more responsibility for their algorithms and on policy makers to regulate their use. But in the end, it’s up to us to become more savvy about the models that govern our lives. This important book empowers us to ask the tough questions, uncover the truth, and demand change.
— Longlist for National Book Award (Non-Fiction)
— Goodreads, semi-finalist for the 2016 Goodreads Choice Awards (Science and Technology)
— Kirkus, Best Books of 2016
— New York Times, 100 Notable Books of 2016 (Non-Fiction)
— The Guardian, Best Books of 2016
— WBUR's "On Point," Best Books of 2016: Staff Picks
— Boston Globe, Best Books of 2016, Non-Fiction
Complexity surrounds us. We have too much email, juggle multiple remotes, and hack through thickets of regulations from phone contracts to health plans. But complexity isn’t destiny. Sull and Eisenhardt argue there’s a better way. By developing a few simple yet effective rules, people can best even the most complex problems.
In Simple Rules, Sull and Eisenhardt masterfully challenge how we think about complexity and offer a new lens on how to cope. They take us on a surprising tour of what simple rules are, where they come from, and why they work. The authors illustrate the six kinds of rules that really matter: for helping artists find creativity and the Federal Reserve set interest rates, for keeping birds on track and Zipcar members organized, and for how insomniacs can sleep and mountain climbers stay safe.
Drawing on rigorous research and riveting stories, the authors ingeniously find insights in unexpected places, from the way Tina Fey codified her experience at Saturday Night Live into rules for producing 30 Rock (rule five: never tell a crazy person he’s crazy) to burglars’ rules for robbery (“avoid houses with a car parked outside”) to Japanese engineers mimicking the rules of slime molds to optimize Tokyo’s rail system. The authors offer fresh information and practical tips on fixing old rules and learning new ones.
Whether you’re struggling with information overload, pursuing opportunities with limited resources, or just trying to change your bad habits, Simple Rules provides powerful insight into how and why simplicity tames complexity.
This book shows you how to validate your initial idea, find the right customers, decide what to build, monetize your business, and spread the word. Packed with more than thirty case studies and insights from over a hundred business experts, Lean Analytics provides you with hard-won, real-world information no entrepreneur can afford to go without.
- Understand Lean Startup, analytics fundamentals, and the data-driven mindset
- Look at six sample business models and how they map to new ventures of all sizes
- Find the One Metric That Matters to you
- Learn how to draw a line in the sand, so you’ll know it’s time to move forward
- Apply Lean Analytics principles to large enterprises and established products
In the late 1980s, Japanese scientists were trying to figure out the economic damage that would be caused if a catastrophic earthquake destroyed Tokyo. The answer was bleak, but not for Japan. Kaoru Oda, an economist who worked for Tokai Bank, speculated that the United States would end up paying the most. Why? Japan owned trillions of dollars’ worth of foreign liquid assets and investments. These assets, which the world depended on, would be sold, forcing countries into the precarious position of having to return large amounts of money they might not have. After the recent earthquake, Michael Lewis reexamined this hypothesis and came to a surprising conclusion. With his characteristic sense of humor and wit, Lewis, once again, explains the inner workings of a financial catastrophe.
“How a Tokyo Earthquake Could Devastate Wall Street” appears in Michael Lewis’s book The Money Culture.
“The leading indicators” shape our lives intimately, but few of us know where these numbers come from, what they mean, or why they rule the world. GDP, inflation, unemployment, trade, and a host of averages determine whether we feel optimistic or pessimistic about the country’s future and our own. They dictate whether businesses hire and invest, or fire and hunker down, whether governments spend trillions or try to reduce debt, whether individuals marry, buy a car, get a mortgage, or look for a job.
Zachary Karabell tackles the history and the limitations of each of our leading indicators. The solution is not to invent new indicators, but to become less dependent on a few simple figures and tap into the data revolution. We have unparalleled power to find the information we need, but only if we let go of the outdated indicators that lead and mislead us.
New to the fourth edition are the topics of common and special causes, outliers, and risk management tools. Besides the new topics, many current topics have been expanded to reflect changes in auditing practices since 2004 and ISO 19011 guidance, and they have been rewritten to promote the common elements of all types of system and process audits.
The handbook can be used by new auditors to gain an understanding of auditing. Experienced auditors will find it to be a useful reference. Audit managers and quality managers can use the handbook as a guide for leading their auditing programs. The handbook may also be used by trainers and educators as source material for teaching the fundamentals of auditing.
So why is it so hard to make sound decisions? In Think Twice, now in paperback, Michael Mauboussin argues that we often fall victim to simplified mental routines that prevent us from coping with the complex realities inherent in important judgment calls. Yet these cognitive errors are preventable.
In this engaging book, Mauboussin shows us how to recognize and avoid common mental missteps. These include misunderstanding cause-and-effect linkages, not considering enough alternative possibilities in making a decision, and relying too much on experts.
Through vivid stories, the author presents memorable rules for avoiding each error and explains how to recognize when you should “think twice”—questioning your reasoning and adopting decision-making strategies that are far more effective, even if they seem counterintuitive. Armed with this awareness, you'll soon begin making sounder judgment calls that benefit (rather than hurt) your organization.
This pocket guide is designed to be a quick, on-the-job reference for anyone interested in making their workplace more effective and efficient. It will provide a solid initial overview of what quality is and how it could impact you and your organization. Use it to compare how you and your organization are doing things, and to see whether what's described in the guide might be useful.
The tools of quality described herein are universal. People across the world need to find better, more effective ways to improve the creation and performance of products and services. Since organizational and process improvement is increasingly integrated into all areas of an organization, everyone must understand the basic principles of process control and process improvement. This succinct and concentrated guide can help.
Unlike any other pocket guide on the market, it includes throughout direct links to numerous free online resources that not only go deeper but also show these concepts and tools in action: case studies, articles, webcasts, templates, tutorials, examples from the ASQ Service Division's Service Quality Body of Knowledge (SQBOK), and much more. This pocket guide serves as a gateway into the wealth of peerless content that ASQ offers.
But Hand is no believer in superstitions, prophecies, or the paranormal. His definition of "miracle" is thoroughly rational. No mystical or supernatural explanation is necessary to understand why someone is lucky enough to win the lottery twice, or is destined to be hit by lightning three times and still survive. All we need, Hand argues, is a firm grounding in a powerful set of laws: the laws of inevitability, of truly large numbers, of selection, of the probability lever, and of near enough.
Together, these constitute Hand's groundbreaking Improbability Principle. And together, they explain why we should not be so surprised to bump into a friend in a foreign country, or to come across the same unfamiliar word four times in one day. Hand wrestles with seemingly less explicable questions as well: what the Bible and Shakespeare have in common, why financial crashes are par for the course, and why lightning does strike the same place (and the same person) twice. Along the way, he teaches us how to use the Improbability Principle in our own lives—including how to cash in at a casino and how to recognize when a medicine is truly effective.
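Hand's "law of truly large numbers" says that given enough opportunities, even wildly improbable events become near-certain. As a quick illustration (mine, not Hand's), the arithmetic is just the complement rule, P(at least once) = 1 - (1 - p)^n:

```python
# Probability that a one-in-a-million event happens at least once
# in n independent opportunities: 1 - (1 - p)**n.
p = 1e-6

for n in (1_000, 1_000_000, 10_000_000):
    at_least_once = 1 - (1 - p) ** n
    print(f"{n:>10,} tries: P(at least one occurrence) = {at_least_once:.4f}")
```

With a million tries the "one-in-a-million" event already occurs with probability about 0.63, and with ten million tries it is all but guaranteed.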
An irresistible adventure into the laws behind "chance" moments and a trusty guide for understanding the world and universe we live in, The Improbability Principle will transform how you think about serendipity and luck, whether it's in the world of business and finance or you're merely sitting in your backyard, tossing a ball into the air and wondering where it will land.
"A must-have book for anyone expecting to do research and/or applications in categorical data analysis."
—Statistics in Medicine
"It is a total delight reading this book."
"If you do any analysis of categorical data, this is an essential desktop reference."
The use of statistical methods for analyzing categorical data has increased dramatically, particularly in the biomedical sciences, social sciences, and financial industries. Responding to new developments, this book offers a comprehensive treatment of the most important methods for categorical data analysis.
Categorical Data Analysis, Third Edition summarizes the latest methods for univariate and correlated multivariate categorical responses. Readers will find a unified generalized linear models approach that connects logistic regression and Poisson and negative binomial loglinear models for discrete data with normal regression for continuous data. This edition also features:
- An emphasis on logistic and probit regression methods for binary, ordinal, and nominal responses for independent observations and for clustered data with marginal models and random effects models
- Two new chapters on alternative methods for binary response data, including smoothing and regularization methods, classification methods such as linear discriminant analysis and classification trees, and cluster analysis
- New sections introducing the Bayesian approach for the methods of each chapter
- More than 100 analyses of data sets and over 600 exercises
- Notes at the end of each chapter that provide references to recent research and topics not covered in the text, linked to a bibliography of more than 1,200 sources
- A supplementary website showing how to use R and SAS for all examples in the text, with information also about SPSS and Stata and with exercise solutions
Categorical Data Analysis, Third Edition is an invaluable tool for statisticians and methodologists, such as biostatisticians and researchers in the social and behavioral sciences, medicine and public health, marketing, education, finance, biological and agricultural sciences, and industrial quality control.
Crunch Big Data to optimize marketing and more!
Overwhelmed by all the Big Data now available to you? Not sure what questions to ask or how to ask them? Using Microsoft Excel and proven decision analytics techniques, you can distill all that data into manageable sets—and use them to optimize a wide variety of business and investment decisions. In Decision Analytics: Microsoft Excel, bestselling statistics expert and consultant Conrad Carlberg will show you how—hands-on and step-by-step.
Carlberg guides you through using decision analytics to segment customers (or anything else) into sensible and actionable groups and clusters. Next, you’ll learn practical ways to optimize a wide spectrum of decisions in business and beyond—from pricing to cross-selling, hiring to investments—even facial recognition software uses the techniques discussed in this book!
Through realistic examples, Carlberg helps you understand the techniques and assumptions that underlie decision analytics and use simple Excel charts to intuitively grasp the results. With this foundation in place, you can perform your own analyses in Excel and work with results produced by advanced stats packages such as SAS and SPSS.
This book comes with an extensive collection of downloadable Excel workbooks you can easily adapt to your own unique requirements, plus VBA code to streamline several of its most complex techniques.
- Classify data according to existing categories or naturally occurring clusters of predictor variables
- Cut massive numbers of variables and records down to size, so you can get the answers you really need
- Utilize cluster analysis to find patterns of similarity for market research and many other applications
- Learn how multiple discriminant analysis helps you classify cases
- Use MANOVA to decide whether groups differ on multivariate centroids
- Use principal components to explore data, find patterns, and identify latent factors
Register your book for access to all sample workbooks, updates, and corrections as they become available at quepublishing.com/title/9780789751683.
New to This Edition
*Updated throughout to incorporate important developments in latent variable modeling.
*Chapter on Bayesian CFA and multilevel measurement models.
*Addresses new topics (with examples): exploratory structural equation modeling, bifactor analysis, measurement invariance evaluation with categorical indicators, and a new method for scaling latent variables.
*Utilizes the latest versions of major latent variable software packages.
This insightful and eloquent book will show you how to measure those things in your own business, government agency, or other organization that, until now, you may have considered "immeasurable," including customer satisfaction, organizational flexibility, technology risk, and technology ROI.
- Adds new measurement methods, showing how they can be applied to a variety of areas such as risk management and customer satisfaction
- Simplifies overall content while still making the more technical applications available to those readers who want to dig deeper
- Continues to boldly assert that any perception of "immeasurability" is based on certain popular misconceptions about measurement and measurement methods
- Shows the common reasoning for calling something immeasurable, and sets out to correct those ideas
- Offers practical methods for measuring a variety of "intangibles"
- Provides an online database (www.howtomeasureanything.com) of downloadable, practical examples worked out in detailed spreadsheets
Written by recognized expert Douglas Hubbard—creator of Applied Information Economics—How to Measure Anything, Third Edition illustrates how the author has used his approach across various industries and how any problem, no matter how difficult, ill-defined, or uncertain, can lend itself to measurement using proven methods.
Lawrence Weinstein and John Adam present an eclectic array of estimation problems that range from devilishly simple to quite sophisticated and from serious real-world concerns to downright silly ones. How long would it take a running faucet to fill the inverted dome of the Capitol? What is the total length of all the pickles consumed in the US in one year? What are the relative merits of internal-combustion and electric cars, of coal and nuclear energy? The problems are marvelously diverse, yet the skills to solve them are the same. The authors show how easy it is to derive useful ballpark estimates by breaking complex problems into simpler, more manageable ones--and how there can be many paths to the right answer. The book is written in a question-and-answer format with lots of hints along the way. It includes a handy appendix summarizing the few formulas and basic science concepts needed, and its small size and French-fold design make it conveniently portable. Illustrated with humorous pen-and-ink sketches, Guesstimation will delight popular-math enthusiasts and is ideal for the classroom.
New in the fourth edition of Latent Variable Models:
*a data CD that features the correlation and covariance matrices used in the exercises;
*new sections on missing data, non-normality, mediation, factorial invariance, and automating the construction of path diagrams; and
*reorganization of chapters 3-7 to enhance the flow of the book and its flexibility for teaching.
Intended for advanced students and researchers in the areas of social, educational, clinical, industrial, consumer, personality, and developmental psychology, sociology, political science, and marketing, some prior familiarity with correlation and regression is helpful.
- Covers all versions of Excel.
- Understand date and time serial numbers.
- Control how Excel interprets and formats dates and times.
- Resolve problems with two-digit years and negative times.
- Work around Excel's leap-year bug.
- Use the undocumented DATEDIF function.
- Generate series of dates and times.
- Convert imported text and numerical values to dates and times.
- Skip weekends and holidays in business and financial calculations.
- Find specific days of the month for holidays and paydays.
- Round times to the nearest hour, half-hour, minute, or any interval.
- Plenty of tips, tricks, and timesavers.
- Fully cross-referenced, linked, and searchable.
1. Getting Started with Dates & Times
2. Date & Time Basics
3. Date & Time Functions
4. Date Tricks
5. Time Tricks
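The serial-number arithmetic and leap-year quirk mentioned in the list above can be sketched outside Excel as well. The snippet below (my illustration, not from the book) uses the standard workaround for Excel's 1900 date system: because Excel wrongly treats 1900 as a leap year (serial 60 is the nonexistent Feb 29, 1900), an epoch of Dec 30, 1899 gives correct conversions for serials from 61 onward:

```python
from datetime import date, timedelta

# Excel's 1900 date system: serial 1 = Jan 1, 1900, but Excel wrongly
# counts Feb 29, 1900 (serial 60), a date that never existed.
# Using Dec 30, 1899 as the epoch absorbs that off-by-one for serials >= 61.
EXCEL_EPOCH = date(1899, 12, 30)

def serial_to_date(serial: int) -> date:
    """Convert an Excel 1900-system serial number to a calendar date."""
    if serial < 61:
        raise ValueError("serials before Mar 1, 1900 need special handling")
    return EXCEL_EPOCH + timedelta(days=serial)

def date_to_serial(d: date) -> int:
    """Convert a calendar date (Mar 1, 1900 or later) to an Excel serial."""
    return (d - EXCEL_EPOCH).days

print(date_to_serial(date(2024, 1, 1)))  # 45292, matching Excel
print(serial_to_date(45292))             # 2024-01-01
```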
Master modern web and network data modeling: both theory and applications. In Web and Network Data Science, a top faculty member of Northwestern University’s prestigious analytics program presents the first fully integrated treatment of both the business and academic elements of web and network modeling for predictive analytics.
Some books in this field focus either entirely on business issues (e.g., Google Analytics and SEO); others are strictly academic (covering topics such as sociology, complexity theory, ecology, applied physics, and economics). This text gives today's managers and students what they really need: integrated coverage of concepts, principles, and theory in the context of real-world applications.
Building on his pioneering Web Analytics course at Northwestern University, Thomas W. Miller covers usability testing, Web site performance, usage analysis, social media platforms, search engine optimization (SEO), and many other topics. He balances this practical coverage with accessible and up-to-date introductions to both social network analysis and network science, demonstrating how these disciplines can be used to solve real business problems.
- Covers all versions of Excel.
- Display sums and counts without using formulas.
- Master the basics of COUNT, COUNTA, COUNTBLANK, and other counting functions.
- Create conditional counts with COUNTIF and COUNTIFS.
- Calculate the mode for numeric or text values.
- Count unique values in a range.
- Count occurrences of specific text strings.
- Create frequency distributions and histograms.
- Master the basics of the SUM function.
- Use AutoSum to sum values quickly.
- Calculate running totals.
- Sum only the highest or lowest values in a range.
- Eliminate rounding errors in financial calculations.
- Sum every Nth value in a range.
- Create conditional sums with SUMIF and SUMIFS.
- Plenty of tips, tricks, and timesavers.
- Fully cross-referenced, linked, and searchable.
1. Getting Started with Sums & Counts
2. Counting Basics
3. Counting Tricks
4. Frequency Distributions
5. Summing Basics
6. Summing Tricks
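The conditional counts and sums that COUNTIF, COUNTIFS, SUMIF, and SUMIFS produce in Excel have direct one-line equivalents in most languages. A Python sketch (my illustration, not from the book):

```python
values = [12, 7, 19, 7, 25, 3, 19, 19]

# COUNTIF(range, ">10") and SUMIF(range, ">10") equivalents:
count_over_10 = sum(1 for v in values if v > 10)
sum_over_10 = sum(v for v in values if v > 10)

# Count unique values and build a frequency distribution:
unique_count = len(set(values))
frequency = {v: values.count(v) for v in set(values)}

print(count_over_10, sum_over_10)  # 5 94
print(unique_count)                # 5
```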
This book is aimed at business analysts with basic programming skills who want to use R for business analytics. Its scope is neither statistical theory nor graduate-level research in statistics; it is written for business analytics practitioners. Business analytics (BA) refers to the field of exploration and investigation of data generated by businesses. Business intelligence (BI) is the seamless dissemination of information through the organization, primarily involving past and current business metrics for decision support. Data mining (DM) is the process of discovering new patterns in large data sets using algorithms and statistical methods. To differentiate between the three: BI is mostly current reports, BA builds models to predict and strategize, and DM matches patterns in big data. The R statistical software is the fastest-growing analytics platform in the world, and is established in both academia and corporations for robustness, reliability, and accuracy.
The book takes to heart Albert Einstein’s famous remark about making things as simple as possible, but no simpler. It will dispel the last remaining doubts in your mind about using R in your business environment. Even non-technical users will enjoy the easy-to-use examples. The interviews with creators and corporate users of R make the book very readable. The author firmly believes Isaac Asimov was a better writer in spreading science than any textbook or journal author.
Updated throughout, the second edition features three new chapters—growth modeling with ordered categorical variables, growth mixture modeling, and pooled interrupted time series LGM approaches. Following a new organization, the book now covers the development of the LGM, followed by chapters on multiple-group issues (analyzing growth in multiple populations, accelerated designs, and multi-level longitudinal approaches), and then special topics such as missing data models, LGM power and Monte Carlo estimation, and latent growth interaction models. The model specifications previously included in the appendices are now available on the CD so the reader can more easily adapt the models to their own research.
This practical guide is ideal for a wide range of social and behavioral researchers interested in the measurement of change over time, including social, developmental, organizational, educational, consumer, personality and clinical psychologists, sociologists, and quantitative methodologists. It also works as a text for a course on latent variable growth curve modeling or as a supplement for a course on multivariate statistics. Graduate-level statistics is a recommended prerequisite.
“Represent[s] the full spectrum of the genre—from authoritative to playful.”—Scientific American
“Not only is it a thing of beauty, it’s also a good read, with thoughtful explanations of each winning graphic.”—Nature
“Information, in its raw form, can overwhelm us. Finding the visual form of data can simplify this deluge into pearls of understanding.” —Kim Rees, Periscopic
The most creative and effective data visualizations from the past year, edited by Brain Pickings creator Maria Popova
The rise of infographics across nearly all print and electronic media—from a graphic illuminating the tweets of the women of Isis to a memorable depiction of the national geography of beer—reveals patterns in our lives and the world in often startling ways. The Best American Infographics 2015 showcases visualizations from the worlds of politics, social issues, health, sports, arts and culture, and more. From an elegant graphic comparison of first sentences in classic novels to a startling illustration of the world’s deadliest animals, “You’ll come away with more than your share of . . . mind-bending moments—and a wide-ranging view of what infographics can do” (Harvard Business Review).
“This is what information design does at its best – it gives pause, makes visible the unsuspected yet significant invisibilia of life, and by astonishing us into mobilization, it catapults us toward one of the greatest feats of human courage: the act of changing one’s mind.”—from the Introduction by Maria Popova
Guest introducer MARIA POPOVA is the one-woman curation machine behind Brain Pickings, a cross-disciplinary blog showcasing content that makes people smarter. She has more than half a million monthly readers and over 480,000 Twitter followers. Popova is an MIT Futures of Entertainment Fellow and has written for the New York Times, Atlantic, Wired UK, GOOD Magazine, The Huffington Post, and the Nieman Journalism Lab.
Series editor GARETH COOK is a Pulitzer Prize–winning journalist, a contributor to the New York Times Magazine, and the editor of Mind Matters, Scientific American’s neuroscience blog. He helped invent the Boston Globe’s Sunday Ideas section and served as its editor from 2007 to 2011. His work has also appeared in NewYorker.com, WIRED, Scientific American, and The Best American Science and Nature Writing.
How to present charts and tables that viewers will grasp immediately: visual information anyone can use!
In an information-overloaded world, you simply must present information effectively. Using charts and tables, you can present categorical and numerical data far more clearly and efficiently. In this Element, we’ll show you exactly how to select and develop easy-to-understand charts and tables for the types of data you’re most likely to work with.
Important Notice: Media content referenced within the product description or the product text may not be available in the ebook version.
The quality inspector is the person perhaps most closely involved with day-to-day activities intended to ensure that products and services meet customer expectations. The quality inspector is required to understand and apply a variety of tools and techniques as codified in the American Society for Quality (ASQ) Certified Quality Inspector (CQI) Body of Knowledge (BoK). The tools and techniques identified in the ASQ CQI BoK include technical math, metrology, inspection and test techniques, and quality assurance. Quality inspectors frequently work with the quality function of organizations in the various measurement and inspection laboratories, as well as on the shop floor supporting and interacting with quality engineers and production/service delivery personnel.
This handbook supports individuals preparing to perform, or those already performing, this type of work. It is intended to serve as a ready reference for quality inspectors and quality inspectors in training, as well as a comprehensive reference for those individuals preparing to take the ASQ CQI examination. Examples and problems used throughout the handbook are thoroughly explained, are algebra-based, and are drawn from real-world situations encountered in the quality profession.
To assist readers in using this book as a ready reference or as a study aid, the book has been organized to conform explicitly to the ASQ CQI BoK. Each chapter title, all major topical divisions within the chapters, and every main point have been titled and numbered exactly as they appear in the CQI BoK.
As the data deluge continues in today’s world, the need to master data mining, predictive analytics, and business analytics has never been greater. These techniques and tools provide unprecedented insights into data, enabling better decision making and forecasting, and ultimately the solution of increasingly complex problems.
Learn from the Creators of the RapidMiner Software
Written by leaders in the data mining community, including the developers of the RapidMiner software, RapidMiner: Data Mining Use Cases and Business Analytics Applications provides an in-depth introduction to the application of data mining and business analytics techniques and tools in scientific research, medicine, industry, commerce, and diverse other sectors. It presents the most powerful and flexible open source software solutions: RapidMiner and RapidAnalytics. The software and their extensions can be freely downloaded at www.RapidMiner.com.
Understand Each Stage of the Data Mining Process
The book and software tools cover all relevant steps of the data mining process, from data loading, transformation, integration, aggregation, and visualization to automated feature selection, automated parameter and process optimization, and integration with other tools, such as R packages or your IT infrastructure via web services. The book and software also extensively discuss the analysis of unstructured data, including text and image mining.
Easily Implement Analytics Approaches Using RapidMiner and RapidAnalytics
Each chapter describes an application, how to approach it with data mining methods, and how to implement it with RapidMiner and RapidAnalytics. These application-oriented chapters give you not only the necessary analytics to solve problems and tasks, but also reproducible, step-by-step descriptions of using RapidMiner and RapidAnalytics. The case studies serve as blueprints for your own data mining applications, enabling you to effectively solve similar problems.
Data Science in R: A Case Studies Approach to Computational Reasoning and Problem Solving illustrates the details involved in solving real computational problems encountered in data analysis. It reveals the dynamic and iterative process by which data analysts approach a problem and reason about different ways of implementing solutions.
The book’s collection of projects, comprehensive sample solutions, and follow-up exercises encompass practical topics pertaining to data processing, including:
- Non-standard, complex data formats, such as robot logs and email messages
- Text processing and regular expressions
- Newer technologies, such as Web scraping, Web services, Keyhole Markup Language (KML), and Google Earth
- Statistical methods, such as classification trees, k-nearest neighbors, and naïve Bayes
- Visualization and exploratory data analysis
- Relational databases and Structured Query Language (SQL)
- Simulation
- Algorithm implementation
- Large data and efficiency
Suitable for self-study or as supplementary reading in a statistical computing course, the book enables instructors to incorporate interesting problems into their courses so that students gain valuable experience and data science skills. Students learn how to acquire and work with unstructured or semistructured data as well as how to narrow down and carefully frame the questions of interest about the data.
Blending computational details with statistical and data analysis concepts, this book provides readers with an understanding of how professional data scientists think about daily computational tasks. It will improve readers’ computational reasoning of real-world data analyses.
The aim of this book is to show how R can be used as the software tool in the development of Six Sigma projects. The book includes a gentle introduction to Six Sigma and a variety of examples showing how to use R in real situations. It has been conceived as a self-contained work; therefore, it is addressed not only to Six Sigma practitioners but also to professionals seeking an introduction to this management methodology. The book may be used as a textbook as well.
Data Mining Mobile Devices defines the collection of machine-sensed environmental data pertaining to human social behavior. It explains how the integration of data mining and machine learning can enable the modeling of conversation context, proximity sensing, and geospatial location throughout large communities of mobile users. The book:
- Examines the construction and leveraging of mobile sites
- Describes how to use mobile apps to gather key data about consumers’ behavior and preferences
- Discusses mobile mobs, which can be differentiated as distinct marketplaces—including Apple®, Google®, Facebook®, Amazon®, and Twitter®
- Provides detailed coverage of mobile analytics via clustering, text, and classification AI software and techniques
Mobile devices serve as detailed diaries of a person, continuously and intimately broadcasting where, how, when, and what products, services, and content your consumers desire. The future is mobile—data mining starts and stops in consumers' pockets.
Describing how to analyze Wi-Fi and GPS data from websites and apps, the book explains how to model mined data through the use of artificial intelligence software. It also discusses the monetization of mobile devices’ desires and preferences that can lead to the triangulated marketing of content, products, or services to billions of consumers—in a relevant, anonymous, and personal manner.
New York Times Bestseller
“Not so different in spirit from the way public intellectuals like John Kenneth Galbraith once shaped discussions of economic policy and public figures like Walter Cronkite helped sway opinion on the Vietnam War…could turn out to be one of the more momentous books of the decade.”
—New York Times Book Review
"Nate Silver's The Signal and the Noise is The Soul of a New Machine for the 21st century."
—Rachel Maddow, author of Drift
"A serious treatise about the craft of prediction—without academic mathematics—cheerily aimed at lay readers. Silver's coverage is polymathic, ranging from poker and earthquakes to climate change and terrorism."
—New York Review of Books
Nate Silver built an innovative system for predicting baseball performance, predicted the 2008 election within a hair’s breadth, and became a national sensation as a blogger—all by the time he was thirty. He solidified his standing as the nation's foremost political forecaster with his near perfect prediction of the 2012 election. Silver is the founder and editor in chief of FiveThirtyEight.com.
Drawing on his own groundbreaking work, Silver examines the world of prediction, investigating how we can distinguish a true signal from a universe of noisy data. Most predictions fail, often at great cost to society, because most of us have a poor understanding of probability and uncertainty. Both experts and laypeople mistake more confident predictions for more accurate ones. But overconfidence is often the reason for failure. If our appreciation of uncertainty improves, our predictions can get better too. This is the “prediction paradox”: The more humility we have about our ability to make predictions, the more successful we can be in planning for the future.
In keeping with his own aim to seek truth from data, Silver visits the most successful forecasters in a range of areas, from hurricanes to baseball, from the poker table to the stock market, from Capitol Hill to the NBA. He explains and evaluates how these forecasters think and what bonds they share. What lies behind their success? Are they good—or just lucky? What patterns have they unraveled? And are their forecasts really right? He explores unanticipated commonalities and exposes unexpected juxtapositions. And sometimes, it is not so much how good a prediction is in an absolute sense that matters but how good it is relative to the competition. In other cases, prediction is still a very rudimentary—and dangerous—science.
Silver observes that the most accurate forecasters tend to have a superior command of probability, and they tend to be both humble and hardworking. They distinguish the predictable from the unpredictable, and they notice a thousand little details that lead them closer to the truth. Because of their appreciation of probability, they can distinguish the signal from the noise.
With everything from the health of the global economy to our ability to fight terrorism dependent on the quality of our predictions, Nate Silver’s insights are an essential read.
- Illustrative examples using Mplus 7.4 include conceptual figures, Mplus program syntax, and an interpretation of results to show readers how to carry out the analyses with actual data.
- Exercises with an answer key allow readers to practice the skills they learn.
- Applications to a variety of disciplines appeal to those in the behavioral, social, political, educational, occupational, business, and health sciences.
- Data files for all the illustrative examples and exercises at www.routledge.com/9781138925151 allow readers to test their understanding of the concepts.
- Point to Remember boxes aid in reader comprehension or provide in-depth discussions of key statistical or theoretical concepts.
Part 1 introduces basic structural equation modeling (SEM) as well as first- and second-order growth curve modeling. The book opens with the basic concepts from SEM, possible extensions of conventional growth curve models, and the data and measures used throughout the book. The subsequent chapters in part 1 explain the extensions. Chapter 2 introduces conventional modeling of multidimensional panel data, including confirmatory factor analysis (CFA) and growth curve modeling, and its limitations. The logical and theoretical extension of a CFA to a second-order growth curve, known as the curve-of-factors model (CFM), is explained in Chapter 3. Chapter 4 illustrates the estimation and interpretation of unconditional and conditional CFMs. Chapter 5 presents the logical and theoretical extension of a parallel process model to a second-order growth curve, known as the factor-of-curves model (FCM). Chapter 6 illustrates the estimation and interpretation of unconditional and conditional FCMs. Part 2 reviews growth mixture modeling, including unconditional growth mixture modeling (Ch. 7) and conditional growth mixture models (Ch. 8). How to extend second-order growth curves (curve-of-factors and factor-of-curves models) to growth mixture models is highlighted in Chapter 9.
Ideal as a supplement for use in graduate courses on (advanced) structural equation, multilevel, longitudinal, or latent variable modeling, latent growth curve and mixture modeling, factor analysis, multivariate statistics, or advanced quantitative techniques (methods) taught in psychology, human development and family studies, business, education, health, and social sciences, this book’s practical approach also appeals to researchers. Prerequisites include a basic knowledge of intermediate statistics and structural equation modeling.
The book focuses on methods based on GLMs that have been found useful in actuarial practice and provides a set of tools for a tariff analysis. Basic theory of GLMs in a tariff analysis setting is presented, along with useful extensions of standard GLM theory that are not in common use.
The book meets the European Core Syllabus for actuarial education and is written for actuarial students as well as practicing actuaries. To support the reader, real data of some complexity are provided at www.math.su.se/GLMbook.
This is the eBook of the printed book and may not include any media, website access codes, or print supplements that may come packaged with the bound book.
Master business modeling and analysis techniques with Microsoft Excel 2016, and transform data into bottom-line results. Written by award-winning educator Wayne Winston, this hands-on, scenario-focused guide helps you use Excel’s newest tools to ask the right questions and get accurate, actionable answers. This edition adds 150+ new problems with solutions, plus a chapter of basic spreadsheet models to make sure you’re fully up to speed.
Solve real business problems with Excel–and build your competitive advantage:
- Quickly transition from Excel basics to sophisticated analytics
- Summarize data by using PivotTables and Descriptive Statistics
- Use Excel trend curves, multiple regression, and exponential smoothing
- Master advanced functions such as OFFSET and INDIRECT
- Delve into key financial, statistical, and time functions
- Leverage the new charts in Excel 2016 (including box and whisker and waterfall charts)
- Make charts more effective by using Power View
- Tame complex optimizations by using Excel Solver
- Run Monte Carlo simulations on stock prices and bidding models
- Work with the AGGREGATE function and table slicers
- Create PivotTables from data in different worksheets or workbooks
- Learn about basic probability and Bayes’ Theorem
- Automate repetitive tasks by using macros
Bassetti, a client, friend, and student of John Magee, one of the original authors, has converted the material on the craft of manual charting with TEKNIPLAT chart paper to modern computer software methods. In actuality, none of Magee’s concepts have proven invalid, and some of his work predated modern concepts such as beta and volatility. In addition, Magee described a trend-following procedure so simple and so elegant that Bassetti has adapted it to enable the general investor to replace the cranky Dow Theory. This procedure, called the Basing Points procedure, is extensively described in the new Tenth Edition, along with new material on powerful moving average systems and the Leverage Space Portfolio Model generously contributed by the formidable analyst Ralph Vince, author of The Handbook of Portfolio Mathematics.
See what’s new in the Tenth Edition:
- Chapters on replacing Dow Theory
- Update of Dow Theory Record
- Deletion of extraneous material on manual charting
- New chapters on Stops and Basing Points
- New material on moving average systems
- New material on Ralph Vince’s Leverage Space Portfolio Model
So much has changed since the first edition, yet so much has remained the same. Everyone wants to know how to play the game. The foundational work of the discipline of technical analysis, this book gives you more than a technical formula for trading and investing, it gives you the knowledge and wisdom to craft long-term success.
Updated throughout, this edition contains new chapters assessing the current options landscape, discussing margin collateral issues, and introducing Cohen’s exceptionally valuable OVI indicators.
The Bible of Options Strategies, Second Edition is practical from start to finish: modular, easy to navigate, and thoroughly cross-referenced, so you can find what you need fast and act before your opportunity disappears. Cohen systematically covers every key area of options strategy: income strategies, volatility strategies, sideways market strategies, leveraged strategies, and synthetic strategies.
Even the most complex techniques are explained with unsurpassed clarity – making them accessible to any trader with even modest options experience. More than an incredible value, this is the definitive reference to contemporary options trading: the one book you need by your side whenever you trade. For all options traders with at least some experience.
Did you know that to make a task seem easier, all you have to do is lean back a little? Or that retail salespeople who mimic the way their customers speak and behave end up selling more?
If you like stats like this, are intrigued by ideas, and find connecting the dots to be a critical part of your skill set—this book is for you.
Culled from Harvard Business Review’s popular newsletter, The Daily Stat, this book offers a compelling look at insights that both amuse and inform. Covering such managerial topics as teams, marketing, workplace psychology, and leadership, you’ll find a wide range of business statistics and general curiosities and oddities about professional life that will add an element of trivia and humor to your learning (and will make you appear smarter than your colleagues).
Highly quotable and surprisingly useful, Stats and Curiosities: From Harvard Business Review will keep you on the front lines of business research—and ahead of the pack at work.
Operational Risk: Modeling Analytics is organized around the principle that the analysis of operational risk consists, in part, of the collection of data and the building of mathematical models to describe risk. This book is designed to provide risk analysts with a framework of the mathematical models and methods used in the measurement and modeling of operational risk in both the banking and insurance sectors.
Beginning with a foundation for operational risk modeling and a focus on the modeling process, the book flows logically to discussion of probabilistic tools for operational risk modeling and statistical methods for calibrating models of operational risk. Exercises are included in chapters involving numerical computations for students' practice and reinforcement of concepts.
Written by Harry Panjer, one of the foremost authorities in the world on risk modeling and its effects in business management, this is the first comprehensive book dedicated to the quantitative assessment of operational risk using the tools of probability, statistics, and actuarial science.
In addition to providing great detail of the many probabilistic and statistical methods used in operational risk, this book features:
* Ample exercises to further elucidate the concepts in the text
* Definitive coverage of distribution functions and related concepts
* Models for the size of losses
* Models for frequency of loss
* Aggregate loss modeling
* Extreme value modeling
* Dependency modeling using copulas
* Statistical methods in model selection and calibration
Assuming no previous expertise in either operational risk terminology or in mathematical statistics, the text is designed for beginning graduate-level courses on risk and operational management or enterprise risk management. This book is also useful as a reference for practitioners in both enterprise risk management and risk and operational management.
Each chapter begins with an overview of key material reviewed in previous chapters, concludes with a list of suggested readings, and features boxes with examples that connect theory to practice. These examples reflect actual situations that occurred in psychology, education, and other disciplines in the US and around the globe, bringing theory to life. Critical thinking questions related to the boxed material engage and challenge readers. A few examples include:
What is the difference between intelligence and IQ?
Can people disagree on issues of value but agree on issues of test validity?
Is it possible to ask the same question in two different languages?
The first part of the book contrasts theories of measurement as applied to the validity of behavioral science measures. The next part considers causal theories of measurement in relation to alternatives such as behavior domain sampling, and then unpacks the causal approach in terms of alternative theories of causation. The final section explores the meaning and interpretation of test scores as it applies to test validity. Each set of chapters opens with a review of the key theories and literature and concludes with a review of related open questions in test validity theory.
Researchers, practitioners, and policy makers interested in test validity or in developing tests will appreciate the book’s cutting-edge review of test validity. The book also serves as a supplement in graduate or advanced undergraduate courses on test validity, psychometrics, testing, or measurement taught in psychology, education, sociology, social work, political science, business, criminal justice, and other fields. The book does not assume a background in measurement.
After introducing the concepts of probability, random variables, and probability density functions, the author develops the key concepts of mathematical statistics, most notably: expectation, sampling, asymptotics, and the main families of distributions. The latter half of the book is then devoted to the theories of estimation and hypothesis testing with associated examples and problems that indicate their wide applicability in economics and business. Features of the new edition include: a reorganization of topic flow and presentation to facilitate reading and understanding; inclusion of additional topics of relevance to statistics and econometric applications; a more streamlined and simple-to-understand notation for multiple integration and multiple summation over general sets or vector arguments; updated examples; new end-of-chapter problems; a solution manual for students; a comprehensive answer manual for instructors; and a theorem and definition map.
This book has evolved from numerous graduate courses in mathematical statistics and econometrics taught by the author, and will be ideal for students beginning graduate study as well as for advanced undergraduates.