For those who slept through Stats 101, this book is a lifesaver. Wheelan strips away the arcane and technical details and focuses on the underlying intuition that drives statistical analysis. He clarifies key concepts such as inference, correlation, and regression analysis, reveals how biased or careless parties can manipulate or misrepresent data, and shows us how brilliant and creative researchers are exploiting the valuable data from natural experiments to tackle thorny questions.
And in Wheelan’s trademark style, there’s not a dull page in sight. You’ll encounter clever Schlitz Beer marketers leveraging basic probability, an International Sausage Festival illuminating the tenets of the central limit theorem, and a head-scratching choice from the famous game show Let’s Make a Deal—and you’ll come away with insights each time. With the wit, accessibility, and sheer fun that turned Naked Economics into a bestseller, Wheelan defies the odds yet again by bringing another essential, formerly unglamorous discipline to life.
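The Let's Make a Deal puzzle Wheelan mentions (better known as the Monty Hall problem) is easy to check empirically. A minimal simulation sketch, not taken from the book:

```python
import random

def monty_hall(trials=100_000, switch=True):
    """Simulate the Let's Make a Deal puzzle: a prize hides behind one of
    three doors; after the player picks, the host opens a different,
    prize-free door, and the player may switch to the remaining door."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)
        pick = random.randrange(3)
        if switch:
            # Switching wins exactly when the first pick was wrong,
            # because the host has eliminated the other losing door.
            wins += (pick != prize)
        else:
            wins += (pick == prize)
    return wins / trials
```

Running it shows the counterintuitive answer: switching wins about two-thirds of the time, staying only one-third.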
This textbook provides a comprehensive introduction to forecasting methods and presents enough information about each method for readers to use them sensibly.
“[Taleb is] Wall Street’s principal dissident. . . . [Fooled By Randomness] is to conventional Wall Street wisdom approximately what Martin Luther’s ninety-five theses were to the Catholic Church.”
–Malcolm Gladwell, The New Yorker
Finally in paperback, the word-of-mouth sensation that will change the way you think about the markets and the world. This book is about luck: more precisely, how we perceive luck in our personal and professional experiences.
Set against the backdrop of the most conspicuous forum in which luck is mistaken for skill–the world of business–Fooled by Randomness is an irreverent, iconoclastic, eye-opening, and endlessly entertaining exploration of one of the least understood forces in all of our lives.
Based on an MBA course Provost has taught at New York University over the past ten years, Data Science for Business provides examples of real-world business problems to illustrate these principles. You’ll not only learn how to improve communication between business stakeholders and data scientists, but also how to participate intelligently in your company’s data science projects. You’ll also discover how to think data-analytically, and fully appreciate how data science methods can support business decision-making.
• Understand how data science fits in your organization—and how you can use it for competitive advantage
• Treat data as a business asset that requires careful investment if you’re to gain real value
• Approach business problems data-analytically, using the data-mining process to gather good data in the most appropriate way
• Learn general concepts for actually extracting knowledge from data
• Apply data science principles when interviewing data science job candidates
Complexity surrounds us. We have too much email, juggle multiple remotes, and hack through thickets of regulations from phone contracts to health plans. But complexity isn’t destiny. Sull and Eisenhardt argue there’s a better way. By developing a few simple yet effective rules, people can best even the most complex problems.
In Simple Rules, Sull and Eisenhardt masterfully challenge how we think about complexity and offer a new lens on how to cope. They take us on a surprising tour of what simple rules are, where they come from, and why they work. The authors illustrate the six kinds of rules that really matter - for helping artists find creativity and the Federal Reserve set interest rates, for keeping birds on track and Zipcar members organized, and for how insomniacs can sleep and mountain climbers stay safe.
Drawing on rigorous research and riveting stories, the authors ingeniously find insights in unexpected places, from the way Tina Fey codified her experience at Saturday Night Live into rules for producing 30 Rock (rule five: never tell a crazy person he’s crazy) to burglars’ rules for robbery (“avoid houses with a car parked outside”) to Japanese engineers mimicking the rules of slime molds to optimize Tokyo’s rail system. The authors offer fresh information and practical tips on fixing old rules and learning new ones.
Whether you’re struggling with information overload, pursuing opportunities with limited resources, or just trying to change your bad habits, Simple Rules provides powerful insight into how and why simplicity tames complexity.
This book shows you how to validate your initial idea, find the right customers, decide what to build, monetize your business, and spread the word. Packed with more than thirty case studies and insights from over a hundred business experts, Lean Analytics provides you with hard-won, real-world information no entrepreneur can afford to go without.
• Understand Lean Startup, analytics fundamentals, and the data-driven mindset
• Look at six sample business models and how they map to new ventures of all sizes
• Find the One Metric That Matters to you
• Learn how to draw a line in the sand, so you’ll know it’s time to move forward
• Apply Lean Analytics principles to large enterprises and established products
In the late 1980s, Japanese scientists were trying to figure out the economic damage that would be caused if a catastrophic earthquake destroyed Tokyo. The answer was bleak, but not for Japan. Kaoru Oda, an economist who worked for Tokai Bank, speculated that the United States would end up paying the most. Why? Japan owned trillions of dollars’ worth of foreign liquid assets and investments. These assets, which the world depended on, would be sold, forcing countries into the precarious position of having to return large amounts of money they might not have. After the recent earthquake, Michael Lewis reexamined this hypothesis and came to a surprising conclusion. With his characteristic sense of humor and wit, Lewis, once again, explains the inner workings of a financial catastrophe.
“How a Tokyo Earthquake Could Devastate Wall Street” appears in Michael Lewis’s book The Money Culture.
New York Times Bestseller
A former Wall Street quant sounds an alarm on the mathematical models that pervade modern life — and threaten to rip apart our social fabric
We live in the age of the algorithm. Increasingly, the decisions that affect our lives—where we go to school, whether we get a car loan, how much we pay for health insurance—are being made not by humans, but by mathematical models. In theory, this should lead to greater fairness: Everyone is judged according to the same rules, and bias is eliminated.
But as Cathy O’Neil reveals in this urgent and necessary book, the opposite is true. The models being used today are opaque, unregulated, and uncontestable, even when they’re wrong. Most troubling, they reinforce discrimination: If a poor student can’t get a loan because a lending model deems him too risky (by virtue of his zip code), he’s then cut off from the kind of education that could pull him out of poverty, and a vicious spiral ensues. Models are propping up the lucky and punishing the downtrodden, creating a “toxic cocktail for democracy.” Welcome to the dark side of Big Data.
Tracing the arc of a person’s life, O’Neil exposes the black box models that shape our future, both as individuals and as a society. These “weapons of math destruction” score teachers and students, sort résumés, grant (or deny) loans, evaluate workers, target voters, set parole, and monitor our health.
O’Neil calls on modelers to take more responsibility for their algorithms and on policy makers to regulate their use. But in the end, it’s up to us to become more savvy about the models that govern our lives. This important book empowers us to ask the tough questions, uncover the truth, and demand change.
— Longlist for National Book Award (Non-Fiction)
— Goodreads, semi-finalist for the 2016 Goodreads Choice Awards (Science and Technology)
— Kirkus, Best Books of 2016
— New York Times, 100 Notable Books of 2016 (Non-Fiction)
— The Guardian, Best Books of 2016
— WBUR's "On Point," Best Books of 2016: Staff Picks
— Boston Globe, Best Books of 2016, Non-Fiction
“The leading indicators” shape our lives intimately, but few of us know where these numbers come from, what they mean, or why they rule the world. GDP, inflation, unemployment, trade, and a host of averages determine whether we feel optimistic or pessimistic about the country’s future and our own. They dictate whether businesses hire and invest, or fire and hunker down, whether governments spend trillions or try to reduce debt, whether individuals marry, buy a car, get a mortgage, or look for a job.
Zachary Karabell tackles the history and the limitations of each of our leading indicators. The solution is not to invent new indicators, but to become less dependent on a few simple figures and tap into the data revolution. We have unparalleled power to find the information we need, but only if we let go of the outdated indicators that lead and mislead us.
New to the fourth edition are the topics of common and special causes, outliers, and risk management tools. Besides the new topics, many current topics have been expanded to reflect changes in auditing practices since 2004 and ISO 19011 guidance, and they have been rewritten to promote the common elements of all types of system and process audits.
The handbook can be used by new auditors to gain an understanding of auditing. Experienced auditors will find it to be a useful reference. Audit managers and quality managers can use the handbook as a guide for leading their auditing programs. The handbook may also be used by trainers and educators as source material for teaching the fundamentals of auditing.
It used to be that to diagnose an illness, interpret legal documents, analyze foreign policy, or write a newspaper article you needed a human being with specific skills—and maybe an advanced degree or two. These days, high-level tasks are increasingly being handled by algorithms that can do precise work not only with speed but also with nuance. These “bots” started with human programming and logic, but now their reach extends beyond what their creators ever expected.
In this fascinating, frightening book, Christopher Steiner tells the story of how algorithms took over—and shows why the “bot revolution” is about to spill into every aspect of our lives, often silently, without our knowledge.
The May 2010 “Flash Crash” exposed Wall Street’s reliance on trading bots to the tune of a 998-point market drop and $1 trillion in vanished market value. But that was just the beginning. In Automate This, we meet bots that are driving cars, penning haiku, and writing music mistaken for Bach’s. They listen in on our customer service calls and figure out what Iran would do in the event of a nuclear standoff. There are algorithms that can pick out the most cohesive crew of astronauts for a space mission or identify the next Jeremy Lin. Some can even ingest statistics from baseball games and spit out pitch-perfect sports journalism indistinguishable from that produced by humans.
The interaction of man and machine can make our lives easier. But what will the world look like when algorithms control our hospitals, our roads, our culture, and our national security? What happens to businesses when we automate judgment and eliminate human instinct? And what role will be left for doctors, lawyers, writers, truck drivers, and many others? Who knows—maybe there’s a bot learning to do your job this minute.
So why is it so hard to make sound decisions? In Think Twice, now in paperback, Michael Mauboussin argues that we often fall victim to simplified mental routines that prevent us from coping with the complex realities inherent in important judgment calls. Yet these cognitive errors are preventable.
In this engaging book, Mauboussin shows us how to recognize and avoid common mental missteps. These include misunderstanding cause-and-effect linkages, not considering enough alternative possibilities in making a decision, and relying too much on experts.
Through vivid stories, the author presents memorable rules for avoiding each error and explains how to recognize when you should “think twice”—questioning your reasoning and adopting decision-making strategies that are far more effective, even if they seem counterintuitive. Armed with this awareness, you'll soon begin making sounder judgment calls that benefit (rather than hurt) your organization.
How can you use Excel and Power BI to gain real insights into your information? As you examine your data, how do you write a formula that provides the numbers you need? The answers to both of these questions lie with the data model. This book introduces the basic techniques for shaping data models in Excel and Power BI. It’s meant for readers who are new to data modeling as well as for experienced data modelers looking for tips from the experts. If you want to use Power BI or Excel to analyze data, the many real-world examples in this book will help you look at your reports in a different way–like experienced data modelers do. As you’ll soon see, with the right data model, the correct answer is always a simple one!
By reading this book, you will:
• Gain an understanding of the basics of data modeling, including tables, relationships, and keys
• Familiarize yourself with star schemas, snowflakes, and common modeling techniques
• Learn the importance of granularity
• Discover how to use multiple fact tables, like sales and purchases, in a complex data model
• Manage calendar-related calculations by using date tables
• Track historical attributes, like previous addresses of customers or manager assignments
• Use snapshots to compute quantity on hand
• Work with multiple currencies in the most efficient way
• Analyze events that have durations, including overlapping durations
• Learn what data model you need to answer your specific business questions
About This Book
• For Excel and Power BI users who want to exploit the full power of their favorite tools
• For BI professionals seeking new ideas for modeling data
Accompanying the book is the Exploratory Software for Confidence Intervals (ESCI) package, free software that runs under Excel and is accessible at www.thenewstatistics.com. The book’s exercises use ESCI's simulations, which are highly visual and interactive, to engage users and encourage exploration. Working with the simulations strengthens understanding of key statistical ideas. There are also many examples, and detailed guidance to show readers how to analyze their own data using the new statistics, and practical strategies for interpreting the results. A particular strength of the book is its explanation of meta-analysis, using simple diagrams and examples. Understanding meta-analysis is increasingly important, even at undergraduate levels, because medicine, psychology and many other disciplines now use meta-analysis to assemble the evidence needed for evidence-based practice.
The book’s pedagogical program, built on cognitive science principles, reinforces learning:
Boxes provide "evidence-based" advice on the most effective statistical techniques. Numerous examples reinforce learning, and show that many disciplines are using the new statistics. Graphs are tied in with ESCI to make important concepts vividly clear and memorable. Opening overviews and end of chapter take-home messages summarize key points. Exercises encourage exploration, deep understanding, and practical applications.
This highly accessible book is intended as the core text for any course that emphasizes the new statistics, or as a supplementary text for graduate and/or advanced undergraduate courses in statistics and research methods in departments of psychology, education, human development, nursing, and natural, social, and life sciences. Researchers and practitioners interested in understanding the new statistics, and future published research, will also appreciate this book. A basic familiarity with introductory statistics is assumed.
Hate math? No sweat. You’ll be amazed at how little you need. Like math? Optional "Equation Blackboard" sections reveal the mathematical foundations of statistics right before your eyes. If you need to understand, evaluate, or use statistics in business, academia, or anywhere else, this is the book you've been searching for!
But Hand is no believer in superstitions, prophecies, or the paranormal. His definition of "miracle" is thoroughly rational. No mystical or supernatural explanation is necessary to understand why someone is lucky enough to win the lottery twice, or is destined to be hit by lightning three times and still survive. All we need, Hand argues, is a firm grounding in a powerful set of laws: the laws of inevitability, of truly large numbers, of selection, of the probability lever, and of near enough.
Together, these constitute Hand's groundbreaking Improbability Principle. And together, they explain why we should not be so surprised to bump into a friend in a foreign country, or to come across the same unfamiliar word four times in one day. Hand wrestles with seemingly less explicable questions as well: what the Bible and Shakespeare have in common, why financial crashes are par for the course, and why lightning does strike the same place (and the same person) twice. Along the way, he teaches us how to use the Improbability Principle in our own lives—including how to cash in at a casino and how to recognize when a medicine is truly effective.
An irresistible adventure into the laws behind "chance" moments and a trusty guide for understanding the world and universe we live in, The Improbability Principle will transform how you think about serendipity and luck, whether it's in the world of business and finance or you're merely sitting in your backyard, tossing a ball into the air and wondering where it will land.
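Hand's law of truly large numbers in particular lends itself to a back-of-the-envelope check: given enough opportunities, a very rare event is almost certain to happen to someone. A hedged sketch with illustrative numbers (not Hand's own examples):

```python
def prob_at_least_once(p, n):
    """Probability that an event with per-trial probability p occurs
    at least once in n independent trials: 1 - (1 - p)**n."""
    return 1 - (1 - p) ** n

# A "one in a million" coincidence, given a million independent chances,
# happens with probability of roughly 0.63 -- hardly a miracle.
likely = prob_at_least_once(1e-6, 1_000_000)
```

The same arithmetic explains the double lottery winner: multiply a tiny per-draw chance by millions of players over thousands of draws, and the surprise evaporates.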
Crunch Big Data to optimize marketing and more!
Overwhelmed by all the Big Data now available to you? Not sure what questions to ask or how to ask them? Using Microsoft Excel and proven decision analytics techniques, you can distill all that data into manageable sets—and use them to optimize a wide variety of business and investment decisions. In Decision Analytics: Microsoft Excel, bestselling statistics expert and consultant Conrad Carlberg will show you how—hands-on and step-by-step.
Carlberg guides you through using decision analytics to segment customers (or anything else) into sensible and actionable groups and clusters. Next, you’ll learn practical ways to optimize a wide spectrum of decisions in business and beyond—from pricing to cross-selling, hiring to investments—even facial recognition software uses the techniques discussed in this book!
Through realistic examples, Carlberg helps you understand the techniques and assumptions that underlie decision analytics and use simple Excel charts to intuitively grasp the results. With this foundation in place, you can perform your own analyses in Excel and work with results produced by advanced stats packages such as SAS and SPSS.
This book comes with an extensive collection of downloadable Excel workbooks you can easily adapt to your own unique requirements, plus VBA code to streamline several of its most complex techniques.
• Classify data according to existing categories or naturally occurring clusters of predictor variables
• Cut massive numbers of variables and records down to size, so you can get the answers you really need
• Utilize cluster analysis to find patterns of similarity for market research and many other applications
• Learn how multiple discriminant analysis helps you classify cases
• Use MANOVA to decide whether groups differ on multivariate centroids
• Use principal components to explore data, find patterns, and identify latent factors
Register your book for access to all sample workbooks, updates, and corrections as they become available at quepublishing.com/title/9780789751683.
Lawrence Weinstein and John Adam present an eclectic array of estimation problems that range from devilishly simple to quite sophisticated and from serious real-world concerns to downright silly ones. How long would it take a running faucet to fill the inverted dome of the Capitol? What is the total length of all the pickles consumed in the US in one year? What are the relative merits of internal-combustion and electric cars, of coal and nuclear energy? The problems are marvelously diverse, yet the skills to solve them are the same. The authors show how easy it is to derive useful ballpark estimates by breaking complex problems into simpler, more manageable ones--and how there can be many paths to the right answer. The book is written in a question-and-answer format with lots of hints along the way. It includes a handy appendix summarizing the few formulas and basic science concepts needed, and its small size and French-fold design make it conveniently portable. Illustrated with humorous pen-and-ink sketches, Guesstimation will delight popular-math enthusiasts and is ideal for the classroom.
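The faucet-and-dome question illustrates the decomposition Weinstein and Adam teach: replace one hard question with a few easy ones. A sketch with made-up round numbers (the dome size and flow rate below are my assumptions for illustration, not figures from the book):

```python
def fill_time_hours(volume_m3, flow_lpm):
    """Hours for a faucet with the given flow (liters per minute)
    to fill a volume given in cubic meters."""
    liters = volume_m3 * 1000  # 1 m^3 = 1000 L
    return liters / flow_lpm / 60

# Assume a roughly hemispherical dome ~30 m across (radius 15 m)
# and an ordinary faucet running at ~10 L/min.
dome_volume = (2 / 3) * 3.14 * 15 ** 3   # about 7,000 m^3
hours = fill_time_hours(dome_volume, 10)  # on the order of 12,000 hours
```

Under these assumptions the answer lands at over a year of continuous running—the point being that two rough sub-estimates get you within the right order of magnitude.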
New to This Edition
*Updated throughout to incorporate important developments in latent variable modeling.
*Chapter on Bayesian CFA and multilevel measurement models.
*Addresses new topics (with examples): exploratory structural equation modeling, bifactor analysis, measurement invariance evaluation with categorical indicators, and a new method for scaling latent variables.
*Utilizes the latest versions of major latent variable software packages.
- Covers all versions of Excel.
- Understand date and time serial numbers.
- Control how Excel interprets and formats dates and times.
- Resolve problems with two-digit years and negative times.
- Work around Excel's leap-year bug.
- Use the undocumented DATEDIF function.
- Generate series of dates and times.
- Convert imported text and numerical values to dates and times.
- Skip weekends and holidays in business and financial calculations.
- Find specific days of the month for holidays and paydays.
- Round times to the nearest hour, half-hour, minute, or any interval.
- Plenty of tips, tricks, and timesavers.
- Fully cross-referenced, linked, and searchable.
1. Getting Started with Dates & Times
2. Date & Time Basics
3. Date & Time Functions
4. Date Tricks
5. Time Tricks
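The serial numbers covered in the basics chapter are worth a quick illustration: Excel stores each date-time as a single number whose whole part counts days and whose fractional part is the time of day. A Python sketch of the conversion (valid for serials from 1900-03-01 onward, i.e. past the deliberate leap-year bug the book shows how to work around):

```python
from datetime import datetime, timedelta

# Epoch chosen so serials >= 61 (1900-03-01 onward) convert correctly;
# Excel wrongly treats 1900 as a leap year for Lotus 1-2-3 compatibility,
# so earlier serials are off by one day.
EXCEL_EPOCH = datetime(1899, 12, 30)

def serial_to_datetime(serial):
    """Convert an Excel serial number (whole part = days,
    fractional part = time of day) to a Python datetime."""
    return EXCEL_EPOCH + timedelta(days=serial)

def datetime_to_serial(dt):
    """Convert a Python datetime back to an Excel serial number."""
    return (dt - EXCEL_EPOCH) / timedelta(days=1)
```

For example, 2024-01-01 is serial 45292, and 45292.5 is noon on that day—which is why adding 1 moves a date forward a day and adding 1/24 moves a time forward an hour.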
The sixth edition is no exception. It provides an accessible, comprehensive introduction to the theory and practice of time series analysis. The treatment covers a wide range of topics, including ARIMA probability models, forecasting methods, spectral analysis, linear systems, state-space models, and the Kalman filter. It also addresses nonlinear, multivariate, and long-memory models. The author has carefully updated each chapter, added new discussions, incorporated new datasets, and made those datasets available for download from www.crcpress.com. A free online appendix on time series analysis using R can be accessed at http://people.bath.ac.uk/mascc/TSA.usingR.doc.
Highlights of the Sixth Edition:
• A new section on handling real data
• New discussion on prediction intervals
• A completely revised and restructured chapter on more advanced topics, with new material on the aggregation of time series, analyzing time series in finance, and discrete-valued time series
• A new chapter of examples and practical advice
• Thorough updates and revisions throughout the text that reflect recent developments and dramatic changes in computing practices over the last few years
The analysis of time series can be a difficult topic, but as this book has demonstrated for two-and-a-half decades, it does not have to be daunting. The accessibility, polished presentation, and broad coverage of The Analysis of Time Series make it simply the best introduction to the subject available.
New in the fourth edition of Latent Variable Models:
*a data CD that features the correlation and covariance matrices used in the exercises;
*new sections on missing data, non-normality, mediation, factorial invariance, and automating the construction of path diagrams; and
*reorganization of chapters 3-7 to enhance the flow of the book and its flexibility for teaching.
Intended for advanced students and researchers in the areas of social, educational, clinical, industrial, consumer, personality, and developmental psychology, sociology, political science, and marketing, some prior familiarity with correlation and regression is helpful.
Learn everything you need to know to start using business analytics and integrating it throughout your organization. Business Analytics: Principles, Concepts, and Applications brings together a complete, integrated package of knowledge for newcomers to the subject. The authors present an up-to-date view of what business analytics is, why it is so valuable, and most importantly, how it is used. They combine essential conceptual content with clear explanations of the tools, techniques, and methodologies actually used to implement modern business analytics initiatives.
They offer a proven step-wise approach to designing an analytics program, and successfully integrating it into your organization, so it effectively provides intelligence for competitive advantage in decision making.
Using step-by-step examples, the authors identify common challenges that can be addressed by business analytics, illustrate each type of analytics (descriptive, prescriptive, and predictive), and guide users in undertaking their own projects. Illustrating the real-world use of statistical, information systems, and management science methodologies, these examples help readers successfully apply the methods they are learning.
Unlike most competing guides, this text demonstrates the use of IBM’s menu-based SPSS software, permitting instructors to spend less time teaching software and more time focusing on business analytics itself.
A valuable resource for all beginning-to-intermediate-level business analysts and business analytics managers; for MBA/Masters' degree students in the field; and for advanced undergraduates majoring in statistics, applied mathematics, or engineering/operations research.
- Covers all versions of Excel.
- Display sums and counts without using formulas.
- Master the basics of COUNT, COUNTA, COUNTBLANK, and other counting functions.
- Create conditional counts with COUNTIF and COUNTIFS.
- Calculate the mode for numeric or text values.
- Count unique values in a range.
- Count occurrences of specific text strings.
- Create frequency distributions and histograms.
- Master the basics of the SUM function.
- Use AutoSum to sum values quickly.
- Calculate running totals.
- Sum only the highest or lowest values in a range.
- Eliminate rounding errors in financial calculations.
- Sum every Nth value in a range.
- Create conditional sums with SUMIF and SUMIFS.
- Plenty of tips, tricks, and timesavers.
- Fully cross-referenced, linked, and searchable.
1. Getting Started with Sums & Counts
2. Counting Basics
3. Counting Tricks
4. Frequency Distributions
5. Summing Basics
6. Summing Tricks
Master modern web and network data modeling: both theory and applications. In Web and Network Data Science, a top faculty member of Northwestern University’s prestigious analytics program presents the first fully-integrated treatment of both the business and academic elements of web and network modeling for predictive analytics.
Some books in this field focus either entirely on business issues (e.g., Google Analytics and SEO); others are strictly academic (covering topics such as sociology, complexity theory, ecology, applied physics, and economics). This text gives today's managers and students what they really need: integrated coverage of concepts, principles, and theory in the context of real-world applications.
Building on his pioneering Web Analytics course at Northwestern University, Thomas W. Miller covers usability testing, Web site performance, usage analysis, social media platforms, search engine optimization (SEO), and many other topics. He balances this practical coverage with accessible and up-to-date introductions to both social network analysis and network science, demonstrating how these disciplines can be used to solve real business problems.
This book is aimed at business analysts with basic programming skills who want to use R for business analytics. Its scope is neither statistical theory nor graduate-level statistical research; it is written for business analytics practitioners. Business analytics (BA) refers to the exploration and investigation of data generated by businesses. Business intelligence (BI) is the seamless dissemination of information through the organization, primarily involving past and current business metrics for decision support. Data mining (DM) is the process of discovering new patterns in large datasets using algorithms and statistical methods. To differentiate the three: BI is mostly current reporting, BA builds models to predict and strategize, and DM matches patterns in big data. The R statistical software is the fastest-growing analytics platform in the world, and is established in both academia and industry for its robustness, reliability, and accuracy.
The book takes to heart Albert Einstein’s famous remark about making things as simple as possible, but no simpler. It will dispel any remaining doubts about using R in your business environment, and even non-technical users will enjoy the easy-to-follow examples. Interviews with the creators and corporate users of R make the book very readable. The author firmly believes Isaac Asimov did more to spread science than any textbook or journal author.
Updated throughout, the second edition features three new chapters—growth modeling with ordered categorical variables, growth mixture modeling, and pooled interrupted time series LGM approaches. Following a new organization, the book now covers the development of the LGM, followed by chapters on multiple-group issues (analyzing growth in multiple populations, accelerated designs, and multi-level longitudinal approaches), and then special topics such as missing data models, LGM power and Monte Carlo estimation, and latent growth interaction models. The model specifications previously included in the appendices are now available on the CD so the reader can more easily adapt the models to their own research.
This practical guide is ideal for a wide range of social and behavioral researchers interested in the measurement of change over time, including social, developmental, organizational, educational, consumer, personality and clinical psychologists, sociologists, and quantitative methodologists, as well as for a text on latent variable growth curve modeling or as a supplement for a course on multivariate statistics. A prerequisite of graduate level statistics is recommended.
“Represent[s] the full spectrum of the genre—from authoritative to playful.”—Scientific American
“Not only is it a thing of beauty, it’s also a good read, with thoughtful explanations of each winning graphic.”—Nature
“Information, in its raw form, can overwhelm us. Finding the visual form of data can simplify this deluge into pearls of understanding.” —Kim Rees, Periscopic
The most creative and effective data visualizations from the past year, edited by Brain Pickings creator Maria Popova
The rise of infographics across nearly all print and electronic media—from a graphic illuminating the tweets of the women of Isis to a memorable depiction of the national geography of beer—reveals patterns in our lives and the world in often startling ways. The Best American Infographics 2015 showcases visualizations from the worlds of politics, social issues, health, sports, arts and culture, and more. From an elegant graphic comparison of first sentences in classic novels to a startling illustration of the world’s deadliest animals, “You’ll come away with more than your share of . . . mind-bending moments—and a wide-ranging view of what infographics can do” (Harvard Business Review).
“This is what information design does at its best – it gives pause, makes visible the unsuspected yet significant invisibilia of life, and by astonishing us into mobilization, it catapults us toward one of the greatest feats of human courage: the act of changing one’s mind.”—from the Introduction by Maria Popova
Guest introducer MARIA POPOVA is the one-woman curation machine behind Brain Pickings, a cross-disciplinary blog showcasing content that makes people smarter. She has more than half a million monthly readers and over 480,000 Twitter followers. Popova is an MIT Futures of Entertainment Fellow and has written for the New York Times, Atlantic, Wired UK, GOOD Magazine, The Huffington Post, and the Nieman Journalism Lab.
Series editor GARETH COOK is a Pulitzer Prize–winning journalist, a contributor to the New York Times Magazine, and the editor of Mind Matters, Scientific American’s neuroscience blog. He helped invent the Boston Globe’s Sunday Ideas section and served as its editor from 2007 to 2011. His work has also appeared in NewYorker.com, WIRED, Scientific American, and The Best American Science and Nature Writing.
How to present charts and tables that viewers will grasp immediately: visual information anyone can use!
In an information-overloaded world, you simply must present information effectively. Using charts and tables, you can present categorical and numerical data far more clearly and efficiently. In this Element, we’ll show you exactly how to select and develop easy-to-understand charts and tables for the types of data you’re most likely to work with.
This is the eBook of the printed book and may not include any media, website access codes, or print supplements that may come packaged with the bound book.
Master business modeling and analysis techniques with Microsoft Excel 2016, and transform data into bottom-line results. Written by award-winning educator Wayne Winston, this hands-on, scenario-focused guide helps you use Excel’s newest tools to ask the right questions and get accurate, actionable answers. This edition adds 150+ new problems with solutions, plus a chapter of basic spreadsheet models to make sure you’re fully up to speed.
Solve real business problems with Excel and build your competitive advantage:
* Quickly transition from Excel basics to sophisticated analytics
* Summarize data by using PivotTables and Descriptive Statistics
* Use Excel trend curves, multiple regression, and exponential smoothing
* Master advanced functions such as OFFSET and INDIRECT
* Delve into key financial, statistical, and time functions
* Leverage the new charts in Excel 2016 (including box-and-whisker and waterfall charts)
* Make charts more effective by using Power View
* Tame complex optimizations by using Excel Solver
* Run Monte Carlo simulations on stock prices and bidding models
* Work with the AGGREGATE function and table slicers
* Create PivotTables from data in different worksheets or workbooks
* Learn about basic probability and Bayes’ Theorem
* Automate repetitive tasks by using macros
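To give a flavor of the Monte Carlo stock-price simulations mentioned above: the book builds them in Excel, but the same idea can be sketched in a few lines of Python. This is a minimal illustration assuming a standard geometric Brownian motion model; the parameter values (drift, volatility, starting price) are purely illustrative, not taken from the book.

```python
import math
import random

def simulate_terminal_prices(s0, mu, sigma, years, steps, n_paths, seed=42):
    """Simulate terminal stock prices under geometric Brownian motion."""
    rng = random.Random(seed)
    dt = years / steps
    terminal = []
    for _ in range(n_paths):
        price = s0
        for _ in range(steps):
            z = rng.gauss(0.0, 1.0)  # one standard normal shock per step
            price *= math.exp((mu - 0.5 * sigma ** 2) * dt
                              + sigma * math.sqrt(dt) * z)
        terminal.append(price)
    return terminal

# Illustrative parameters: $100 stock, 7% drift, 20% volatility, one year
prices = simulate_terminal_prices(s0=100.0, mu=0.07, sigma=0.2,
                                  years=1.0, steps=252, n_paths=5000)
mean_price = sum(prices) / len(prices)  # should be near 100 * exp(0.07)
```

In a spreadsheet, each simulated path would occupy a row of cells driven by NORM.INV(RAND(), 0, 1); the Python loop above plays the same role.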
Did you know that to make a task seem easier, all you have to do is lean back a little? Or that retail salespeople who mimic the way their customers speak and behave end up selling more?
If you like stats like this, are intrigued by ideas, and find connecting the dots to be a critical part of your skill set—this book is for you.
Culled from Harvard Business Review’s popular newsletter, The Daily Stat, this book offers a compelling look at insights that both amuse and inform. Covering such managerial topics as teams, marketing, workplace psychology, and leadership, you’ll find a wide range of business statistics and general curiosities and oddities about professional life that will add an element of trivia and humor to your learning (and will make you appear smarter than your colleagues).
Highly quotable and surprisingly useful, Stats and Curiosities: From Harvard Business Review will keep you on the front lines of business research—and ahead of the pack at work.
The quality inspector is the person perhaps most closely involved with day-to-day activities intended to ensure that products and services meet customer expectations. The quality inspector is required to understand and apply a variety of tools and techniques as codified in the American Society for Quality (ASQ) Certified Quality Inspector (CQI) Body of Knowledge (BoK). The tools and techniques identified in the ASQ CQI BoK include technical math, metrology, inspection and test techniques, and quality assurance. Quality inspectors frequently work with the quality function of organizations in the various measurement and inspection laboratories, as well as on the shop floor supporting and interacting with quality engineers and production/service delivery personnel.
This handbook supports individuals preparing to perform, or those already performing, this type of work. It is intended to serve as a ready reference for quality inspectors and quality inspectors in training, as well as a comprehensive reference for those individuals preparing to take the ASQ CQI examination. Examples and problems used throughout the handbook are thoroughly explained, are algebra-based, and are drawn from real-world situations encountered in the quality profession.
To assist readers in using this book as a ready reference or as a study aid, the book has been organized so as to conform explicitly to the ASQ CQI BoK. Each chapter title, all major topical divisions within the chapters, and every main point have been titled and numbered exactly as they appear in the CQI BoK.
There is so much buzz around big data. We all need to know what it is and how it works - that much is obvious. But is a basic understanding of the theory enough to hold your own in strategy meetings? Probably. What will set you apart from the rest, though, is actually knowing how to USE big data to get solid, real-world business results - and putting that in place to improve performance. Big Data will give you a clear understanding, blueprint, and step-by-step approach to building your own big data strategy. This is a much-needed practical introduction to actually putting the topic into practice. Illustrated with numerous real-world examples from a cross section of companies and organisations, Big Data will take you through the five steps of the SMART model: Start with Strategy, Measure Metrics and Data, Apply Analytics, Report Results, Transform.
* Discusses how companies need to clearly define what it is they need to know
* Outlines how companies can collect relevant data and measure the metrics that will help them answer their most important business questions
* Addresses how the results of big data analytics can be visualised and communicated to ensure key decision-makers understand them
* Includes many high-profile case studies from the author's work with some of the world's best-known brands
As the data deluge continues in today’s world, the need to master data mining, predictive analytics, and business analytics has never been greater. These techniques and tools provide unprecedented insights into data, enabling better decision making and forecasting, and ultimately the solution of increasingly complex problems.
Learn from the Creators of the RapidMiner Software
Written by leaders in the data mining community, including the developers of the RapidMiner software, RapidMiner: Data Mining Use Cases and Business Analytics Applications provides an in-depth introduction to the application of data mining and business analytics techniques and tools in scientific research, medicine, industry, commerce, and diverse other sectors. It presents the most powerful and flexible open source software solutions: RapidMiner and RapidAnalytics. The software and their extensions can be freely downloaded at www.RapidMiner.com.
Understand Each Stage of the Data Mining Process
The book and software tools cover all relevant steps of the data mining process, from data loading, transformation, integration, aggregation, and visualization to automated feature selection, automated parameter and process optimization, and integration with other tools, such as R packages or your IT infrastructure via web services. The book and software also extensively discuss the analysis of unstructured data, including text and image mining.
Easily Implement Analytics Approaches Using RapidMiner and RapidAnalytics
Each chapter describes an application, how to approach it with data mining methods, and how to implement it with RapidMiner and RapidAnalytics. These application-oriented chapters give you not only the necessary analytics to solve problems and tasks, but also reproducible, step-by-step descriptions of using RapidMiner and RapidAnalytics. The case studies serve as blueprints for your own data mining applications, enabling you to effectively solve similar problems.
Data Science in R: A Case Studies Approach to Computational Reasoning and Problem Solving illustrates the details involved in solving real computational problems encountered in data analysis. It reveals the dynamic and iterative process by which data analysts approach a problem and reason about different ways of implementing solutions.
The book’s collection of projects, comprehensive sample solutions, and follow-up exercises encompass practical topics pertaining to data processing, including:
* Non-standard, complex data formats, such as robot logs and email messages
* Text processing and regular expressions
* Newer technologies, such as Web scraping, Web services, Keyhole Markup Language (KML), and Google Earth
* Statistical methods, such as classification trees, k-nearest neighbors, and naïve Bayes
* Visualization and exploratory data analysis
* Relational databases and Structured Query Language (SQL)
* Simulation
* Algorithm implementation
* Large data and efficiency
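As a small taste of the text-processing and regular-expression topics in that list: parsing semi-structured text such as email headers is a classic case-study task. The book itself works in R; the sketch below uses Python, and the sample message is invented purely for illustration.

```python
import re

# A made-up raw email header, illustrating the kind of
# semi-structured text such case studies parse.
raw_header = """From: analyst@example.com
Date: Tue, 03 Mar 2015 10:15:00 -0500
Subject: Quarterly robot-log summary"""

# Each header line has the form "Name: value"; a multiline regular
# expression extracts every (name, value) pair into a dictionary.
fields = dict(re.findall(r"^([\w-]+):\s*(.+)$", raw_header,
                         flags=re.MULTILINE))

sender = fields["From"]
subject = fields["Subject"]
```

From here, one would typically loop over thousands of messages and feed the extracted fields into a classifier such as naïve Bayes, another method on the list above.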
Suitable for self-study or as supplementary reading in a statistical computing course, the book enables instructors to incorporate interesting problems into their courses so that students gain valuable experience and data science skills. Students learn how to acquire and work with unstructured or semistructured data as well as how to narrow down and carefully frame the questions of interest about the data.
Blending computational details with statistical and data analysis concepts, this book provides readers with an understanding of how professional data scientists think about daily computational tasks. It will improve readers’ computational reasoning of real-world data analyses.
The aim of this book is to show how R can be used as the software tool in the development of Six Sigma projects. The book includes a gentle introduction to Six Sigma and a variety of examples showing how to use R within real situations. It has been conceived as a self-contained work, and is therefore addressed not only to Six Sigma practitioners but also to professionals seeking an introduction to this management methodology. The book may be used as a textbook as well.
Data Mining Mobile Devices defines the collection of machine-sensed environmental data pertaining to human social behavior. It explains how the integration of data mining and machine learning can enable the modeling of conversation context, proximity sensing, and geospatial location throughout large communities of mobile users.
* Examines the construction and leveraging of mobile sites
* Describes how to use mobile apps to gather key data about consumers’ behavior and preferences
* Discusses mobile mobs, which can be differentiated as distinct marketplaces, including Apple®, Google®, Facebook®, Amazon®, and Twitter®
* Provides detailed coverage of mobile analytics via clustering, text, and classification AI software and techniques
Mobile devices serve as detailed diaries of a person, continuously and intimately broadcasting where, how, when, and what products, services, and content your consumers desire. The future is mobile—data mining starts and stops in consumers' pockets.
Describing how to analyze Wi-Fi and GPS data from websites and apps, the book explains how to model mined data through the use of artificial intelligence software. It also discusses how monetizing mobile consumers’ desires and preferences can lead to the triangulated marketing of content, products, or services to billions of consumers in a relevant, anonymous, and personal manner.
*Includes worked-through, substantive examples, using large-scale educational and social science databases, such as PISA (Program for International Student Assessment) and the LSAY (Longitudinal Study of American Youth).
*Utilizes open-source R software programs available on CRAN (such as MCMCpack and rjags); readers do not have to master the R language and can easily adapt the example programs to fit individual needs.
*Shows readers how to carefully warrant priors on the basis of empirical data.
*Companion website features data and code for the book's examples, plus other resources.
Bassetti, a client, friend, and student of John Magee, one of the original authors, has converted the material on the craft of manual charting with TEKNIPLAT chart paper to modern computer software methods. In actuality, none of Magee’s concepts have proven invalid, and some of his work predated modern concepts such as beta and volatility. In addition, Magee described a trend-following procedure so simple and so elegant that Bassetti has adapted it to enable the general investor to replace the cranky Dow Theory. This procedure, called the Basing Points procedure, is extensively described in the new Tenth Edition, along with new material on powerful moving average systems and the Leverage Space Portfolio Model generously contributed by the formidable analyst Ralph Vince, author of Handbook of Portfolio Mathematics.
See what’s new in the Tenth Edition:
* Chapters on replacing Dow Theory
* Update of Dow Theory Record
* Deletion of extraneous material on manual charting
* New chapters on Stops and Basing Points
* New material on moving average systems
* New material on Ralph Vince’s Leverage Space Portfolio Model
So much has changed since the first edition, yet so much has remained the same. Everyone wants to know how to play the game. The foundational work of the discipline of technical analysis, this book gives you more than a technical formula for trading and investing; it gives you the knowledge and wisdom to craft long-term success.
Each chapter begins with an overview of key material reviewed in previous chapters, concludes with a list of suggested readings, and features boxes with examples that connect theory to practice. These examples reflect actual situations that occurred in psychology, education, and other disciplines in the US and around the globe, bringing theory to life. Critical thinking questions related to the boxed material engage and challenge readers. A few examples include:
What is the difference between intelligence and IQ?
Can people disagree on issues of value but agree on issues of test validity?
Is it possible to ask the same question in two different languages?
The first part of the book contrasts theories of measurement as applied to the validity of behavioral science measures. The next part considers causal theories of measurement in relation to alternatives such as behavior domain sampling, and then unpacks the causal approach in terms of alternative theories of causation. The final section explores the meaning and interpretation of test scores as it applies to test validity. Each set of chapters opens with a review of the key theories and literature and concludes with a review of related open questions in test validity theory.
Researchers, practitioners, and policy makers interested in test validity or developing tests will appreciate the book's cutting-edge review of test validity. The book also serves as a supplement in graduate or advanced undergraduate courses on test validity, psychometrics, testing, or measurement taught in psychology, education, sociology, social work, political science, business, criminal justice, and other fields. The book does not assume a background in measurement.
-Illustrative examples using Mplus 7.4 include conceptual figures, Mplus program syntax, and an interpretation of results to show readers how to carry out the analyses with actual data.
-Exercises with an answer key allow readers to practice the skills they learn.
-Applications to a variety of disciplines appeal to those in the behavioral, social, political, educational, occupational, business, and health sciences.
-Data files for all the illustrative examples and exercises at www.routledge.com/9781138925151 allow readers to test their understanding of the concepts.
-"Point to Remember" boxes aid in reader comprehension or provide in-depth discussions of key statistical or theoretical concepts.
Part 1 introduces basic structural equation modeling (SEM) as well as first- and second-order growth curve modeling. The book opens with the basic concepts from SEM, possible extensions of conventional growth curve models, and the data and measures used throughout the book. The subsequent chapters in part 1 explain the extensions. Chapter 2 introduces conventional modeling of multidimensional panel data, including confirmatory factor analysis (CFA) and growth curve modeling, and its limitations. The logical and theoretical extension of a CFA to a second-order growth curve, known as the curve-of-factors model (CFM), is explained in Chapter 3. Chapter 4 illustrates the estimation and interpretation of unconditional and conditional CFMs. Chapter 5 presents the logical and theoretical extension of a parallel process model to a second-order growth curve, known as the factor-of-curves model (FCM). Chapter 6 illustrates the estimation and interpretation of unconditional and conditional FCMs. Part 2 reviews growth mixture modeling, including unconditional growth mixture modeling (Ch. 7) and conditional growth mixture models (Ch. 8). How to extend second-order growth curves (curve-of-factors and factor-of-curves models) to growth mixture models is highlighted in Chapter 9.
Ideal as a supplement for use in graduate courses on (advanced) structural equation, multilevel, longitudinal, or latent variable modeling, latent growth curve and mixture modeling, factor analysis, multivariate statistics, or advanced quantitative techniques (methods) taught in psychology, human development and family studies, business, education, health, and social sciences, this book’s practical approach also appeals to researchers. Prerequisites include a basic knowledge of intermediate statistics and structural equation modeling.
The book focuses on methods based on GLMs that have been found useful in actuarial practice and provides a set of tools for a tariff analysis. Basic theory of GLMs in a tariff analysis setting is presented, with useful extensions of standard GLM theory that are not in common use.
The book meets the European Core Syllabus for actuarial education and is written for actuarial students as well as practicing actuaries. To support the reader, real data of some complexity are provided at www.math.su.se/GLMbook.
Updated throughout, this edition contains new chapters assessing the current options landscape, discussing margin collateral issues, and introducing Cohen’s exceptionally valuable OVI indicators.
The Bible of Options Strategies, Second Edition is practical from start to finish: modular, easy to navigate, and thoroughly cross-referenced, so you can find what you need fast, and act before your opportunity disappears. Cohen systematically covers every key area of options strategy: income strategies, volatility strategies, sideways market strategies, leveraged strategies, and synthetic strategies.
Even the most complex techniques are explained with unsurpassed clarity – making them accessible to any trader with even modest options experience. More than an incredible value, this is the definitive reference to contemporary options trading: the one book you need by your side whenever you trade. For all options traders with at least some experience.
Operational Risk: Modeling Analytics is organized around the principle that the analysis of operational risk consists, in part, of the collection of data and the building of mathematical models to describe risk. This book is designed to provide risk analysts with a framework of the mathematical models and methods used in the measurement and modeling of operational risk in both the banking and insurance sectors.
Beginning with a foundation for operational risk modeling and a focus on the modeling process, the book flows logically to discussion of probabilistic tools for operational risk modeling and statistical methods for calibrating models of operational risk. Exercises are included in chapters involving numerical computations for students' practice and reinforcement of concepts.
Written by Harry Panjer, one of the foremost authorities in the world on risk modeling and its effects in business management, this is the first comprehensive book dedicated to the quantitative assessment of operational risk using the tools of probability, statistics, and actuarial science.
In addition to providing great detail of the many probabilistic and statistical methods used in operational risk, this book features:
* Ample exercises to further elucidate the concepts in the text
* Definitive coverage of distribution functions and related concepts
* Models for the size of losses
* Models for frequency of loss
* Aggregate loss modeling
* Extreme value modeling
* Dependency modeling using copulas
* Statistical methods in model selection and calibration
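The frequency and severity models in the list above combine into aggregate loss models. A standard construction (not specific to this book) is the compound Poisson model: draw a Poisson number of loss events, then sum an independent severity draw for each. The Python sketch below illustrates the idea with an exponential severity; the parameter values are illustrative only.

```python
import math
import random

def sample_poisson(rng, lam):
    """Knuth's method: count uniforms until their product drops below e^-lam."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_aggregate_losses(lam, mean_severity, n_sims, seed=7):
    """Simulate annual aggregate losses: Poisson frequency, exponential severity."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_sims):
        n_events = sample_poisson(rng, lam)
        total = sum(rng.expovariate(1.0 / mean_severity)
                    for _ in range(n_events))
        totals.append(total)
    return totals

# Illustrative: 3 expected losses/year, mean severity 10 (in, say, $millions)
totals = simulate_aggregate_losses(lam=3.0, mean_severity=10.0, n_sims=20000)
mean_total = sum(totals) / len(totals)  # theoretical mean = 3 * 10 = 30
```

From such simulated totals one can read off tail quantiles, which is how capital requirements for operational risk are often framed.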
Assuming no previous expertise in either operational risk terminology or in mathematical statistics, the text is designed for beginning graduate-level courses on risk and operational management or enterprise risk management. This book is also useful as a reference for practitioners in both enterprise risk management and risk and operational management.
Visualization Analysis and Design provides a systematic, comprehensive framework for thinking about visualization in terms of principles and design choices. The book features a unified approach encompassing information visualization techniques for abstract data, scientific visualization techniques for spatial data, and visual analytics techniques for interweaving data transformation and analysis with interactive visual exploration. It emphasizes the careful validation of effectiveness and the consideration of function before form.
The book breaks down visualization design according to three questions: what data users need to see, why users need to carry out their tasks, and how the visual representations proposed can be constructed and manipulated. It walks readers through the use of space and color to visually encode data in a view, the trade-offs between changing a single view and using multiple linked views, and the ways to reduce the amount of data shown in each view. The book concludes with six case studies analyzed in detail with the full framework.
The book is suitable for a broad set of readers, from beginners to more experienced visualization designers. It does not assume any previous experience in programming, mathematics, human–computer interaction, or graphic design and can be used in an introductory visualization course at the graduate or undergraduate level.
—Devdatt Dubhashi, Professor, Department of Computer Science and Engineering, Chalmers University, Sweden
"This textbook manages to be easier to read than other comparable books in the subject while retaining all the rigorous treatment needed. The new chapters put it at the forefront of the field by covering topics that have become mainstream in machine learning over the last decade."
—Daniel Barbara, George Mason University, Fairfax, Virginia, USA
"The new edition of A First Course in Machine Learning by Rogers and Girolami is an excellent introduction to the use of statistical methods in machine learning. The book introduces concepts such as mathematical modeling, inference, and prediction, providing ‘just in time’ the essential background on linear algebra, calculus, and probability theory that the reader needs to understand these concepts."
—Daniel Ortiz-Arroyo, Associate Professor, Aalborg University Esbjerg, Denmark
"I was impressed by how closely the material aligns with the needs of an introductory course on machine learning, which is its greatest strength...Overall, this is a pragmatic and helpful book, which is well-aligned to the needs of an introductory course and one that I will be looking at for my own students in coming months."
—David Clifton, University of Oxford, UK
"The first edition of this book was already an excellent introductory text on machine learning for an advanced undergraduate or taught masters level course, or indeed for anybody who wants to learn about an interesting and important field of computer science. The additional chapters of advanced material on Gaussian process, MCMC and mixture modeling provide an ideal basis for practical projects, without disturbing the very clear and readable exposition of the basics contained in the first part of the book."
—Gavin Cawley, Senior Lecturer, School of Computing Sciences, University of East Anglia, UK
"This book could be used for junior/senior undergraduate students or first-year graduate students, as well as individuals who want to explore the field of machine learning...The book introduces not only the concepts but the underlying ideas on algorithm implementation from a critical thinking perspective."
—Guangzhi Qu, Oakland University, Rochester, Michigan, USA