For those who slept through Stats 101, this book is a lifesaver. Wheelan strips away the arcane and technical details and focuses on the underlying intuition that drives statistical analysis. He clarifies key concepts such as inference, correlation, and regression analysis, reveals how biased or careless parties can manipulate or misrepresent data, and shows us how brilliant and creative researchers are exploiting the valuable data from natural experiments to tackle thorny questions.
And in Wheelan’s trademark style, there’s not a dull page in sight. You’ll encounter clever Schlitz Beer marketers leveraging basic probability, an International Sausage Festival illuminating the tenets of the central limit theorem, and a head-scratching choice from the famous game show Let’s Make a Deal—and you’ll come away with insights each time. With the wit, accessibility, and sheer fun that turned Naked Economics into a bestseller, Wheelan defies the odds yet again by bringing another essential, formerly unglamorous discipline to life.
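The Let's Make a Deal choice mentioned above is the classic Monty Hall problem, and its counterintuitive answer (switching doors doubles your chance of winning) is easy to verify with a short simulation. The sketch below is not from the book; it is a minimal Python illustration of the result Wheelan discusses.

```python
import random

def play(switch, trials=100_000, seed=42):
    """Simulate the Monty Hall game and return the contestant's win rate."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)    # door hiding the car
        pick = rng.randrange(3)   # contestant's first pick
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Contestant switches to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

stay = play(switch=False)
swap = play(switch=True)
print(f"stay: {stay:.3f}  switch: {swap:.3f}")  # roughly 0.333 vs 0.667
```

Staying wins only when the first pick was right (probability 1/3); switching wins whenever it was wrong (probability 2/3).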
“[Taleb is] Wall Street’s principal dissident. . . . [Fooled By Randomness] is to conventional Wall Street wisdom approximately what Martin Luther’s ninety-five theses were to the Catholic Church.”
–Malcolm Gladwell, The New Yorker
Finally in paperback, the word-of-mouth sensation that will change the way you think about the markets and the world. This book is about luck: more precisely, how we perceive luck in our personal and professional experiences.
Set against the backdrop of the most conspicuous forum in which luck is mistaken for skill–the world of business–Fooled by Randomness is an irreverent, iconoclastic, eye-opening, and endlessly entertaining exploration of one of the least understood forces in all of our lives.
From the Trade Paperback edition.
This textbook provides a comprehensive introduction to forecasting methods and presents enough information about each method for readers to use them sensibly.
Based on an MBA course Provost has taught at New York University over the past ten years, Data Science for Business provides examples of real-world business problems to illustrate these principles. You’ll not only learn how to improve communication between business stakeholders and data scientists, but also how to participate intelligently in your company’s data science projects. You’ll also discover how to think data-analytically, and fully appreciate how data science methods can support business decision-making.
- Understand how data science fits in your organization—and how you can use it for competitive advantage
- Treat data as a business asset that requires careful investment if you’re to gain real value
- Approach business problems data-analytically, using the data-mining process to gather good data in the most appropriate way
- Learn general concepts for actually extracting knowledge from data
- Apply data science principles when interviewing data science job candidates
Complexity surrounds us. We have too much email, juggle multiple remotes, and hack through thickets of regulations from phone contracts to health plans. But complexity isn’t destiny. Sull and Eisenhardt argue there’s a better way. By developing a few simple yet effective rules, people can best even the most complex problems.
In Simple Rules, Sull and Eisenhardt masterfully challenge how we think about complexity and offer a new lens on how to cope. They take us on a surprising tour of what simple rules are, where they come from, and why they work. The authors illustrate the six kinds of rules that really matter - for helping artists find creativity and the Federal Reserve set interest rates, for keeping birds on track and Zipcar members organized, and for how insomniacs can sleep and mountain climbers stay safe.
Drawing on rigorous research and riveting stories, the authors ingeniously find insights in unexpected places, from the way Tina Fey codified her experience at Saturday Night Live into rules for producing 30 Rock (rule five: never tell a crazy person he’s crazy) to burglars’ rules for robbery (“avoid houses with a car parked outside”) to Japanese engineers mimicking the rules of slime molds to optimize Tokyo’s rail system. The authors offer fresh information and practical tips on fixing old rules and learning new ones.
Whether you’re struggling with information overload, pursuing opportunities with limited resources, or just trying to change your bad habits, Simple Rules provides powerful insight into how and why simplicity tames complexity.
This book shows you how to validate your initial idea, find the right customers, decide what to build, how to monetize your business, and how to spread the word. Packed with more than thirty case studies and insights from over a hundred business experts, Lean Analytics provides you with hard-won, real-world information no entrepreneur can afford to go without.
- Understand Lean Startup, analytics fundamentals, and the data-driven mindset
- Look at six sample business models and how they map to new ventures of all sizes
- Find the One Metric That Matters to you
- Learn how to draw a line in the sand, so you’ll know it’s time to move forward
- Apply Lean Analytics principles to large enterprises and established products
In the late 1980s, Japanese scientists were trying to figure out the economic damage that would be caused if a catastrophic earthquake destroyed Tokyo. The answer was bleak, but not for Japan. Kaoru Oda, an economist who worked for Tokai Bank, speculated that the United States would end up paying the most. Why? Japan owned trillions of dollars’ worth of foreign liquid assets and investments. These assets, which the world depended on, would be sold, forcing countries into the precarious position of having to return large amounts of money they might not have. After the recent earthquake, Michael Lewis reexamined this hypothesis and came to a surprising conclusion. With his characteristic sense of humor and wit, Lewis, once again, explains the inner workings of a financial catastrophe.
“How a Tokyo Earthquake Could Devastate Wall Street” appears in Michael Lewis’s book The Money Culture.
“The leading indicators” shape our lives intimately, but few of us know where these numbers come from, what they mean, or why they rule the world. GDP, inflation, unemployment, trade, and a host of averages determine whether we feel optimistic or pessimistic about the country’s future and our own. They dictate whether businesses hire and invest, or fire and hunker down, whether governments spend trillions or try to reduce debt, whether individuals marry, buy a car, get a mortgage, or look for a job.
Zachary Karabell tackles the history and the limitations of each of our leading indicators. The solution is not to invent new indicators, but to become less dependent on a few simple figures and tap into the data revolution. We have unparalleled power to find the information we need, but only if we let go of the outdated indicators that lead and mislead us.
New to the fourth edition are the topics of common and special causes, outliers, and risk management tools. Besides the new topics, many current topics have been expanded to reflect changes in auditing practices since 2004 and ISO 19011 guidance, and they have been rewritten to promote the common elements of all types of system and process audits.
The handbook can be used by new auditors to gain an understanding of auditing. Experienced auditors will find it to be a useful reference. Audit managers and quality managers can use the handbook as a guide for leading their auditing programs. The handbook may also be used by trainers and educators as source material for teaching the fundamentals of auditing.
The Failure of Risk Management takes a close look at misused and misapplied basic analysis methods and shows how some of the most popular "risk management" methods are no better than astrology! Using examples from the 2008 credit crisis, natural disasters, outsourcing to China, engineering disasters, and more, Hubbard reveals critical flaws in risk management methods–and shows how all of these problems can be fixed. The solutions involve combinations of scientifically proven and frequently used methods from nuclear power, exploratory oil, and other areas of business and government. Finally, Hubbard explains how new forms of collaboration across all industries and government can improve risk management in every field.
Douglas W. Hubbard (Glen Ellyn, IL) is the inventor of Applied Information Economics (AIE) and the author of Wiley's How to Measure Anything: Finding the Value of Intangibles in Business (978-0-470-11012-6), the #1 bestseller in business math on Amazon. He has applied innovative risk assessment and risk management methods in government and corporations since 1994.
"Doug Hubbard, a recognized expert among experts in the field of risk management, covers the entire spectrum of risk management in this invaluable guide. There are specific value-added takeaways in each chapter that are sure to enrich all readers, including IT, business management, students, and academics alike."
—Peter Julian, former chief information officer of the New York Metro Transit Authority; President of Alliance Group Consulting
"In his trademark style, Doug asks the tough questions on risk management. A must-read not only for analysts, but also for the executive who is making critical business decisions."
—Jim Franklin, VP Enterprise Performance Management and General Manager, Crystal Ball Global Business Unit, Oracle Corporation.
New to This Edition
*Extensively revised to cover important new topics: Pearl's graphing theory and the structural causal model (SCM), causal inference frameworks, conditional process modeling, path models for longitudinal data, item response theory, and more.
*Chapters on best practices in all stages of SEM, measurement invariance in confirmatory factor analysis, and significance testing issues and bootstrapping.
*Expanded coverage of psychometrics.
*Additional computer tools: online files for all detailed examples, previously provided in EQS, LISREL, and Mplus, are now also given in Amos, Stata, and R (lavaan).
*Reorganized to cover the specification, identification, and analysis of observed variable models separately from latent variable models.
*Exercises with answers, plus end-of-chapter annotated lists of further reading.
*Real examples of troublesome data, demonstrating how to handle typical problems in analyses.
*Topic boxes on specialized issues, such as causes of nonpositive definite correlations.
*Boxed rules to remember.
*Website promoting a learn-by-doing approach, including syntax and data files for six widely used SEM computer tools.
So why is it so hard to make sound decisions? In Think Twice, now in paperback, Michael Mauboussin argues that we often fall victim to simplified mental routines that prevent us from coping with the complex realities inherent in important judgment calls. Yet these cognitive errors are preventable.
In this engaging book, Mauboussin shows us how to recognize and avoid common mental missteps. These include misunderstanding cause-and-effect linkages, not considering enough alternative possibilities in making a decision, and relying too much on experts.
Through vivid stories, the author presents memorable rules for avoiding each error and explains how to recognize when you should “think twice”—questioning your reasoning and adopting decision-making strategies that are far more effective, even if they seem counterintuitive. Armed with this awareness, you'll soon begin making sounder judgment calls that benefit (rather than hurt) your organization.
Commercializing technology has never been easy, and it's getting tougher all the time. All the decisions you need to make are complicated by today's breakneck rates of change in enabling technology and by competitive pressures disseminated globally at the speed of the internet: Where to get ideas? Which to pursue? Whom to hire? Where to manufacture? How to fund? Create a startup or license to another? To answer these questions adequately and bring sophisticated products and services successfully to market, you need to deploy the systematic methods detailed in this book.
Jerry Schaufeld--serial technology entrepreneur, angel investor, and distinguished professor of entrepreneurship--presents in detail his proven step-by-step commercialization process, beginning with technology assessment and culminating with the successful launch of viable products into the global market. Using case studies, models, and practical tips culled from his entrepreneurial career, he shows readers of Commercializing Innovation how to:
- Source technology that can be turned into products
- Recognize an opportunity to create a viable product
- Perform feasibility analyses before sinking too much money into a project
- Find the right method and means to introduce the product to market
- Plan the project down to the last detail
- Execute the project in ways that improve chances of its success
- Comply with government regulation without crippling your project
- Decide whether offshore manufacturing is your best option
- Compete globally with globally sourced ideas and funding
Crunch Big Data to optimize marketing and more!
Overwhelmed by all the Big Data now available to you? Not sure what questions to ask or how to ask them? Using Microsoft Excel and proven decision analytics techniques, you can distill all that data into manageable sets—and use them to optimize a wide variety of business and investment decisions. In Decision Analytics: Microsoft Excel, bestselling statistics expert and consultant Conrad Carlberg will show you how—hands-on and step-by-step.
Carlberg guides you through using decision analytics to segment customers (or anything else) into sensible and actionable groups and clusters. Next, you’ll learn practical ways to optimize a wide spectrum of decisions in business and beyond—from pricing to cross-selling, hiring to investments—even facial recognition software uses the techniques discussed in this book!
Through realistic examples, Carlberg helps you understand the techniques and assumptions that underlie decision analytics and use simple Excel charts to intuitively grasp the results. With this foundation in place, you can perform your own analyses in Excel and work with results produced by advanced stats packages such as SAS and SPSS.
This book comes with an extensive collection of downloadable Excel workbooks you can easily adapt to your own unique requirements, plus VBA code to streamline several of its most complex techniques.
- Classify data according to existing categories or naturally occurring clusters of predictor variables
- Cut massive numbers of variables and records down to size, so you can get the answers you really need
- Utilize cluster analysis to find patterns of similarity for market research and many other applications
- Learn how multiple discriminant analysis helps you classify cases
- Use MANOVA to decide whether groups differ on multivariate centroids
- Use principal components to explore data, find patterns, and identify latent factors
Register your book for access to all sample workbooks, updates, and corrections as they become available at quepublishing.com/title/9780789751683.
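The cluster analysis Carlberg implements in Excel can also be sketched in a few lines of code. The following is a minimal, illustrative k-means in Python (not from the book, which works entirely in Excel workbooks): it groups points into clusters by repeatedly assigning each point to its nearest centroid and recomputing centroids as cluster means.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: assign points to nearest centroid, recompute, repeat."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Index of the centroid closest to p (squared Euclidean distance).
            j = min(range(k),
                    key=lambda j: (p[0] - centroids[j][0]) ** 2 +
                                  (p[1] - centroids[j][1]) ** 2)
            clusters[j].append(p)
        # New centroid = mean of its cluster (keep old one if cluster is empty).
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[j]
            for j, c in enumerate(clusters)
        ]
    return centroids, clusters

# Two obvious groups: "customers" near (1, 1) and near (10, 10)
pts = [(1, 1), (1.2, 0.9), (0.8, 1.1), (10, 10), (9.8, 10.2), (10.1, 9.9)]
cents, groups = kmeans(pts, 2)
```

On this toy data the algorithm recovers the two groups of three points each; the same assign-then-average loop underlies the segmentation workflows the book builds in Excel.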
But Hand is no believer in superstitions, prophecies, or the paranormal. His definition of "miracle" is thoroughly rational. No mystical or supernatural explanation is necessary to understand why someone is lucky enough to win the lottery twice, or is destined to be hit by lightning three times and still survive. All we need, Hand argues, is a firm grounding in a powerful set of laws: the laws of inevitability, of truly large numbers, of selection, of the probability lever, and of near enough.
Together, these constitute Hand's groundbreaking Improbability Principle. And together, they explain why we should not be so surprised to bump into a friend in a foreign country, or to come across the same unfamiliar word four times in one day. Hand wrestles with seemingly less explicable questions as well: what the Bible and Shakespeare have in common, why financial crashes are par for the course, and why lightning does strike the same place (and the same person) twice. Along the way, he teaches us how to use the Improbability Principle in our own lives—including how to cash in at a casino and how to recognize when a medicine is truly effective.
An irresistible adventure into the laws behind "chance" moments and a trusty guide for understanding the world and universe we live in, The Improbability Principle will transform how you think about serendipity and luck, whether it's in the world of business and finance or you're merely sitting in your backyard, tossing a ball into the air and wondering where it will land.
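Hand's law of truly large numbers is simple to see in arithmetic: give a very rare event enough opportunities and it becomes near-certain somewhere. The numbers below are illustrative assumptions of my own, not figures from the book: a one-in-a-million daily event across a population of 300 million.

```python
# Illustrative assumptions (not from the book): an event with a
# one-in-a-million chance per person per day, in a population of 300 million.
p = 1e-6
n = 300_000_000

expected = n * p                    # expected occurrences per day
p_at_least_one = 1 - (1 - p) ** n   # chance it happens to someone at all

print(expected)         # about 300 "miracles" per day
print(p_at_least_one)   # effectively 1.0: a near-certainty, not a miracle
```

The individually improbable becomes collectively inevitable, which is exactly why a double lottery winner should not astonish us.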
New to This Edition
*Updated throughout to incorporate important developments in latent variable modeling.
*Chapter on Bayesian CFA and multilevel measurement models.
*Addresses new topics (with examples): exploratory structural equation modeling, bifactor analysis, measurement invariance evaluation with categorical indicators, and a new method for scaling latent variables.
*Utilizes the latest versions of major latent variable software packages.
Lawrence Weinstein and John Adam present an eclectic array of estimation problems that range from devilishly simple to quite sophisticated and from serious real-world concerns to downright silly ones. How long would it take a running faucet to fill the inverted dome of the Capitol? What is the total length of all the pickles consumed in the US in one year? What are the relative merits of internal-combustion and electric cars, of coal and nuclear energy? The problems are marvelously diverse, yet the skills to solve them are the same. The authors show how easy it is to derive useful ballpark estimates by breaking complex problems into simpler, more manageable ones--and how there can be many paths to the right answer. The book is written in a question-and-answer format with lots of hints along the way. It includes a handy appendix summarizing the few formulas and basic science concepts needed, and its small size and French-fold design make it conveniently portable. Illustrated with humorous pen-and-ink sketches, Guesstimation will delight popular-math enthusiasts and is ideal for the classroom.
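The faucet-and-Capitol-dome question gives a feel for the book's method: break the problem into a volume estimate and a flow rate, then divide. The inputs below are rough assumptions of my own (a hemispherical dome of radius about 15 m, a faucet running at about 0.1 litres per second), not the authors' numbers.

```python
import math

# Rough assumptions for illustration (not the authors' figures):
radius_m = 15        # treat the dome as a hemisphere of radius ~15 m
flow_l_per_s = 0.1   # a typical household faucet, ~0.1 L/s

volume_l = (2 / 3) * math.pi * radius_m ** 3 * 1000  # m^3 -> litres
seconds = volume_l / flow_l_per_s
years = seconds / (3600 * 24 * 365)

print(f"~{years:.0f} years")  # on the order of a couple of years
```

The point is not the exact answer but the order of magnitude: years, not days or centuries, and any reasonable choice of inputs lands in the same ballpark.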
New in the fourth edition of Latent Variable Models:
*a data CD that features the correlation and covariance matrices used in the exercises;
*new sections on missing data, non-normality, mediation, factorial invariance, and automating the construction of path diagrams; and
*reorganization of chapters 3-7 to enhance the flow of the book and its flexibility for teaching.
Intended for advanced students and researchers in the areas of social, educational, clinical, industrial, consumer, personality, and developmental psychology, sociology, political science, and marketing; some prior familiarity with correlation and regression is helpful.
- Covers all versions of Excel.
- Understand date and time serial numbers.
- Control how Excel interprets and formats dates and times.
- Resolve problems with two-digit years and negative times.
- Work around Excel's leap-year bug.
- Use the undocumented DATEDIF function.
- Generate series of dates and times.
- Convert imported text and numerical values to dates and times.
- Skip weekends and holidays in business and financial calculations.
- Find specific days of the month for holidays and paydays.
- Round times to the nearest hour, half-hour, minute, or any interval.
- Plenty of tips, tricks, and timesavers.
- Fully cross-referenced, linked, and searchable.
Contents
1. Getting Started with Dates & Times
2. Date & Time Basics
3. Date & Time Functions
4. Date Tricks
5. Time Tricks
- Covers all versions of Excel.
- Display sums and counts without using formulas.
- Master the basics of COUNT, COUNTA, COUNTBLANK, and other counting functions.
- Create conditional counts with COUNTIF and COUNTIFS.
- Calculate the mode for numeric or text values.
- Count unique values in a range.
- Count occurrences of specific text strings.
- Create frequency distributions and histograms.
- Master the basics of the SUM function.
- Use AutoSum to sum values quickly.
- Calculate running totals.
- Sum only the highest or lowest values in a range.
- Eliminate rounding errors in financial calculations.
- Sum every Nth value in a range.
- Create conditional sums with SUMIF and SUMIFS.
- Plenty of tips, tricks, and timesavers.
- Fully cross-referenced, linked, and searchable.
Contents
1. Getting Started with Sums & Counts
2. Counting Basics
3. Counting Tricks
4. Frequency Distributions
5. Summing Basics
6. Summing Tricks
This book is aimed at business analysts with basic programming skills who want to use R for business analytics. Its scope is neither statistical theory nor graduate-level statistical research; rather, it is written for business analytics practitioners. Business analytics (BA) refers to the field of exploration and investigation of data generated by businesses. Business intelligence (BI) is the seamless dissemination of information through the organization, primarily involving past and current business metrics for decision support. Data mining (DM) is the process of discovering new patterns in large data sets using algorithms and statistical methods. To differentiate between the three: BI is mostly current reports, BA builds models to predict and strategize, and DM matches patterns in big data. The R statistical software is the fastest-growing analytics platform in the world, and is established in both academia and corporations for its robustness, reliability, and accuracy.
The book takes to heart Albert Einstein’s famous remark about making things as simple as possible, but no simpler. It will dispel the last remaining doubts in your mind about using R in your business environment. Even non-technical users will enjoy the easy-to-use examples. The interviews with creators and corporate users of R make the book very readable. The author firmly believes Isaac Asimov was a better writer in spreading science than any textbook or journal author.
The increased interest in dynamic pricing models stems from their applicability to practical situations: with the freeing of exchange rates, interest rates, and capital controls, the market for derivative products has matured and pricing models have become more accurate. This updated edition has six new chapters and chapter-concluding exercises, plus one thoroughly expanded chapter. The text answers the need for a resource targeting professionals, Ph.D. students, and advanced MBA students who are specifically interested in financial derivatives.
This edition is also designed to become the main text in first year masters and Ph.D. programs for certain courses, and will continue to be an important manual for market professionals and professionals with mathematical, technical, or physics backgrounds.
How to present charts and tables that viewers will grasp immediately: visual information anyone can use!
In an information-overloaded world, you simply must present information effectively. Using charts and tables, you can present categorical and numerical data far more clearly and efficiently. In this Element, we’ll show you exactly how to select and develop easy-to-understand charts and tables for the types of data you’re most likely to work with.
Master modern web and network data modeling: both theory and applications. In Web and Network Data Science, a top faculty member of Northwestern University’s prestigious analytics program presents the first fully integrated treatment of both the business and academic elements of web and network modeling for predictive analytics.
Some books in this field focus either entirely on business issues (e.g., Google Analytics and SEO); others are strictly academic (covering topics such as sociology, complexity theory, ecology, applied physics, and economics). This text gives today's managers and students what they really need: integrated coverage of concepts, principles, and theory in the context of real-world applications.
Building on his pioneering Web Analytics course at Northwestern University, Thomas W. Miller covers usability testing, Web site performance, usage analysis, social media platforms, search engine optimization (SEO), and many other topics. He balances this practical coverage with accessible and up-to-date introductions to both social network analysis and network science, demonstrating how these disciplines can be used to solve real business problems.
Cybersecurity is not just a technical subject that can be resolved like any other IT-related problem—it is a ‘risk’ that can be mitigated by creating awareness and getting the right combination of technology and practices based on careful analysis. This book combines insights on cybersecurity from academic research, media reports, vendor reports, practical consultation, and research experience.
The first section of the book discusses the motivations behind cybercrime and the types of cybercrime that can take place. The second lists the major types of threats that users might encounter. The third discusses the impact and trends of cybercrime and the role of the government in combating it. The fourth section of the book tells readers about ways to protect themselves and secure their data/information stored in computers and the cyberspace. It concludes by offering suggestions for building a secure cyber environment.
For easy reading, a comprehensive list of hundreds of topics, each with a graphic image and explanatory text, acts as a useful exam-revision reminder or reference tool for professionals.
The accompanying software, which brings all these images to life, can be downloaded at no extra charge, providing an additional computer-based interactive learning resource and an easy, enjoyable way to study.
A combined eBook and software package at a tiny fraction of the previously published price.
Unlock accompanying software with your eBook receipt!
This insightful and eloquent book will show you how to measure those things in your own business, government agency or other organization that, until now, you may have considered "immeasurable," including customer satisfaction, organizational flexibility, technology risk, and technology ROI.
- Adds new measurement methods, showing how they can be applied to a variety of areas such as risk management and customer satisfaction
- Simplifies overall content while still making the more technical applications available to those readers who want to dig deeper
- Continues to boldly assert that any perception of "immeasurability" is based on certain popular misconceptions about measurement and measurement methods
- Shows the common reasoning for calling something immeasurable, and sets out to correct those ideas
- Offers practical methods for measuring a variety of "intangibles"
- Provides an online database (www.howtomeasureanything.com) of downloadable, practical examples worked out in detailed spreadsheets
Written by recognized expert Douglas Hubbard—creator of Applied Information Economics—How to Measure Anything, Third Edition illustrates how the author has used his approach across various industries and how any problem, no matter how difficult, ill-defined, or uncertain, can lend itself to measurement using proven methods.
Important Notice: Media content referenced within the product description or the product text may not be available in the ebook version.
“Represent[s] the full spectrum of the genre—from authoritative to playful.”—Scientific American
“Not only is it a thing of beauty, it’s also a good read, with thoughtful explanations of each winning graphic.”—Nature
“Information, in its raw form, can overwhelm us. Finding the visual form of data can simplify this deluge into pearls of understanding.” —Kim Rees, Periscopic
The most creative and effective data visualizations from the past year, edited by Brain Pickings creator Maria Popova
The rise of infographics across nearly all print and electronic media—from a graphic illuminating the tweets of the women of Isis to a memorable depiction of the national geography of beer—reveals patterns in our lives and the world in often startling ways. The Best American Infographics 2015 showcases visualizations from the worlds of politics, social issues, health, sports, arts and culture, and more. From an elegant graphic comparison of first sentences in classic novels to a startling illustration of the world’s deadliest animals, “You’ll come away with more than your share of . . . mind-bending moments—and a wide-ranging view of what infographics can do” (Harvard Business Review).
“This is what information design does at its best – it gives pause, makes visible the unsuspected yet significant invisibilia of life, and by astonishing us into mobilization, it catapults us toward one of the greatest feats of human courage: the act of changing one’s mind.”—from the Introduction by Maria Popova
Guest introducer MARIA POPOVA is the one-woman curation machine behind Brain Pickings, a cross-disciplinary blog showcasing content that makes people smarter. She has more than half a million monthly readers and over 480,000 Twitter followers. Popova is an MIT Futures of Entertainment Fellow and has written for the New York Times, Atlantic, Wired UK, GOOD Magazine, The Huffington Post, and the Nieman Journalism Lab.
Series editor GARETH COOK is a Pulitzer Prize–winning journalist, a contributor to the New York Times Magazine, and the editor of Mind Matters, Scientific American’s neuroscience blog. He helped invent the Boston Globe’s Sunday Ideas section and served as its editor from 2007 to 2011. His work has also appeared in NewYorker.com, WIRED, Scientific American, and The Best American Science and Nature Writing.
Building an Enterprise Architecture Practice provides practical advice on how to develop your enterprise architecture practice. The authors developed different tools and models to support organizations in implementing and professionalizing an enterprise architecture function. The application of these tools and models in many different organizations forms the basis for this book. The result is a hands-on book that will help you to avoid certain pitfalls and achieve success with enterprise architecture.
A lot of organizations nowadays have a team of enterprise architects at work but struggle with questions like:
• How do I show the added value of enterprise architecture?
• How do I determine what specific architectures are necessary for my organization?
• What steps do I need to take to improve my enterprise architecture practice?
• How do I fulfill the role of enterprise architect?
These questions are answered in this book and illustrated with a lot of best practices.
After reading the book, the reader will have a better understanding of what makes enterprise architecture successful and will possess the tools to analyse their own situation and build an enterprise architecture practice accordingly.
This book clearly describes how to establish an architecture practice that delivers value for an organization. The authors demonstrate a wealth of experience and a deep understanding of the multifaceted nature of this challenging task and they provide sound advice on how to avoid the many pitfalls that may be encountered along the way.
Recognising that there is no 'one-size-fits-all' approach, they show how to deploy a range of practical tools and approaches that will enable each organization to create its own road map to success. In particular, their Maturity Matrix is invaluable for balancing architecture priorities and targeting improvements. The book makes a significant contribution to the professionalization of the architect role.
Sally Bean, Enterprise Architecture Consultant
Too many books on enterprise architecture leave one in a state of mental fuzziness: after reading them, the reader has learned a lot of impressive words but still does not know how to design an enterprise architecture. This step-by-step guide to DYA is different.
It provides pragmatic guidelines for developing enterprise architecture and presents a maturity model that helps the users of DYA to state realistic goals and to outline feasible steps to achieve these goals. Particularly useful is the emphasis on a coherent enterprise architecture vision, including the value added by the architecture. I warmly recommend this book to practicing enterprise architects.
Prof. Dr. Roel Wieringa, Universiteit Twente
Updated throughout, the second edition features three new chapters—growth modeling with ordered categorical variables, growth mixture modeling, and pooled interrupted time series LGM approaches. Following a new organization, the book now covers the development of the LGM, followed by chapters on multiple-group issues (analyzing growth in multiple populations, accelerated designs, and multi-level longitudinal approaches), and then special topics such as missing data models, LGM power and Monte Carlo estimation, and latent growth interaction models. The model specifications previously included in the appendices are now available on the CD so the reader can more easily adapt the models to their own research.
This practical guide is ideal for a wide range of social and behavioral researchers interested in the measurement of change over time, including social, developmental, organizational, educational, consumer, personality and clinical psychologists, sociologists, and quantitative methodologists. It can also serve as a text for a course on latent variable growth curve modeling or as a supplement for a course on multivariate statistics. A prerequisite of graduate-level statistics is recommended.
The quality inspector is the person perhaps most closely involved with day-to-day activities intended to ensure that products and services meet customer expectations. The quality inspector is required to understand and apply a variety of tools and techniques as codified in the American Society for Quality (ASQ) Certified Quality Inspector (CQI) Body of Knowledge (BoK). The tools and techniques identified in the ASQ CQI BoK include technical math, metrology, inspection and test techniques, and quality assurance. Quality inspectors frequently work with the quality function of organizations in the various measurement and inspection laboratories, as well as on the shop floor supporting and interacting with quality engineers and production/service delivery personnel.
This handbook supports individuals preparing to perform, or those already performing, this type of work. It is intended to serve as a ready reference for quality inspectors and quality inspectors in training, as well as a comprehensive reference for those individuals preparing to take the ASQ CQI examination. Examples and problems used throughout the handbook are thoroughly explained, are algebra-based, and are drawn from real-world situations encountered in the quality profession.
To assist readers in using this book as a ready reference or as a study aid, the book has been organized so as to conform explicitly to the ASQ CQI BoK. Each chapter title, all major topical divisions within the chapters, and every main point have been titled and numbered exactly as they appear in the CQI BoK.
Whether you're an aspiring manager, a current manager, or just wondering what the heck a manager does all day, there is a story in this book that will speak to you—and help you survive and prosper amid the general craziness of dysfunctional bright people caught up in the chase of riches and power. Scattered in repose among these manic misfits are managers, an even stranger breed of people who, through a mystical organizational ritual, have been given power over the futures and the bank accounts of many others.
Lopp's straight-from-the-hip style is unlike that of any other writer on management and leadership. He pulls no punches and tells stories he probably shouldn't. But they are magically instructive and yield Lopp’s trenchant insights on leadership that cut to the heart of the matter—whether it's dealing with your boss, handling a slacker, hiring top guns, or seeing a knotty project through to completion.
Writing code is easy. Managing humans is not. You need a book to help you do it, and this is it.
What You'll Learn
Manage your boss
Discover how to say no
Understand different engineering personalities
Build effective teams
Run a meeting well
Scale teams
Who This Book Is For
Managers and would-be managers staring at the role of a manager, wondering why they would ever leave the safe world of bits and bytes for the messy world of managing humans. The book covers handling conflict, managing wildly differing personality types, infusing innovation into insane product schedules, and figuring out how to build a lasting and useful engineering culture.
As the data deluge continues in today’s world, the need to master data mining, predictive analytics, and business analytics has never been greater. These techniques and tools provide unprecedented insights into data, enabling better decision making and forecasting, and ultimately the solution of increasingly complex problems.
Learn from the Creators of the RapidMiner Software
Written by leaders in the data mining community, including the developers of the RapidMiner software, RapidMiner: Data Mining Use Cases and Business Analytics Applications provides an in-depth introduction to the application of data mining and business analytics techniques and tools in scientific research, medicine, industry, commerce, and diverse other sectors. It presents the most powerful and flexible open source software solutions: RapidMiner and RapidAnalytics. The software and their extensions can be freely downloaded at www.RapidMiner.com.
Understand Each Stage of the Data Mining Process
The book and software tools cover all relevant steps of the data mining process, from data loading, transformation, integration, aggregation, and visualization to automated feature selection, automated parameter and process optimization, and integration with other tools, such as R packages or your IT infrastructure via web services. The book and software also extensively discuss the analysis of unstructured data, including text and image mining.
Easily Implement Analytics Approaches Using RapidMiner and RapidAnalytics
Each chapter describes an application, how to approach it with data mining methods, and how to implement it with RapidMiner and RapidAnalytics. These application-oriented chapters give you not only the necessary analytics to solve problems and tasks, but also reproducible, step-by-step descriptions of using RapidMiner and RapidAnalytics. The case studies serve as blueprints for your own data mining applications, enabling you to effectively solve similar problems.
Data Science in R: A Case Studies Approach to Computational Reasoning and Problem Solving illustrates the details involved in solving real computational problems encountered in data analysis. It reveals the dynamic and iterative process by which data analysts approach a problem and reason about different ways of implementing solutions.
The book’s collection of projects, comprehensive sample solutions, and follow-up exercises encompass practical topics pertaining to data processing, including:
• Non-standard, complex data formats, such as robot logs and email messages
• Text processing and regular expressions
• Newer technologies, such as Web scraping, Web services, Keyhole Markup Language (KML), and Google Earth
• Statistical methods, such as classification trees, k-nearest neighbors, and naïve Bayes
• Visualization and exploratory data analysis
• Relational databases and Structured Query Language (SQL)
• Simulation
• Algorithm implementation
• Large data and efficiency
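One of the statistical methods the book covers, k-nearest neighbors, can be sketched in a few lines of standard-library Python (a minimal illustration of the idea, not the book's own R code; the toy data and function name here are invented for the example):

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    # train: list of (features, label) pairs; Euclidean distance on the features
    neighbors = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy 2-D data: two well-separated classes
train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"), ((1.0, 1.0), "b"), ((0.9, 1.1), "b")]
print(knn_predict(train, (0.05, 0.1)))  # near the "a" cluster
print(knn_predict(train, (0.95, 1.0)))  # near the "b" cluster
```

The book works through methods like this in R with real datasets; the sketch above only shows the core voting logic.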
Suitable for self-study or as supplementary reading in a statistical computing course, the book enables instructors to incorporate interesting problems into their courses so that students gain valuable experience and data science skills. Students learn how to acquire and work with unstructured or semistructured data as well as how to narrow down and carefully frame the questions of interest about the data.
Blending computational details with statistical and data analysis concepts, this book provides readers with an understanding of how professional data scientists think about daily computational tasks. It will improve readers’ computational reasoning of real-world data analyses.
Wil van der Aalst delivers the first book on process mining. It aims to be self-contained while covering the entire process mining spectrum from process discovery to operational support. In Part I, the author provides the basics of business process modeling and data mining necessary to understand the remainder of the book. Part II focuses on process discovery as the most important process mining task. Part III moves beyond discovering the control flow of processes and highlights conformance checking, and organizational and time perspectives. Part IV guides the reader in successfully applying process mining in practice, including an introduction to the widely used open-source tool ProM. Finally, Part V takes a step back, reflecting on the material presented and the key open challenges. Overall, this book provides a comprehensive overview of the state of the art in process mining. It is intended for business process analysts, business consultants, process managers, graduate students, and BPM researchers.
Data Mining Mobile Devices defines the collection of machine-sensed environmental data pertaining to human social behavior. It explains how the integration of data mining and machine learning can enable the modeling of conversation context, proximity sensing, and geospatial location throughout large communities of mobile users.
• Examines the construction and leveraging of mobile sites
• Describes how to use mobile apps to gather key data about consumers’ behavior and preferences
• Discusses mobile mobs, which can be differentiated as distinct marketplaces, including Apple®, Google®, Facebook®, Amazon®, and Twitter®
• Provides detailed coverage of mobile analytics via clustering, text, and classification AI software and techniques
Mobile devices serve as detailed diaries of a person, continuously and intimately broadcasting where, how, when, and what products, services, and content your consumers desire. The future is mobile—data mining starts and stops in consumers' pockets.
Describing how to analyze Wi-Fi and GPS data from websites and apps, the book explains how to model mined data through the use of artificial intelligence software. It also discusses the monetization of mobile devices’ desires and preferences that can lead to the triangulated marketing of content, products, or services to billions of consumers—in a relevant, anonymous, and personal manner.
The sixth edition is no exception, providing an accessible, comprehensive introduction to the theory and practice of time series analysis. The treatment covers a wide range of topics, including ARIMA probability models, forecasting methods, spectral analysis, linear systems, state-space models, and the Kalman filter. It also addresses nonlinear, multivariate, and long-memory models. The author has carefully updated each chapter, added new discussions, incorporated new datasets, and made those datasets available for download from www.crcpress.com. A free online appendix on time series analysis using R can be accessed at http://people.bath.ac.uk/mascc/TSA.usingR.doc.
Highlights of the Sixth Edition:
• A new section on handling real data
• New discussion on prediction intervals
• A completely revised and restructured chapter on more advanced topics, with new material on the aggregation of time series, analyzing time series in finance, and discrete-valued time series
• A new chapter of examples and practical advice
• Thorough updates and revisions throughout the text that reflect recent developments and dramatic changes in computing practices over the last few years
The analysis of time series can be a difficult topic, but as this book has demonstrated for two-and-a-half decades, it does not have to be daunting. The accessibility, polished presentation, and broad coverage of The Analysis of Time Series make it simply the best introduction to the subject available.
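As a small taste of the kind of probability model the book treats, an AR(1) process x_t = φ·x_{t-1} + ε_t can be simulated and its coefficient recovered by least squares in a few lines of standard-library Python (a sketch with invented parameters, not material from the book):

```python
import random

random.seed(42)

# Simulate an AR(1) series: x_t = phi * x_{t-1} + Gaussian noise
phi_true = 0.7
x = [0.0]
for _ in range(5000):
    x.append(phi_true * x[-1] + random.gauss(0.0, 1.0))

# Least-squares estimate: phi_hat = sum(x_t * x_{t-1}) / sum(x_{t-1}^2)
num = sum(a * b for a, b in zip(x[1:], x[:-1]))
den = sum(a * a for a in x[:-1])
phi_hat = num / den
print(round(phi_hat, 2))  # should land close to the true value 0.7
```

Fitting, diagnosing, and forecasting with richer models of this family (full ARIMA, state-space forms, the Kalman filter) is exactly what the book develops.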
New York Times Bestseller
“Not so different in spirit from the way public intellectuals like John Kenneth Galbraith once shaped discussions of economic policy and public figures like Walter Cronkite helped sway opinion on the Vietnam War…could turn out to be one of the more momentous books of the decade.”
—New York Times Book Review
"Nate Silver's The Signal and the Noise is The Soul of a New Machine for the 21st century."
—Rachel Maddow, author of Drift
"A serious treatise about the craft of prediction—without academic mathematics—cheerily aimed at lay readers. Silver's coverage is polymathic, ranging from poker and earthquakes to climate change and terrorism."
—New York Review of Books
Nate Silver built an innovative system for predicting baseball performance, predicted the 2008 election within a hair’s breadth, and became a national sensation as a blogger—all by the time he was thirty. He solidified his standing as the nation's foremost political forecaster with his near perfect prediction of the 2012 election. Silver is the founder and editor in chief of FiveThirtyEight.com.
Drawing on his own groundbreaking work, Silver examines the world of prediction, investigating how we can distinguish a true signal from a universe of noisy data. Most predictions fail, often at great cost to society, because most of us have a poor understanding of probability and uncertainty. Both experts and laypeople mistake more confident predictions for more accurate ones. But overconfidence is often the reason for failure. If our appreciation of uncertainty improves, our predictions can get better too. This is the “prediction paradox”: The more humility we have about our ability to make predictions, the more successful we can be in planning for the future.
In keeping with his own aim to seek truth from data, Silver visits the most successful forecasters in a range of areas, from hurricanes to baseball, from the poker table to the stock market, from Capitol Hill to the NBA. He explains and evaluates how these forecasters think and what bonds they share. What lies behind their success? Are they good—or just lucky? What patterns have they unraveled? And are their forecasts really right? He explores unanticipated commonalities and exposes unexpected juxtapositions. And sometimes, it is not so much how good a prediction is in an absolute sense that matters but how good it is relative to the competition. In other cases, prediction is still a very rudimentary—and dangerous—science.
Silver observes that the most accurate forecasters tend to have a superior command of probability, and they tend to be both humble and hardworking. They distinguish the predictable from the unpredictable, and they notice a thousand little details that lead them closer to the truth. Because of their appreciation of probability, they can distinguish the signal from the noise.
With everything from the health of the global economy to our ability to fight terrorism dependent on the quality of our predictions, Nate Silver’s insights are an essential read.
New York Times Bestseller
A former Wall Street quant sounds an alarm on the mathematical models that pervade modern life — and threaten to rip apart our social fabric
We live in the age of the algorithm. Increasingly, the decisions that affect our lives—where we go to school, whether we get a car loan, how much we pay for health insurance—are being made not by humans, but by mathematical models. In theory, this should lead to greater fairness: Everyone is judged according to the same rules, and bias is eliminated.
But as Cathy O’Neil reveals in this urgent and necessary book, the opposite is true. The models being used today are opaque, unregulated, and uncontestable, even when they’re wrong. Most troubling, they reinforce discrimination: If a poor student can’t get a loan because a lending model deems him too risky (by virtue of his zip code), he’s then cut off from the kind of education that could pull him out of poverty, and a vicious spiral ensues. Models are propping up the lucky and punishing the downtrodden, creating a “toxic cocktail for democracy.” Welcome to the dark side of Big Data.
Tracing the arc of a person’s life, O’Neil exposes the black box models that shape our future, both as individuals and as a society. These “weapons of math destruction” score teachers and students, sort résumés, grant (or deny) loans, evaluate workers, target voters, set parole, and monitor our health.
O’Neil calls on modelers to take more responsibility for their algorithms and on policy makers to regulate their use. But in the end, it’s up to us to become more savvy about the models that govern our lives. This important book empowers us to ask the tough questions, uncover the truth, and demand change.
— Longlist for National Book Award (Non-Fiction)
— Goodreads, semi-finalist for the 2016 Goodreads Choice Awards (Science and Technology)
— Kirkus, Best Books of 2016
— New York Times, 100 Notable Books of 2016 (Non-Fiction)
— The Guardian, Best Books of 2016
— WBUR's "On Point," Best Books of 2016: Staff Picks
— Boston Globe, Best Books of 2016, Non-Fiction
The book focuses on methods based on GLMs that have been found useful in actuarial practice and provides a set of tools for a tariff analysis. Basic theory of GLMs in a tariff analysis setting is presented, with useful extensions of standard GLM theory that are not in common use.
The book meets the European Core Syllabus for actuarial education and is written for actuarial students as well as practicing actuaries. To support the reader, real data of some complexity are provided at www.math.su.se/GLMbook.
Operational Risk: Modeling Analytics is organized around the principle that the analysis of operational risk consists, in part, of the collection of data and the building of mathematical models to describe risk. This book is designed to provide risk analysts with a framework of the mathematical models and methods used in the measurement and modeling of operational risk in both the banking and insurance sectors.
Beginning with a foundation for operational risk modeling and a focus on the modeling process, the book flows logically to discussion of probabilistic tools for operational risk modeling and statistical methods for calibrating models of operational risk. Exercises are included in chapters involving numerical computations for students' practice and reinforcement of concepts.
Written by Harry Panjer, one of the foremost authorities in the world on risk modeling and its effects in business management, this is the first comprehensive book dedicated to the quantitative assessment of operational risk using the tools of probability, statistics, and actuarial science.
In addition to providing great detail of the many probabilistic and statistical methods used in operational risk, this book features:
* Ample exercises to further elucidate the concepts in the text
* Definitive coverage of distribution functions and related concepts
* Models for the size of losses
* Models for frequency of loss
* Aggregate loss modeling
* Extreme value modeling
* Dependency modeling using copulas
* Statistical methods in model selection and calibration
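Aggregate loss modeling, one of the topics listed above, combines a frequency distribution for the number of losses with a severity distribution for their sizes. A Monte Carlo sketch of a compound Poisson–lognormal aggregate loss, in standard-library Python with invented parameters (an illustration of the general technique, not an example from the book):

```python
import math
import random

random.seed(1)

def poisson(lam):
    """Draw a Poisson-distributed count (Knuth's multiplication method)."""
    limit, k, prod = math.exp(-lam), 0, random.random()
    while prod > limit:
        k += 1
        prod *= random.random()
    return k

def aggregate_loss(lam=3.0, mu=0.0, sigma=1.0):
    """One period's total loss: a Poisson number of lognormal severities."""
    n = poisson(lam)
    return sum(random.lognormvariate(mu, sigma) for _ in range(n))

# Estimate the mean aggregate loss by simulation; compound-Poisson theory
# gives E[S] = lam * E[X] = lam * exp(mu + sigma**2 / 2), here 3 * exp(0.5).
sims = [aggregate_loss() for _ in range(20000)]
mean_est = sum(sims) / len(sims)
print(round(mean_est, 2))  # theoretical value is about 4.95
```

The book develops the analytic counterparts of such simulations: recursive and transform methods for the aggregate distribution, plus calibration of the frequency and severity components from data.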
Assuming no previous expertise in either operational risk terminology or in mathematical statistics, the text is designed for beginning graduate-level courses on risk and operational management or enterprise risk management. This book is also useful as a reference for practitioners in both enterprise risk management and risk and operational management.