An audacious, irreverent investigation of human behavior—and a first look at a revolution in the making
Our personal data has been used to spy on us, hire and fire us, and sell us stuff we don’t need. In Dataclysm, Christian Rudder uses it to show us who we truly are.
For centuries, we’ve relied on polling or small-scale lab experiments to study human behavior. Today, a new approach is possible. As we live more of our lives online, researchers can finally observe us directly, in vast numbers, and without filters. Data scientists have become the new demographers.
In this daring and original book, Rudder explains how Facebook "likes" can predict, with surprising accuracy, a person’s sexual orientation and even intelligence; how attractive women receive exponentially more interview requests; and why you must have haters to be hot. He charts the rise and fall of America’s most reviled word through Google Search and examines the new dynamics of collaborative rage on Twitter. He shows how people express themselves, both privately and publicly. What is the least Asian thing you can say? Do people bathe more in Vermont or New Jersey? What do black women think about Simon & Garfunkel? (Hint: they don’t think about Simon & Garfunkel.) Rudder also traces human migration over time, showing how groups of people move from certain small towns to the same big cities across the globe. And he grapples with the challenge of maintaining privacy in a world where these explorations are possible.
Visually arresting and full of wit and insight, Dataclysm is a new way of seeing ourselves—a brilliant alchemy, in which math is made human and numbers become the narrative of our time.
Get ready to change the way you think about economics.
Nobel laureate Richard H. Thaler has spent his career studying the radical notion that the central agents in the economy are humans—predictable, error-prone individuals. Misbehaving is his arresting, frequently hilarious account of the struggle to bring an academic discipline back down to earth—and change the way we think about economics, ourselves, and our world.
Traditional economics assumes rational actors. Early in his research, Thaler realized these Spock-like automatons were nothing like real people. Whether buying a clock radio, selling basketball tickets, or applying for a mortgage, we all succumb to biases and make decisions that deviate from the standards of rationality assumed by economists. In other words, we misbehave. More importantly, our misbehavior has serious consequences. Dismissed at first by economists as an amusing sideshow, the study of human miscalculations and their effects on markets now drives efforts to make better decisions in our lives, our businesses, and our governments.
Coupling recent discoveries in human psychology with a practical understanding of incentives and market behavior, Thaler enlightens readers about how to make smarter decisions in an increasingly mystifying world. He reveals how behavioral economic analysis opens up new ways to look at everything from household finance to assigning faculty offices in a new building, to TV game shows, the NFL draft, and businesses like Uber.
Laced with antic stories of Thaler’s spirited battles with the bastions of traditional economic thinking, Misbehaving is a singular look into profound human foibles. When economics meets psychology, the implications for individuals, managers, and policy makers are both profound and entertaining.
Shortlisted for the Financial Times & McKinsey Business Book of the Year Award
Understanding statistics is a requirement for obtaining and making the most of a degree in psychology, a fact of life that often takes first-year psychology students by surprise. Filled with jargon-free explanations and real-life examples, Psychology Statistics For Dummies makes the often-confusing world of statistics a lot less baffling, and provides you with the step-by-step instructions necessary for carrying out data analysis.
Psychology Statistics For Dummies:
• Serves as an easily accessible supplement to doorstop-sized psychology textbooks
• Provides psychology students with psychology-specific statistics instruction
• Includes clear explanations and instruction on performing statistical analysis
• Teaches students how to analyze their data with SPSS, the most widely used statistical package among students
Accompanying the book is the Exploratory Software for Confidence Intervals (ESCI) package, free software that runs under Excel and is accessible at www.thenewstatistics.com. The book’s exercises use ESCI's simulations, which are highly visual and interactive, to engage users and encourage exploration. Working with the simulations strengthens understanding of key statistical ideas. There are also many examples, and detailed guidance to show readers how to analyze their own data using the new statistics, and practical strategies for interpreting the results. A particular strength of the book is its explanation of meta-analysis, using simple diagrams and examples. Understanding meta-analysis is increasingly important, even at undergraduate levels, because medicine, psychology and many other disciplines now use meta-analysis to assemble the evidence needed for evidence-based practice.
The book’s pedagogical program, built on cognitive science principles, reinforces learning:
Boxes provide "evidence-based" advice on the most effective statistical techniques. Numerous examples reinforce learning, and show that many disciplines are using the new statistics. Graphs are tied in with ESCI to make important concepts vividly clear and memorable. Opening overviews and end of chapter take-home messages summarize key points. Exercises encourage exploration, deep understanding, and practical applications.
This highly accessible book is intended as the core text for any course that emphasizes the new statistics, or as a supplementary text for graduate and/or advanced undergraduate courses in statistics and research methods in departments of psychology, education, human development, nursing, and natural, social, and life sciences. Researchers and practitioners interested in understanding the new statistics, and future published research, will also appreciate this book. A basic familiarity with introductory statistics is assumed.
". . . [this book] should be on the shelf of everyone interested in . . . longitudinal data analysis."
—Journal of the American Statistical Association
Features newly developed topics and applications of the analysis of longitudinal data
Applied Longitudinal Analysis, Second Edition presents modern methods for analyzing data from longitudinal studies and now features the latest state-of-the-art techniques. The book emphasizes practical, rather than theoretical, aspects of methods for the analysis of diverse types of longitudinal data that can be applied across various fields of study, from the health and medical sciences to the social and behavioral sciences.
The authors incorporate their extensive academic and research experience along with various updates that have been made in response to reader feedback. The Second Edition features six newly added chapters that explore topics currently evolving in the field, including:
• Fixed effects and mixed effects models
• Marginal models and generalized estimating equations
• Approximate methods for generalized linear mixed effects models
• Multiple imputation and inverse probability weighted methods
• Smoothing methods for longitudinal data
• Sample size and power
Each chapter presents methods in the setting of applications to data sets drawn from the health sciences. New problem sets have been added to many chapters, and a related website features sample programs and computer output using SAS, Stata, and R, as well as data sets and supplemental slides to facilitate a complete understanding of the material.
With its strong emphasis on multidisciplinary applications and the interpretation of results, Applied Longitudinal Analysis, Second Edition is an excellent book for courses on statistics in the health and medical sciences at the upper-undergraduate and graduate levels. The book also serves as a valuable reference for researchers and professionals in the medical, public health, and pharmaceutical fields as well as those in social and behavioral sciences who would like to learn more about analyzing longitudinal data.
· Downloadable data sets
· Library of computer programs in SAS, SPSS, Stata, HLM, MLwiN, and more
· Additional material for data analysis
Drawing on examples from across the social sciences, this book covers everything you need to know to plan, implement, and analyze the results of population-based survey experiments. But it is more than just a "how to" manual. This lively book challenges conventional wisdom about internal and external validity, showing why strong causal claims need not come at the expense of external validity, and how it is now possible to execute experiments remotely using large-scale population samples.
Designed for social scientists across the disciplines, Population-Based Survey Experiments provides the first complete introduction to this methodology.
Offers the most comprehensive treatment of the subject
Features a wealth of examples and practical advice
Reexamines issues of internal and external validity
Can be used in conjunction with downloadable data from ExperimentCentral.org for design and analysis exercises in the classroom
This expanded edition includes new data and easy-to-read graphics explaining the 2008 election. Red State, Blue State, Rich State, Poor State is a must-read for anyone seeking to make sense of today's fractured political landscape.
A black swan is a highly improbable event with three principal characteristics: It is unpredictable; it carries a massive impact; and, after the fact, we concoct an explanation that makes it appear less random, and more predictable, than it was. The astonishing success of Google was a black swan; so was 9/11. For Nassim Nicholas Taleb, black swans underlie almost everything about our world, from the rise of religions to events in our own personal lives.
Why do we not acknowledge the phenomenon of black swans until after they occur? Part of the answer, according to Taleb, is that humans are hardwired to learn specifics when they should be focused on generalities. We concentrate on things we already know and time and time again fail to take into consideration what we don’t know. We are, therefore, unable to truly estimate opportunities, too vulnerable to the impulse to simplify, narrate, and categorize, and not open enough to rewarding those who can imagine the “impossible.”
For years, Taleb has studied how we fool ourselves into thinking we know more than we actually do. We restrict our thinking to the irrelevant and inconsequential, while large events continue to surprise us and shape our world. In this revelatory book, Taleb explains everything we know about what we don’t know, and this second edition features a new philosophical and empirical essay, “On Robustness and Fragility,” which offers tools to navigate and exploit a Black Swan world.
Elegant, startling, and universal in its applications, The Black Swan will change the way you look at the world. Taleb is a vastly entertaining writer, with wit, irreverence, and unusual stories to tell. He has a polymathic command of subjects ranging from cognitive science to business to probability theory. The Black Swan is a landmark book—itself a black swan.
Praise for Nassim Nicholas Taleb
“The most prophetic voice of all.”—GQ
Praise for The Black Swan
“[A book] that altered modern thinking.”—The Times (London)
“A masterpiece.”—Chris Anderson, editor in chief of Wired, author of The Long Tail
“Idiosyncratically brilliant.”—Niall Ferguson, Los Angeles Times
“The Black Swan changed my view of how the world works.”—Daniel Kahneman, Nobel laureate
“[Taleb writes] in a style that owes as much to Stephen Colbert as it does to Michel de Montaigne. . . . We eagerly romp with him through the follies of confirmation bias [and] narrative fallacy.”—The Wall Street Journal
“Hugely enjoyable—compelling . . . easy to dip into.”—Financial Times
“Engaging . . . The Black Swan has appealing cheek and admirable ambition.”—The New York Times Book Review
This thoroughly expanded Third Edition provides an easily accessible introduction to the logistic regression (LR) model and highlights the power of this model by examining the relationship between a dichotomous outcome and a set of covariables.
Applied Logistic Regression, Third Edition emphasizes applications in the health sciences and handpicks topics that best suit the use of modern statistical software. The book provides readers with state-of-the-art techniques for building, interpreting, and assessing the performance of LR models. New and updated features include:
• A chapter on the analysis of correlated outcome data
• A wealth of additional material for topics ranging from Bayesian methods to assessing model fit
• Rich data sets from real-world studies that demonstrate each method under discussion
• Detailed examples and interpretation of the presented results, as well as exercises throughout
Applied Logistic Regression, Third Edition is a must-have guide for professionals and researchers who need to model nominal or ordinal scaled outcome variables in public health, medicine, and the social sciences as well as a wide range of other fields and disciplines.
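For readers new to the model the blurb describes, the core idea — linking a dichotomous outcome to covariates through the logistic function — can be sketched in a few lines. This is an illustrative sketch, not code from the book; the function name and coefficients are made up for demonstration.

```python
import math

def predict_prob(x, b0, b1):
    """P(y = 1 | x) under a one-covariate logistic model.

    b0 is the intercept, b1 the slope on the logit scale;
    exp(b1) is the odds ratio for a one-unit increase in x.
    """
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))

# At the point where b0 + b1*x = 0, the predicted probability is exactly 0.5.
```

In practice the coefficients are estimated by maximum likelihood using statistical software rather than set by hand.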
New to This Edition
*Updated throughout to incorporate important developments in latent variable modeling.
*Chapter on Bayesian CFA and multilevel measurement models.
*Addresses new topics (with examples): exploratory structural equation modeling, bifactor analysis, measurement invariance evaluation with categorical indicators, and a new method for scaling latent variables.
*Utilizes the latest versions of major latent variable software packages.
Looking for an easily accessible overview of research methods in psychology? This is the book for you! Whether you need to get ahead in class, you're pressed for time, or you just want a take on a topic that's not covered in your textbook, Research Methods in Psychology For Dummies has you covered.
Written in plain English and packed with easy-to-follow instruction, this friendly guide takes the intimidation out of the subject and tackles the fundamentals of psychology research in a way that makes it approachable and comprehensible, no matter your background. Inside, you'll find expert coverage of qualitative and quantitative research methods, including surveys, case studies, laboratory observations, tests and experiments—and much more.
• Serves as an excellent supplement to course textbooks
• Provides a clear introduction to the scientific method
• Presents the methodologies and techniques used in psychology research
• Written by the authors of Psychology Statistics For Dummies
If you're a first- or second-year psychology student and want to supplement your doorstop-sized psychology textbook—and boost your chances of scoring higher at exam time—this hands-on guide breaks down the subject into easily digestible bits and propels you towards success.
Learn to evaluate and apply statistics in medicine, medical research, and all health-related fields.
• Emphasis on the basics of biostatistics and epidemiology and the clinical applications in evidence-based medicine and decision-making methods
• NEW chapter on survey research
• Expanded discussion of logistic regression, the Cox model, and other multivariate statistical methods
• Key Concepts in each chapter pinpoint essential information
• Presenting Problems drawn from studies in the medical literature that illustrate the various statistical methods
• Downloadable NCSS statistical software, procedures, and data sets from the presenting problems
• End-of-chapter exercises
• Multiple-choice final practice exam
Treating these topics together takes advantage of all they have in common. The authors point out the many shared elements in the methods they present for selecting, estimating, checking, and interpreting each of these models. They also show that these regression methods deal with confounding, mediation, and interaction of causal effects in essentially the same way.
The examples, analyzed using Stata, are drawn from the biomedical context but generalize to other areas of application. While a first course in statistics is assumed, a chapter reviewing basic statistical methods is included. Some advanced topics are covered but the presentation remains intuitive. A brief introduction to regression analysis of complex surveys and notes for further reading are provided. For many students and researchers learning to use these methods, this one book may be all they need to conduct and interpret multipredictor regression analyses.
The authors are on the faculty in the Division of Biostatistics, Department of Epidemiology and Biostatistics, University of California, San Francisco, and are authors or co-authors of more than 200 methodological as well as applied papers in the biological and biomedical sciences. The senior author, Charles E. McCulloch, is head of the Division and author of Generalized Linear Mixed Models (2003), Generalized, Linear, and Mixed Models (2000), and Variance Components (1992).
From the reviews:
"This book provides a unified introduction to the regression methods listed in the title...The methods are well illustrated by data drawn from medical studies...A real strength of this book is the careful discussion of issues common to all of the multipredictor methods covered." Journal of Biopharmaceutical Statistics, 2005
"This book is not just for biostatisticians. It is, in fact, a very good, and relatively nonmathematical, overview of multipredictor regression models. Although the examples are biologically oriented, they are generally easy to understand and follow...I heartily recommend the book." Technometrics, February 2006
"Overall, the text provides an overview of regression methods that is particularly strong in its breadth of coverage and emphasis on insight in place of mathematical detail. As intended, this well-unified approach should appeal to students who learn conceptually and verbally." Journal of the American Statistical Association, March 2006
Updated throughout, the second edition features three new chapters—growth modeling with ordered categorical variables, growth mixture modeling, and pooled interrupted time series LGM approaches. Following a new organization, the book now covers the development of the LGM, followed by chapters on multiple-group issues (analyzing growth in multiple populations, accelerated designs, and multi-level longitudinal approaches), and then special topics such as missing data models, LGM power and Monte Carlo estimation, and latent growth interaction models. The model specifications previously included in the appendices are now available on the CD so the reader can more easily adapt the models to their own research.
This practical guide is ideal for a wide range of social and behavioral researchers interested in the measurement of change over time, including social, developmental, organizational, educational, consumer, personality and clinical psychologists, sociologists, and quantitative methodologists. It is also suitable as a text for a course on latent variable growth curve modeling or as a supplement for a course on multivariate statistics. A prerequisite of graduate-level statistics is recommended.
“This book will serve to greatly complement the growing number of texts dealing with mixed models, and I highly recommend including it in one’s personal library.”
—Journal of the American Statistical Association
Mixed modeling is a crucial area of statistics, enabling the analysis of clustered and longitudinal data. Mixed Models: Theory and Applications with R, Second Edition fills a gap in existing literature between mathematical and applied statistical books by presenting a powerful examination of mixed model theory and application with special attention given to the implementation in R.
The new edition provides in-depth mathematical coverage of mixed models’ statistical properties and numerical algorithms, as well as nontraditional applications, such as regrowth curves, shapes, and images. The book features the latest topics in statistics including modeling of complex clustered or longitudinal data, modeling data with multiple sources of variation, modeling biological variety and heterogeneity, Healthy Akaike Information Criterion (HAIC), parameter multidimensionality, and statistics of image processing.
Mixed Models: Theory and Applications with R, Second Edition features unique applications of mixed model methodology, as well as:
• Comprehensive theoretical discussions illustrated by examples and figures
• Over 300 exercises, end-of-section problems, updated data sets, and R subroutines
• Problems and extended projects requiring simulations in R intended to reinforce material
• Summaries of major results and general points of discussion at the end of each chapter
• Open problems in mixed modeling methodology, which can be used as the basis for research or PhD dissertations
Ideal for graduate-level courses in mixed statistical modeling, the book is also an excellent reference for professionals in a range of fields, including cancer research, computer science, and engineering.
In the age of Big Data we often believe that our predictions about the future are better than ever before. But as risk expert Gerd Gigerenzer shows, the surprising truth is that in the real world, we often get better results by using simple rules and considering less information.
In Risk Savvy, Gigerenzer reveals that most of us, including doctors, lawyers, financial advisers, and elected officials, misunderstand statistics much more often than we think, leaving us not only misinformed, but vulnerable to exploitation. Yet there is hope. Anyone can learn to make better decisions for their health, finances, family, and business without needing to consult an expert or a supercomputer, and Gigerenzer shows us how.
Risk Savvy is an insightful and easy-to-understand remedy to our collective information overload and an essential guide to making smart, confident decisions in the face of uncertainty.
This volume provides formulas and procedures for determining the sample size required not only for testing equality, but also for testing non-inferiority/superiority and equivalence (similarity), based on both untransformed (raw) data and log-transformed data, under a parallel-group design or a crossover design with equal or unequal ratios of treatment allocation. It contains a comprehensive and unified presentation of statistical procedures for sample size calculation that are commonly employed at various phases of clinical development. Each chapter includes, whenever possible, real examples of clinical studies from therapeutic areas such as cardiovascular, central nervous system, anti-infective, oncology, and women's health to demonstrate the clinical and statistical concepts, interpretations, and their relationships and interactions.
The book highlights statistical procedures for sample size calculation and justification that are commonly employed in clinical research and development. It provides clear, illustrated explanations of how the derived formulas and/or statistical procedures can be used.
This new edition of Medical Statistics at a Glance:
• Presents key facts accompanied by clear and informative tables and diagrams
• Focuses on illustrative examples which show statistics in action, with an emphasis on the interpretation of computer data analysis rather than complex hand calculations
• Includes extensive cross-referencing, a comprehensive glossary of terms and flow-charts to make it easier to choose appropriate tests
• Now provides the learning objectives for each chapter
• Includes a new chapter on Developing Prognostic Scores
• Includes new or expanded material on study management, multi-centre studies, sequential trials, bias and different methods to remove confounding in observational studies, multiple comparisons, ROC curves and checking assumptions in a logistic regression analysis
The companion website at www.medstatsaag.com contains supplementary material including an extensive reference list and multiple choice questions (MCQs) with interactive answers for self-assessment.
Medical Statistics at a Glance will appeal to all medical students, junior doctors and researchers in biomedical and pharmaceutical disciplines.
Reviews of the previous editions
"The more familiar I have become with this book, the more I appreciate the clear presentation and unthreatening prose. It is now a valuable companion to my formal statistics course."
–International Journal of Epidemiology
"I heartily recommend it, especially to first years, but it's equally appropriate for an intercalated BSc or Postgraduate research. If statistics give you headaches - buy it. If statistics are all you think about - buy it."
"...I unreservedly recommend this book to all medical students, especially those that dislike reading reams of text. This is one book that will not sit on your shelf collecting dust once you have graduated and will also function as a reference book."
–4th Year Medical Student, Barts and the London Chronicle, Spring 2003
• Introduces requisite background to using Nonlinear Mixed Effects Modeling (NONMEM), covering data requirements, model building and evaluation, and quality control aspects
• Provides examples of nonlinear modeling concepts and estimation basics with discussion on the model building process and applications of empirical Bayesian estimates in the drug development environment
• Includes detailed chapters on data set structure, developing control streams for modeling and simulation, model applications, interpretation of NONMEM output and results, and quality control
• Has datasets, programming code, and practice exercises with solutions, available on a supplementary website
The new edition features:
• Each chapter begins with an outline, a list of key concepts, and a research vignette related to the concepts.
• Realistic examples from education and the behavioral sciences illustrate those concepts.
• Each example examines the procedures and assumptions and provides tips for how to run SPSS and develop an APA style write-up.
• Tables of assumptions and the effects of their violation are included, along with how to test assumptions in SPSS.
• Each chapter includes computational, conceptual, and interpretive problems.
• Answers to the odd-numbered problems are provided.
• The SPSS data sets that correspond to the book's examples and problems are available on the web.
The book covers basic and advanced analysis of variance models and topics not dealt with in other texts such as robust methods, multiple comparison and non-parametric procedures, and multiple and logistic regression models. Intended for courses in intermediate statistics and/or statistics II taught in education and/or the behavioral sciences, predominantly at the master's or doctoral level. Knowledge of introductory statistics is assumed.
Collecting, analysing and drawing inferences from data is central to research in the medical and social sciences. Unfortunately, it is rarely possible to collect all the intended data. The literature on inference from the resulting incomplete data is now huge, and continues to grow both as methods are developed for large and complex data structures, and as increasing computer power and suitable software enable researchers to apply these methods.
This book focuses on a particular statistical method for analysing and drawing inferences from incomplete data, called Multiple Imputation (MI). MI is attractive because it is both practical and widely applicable. The authors' aim is to clarify the issues raised by missing data, describing the rationale for MI, the relationship between the various imputation models and associated algorithms, and its application to increasingly complex data structures.
Multiple Imputation and its Application:
• Discusses the issues raised by the analysis of partially observed data, and the assumptions on which analyses rest
• Presents a practical guide to the issues to consider when analysing incomplete data from both observational studies and randomized trials
• Provides a detailed discussion of the practical use of MI with real-world examples drawn from medical and social statistics
• Explores handling non-linear relationships and interactions with multiple imputation, survival analysis, multilevel multiple imputation, sensitivity analysis via multiple imputation, using non-response weights with multiple imputation, and doubly robust multiple imputation
Multiple Imputation and its Application is aimed at quantitative researchers and students in the medical and social sciences, clarifying the issues raised by the analysis of incomplete data, outlining the rationale for MI, and describing how to consider and address the issues that arise in its application.
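The combination step of multiple imputation mentioned above is usually done with Rubin's rules: analyze each of the m completed data sets, then pool the estimates and variances. The sketch below illustrates just that pooling arithmetic; it is not code from the book, and the function name is made up for demonstration.

```python
def pool(estimates, variances):
    """Pool m per-imputation estimates and variances via Rubin's rules."""
    m = len(estimates)
    q_bar = sum(estimates) / m                  # pooled point estimate
    u_bar = sum(variances) / m                  # within-imputation variance
    # between-imputation variance (sample variance of the estimates)
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)
    total = u_bar + (1 + 1 / m) * b             # total variance
    return q_bar, total
```

The (1 + 1/m) factor inflates the between-imputation component to account for using a finite number of imputations.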
The aim of this book is to show how R can be used as the software tool in the development of Six Sigma projects. The book includes a gentle introduction to Six Sigma and a variety of examples showing how to use R within real situations. It has been conceived as a self-contained work; it is therefore addressed not only to Six Sigma practitioners, but also to professionals seeking an introduction to this management methodology. The book may also be used as a textbook.
The book provides clear coverage of statistical procedures, and includes everything needed from nominal-level tests to multi-factorial ANOVA designs, multiple regression and log-linear analysis. It features detailed and illustrated SPSS instructions for all these procedures, eliminating the need for an extra SPSS textbook.
New features in the sixth edition include:
"Tricky bits" - in-depth notes on the things that students typically have problems with, including common misunderstandings and likely mistakes.
Improved coverage of qualitative methods and analysis, plus updates to Grounded Theory, Interpretive Phenomenological Analysis and Discourse Analysis.
A full and recently published journal article using Thematic Analysis, illustrating how articles appear in print.
Discussion of contemporary issues and debates, including recent coverage of journals' reluctance to publish replication studies.
Fully updated online links, offering even more information and useful resources, especially for statistics.
Each chapter contains a glossary, key terms and newly integrated exercises, ensuring that key concepts are understood. A companion website (www.routledge.com/cw/coolican) provides additional exercises, revision flash cards, links to further reading and data for use with SPSS.
Focusing mainly on the day-laboring district of Yokohama, and with extensive comparative ethnography from five other cities, author Tom Gill finds a society of men who have opted out of the regular, communal way of life. This book details their libertarian, egalitarian lifestyle, oriented to the present yet colored by an awareness that in Japan today being a yoseba man usually means exclusion from mainstream society, absence of family life, and a career that can easily lead to homelessness and an early death on the street.
This text is intended for a broad audience as both an introduction to predictive models as well as a guide to applying them. Non-mathematical readers will appreciate the intuitive explanations of the techniques while an emphasis on problem-solving with real data across a wide variety of applications will aid practitioners who wish to extend their expertise. Readers should have knowledge of basic statistical ideas, such as correlation and linear regression analysis. While the text is biased against complex equations, a mathematical background is needed for advanced topics.
This book focuses on imitating analyses that are based on variance by replacing variance with the GMD and its variants. In this way, the text showcases how almost everything that can be done with the variance as a measure of variability can be replicated by using Gini. Beyond this, there are marked benefits to utilizing Gini as opposed to other methods. One of the advantages of using Gini methodology is that it provides a unified system that enables the user to learn about various aspects of the underlying distribution. It also provides a systematic method and a unified terminology.
Using Gini methodology can reduce the risk of imposing assumptions on the model that are not supported by the data. With these benefits in mind, the text uses the covariance-based approach, though applications to other approaches are mentioned as well.
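To illustrate the parallel the book draws, both the variance and the Gini mean difference (GMD) can be written as functions of pairwise differences between observations: the variance averages squared pairwise differences, while the GMD averages absolute ones. A minimal Python sketch (illustrative only; the book itself works with the covariance-based formulation rather than this brute-force pairwise computation):

```python
from itertools import combinations

def gini_mean_difference(x):
    """Gini mean difference (GMD): the mean absolute difference
    taken over all distinct pairs of observations."""
    n = len(x)
    return 2 * sum(abs(a - b) for a, b in combinations(x, 2)) / (n * (n - 1))

def variance(x):
    """Unbiased sample variance, written in its pairwise form to
    parallel the GMD: the mean squared pairwise difference, halved."""
    n = len(x)
    return sum((a - b) ** 2 for a, b in combinations(x, 2)) / (n * (n - 1))

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # illustrative data
gmd = gini_mean_difference(data)  # ≈ 2.4286
var = variance(data)              # ≈ 4.5714, matches statistics.variance
```

The two functions differ only in the exponent applied to each pairwise difference, which is exactly the substitution the book exploits: squaring emphasizes outliers, while the GMD's absolute differences do not.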
New to This Edition
*Extensively revised to cover important new topics: Pearl's graph theory and the structural causal model (SCM), causal inference frameworks, conditional process modeling, path models for longitudinal data, item response theory, and more.
*Chapters on best practices in all stages of SEM, measurement invariance in confirmatory factor analysis, and significance testing issues and bootstrapping.
*Expanded coverage of psychometrics.
*Additional computer tools: online files for all detailed examples, previously provided in EQS, LISREL, and Mplus, are now also given in Amos, Stata, and R (lavaan).
*Reorganized to cover the specification, identification, and analysis of observed variable models separately from latent variable models.
*Exercises with answers, plus end-of-chapter annotated lists of further reading.
*Real examples of troublesome data, demonstrating how to handle typical problems in analyses.
*Topic boxes on specialized issues, such as causes of nonpositive definite correlation matrices.
*Boxed rules to remember.
*Website promoting a learn-by-doing approach, including syntax and data files for six widely used SEM computer tools.
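Of the techniques listed above, bootstrapping is the easiest to demonstrate outside dedicated SEM software. A generic percentile-bootstrap sketch in Python (illustrative only; the book's own examples use the SEM packages named above):

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval: resample with replacement,
    recompute the statistic, and take empirical quantiles of the replicates."""
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    n = len(data)
    reps = sorted(stat([rng.choice(data) for _ in range(n)]) for _ in range(n_boot))
    return reps[int(n_boot * alpha / 2)], reps[int(n_boot * (1 - alpha / 2)) - 1]

# 95% CI for the mean of the integers 0..29 (hypothetical data)
lo, hi = bootstrap_ci(list(range(30)))
```

The same resampling loop works for any statistic, which is why bootstrapping is attractive when the sampling distribution of an SEM estimate is analytically intractable.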
Highlights of the new edition include:
Updated throughout to reflect IBM SPSS Version 21.
Further coverage of growth trajectories, coding time-related variables, covariance structures, individual change and longitudinal experimental designs (Ch. 5).
Extended discussion of other types of research designs for examining change (e.g., regression discontinuity, quasi-experimental) over time (Ch. 6).
New examples specifying multiple latent constructs and parallel growth processes (Ch. 7).
Discussion of alternatives for dealing with missing data and the use of sample weights within multilevel data structures (Ch. 1).
The book opens with the conceptual and methodological issues associated with multilevel and longitudinal modeling, followed by a discussion of SPSS data management techniques which facilitate working with multilevel, longitudinal, and cross-classified data sets. Chapters 3 and 4 introduce the basics of multilevel modeling: developing a multilevel model, interpreting output, and troubleshooting common programming and modeling problems. Models for investigating individual and organizational change are presented in chapters 5 and 6, followed by models with multivariate outcomes in chapter 7. Chapter 8 provides an illustration of multilevel models with cross-classified data structures. The book concludes with ways to expand on the various multilevel and longitudinal modeling techniques, along with issues that arise when conducting multilevel analyses.
Ideal as a supplementary text for graduate courses on multilevel and longitudinal modeling, multivariate statistics, and research design taught in education, psychology, business, and sociology, this book’s practical approach also appeals to researchers in these fields. The book provides an excellent supplement to Heck & Thomas’s An Introduction to Multilevel Modeling Techniques, 2nd Edition; however, it can also be used with any multilevel and/or longitudinal modeling book or as a stand-alone text.
“Privitera does an excellent job of balancing clarity with depth.” —Ronald W. Stoffey, Kutztown University of Pennsylvania
“The writing style and the presentation of material are not only enjoyable to read but are easy to follow and understand.” —Joshua J. Dobias, Rutgers University
“Privitera ties research methods, SPSS, and statistics together in a seamless fashion.” —Walter M. Yamada, Azusa Pacific University
“I like the objectives, the readability of the text, the straightforwardness of the presentations of concepts, the problems that are quite appropriate on many levels (computation, theory, etc.), and the emphasis on SPSS.” —Ted R. Bitner, DePauw University
· Downloadable data sets
· Library of computer programs in SAS, SPSS, Stata, HLM, MLwiN, and more
· Additional material for data analysis
Peter Andreas and Kelly M. Greenhill see only one problem: these numbers are probably false. Their continued use and abuse reflect a much larger and troubling pattern: policymakers and the media naively or deliberately accept highly politicized and questionable statistical claims about activities that are extremely difficult to measure. As a result, we too often become trapped by these mythical numbers, with perverse and counterproductive consequences.
This problem exists in myriad policy realms. But it is particularly pronounced in statistics related to the politically charged realms of global crime and conflict: numbers of people killed in massacres and during genocides, the size of refugee flows, the magnitude of the illicit global trade in drugs and human beings, and so on. In Sex, Drugs, and Body Counts, political scientists, anthropologists, sociologists, and policy analysts critically examine the murky origins of some of these statistics and trace their remarkable proliferation. They also assess the standard metrics used to evaluate policy effectiveness in combating problems such as terrorist financing, sex trafficking, and the drug trade.
Contributors: Peter Andreas, Brown University; Thomas J. Biersteker, Graduate Institute of International and Development Studies-Geneva; Sue E. Eckert, Brown University; David A. Feingold, Ophidian Research Institute and UNESCO; H. Richard Friman, Marquette University; Kelly M. Greenhill, Tufts University and Harvard University; John Hagan, Northwestern University; Lara J. Nettelfield, Institut Barcelona D'Estudis Internacionals and Simon Fraser University; Wenona Rymond-Richmond, University of Massachusetts Amherst; Winifred Tate, Colby College; Kay B. Warren, Brown University
Sampling of Populations, Fourth Edition continues to serve as an all-inclusive resource on the basic and most current practices in population sampling. Maintaining the clear and accessible style of the previous edition, this book outlines the essential statistical methods for survey design and analysis, while also exploring techniques that have developed over the past decade.
The Fourth Edition successfully guides the reader through the basic concepts and procedures that accompany real-world sample surveys, such as sampling designs, problems of missing data, statistical analysis of multistage sampling data, and nonresponse and poststratification adjustment procedures. Rather than employ a heavily mathematical approach, the authors present illustrative examples that demonstrate the rationale behind common steps in the sampling process, from creating effective surveys to analyzing collected data. Along with established methods, modern topics are treated through the book's new features, which include:
* A new chapter on telephone sampling, with coverage of declining response rates, the creation of "do not call" lists, and the growing use of cellular phones
* A new chapter on sample weighting that focuses on adjustments to weight for nonresponse, frame deficiencies, and the effects of estimator instability
* An updated discussion of sample survey data analysis that includes analytic procedures for estimation and hypothesis testing
* A new section on Chromy's widely used method of taking probability proportional to size samples with minimum replacement of primary sampling units
* An expanded index with references on the latest research in the field
All of the book's examples and exercises can be easily worked out using various software packages including SAS, STATA, and SUDAAN, and an extensive FTP site contains additional data sets. With its comprehensive presentation and wealth of relevant examples, Sampling of Populations, Fourth Edition is an ideal book for courses on survey sampling at the upper-undergraduate and graduate levels. It is also a valuable reference for practicing statisticians who would like to refresh their knowledge of sampling techniques.
Several survey data sets are used to illustrate how to design samples, to make estimates from complex surveys for use in optimizing the sample allocation, and to calculate weights. Realistic survey projects are used to demonstrate the challenges and provide a context for the solutions. The book covers several topics that either are not included or are dealt with in a limited way in other texts. These areas include: sample size computations for multistage designs; power calculations related to surveys; mathematical programming for sample allocation in a multi-criteria optimization setting; nuts and bolts of area probability sampling; multiphase designs; quality control of survey operations; and statistical software for survey sampling and estimation. An associated R package, PracTools, contains a number of specialized functions for sample size and other calculations. The data sets used in the book are also available in PracTools, so that the reader may replicate the examples or perform further analyses.
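To give a flavor of the sample size computations the book covers, here is a minimal Python sketch of the classical formula for estimating a proportion, inflated by a design effect for clustering or weighting and optionally corrected for a finite population. This standalone version is illustrative only; the book itself supplies such tools through the R package PracTools.

```python
import math

def ss_required(p=0.5, moe=0.05, z=1.96, deff=1.0, pop=None):
    """Sample size needed to estimate a proportion p with margin of
    error moe at confidence level implied by z, inflated by a design
    effect (deff) and optionally corrected for a finite population."""
    n0 = deff * (z ** 2) * p * (1 - p) / moe ** 2
    if pop is not None:
        n0 = n0 / (1 + (n0 - 1) / pop)  # finite population correction
    return math.ceil(n0)

n_srs = ss_required()           # simple random sample: 385
n_clustered = ss_required(deff=2.0)  # clustered design: 769
```

The design effect is the key multistage-design ingredient: a clustered sample with deff = 2 needs twice the simple-random-sample size to achieve the same precision.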
Keeping computational challenges to a minimum, Reid shows readers not only how to conduct a variety of commonly used statistical procedures, but also when each procedure should be utilized and how they are related. Following a review of descriptive statistics, he begins his discussion of inferential statistics with a two-chapter examination of the Chi Square test to introduce students to hypothesis testing, the importance of determining effect size, and the need for post hoc tests. When more complex procedures related to interval/ratio data are covered, students already have a solid understanding of the foundational concepts involved. Exploring challenging topics in an engaging and easy-to-follow manner, Reid builds concepts logically and supports learning through robust pedagogical tools, the use of SPSS, numerous examples, historical quotations, insightful questions, and helpful progress checks.
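The chi-square statistic and an accompanying effect size, which Reid uses to introduce hypothesis testing, can be sketched in a few lines of Python (illustrative only; the book's own examples use SPSS):

```python
def chi_square(table):
    """Pearson chi-square statistic for a contingency table given as a
    list of rows, plus Cramér's V as a measure of effect size."""
    row_totals = [sum(r) for r in table]
    col_totals = [sum(c) for c in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / n  # expected count
            chi2 += (obs - exp) ** 2 / exp
    k = min(len(table), len(table[0]))  # smaller table dimension
    cramers_v = (chi2 / (n * (k - 1))) ** 0.5
    return chi2, cramers_v

# Hypothetical 2x2 table of observed counts
chi2, v = chi_square([[30, 10], [20, 40]])  # chi2 ≈ 16.67, V ≈ 0.41
```

Reporting V alongside the test statistic reflects the book's emphasis on determining effect size, not just significance.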
CD-ROM performs 30 statistical tests
Don't be afraid of biostatistics anymore! Primer of Biostatistics, 7th Edition demystifies this challenging topic in an interesting and enjoyable manner that assumes no prior knowledge of the subject. Faster than you thought possible, you'll understand test selection and be able to evaluate biomedical statistics critically, knowledgeably, and confidently.
With Primer of Biostatistics, you’ll start with the basics, including analysis of variance and the t test, then advance to multiple comparison testing, contingency tables, regression, and more. Illustrative examples and challenging problems, culled from the recent biomedical literature, highlight the discussions throughout and help to foster a more intuitive approach to biostatistics.
The companion CD-ROM contains everything you need to run thirty statistical tests of your own data. Review questions and summaries in each chapter facilitate the learning process and help you gauge your comprehension. By combining whimsical studies of Martians and other planetary residents with actual papers from the biomedical literature, the author makes the subject fun and engaging.
Coverage includes:
* How to summarize data
* How to test for differences between groups
* The t test
* How to analyze rates and proportions
* What does “not significant” really mean?
* Confidence intervals
* How to test for trends
* Experiments when each subject receives more than one treatment
* Alternatives to analysis of variance and the t test based on ranks
* How to analyze survival data
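Several of the topics covered, such as the t test for differences between groups, reduce to short formulas. A minimal Python sketch of Welch's t statistic with the Satterthwaite degrees of freedom (illustrative only; the book's companion CD-ROM runs these tests directly):

```python
import statistics

def welch_t(a, b):
    """Welch's t statistic and approximate degrees of freedom for two
    independent samples with possibly unequal variances."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    se2 = va / na + vb / nb  # squared standard error of the mean difference
    t = (statistics.mean(a) - statistics.mean(b)) / se2 ** 0.5
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical measurements from two groups
t, df = welch_t([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])
```

The t statistic and df are then compared against the t distribution to obtain a p value, which is the step where the intuition the book builds about "not significant" comes into play.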
* Easy-to-follow format incorporates medical examples, step-by-step methods, and check yourself exercises
* Two-part design features course material and a professional reference section
* Chapter summaries provide a review of formulas, method algorithms, and check lists
* Companion site links to statistical databases that can be downloaded and used to perform the exercises from the book and practice statistical methods
New in this Edition:
* New chapters on: multifactor tests on means of continuous data, equivalence testing, and advanced methods
* New topics include: trial randomization, treatment ethics in medical research, imputation of missing data, and making evidence-based medical decisions
* Updated database coverage and additional exercises
* Expanded coverage of numbers needed to treat and to benefit, and regression analysis including stepwise regression and Cox regression
* Thorough discussion of required sample size