Featuring examples from EQS, LISREL, and Mplus, A First Course in Structural Equation Modeling is an excellent beginner’s guide to learning how to set up input files to fit the most commonly used types of structural equation models with these programs. The basic ideas and methods for conducting SEM are independent of any particular software.
Highlights of the Second Edition include:
• Review of latent change (growth) analysis models at an introductory level
• Coverage of the popular Mplus program
• Updated examples of LISREL and EQS
• A CD that contains all of the text’s LISREL, EQS, and Mplus examples
A First Course in Structural Equation Modeling is intended as an introductory book for students and researchers in psychology, education, business, medicine, and other applied social, behavioral, and health sciences with limited or no previous exposure to SEM. A prerequisite of basic statistics through regression analysis is recommended. The book frequently draws parallels between SEM and regression, making this prior knowledge helpful.
His answer is that we pay too much attention to what successful people are like, and too little attention to where they are from: that is, their culture, their family, their generation, and the idiosyncratic experiences of their upbringing. Along the way he explains the secrets of software billionaires, what it takes to be a great soccer player, why Asians are good at math, and what made the Beatles the greatest rock band.
Brilliant and entertaining, Outliers is a landmark work that will simultaneously delight and illuminate.
A book-specific website - www.psypress.com/applied-multivariate-analysis - provides files with all of the data used in the text so readers can replicate the results. The Appendix explains the data files and their variables. The software code (for SAS and Mplus) and the menu option selections for SPSS are also discussed in the book. The book is distinguished by its use of latent variable modeling to address multivariate questions specific to behavioral and social scientists, including missing data analysis and longitudinal data modeling.
Ideal for graduate and advanced undergraduate students in the behavioral, social, and educational sciences, this book will also appeal to researchers in these disciplines who have limited familiarity with multivariate statistics. Recommended prerequisites include an introductory statistics course with exposure to regression analysis and some familiarity with SPSS and SAS.
Get ready to change the way you think about economics.
Nobel laureate Richard H. Thaler has spent his career studying the radical notion that the central agents in the economy are humans—predictable, error-prone individuals. Misbehaving is his arresting, frequently hilarious account of the struggle to bring an academic discipline back down to earth—and change the way we think about economics, ourselves, and our world.
Traditional economics assumes rational actors. Early in his research, Thaler realized these Spock-like automatons were nothing like real people. Whether buying a clock radio, selling basketball tickets, or applying for a mortgage, we all succumb to biases and make decisions that deviate from the standards of rationality assumed by economists. In other words, we misbehave. More importantly, our misbehavior has serious consequences. Dismissed at first by economists as an amusing sideshow, the study of human miscalculations and their effects on markets now drives efforts to make better decisions in our lives, our businesses, and our governments.
Coupling recent discoveries in human psychology with a practical understanding of incentives and market behavior, Thaler enlightens readers about how to make smarter decisions in an increasingly mystifying world. He reveals how behavioral economic analysis opens up new ways to look at everything from household finance to assigning faculty offices in a new building, to TV game shows, the NFL draft, and businesses like Uber.
Laced with antic stories of Thaler’s spirited battles with the bastions of traditional economic thinking, Misbehaving is a singular look into profound human foibles. When economics meets psychology, the implications for individuals, managers, and policy makers are both profound and entertaining.
Shortlisted for the Financial Times & McKinsey Business Book of the Year Award
An audacious, irreverent investigation of human behavior—and a first look at a revolution in the making
Our personal data has been used to spy on us, hire and fire us, and sell us stuff we don’t need. In Dataclysm, Christian Rudder uses it to show us who we truly are.
For centuries, we’ve relied on polling or small-scale lab experiments to study human behavior. Today, a new approach is possible. As we live more of our lives online, researchers can finally observe us directly, in vast numbers, and without filters. Data scientists have become the new demographers.
In this daring and original book, Rudder explains how Facebook "likes" can predict, with surprising accuracy, a person’s sexual orientation and even intelligence; how attractive women receive exponentially more interview requests; and why you must have haters to be hot. He charts the rise and fall of America’s most reviled word through Google Search and examines the new dynamics of collaborative rage on Twitter. He shows how people express themselves, both privately and publicly. What is the least Asian thing you can say? Do people bathe more in Vermont or New Jersey? What do black women think about Simon & Garfunkel? (Hint: they don’t think about Simon & Garfunkel.) Rudder also traces human migration over time, showing how groups of people move from certain small towns to the same big cities across the globe. And he grapples with the challenge of maintaining privacy in a world where these explorations are possible.
Visually arresting and full of wit and insight, Dataclysm is a new way of seeing ourselves—a brilliant alchemy, in which math is made human and numbers become the narrative of our time.
From the Hardcover edition.
To reflect the growing use of statistical software in psychometrics, the authors introduce the use of Mplus after the first few chapters. IBM SPSS, SAS, and R are also featured in several chapters. Software codes and associated outputs are reviewed throughout to enhance comprehension. Essentially all of the data used in the book are available on the website. In addition, instructors will find helpful PowerPoint lecture slides and questions and problems for each chapter.
The authors rely on LVM when discussing fundamental concepts such as exploratory and confirmatory factor analysis, test theory, generalizability theory, reliability and validity, interval estimation, nonlinear factor analysis, generalized linear modeling, and item response theory. The varied applications make this book a valuable tool for those in the behavioral, social, educational, and biomedical disciplines, as well as in business, economics, and marketing. A brief introduction to R is also provided.
Intended as a text for advanced undergraduate and/or graduate courses in psychometrics, testing and measurement, measurement theory, psychological testing, and/or educational and/or psychological measurement taught in departments of psychology, education, human development, epidemiology, business, and marketing, it will also appeal to researchers in these disciplines. Prerequisites include an introduction to statistics with exposure to regression analysis and ANOVA. Familiarity with SPSS, SAS, STATA, or R is also beneficial. As a whole, the book provides an invaluable introduction to measurement and test theory to those with limited or no familiarity with the mathematical and statistical procedures involved in measurement and testing.
A black swan is a highly improbable event with three principal characteristics: It is unpredictable; it carries a massive impact; and, after the fact, we concoct an explanation that makes it appear less random, and more predictable, than it was. The astonishing success of Google was a black swan; so was 9/11. For Nassim Nicholas Taleb, black swans underlie almost everything about our world, from the rise of religions to events in our own personal lives.
Why do we not acknowledge the phenomenon of black swans until after they occur? Part of the answer, according to Taleb, is that humans are hardwired to learn specifics when they should be focused on generalities. We concentrate on things we already know and time and time again fail to take into consideration what we don’t know. We are, therefore, unable to truly estimate opportunities, too vulnerable to the impulse to simplify, narrate, and categorize, and not open enough to rewarding those who can imagine the “impossible.”
For years, Taleb has studied how we fool ourselves into thinking we know more than we actually do. We restrict our thinking to the irrelevant and inconsequential, while large events continue to surprise us and shape our world. In this revelatory book, Taleb explains everything we know about what we don’t know, and this second edition features a new philosophical and empirical essay, “On Robustness and Fragility,” which offers tools to navigate and exploit a Black Swan world.
Elegant, startling, and universal in its applications, The Black Swan will change the way you look at the world. Taleb is a vastly entertaining writer, with wit, irreverence, and unusual stories to tell. He has a polymathic command of subjects ranging from cognitive science to business to probability theory. The Black Swan is a landmark book—itself a black swan.
Praise for Nassim Nicholas Taleb
“The most prophetic voice of all.”—GQ
Praise for The Black Swan
“[A book] that altered modern thinking.”—The Times (London)
“A masterpiece.”—Chris Anderson, editor in chief of Wired, author of The Long Tail
“Idiosyncratically brilliant.”—Niall Ferguson, Los Angeles Times
“The Black Swan changed my view of how the world works.”—Daniel Kahneman, Nobel laureate
“[Taleb writes] in a style that owes as much to Stephen Colbert as it does to Michel de Montaigne. . . . We eagerly romp with him through the follies of confirmation bias [and] narrative fallacy.”—The Wall Street Journal
“Hugely enjoyable—compelling . . . easy to dip into.”—Financial Times
“Engaging . . . The Black Swan has appealing cheek and admirable ambition.”—The New York Times Book Review
The chapters on measurement discuss generalizability theory, latent trait and latent class models, and multi-faceted Rasch modeling. The chapters on decision analysis feature applied location theory models, data envelopment analysis, and heuristic search procedures. The chapters on modeling examine exploratory and confirmatory factor analysis, dynamic factor analysis, partial least squares and structural equation modeling, multilevel data analysis, modeling of longitudinal data by latent growth curve methods and structures, and configural models of longitudinal categorical data.
Understanding statistics is a requirement for obtaining and making the most of a degree in psychology, a fact of life that often takes first-year psychology students by surprise. Filled with jargon-free explanations and real-life examples, Psychology Statistics For Dummies makes the often-confusing world of statistics a lot less baffling, and provides you with the step-by-step instructions necessary for carrying out data analysis.
Psychology Statistics For Dummies:
• Serves as an easily accessible supplement to doorstop-sized psychology textbooks
• Provides psychology students with psychology-specific statistics instruction
• Includes clear explanations and instruction on performing statistical analysis
• Teaches students how to analyze their data with SPSS, the most widely used statistical package among students
This volume will be of interest to researchers and practitioners from a wide variety of disciplines, including biology, business, economics, education, medicine, psychology, sociology, and other social and behavioral sciences. A working knowledge of basic multivariate statistics and measurement theory is assumed.
An introductory text for students learning multivariate statistical methods for the first time, this book keeps mathematical details to a minimum while conveying the basic principles. One of the principal strategies used throughout the book--in addition to the presentation of actual data analyses--is pointing out the analogy between a common univariate statistical technique and the corresponding multivariate method. Many computer examples--drawing on SAS software --are used as demonstrations.
Throughout the book, the computer is used as an adjunct to the presentation of a multivariate statistical method in an empirically oriented approach. Basically, the model adopted in this book is to first present the theory of a multivariate statistical method along with the basic mathematical computations necessary for the analysis of data. Subsequently, a real-world problem is discussed and an example data set is provided for analysis. Throughout the presentation and discussion of a method, many references are made to the computer, output is explained, and exercises and examples with real data are included.
This book is ideal for anyone who likes puzzles, brainteasers, games, gambling, and magic tricks, and for those who want to apply math and science to everyday circumstances. Several hacks in the first chapter alone (such as the "central limit theorem," which allows you to know everything by knowing just a little) serve as sound approaches for marketing and other business objectives. Using the tools of inferential statistics, you can understand the way probability works, discover relationships, predict events with uncanny accuracy, and even make a little money with a well-placed wager here and there.
Statistics Hacks presents useful techniques from statistics, educational and psychological measurement, and experimental research to help you solve a variety of problems in business, games, and life. You'll learn how to:
• Play smart when you play Texas Hold 'Em, blackjack, roulette, dice games, or even the lottery
• Design your own winnable bar bets to make money and amaze your friends
• Predict the outcomes of baseball games, know when to "go for two" in football, and anticipate the winners of other sporting events with surprising accuracy
• Demystify amazing coincidences and distinguish the truly random from the only seemingly random--even keep your iPod's "random" shuffle honest
• Spot fraudulent data, detect plagiarism, and break codes
• Isolate the effects of observation on the thing observed
Whether you're a statistics enthusiast who does calculations in your sleep or a civilian who is entertained by clever solutions to interesting problems, Statistics Hacks has tools to give you an edge over the world's slim odds.
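As a rough illustration of the "central limit theorem" hack mentioned above, here is a minimal sketch in Python (not from the book; the skewed exponential population and the sample sizes are illustrative assumptions): even when individual observations come from a heavily skewed distribution, the averages of small samples cluster tightly around the true mean, which is why a modest sample can tell you a lot.

```python
import random
import statistics

random.seed(42)  # fixed seed so the demonstration is reproducible

# Exponential population with rate 1: heavily skewed, true mean = 1.0
def sample_mean(n):
    """Average of n draws from the skewed population."""
    return statistics.fmean(random.expovariate(1.0) for _ in range(n))

# Collect many sample means, each based on only 30 observations
means = [sample_mean(30) for _ in range(2000)]

# The sample means pile up near the population mean of 1.0, with a
# spread close to the theoretical 1/sqrt(30), about 0.18
print(round(statistics.fmean(means), 2))
print(round(statistics.stdev(means), 2))
```

Running variations of this sketch with other skewed populations shows the same pattern: the distribution of sample means is far tighter and far more symmetric than the population it came from.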
This expanded edition includes new data and easy-to-read graphics explaining the 2008 election. Red State, Blue State, Rich State, Poor State is a must-read for anyone seeking to make sense of today's fractured political landscape.
But Tim Harford, award-winning journalist and author of the bestseller The Undercover Economist, likes to spring surprises. In this deftly reasoned book, Harford argues that life is logical after all. Under the surface of everyday insanity, hidden incentives are at work, and Harford shows these incentives emerging in the most unlikely places.
Using tools ranging from animal experiments to supercomputer simulations, an ambitious new breed of economist is trying to unlock the secrets of society. The Logic of Life is the first book to map out the astonishing insights and frustrating blind spots of this new economics in a way that anyone can enjoy.
The Logic of Life presents an X-ray image of human life, stripping away the surface to show us a picture that is revealing, enthralling, and sometimes disturbing. The stories that emerge are not about data or equations but about people: the athlete who survived a shocking murder attempt, the computer geek who beat the hard-bitten poker pros, the economist who defied Henry Kissinger and faked an invasion of Berlin, the king who tried to buy off a revolution.
Once you’ve read this quotable and addictive book, life will never look the same again.
Looking for an easily accessible overview of research methods in psychology? This is the book for you! Whether you need to get ahead in class, you're pressed for time, or you just want a take on a topic that's not covered in your textbook, Research Methods in Psychology For Dummies has you covered.
Written in plain English and packed with easy-to-follow instruction, this friendly guide takes the intimidation out of the subject and tackles the fundamentals of psychology research in a way that makes it approachable and comprehensible, no matter your background. Inside, you'll find expert coverage of qualitative and quantitative research methods, including surveys, case studies, laboratory observations, tests and experiments—and much more.
• Serves as an excellent supplement to course textbooks
• Provides a clear introduction to the scientific method
• Presents the methodologies and techniques used in psychology research
• Written by the authors of Psychology Statistics For Dummies
If you're a first or second year psychology student and want to supplement your doorstop-sized psychology textbook—and boost your chances of scoring higher at exam time—this hands-on guide breaks down the subject into easily digestible bits and propels you towards success.
Drawing on examples from across the social sciences, this book covers everything you need to know to plan, implement, and analyze the results of population-based survey experiments. But it is more than just a "how to" manual. This lively book challenges conventional wisdom about internal and external validity, showing why strong causal claims need not come at the expense of external validity, and how it is now possible to execute experiments remotely using large-scale population samples.
Designed for social scientists across the disciplines, Population-Based Survey Experiments provides the first complete introduction to this methodology.
Offers the most comprehensive treatment of the subject
Features a wealth of examples and practical advice
Reexamines issues of internal and external validity
Can be used in conjunction with downloadable data from ExperimentCentral.org for design and analysis exercises in the classroom
In the age of Big Data we often believe that our predictions about the future are better than ever before. But as risk expert Gerd Gigerenzer shows, the surprising truth is that in the real world, we often get better results by using simple rules and considering less information.
In Risk Savvy, Gigerenzer reveals that most of us, including doctors, lawyers, financial advisers, and elected officials, misunderstand statistics much more often than we think, leaving us not only misinformed, but vulnerable to exploitation. Yet there is hope. Anyone can learn to make better decisions for their health, finances, family, and business without needing to consult an expert or a super computer, and Gigerenzer shows us how.
Risk Savvy is an insightful and easy-to-understand remedy to our collective information overload and an essential guide to making smart, confident decisions in the face of uncertainty.
· Downloadable data sets
· Library of computer programs in SAS, SPSS, Stata, HLM, MLwiN, and more
· Additional material for data analysis
Features of the Fourth Edition include:
• New material on sample size calculations for chance-corrected agreement coefficients, as well as for intraclass correlation coefficients. The researcher will be able to determine the optimal number of raters, subjects, and trials per subject.
• The chapter entitled “Benchmarking Inter-Rater Reliability Coefficients” has been entirely rewritten.
• A substantially expanded introductory chapter that explores possible definitions of the notion of inter-rater reliability.
• Extensive revisions to all chapters to improve their readability.
Shortlisted for the British Book Design and Production Awards 2016
Shortlisted for the Association of Learned & Professional Society Publishers Award for Innovation in Publishing 2016
An Adventure in Statistics: The Reality Enigma by best-selling author and award-winning teacher Andy Field offers a better way to learn statistics. It combines rock-solid statistics coverage with compelling visual storytelling to address the conceptual difficulties that students learning statistics for the first time often encounter in introductory courses, guiding students away from rote memorization and toward critical thinking and problem solving. Field masterfully weaves in a unique, action-packed story starring Zach, a character who processes information, and wrestles with the challenges of understanding it, in the same way a statistics novice would. Illustrated with stunning graphic novel-style art and featuring Socratic dialogue, the story captivates readers as it introduces them to concepts, eliminating potential statistics anxiety.
The book assumes no previous statistics knowledge nor does it require the use of data analysis software. It covers the material you would expect for an introductory level statistics course that Field’s other books (Discovering Statistics Using IBM SPSS Statistics and Discovering Statistics Using R) only touch on, but with a contemporary twist, laying down strong foundations for understanding classical and Bayesian approaches to data analysis.
In doing so, it provides an unrivalled launch pad to further study, research, and inquisitiveness about the real world, equipping students with the skills to succeed in their chosen degree and which they can go on to apply in the workplace.
The Story and Main Characters
The Reality Revolution
In the City of Elpis, in the year 2100, there has been a reality revolution. Prior to the revolution, Elpis citizens were unable to see their flaws and limitations, believing themselves talented and special. This led to a self-absorbed society in which hard work and the collective good were undervalued and eroded.
To combat this, Professor Milton Grey invented the reality prism, a hat that allowed its wearers to see themselves as they really were - flaws and all. Faced with the truth, Elpis citizens revolted and destroyed and banned all reality prisms.
The Mysterious Disappearance
Zach and Alice are born soon after all the prisms have been destroyed. Zach, a musician who doesn’t understand science, and Alice, a geneticist who is also a whiz at statistics, are in love. One night, after making a world-changing discovery, Alice suddenly disappears, leaving behind a song playing on a loop and a file with her research on it.
Statistics to the Rescue!
Sensing that she might be in danger, Zach follows the clues to find her, as he realizes that the key to discovering why Alice has vanished is in her research. Alas! He must learn statistics and apply what he learns in order to overcome a number of deadly challenges and find the love of his life.
As Zach and his pocket watch, The Head, embark on their quest to find Alice, they meet Professor Milton Grey and Celia, battle zombies, cross a probability bridge, and encounter Jig:Saw, a mysterious corporation that might have something to do with Alice’s disappearance…
Author News: “Eight years ago I had the idea to write a fictional story through which the student learns statistics via a shared adventure with the main character...” Read the complete article from Andy Field on writing his new book in the Times Higher Education article “Andy Field takes statistics adventure to a new level.”
Connect with us on Facebook and share your experiences with Andy’s texts, check out news, access free stuff, see photos, watch videos, learn about competitions, and much more.
Go behind the scenes and learn more about the man behind the book: Watch Andy talk about why he created a statistics book using the framework of a novel, with illustrations by one of the illustrators for the show Doctor Who. See more videos on Andy’s YouTube channel.
Available with Perusall—an eBook that makes it easier to prepare for class
Perusall is an award-winning eBook platform featuring social annotation tools that allow students and instructors to collaboratively mark up and discuss their SAGE textbook. Backed by research and supported by technological innovations developed at Harvard University, this process of learning through collaborative annotation keeps your students engaged and makes teaching easier and more effective.
New features in the fourth edition include:
sets of work problems in each chapter with detailed solutions and additional problems online to help students test their understanding of the material,
new "Worked Examples" to walk students through how to calculate and interpret the statistics featured in each chapter,
new examples from the author’s own data and from published research and the popular media to help students see how statistics are applied and written about in professional publications,
many more examples, tables, and charts to help students visualize and clarify key concepts and demonstrate how the statistics are used in the real world,
a more logical flow, with correlation directly preceding regression, and a combined glossary appearing at the end of the book,
a Quick Guide to Statistics, Formulas, and Degrees of Freedom at the start of the book, plainly outlining each statistic and when students should use them,
greater emphasis on (and description of) effect size and confidence interval reporting, reflecting their growing importance in research across the social science disciplines, and
an expanded website at www.routledge.com/cw/urdan with PowerPoint presentations, chapter summaries, a new test bank, interactive problems and detailed solutions to the text’s work problems, SPSS datasets for practice, links to useful tools and resources, and videos showing how to calculate statistics, how to calculate and interpret the appendices, and how to understand some of the more confusing tables of output produced by SPSS.
Statistics in Plain English, Fourth Edition is an ideal guide for statistics and research methods courses, or any course that uses statistics, taught at the undergraduate or graduate level, or a reference tool for anyone interested in refreshing their memory about key statistical concepts. The research examples are from psychology, education, and other social and behavioral sciences.
Drawing on their hugely popular BBC Radio 4 show More or Less, journalist Michael Blastland and internationally known economist Andrew Dilnot delight, amuse, and convert American mathphobes by showing how our everyday experiences make sense of numbers.
The radical premise of The Numbers Game is to show how much we already know and give practical ways to use our knowledge to become cannier consumers of the media. If you've ever wondered what "average" really means, whether the scare stories about cancer risk should convince you to change your behavior, or whether a story you read in the paper is biased (and how), you need this book. Blastland and Dilnot show how to survive and thrive on the torrent of numbers that pours through everyday life.
Theodore R. Marmor teaches politics and public policy in Yale University's management and law schools as well as in its political science department. He is also the author of Understanding Health Care Reform and coauthor of America's Misunderstood Welfare State.
New to This Edition
*Chapters on using each type of analysis with multicategorical antecedent variables.
*Example analyses using PROCESS v3, with annotated outputs throughout the book.
*More tips and advice, including new or revised discussions of formally testing moderation of a mechanism using the index of moderated mediation; effect size in mediation analysis; comparing conditional effects in models with more than one moderator; using R code for visualizing interactions; distinguishing between testing interaction and probing it; and more.
*Rewritten Appendix A, which provides the only documentation of PROCESS v3, including 13 new preprogrammed models that combine moderation with serial mediation or parallel and serial mediation.
*Appendix B, describing how to create customized models in PROCESS v3 or edit preprogrammed models.
As Nobel Prize–winning economist Ronald Coase once cynically observed, “If you torture data long enough, it will confess.” Lying with statistics is a time-honored con. In Standard Deviations, economics professor Gary Smith walks us through the various tricks and traps that people use to back up their own crackpot theories. Sometimes, the unscrupulous deliberately try to mislead us. Other times, the well-intentioned are blissfully unaware of the mischief they are committing. Today, data is so plentiful that researchers spend precious little time distinguishing between good, meaningful indicators and total rubbish. Not only do others use data to fool us, we fool ourselves.
With the breakout success of Nate Silver’s The Signal and the Noise, the once humdrum subject of statistics has never been hotter. Drawing on breakthrough research in behavioral economics by luminaries like Daniel Kahneman and Dan Ariely and taking to task some of the conclusions of Freakonomics author Steven D. Levitt, Standard Deviations demystifies the science behind statistics and makes it easy to spot the fraud all around.
London Times Book of the Week (2014)
-Illustrative examples using Mplus 7.4 include conceptual figures, Mplus program syntax, and an interpretation of results to show readers how to carry out the analyses with actual data.
-Exercises with an answer key allow readers to practice the skills they learn.
-Applications to a variety of disciplines appeal to those in the behavioral, social, political, educational, occupational, business, and health sciences.
-Data files for all the illustrative examples and exercises at www.routledge.com/9781138925151 allow readers to test their understanding of the concepts.
-“Point to Remember” boxes aid in reader comprehension or provide in-depth discussions of key statistical or theoretical concepts.
Part 1 introduces basic structural equation modeling (SEM) as well as first- and second-order growth curve modeling. The book opens with the basic concepts from SEM, possible extensions of conventional growth curve models, and the data and measures used throughout the book. The subsequent chapters in Part 1 explain the extensions. Chapter 2 introduces conventional modeling of multidimensional panel data, including confirmatory factor analysis (CFA) and growth curve modeling, and its limitations. The logical and theoretical extension of a CFA to a second-order growth curve, known as the curve-of-factors model (CFM), is explained in Chapter 3. Chapter 4 illustrates the estimation and interpretation of unconditional and conditional CFMs. Chapter 5 presents the logical and theoretical extension of a parallel process model to a second-order growth curve, known as the factor-of-curves model (FCM). Chapter 6 illustrates the estimation and interpretation of unconditional and conditional FCMs. Part 2 reviews growth mixture modeling, including unconditional growth mixture modeling (Ch. 7) and conditional growth mixture models (Ch. 8). How to extend second-order growth curves (curve-of-factors and factor-of-curves models) to growth mixture models is highlighted in Chapter 9.
Ideal as a supplement for use in graduate courses on (advanced) structural equation, multilevel, longitudinal, or latent variable modeling, latent growth curve and mixture modeling, factor analysis, multivariate statistics, or advanced quantitative techniques (methods) taught in psychology, human development and family studies, business, education, health, and social sciences, this book’s practical approach also appeals to researchers. Prerequisites include a basic knowledge of intermediate statistics and structural equation modeling.
Drawing from history, mythology, literature, pop culture, and practical experience, Ciulla probes the many meanings of work or its meaninglessness and asks:
Why are so many of us letting work take over our lives and trying to live in what little time is left?
What has happened to the old, unspoken contract between worker and employer?
Why aren't young people being disloyal when they regularly consider changing jobs?
Why can't employers promise as much to workers as before? Is it because they promise so much to stockholders?
Why are there mass layoffs and "downsizing" in a time of unequaled corporate prosperity? And why are the most common lies in business about satisfactory employee performance?
The traditional contract between employers and employees is over. This thoughtful and provocative study shows how to replace it with the one we make with ourselves.
Where do these statistics originate? How accurate are they? Poor Numbers is the first analysis of the production and use of African economic development statistics. Morten Jerven's research shows how the statistical capacities of sub-Saharan African economies have fallen into disarray. The numbers substantially misstate the actual state of affairs. As a result, scarce resources are misapplied. Development policy does not deliver the benefits expected. Policymakers' attempts to improve the lot of the citizenry are frustrated. Donors have no accurate sense of the impact of the aid they supply. Jerven's findings from sub-Saharan Africa have far-reaching implications for aid and development policy. As Jerven notes, the current catchphrase in the development community is "evidence-based policy," and scholars are applying increasingly sophisticated econometric methods, but no statistical technique can compensate for partial and unreliable data.
"The book makes a valuable contribution by synthesizing current research and identifying areas for future investigation for each aspect of the survey process."
—Journal of the American Statistical Association
"Overall, the high quality of the text material is matched by the quality of writing . . ."
—Public Opinion Quarterly
". . . it should find an audience everywhere surveys are being conducted."
This new edition of Survey Methodology continues to provide a state-of-the-science presentation of essential survey methodology topics and techniques. The volume's six world-renowned authors have updated this Second Edition to present newly emerging approaches to survey research and provide more comprehensive coverage of the major considerations in designing and conducting a sample survey.
Key topics in survey methodology are clearly explained in the book's chapters, with coverage including sampling frame evaluation, sample design, development of questionnaires, evaluation of questions, alternative modes of data collection, interviewing, nonresponse, post-collection processing of survey data, and practices for maintaining scientific integrity. Acknowledging the growing advances in research and technology, the Second Edition features:
- Updated explanations of sampling frame issues for mobile telephone and web surveys
- New scientific insight on the relationship between nonresponse rates and nonresponse errors
- Restructured discussion of ethical issues in survey research, emphasizing the growing research results on privacy, informed consent, and confidentiality issues
- The latest research findings on effective questionnaire development techniques
- The addition of 50% more exercises at the end of each chapter, illustrating basic principles of survey design
- An expanded FAQ chapter that addresses the concerns that accompany newly established methods
Providing valuable and informative perspectives on the most modern methods in the field, Survey Methodology, Second Edition is an ideal book for survey research courses at the upper-undergraduate and graduate levels. It is also an indispensable reference for practicing survey methodologists and any professional who employs survey research methods.
Updated throughout, the second edition features three new chapters—growth modeling with ordered categorical variables, growth mixture modeling, and pooled interrupted time series LGM approaches. Following a new organization, the book now covers the development of the LGM, followed by chapters on multiple-group issues (analyzing growth in multiple populations, accelerated designs, and multi-level longitudinal approaches), and then special topics such as missing data models, LGM power and Monte Carlo estimation, and latent growth interaction models. The model specifications previously included in the appendices are now available on the CD so the reader can more easily adapt the models to their own research.
This practical guide is ideal for a wide range of social and behavioral researchers interested in the measurement of change over time, including social, developmental, organizational, educational, consumer, personality, and clinical psychologists, sociologists, and quantitative methodologists. It is also suited as a text for a course on latent variable growth curve modeling or as a supplement for a course on multivariate statistics. A prerequisite of graduate-level statistics is recommended.
The book is intended for students, practitioners, and researchers in fields such as survey and market research, psychological research, official statistics and customer satisfaction research.
New to This Edition
*Extensively revised to cover important new topics: Pearl's graphing theory and the SCM, causal inference frameworks, conditional process modeling, path models for longitudinal data, item response theory, and more.
*Chapters on best practices in all stages of SEM, measurement invariance in confirmatory factor analysis, and significance testing issues and bootstrapping.
*Expanded coverage of psychometrics.
*Additional computer tools: online files for all detailed examples, previously provided in EQS, LISREL, and Mplus, are now also given in Amos, Stata, and R (lavaan).
*Reorganized to cover the specification, identification, and analysis of observed variable models separately from latent variable models.
*Exercises with answers, plus end-of-chapter annotated lists of further reading.
*Real examples of troublesome data, demonstrating how to handle typical problems in analyses.
*Topic boxes on specialized issues, such as causes of nonpositive definite correlations.
*Boxed rules to remember.
*Website promoting a learn-by-doing approach, including syntax and data files for six widely used SEM computer tools.
This book is ideal for advanced undergraduates, graduate students, and researchers in the social sciences who need to understand and use relatively advanced statistical methods but whose mathematical preparation for this work is insufficient.
- Richard Harris, Professor of Quantitative Social Science, University of Bristol
R is a powerful open source computing tool that supports geographical analysis and mapping for the many geography and ‘non-geography’ students and researchers interested in spatial analysis and mapping.
This book provides an introduction to the use of R for spatial statistical analysis, geocomputation and the analysis of geographical information for researchers collecting and using data with location attached, largely through increased GPS functionality.
Brunsdon and Comber take readers from ‘zero to hero’ in spatial analysis and mapping through functions they have developed and compiled into R packages. This enables practical R applications in GIS, spatial analyses, spatial statistics, mapping, and web-scraping. Each chapter includes:
- Example data and commands for exploring it
- Scripts and coding to exemplify specific functionality
- Advice for developing greater understanding, through functions such as locator(), View(), and alternative coding to achieve the same ends
- Self-contained exercises for students to work through
- Embedded code within the descriptive text
This is a definitive 'how to' that takes students - of any discipline - from coding to actual applications and uses of R.
*Checklists of key words and formulas in every chapter.
*Examples of SPSS screenshots used for analyzing data.
*Cautionary notes plus "Putting It All Together" section recaps.
*End-of-chapter self-quizzes (with full answers and explanations).
*Glossary of terms.
Data Analysis also describes how the model comparison approach and uniform framework can be applied to models that include product predictors (i.e., interactions and nonlinear effects) and to observations that are nonindependent. Indeed, the analysis of nonindependent observations is treated in some detail, including models of nonindependent data with continuously varying predictors as well as standard repeated measures analysis of variance. This approach also provides an integrated introduction to multilevel or hierarchical linear models and logistic regression. Finally, Data Analysis provides guidance for the treatment of outliers and other problematic aspects of data analysis. It is intended for advanced undergraduate and graduate level courses in data analysis and offers an integrated approach that is very accessible and easy to teach.
Highlights of the third edition include:
a new chapter on logistic regression;
expanded treatment of mixed models for data with multiple random factors;
an enhanced website with PowerPoint presentations and other tools that demonstrate the concepts in the book;
exercises for each chapter that highlight research findings from the literature;
data sets, R code, and SAS output for all analyses;
additional examples and problem sets; and
test questions.