## Similar

Written in a clear, accessible, and comprehensive manner, the Handbook of Probability presents the fundamentals of probability with an emphasis on the balance of theory, application, and methodology. Utilizing basic examples throughout, the handbook expertly transitions between concepts and practice to allow readers an inclusive introduction to the field of probability.

The book provides a useful format with self-contained chapters, allowing the reader easy and quick reference. Each chapter includes an introduction, historical background, theory and applications, algorithms, and exercises.

The Handbook of Probability is an ideal resource for researchers and practitioners in numerous fields, such as mathematics, statistics, operations research, engineering, medicine, and finance, as well as a useful text for graduate students.

Introducing new and established mathematical foundations necessary to analyze realistic market models and scenarios, the handbook begins with a presentation of the dynamics and complexity of futures and derivatives markets as well as a portfolio optimization problem using quantum computers. Subsequently, the handbook addresses estimating complex model parameters using high-frequency data. Finally, the handbook focuses on the links between models used in financial markets and models used in other research areas such as geophysics, fossil records, and earthquake studies. The Handbook of High-Frequency Trading and Modeling in Finance also features:

• Contributions by well-known experts within the academic, industrial, and regulatory fields

• A well-structured outline on the various data analysis methodologies used to identify new trading opportunities

• Newly emerging quantitative tools that address growing concerns relating to high-frequency data such as stochastic volatility and volatility tracking; stochastic jump processes for limit-order books and broader market indicators; and options markets

• Practical applications using real-world data to help readers better understand the presented material

The Handbook of High-Frequency Trading and Modeling in Finance is an excellent reference for professionals in the fields of business, applied statistics, econometrics, and financial engineering. The handbook is also a good supplement for graduate and MBA-level courses on quantitative finance, volatility, and financial econometrics.

Ionut Florescu, PhD, is Research Associate Professor in Financial Engineering and Director of the Hanlon Financial Systems Laboratory at Stevens Institute of Technology. His research interests include stochastic volatility, stochastic partial differential equations, Monte Carlo Methods, and numerical methods for stochastic processes. Dr. Florescu is the author of Probability and Stochastic Processes, the coauthor of Handbook of Probability, and the coeditor of Handbook of Modeling High-Frequency Data in Finance, all published by Wiley.

Maria C. Mariani, PhD, is Shigeko K. Chan Distinguished Professor in Mathematical Sciences and Chair of the Department of Mathematical Sciences at The University of Texas at El Paso. Her research interests include mathematical finance, applied mathematics, geophysics, nonlinear and stochastic partial differential equations and numerical methods. Dr. Mariani is the coeditor of Handbook of Modeling High-Frequency Data in Finance, also published by Wiley.

H. Eugene Stanley, PhD, is William Fairfield Warren Distinguished Professor at Boston University. Stanley is one of the key founders of the new interdisciplinary field of econophysics, and has an ISI Hirsch index H=128 based on more than 1200 papers. In 2004 he was elected to the National Academy of Sciences.

Frederi G. Viens, PhD, is Professor of Statistics and Mathematics and Director of the Computational Finance Program at Purdue University. He holds more than two dozen local, regional, and national awards and travels worldwide to lecture on his research interests, which range from quantitative finance to climate science and agricultural economics. A Fellow of the Institute of Mathematical Statistics, Dr. Viens is the coeditor of Handbook of Modeling High-Frequency Data in Finance, also published by Wiley.

With a sophisticated approach, Probability and Stochastic Processes successfully balances theory and applications in a pedagogical and accessible format. The book’s primary focus is on key theoretical notions in probability to provide a foundation for understanding concepts and examples related to stochastic processes.

Organized into two main sections, the book begins by developing probability theory with topical coverage on probability measure; random variables; integration theory; product spaces, conditional distribution, and conditional expectations; and limit theorems. The second part explores stochastic processes and related concepts including the Poisson process, renewal processes, Markov chains, semi-Markov processes, martingales, and Brownian motion. Featuring a logical combination of traditional and complex theories as well as practices, Probability and Stochastic Processes also includes:

• Multiple examples from disciplines such as business, mathematical finance, and engineering

• Chapter-by-chapter exercises and examples to allow readers to test their comprehension of the presented material

• A rigorous treatment of all probability and stochastic processes concepts

An appropriate textbook for probability and stochastic processes courses at the upper-undergraduate and graduate level in mathematics, business, and electrical engineering, Probability and Stochastic Processes is also an ideal reference for researchers and practitioners in the fields of mathematics, engineering, and finance.

In recent years, the availability of high-frequency data and advances in computing have allowed financial practitioners to design systems that can handle and analyze this information. Handbook of Modeling High-Frequency Data in Finance addresses the many theoretical and practical questions raised by the nature and intrinsic properties of this data.

A one-stop compilation of empirical and analytical research, this handbook explores high-frequency data and its uses in financial engineering, statistics, and the modern financial business arena. Every chapter uses real-world examples to present new, original, and relevant topics that relate to newly evolving discoveries in high-frequency finance, such as:

• Designing new methodology to discover the elasticity and plasticity of price evolution

• Constructing microstructure simulation models

• Calculating option prices in the presence of jumps and transaction costs

• Using boosting for financial analysis and trading

The handbook motivates practitioners to apply high-frequency finance to real-world situations by including exclusive topics such as risk measurement and management, UHF data, microstructure, dynamic multi-period optimization, mortgage data models, hybrid Monte Carlo, retirement, trading systems and forecasting, pricing, and boosting. The diverse topics and viewpoints presented in each chapter ensure that readers are supplied with a wide treatment of practical methods.

Handbook of Modeling High-Frequency Data in Finance is an essential reference for academics and practitioners in finance, business, and econometrics who work with high-frequency data in their everyday work. It also serves as a supplement for risk management and high-frequency finance courses at the upper-undergraduate and graduate levels.
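One topic the handbook highlights, option pricing in the presence of jumps, can be sketched numerically. The snippet below is a minimal Monte Carlo pricer under a Merton-style jump-diffusion (normally distributed jumps in log-price); all parameter values are hypothetical, chosen only for illustration, and transaction costs are ignored.

```python
import math
import random

def mc_call_price_jump_diffusion(s0, k, r, sigma, t,
                                 lam, jump_mu, jump_sigma,
                                 n_paths=100_000, seed=42):
    """Price a European call by Monte Carlo under a Merton-style
    jump-diffusion: geometric Brownian motion plus compound Poisson
    jumps in log-price. Parameters here are illustrative only."""
    rng = random.Random(seed)
    # Martingale correction so the discounted stock price keeps drift r
    kappa = math.exp(jump_mu + 0.5 * jump_sigma ** 2) - 1.0
    drift = (r - 0.5 * sigma ** 2 - lam * kappa) * t
    payoff_sum = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        # Number of jumps over [0, t] is Poisson(lam * t); sample by inversion
        n_jumps = 0
        p = math.exp(-lam * t)
        u, cdf = rng.random(), p
        while u > cdf:
            n_jumps += 1
            p *= lam * t / n_jumps
            cdf += p
        jump = sum(rng.gauss(jump_mu, jump_sigma) for _ in range(n_jumps))
        st = s0 * math.exp(drift + sigma * math.sqrt(t) * z + jump)
        payoff_sum += max(st - k, 0.0)
    return math.exp(-r * t) * payoff_sum / n_paths

price = mc_call_price_jump_diffusion(100, 100, 0.05, 0.2, 1.0,
                                     lam=0.5, jump_mu=-0.1, jump_sigma=0.15)
```

With the jump intensity `lam` set to zero the estimate should recover the ordinary Black-Scholes price for the same inputs, which is a convenient sanity check.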

In Pursuit of the Traveling Salesman travels to the very threshold of our understanding about the nature of complexity, and challenges you yourself to discover the solution to this captivating mathematical problem.

Some images inside the book are unavailable due to digital copyright restrictions.

Social media is huge. Nothing in the history of the world has brought people together and changed the face of business like social media has.

Reach out to the world and get them to like you.

Various professionals will find this book immensely useful, whether it be the industrial engineer, the industrial manager, or anyone associated with engineering in a technical or managing role. It will bring about a clear understanding of not only how to implement Six Sigma statistical tools, but also how to do so within the bounds of a Lean manufacturing scheme. It will show how Lean Six Sigma can help reinforce the notion of “less is more,” while at the same time preserving minimal error rates in final manufactured products.

• Reviews the essential statistical tools upon which Six Sigma rests, including the normal distribution, mean deviation, and the derivation of 1 sigma through 6 sigma

• Explains essential Lean tools like Value-Stream Mapping and quality improvement tools like Kaizen techniques within the context of Lean Six Sigma practice

• Extended case study to clearly demonstrate how Six Sigma and Lean principles have been actually implemented, reducing production times and costs and creating improved product quality

The 21 self-contained chapters in this volume are devoted to the examination of modern trends and open problems in the field of optimization. This book will be a valuable tool not only to specialists interested in the technical detail and various applications presented, but also to researchers interested in building upon the book’s theoretical results.

The Second Edition is completely revised and provides additional review material on linear algebra as well as complete coverage of elementary linear programming. Other topics covered include: the Duality Theorem; transportation problems; the assignment problem; and the maximal flow problem. New figures and exercises are provided and the authors have updated all computer applications.

• More review material on linear algebra

• Elementary linear programming covered more efficiently

• Presentation improved, especially for the duality theorem, transportation problems, the assignment problem, and the maximal flow problem

• New figures and exercises

• Computer applications updated

• New guide to inexpensive linear programming software for personal computers

The author presents the first extended treatment of MM algorithms, which are ideal for high-dimensional optimization problems in data mining, imaging, and genomics; derives numerous algorithms from a broad diversity of application areas, with a particular emphasis on statistics, biology, and data mining; and summarizes a large amount of literature that has not reached book form before.
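The MM (majorize-minimize) strategy mentioned above can be illustrated in miniature: replace an awkward objective with an easier surrogate that touches it at the current iterate, minimize the surrogate, and repeat. Below is a toy sketch, assuming a quadratic majorizer of the absolute value; it computes a one-dimensional median, a far simpler problem than the high-dimensional applications the book treats.

```python
def mm_median(data, x0=None, n_iter=200, eps=1e-9):
    """Toy MM (majorize-minimize) algorithm: minimize f(x) = sum_i |x - a_i|
    by repeatedly minimizing a quadratic majorizer of each |x - a_i|
    built at the current iterate. The minimizer of f is the sample median."""
    x = sum(data) / len(data) if x0 is None else x0
    for _ in range(n_iter):
        # Majorize |x - a| at x_k by (x - a)^2 / (2|x_k - a|) + |x_k - a| / 2;
        # minimizing the summed surrogate gives a weighted mean of the data
        # with weights 1 / |x_k - a_i| (floored at eps to avoid division by zero).
        w = [1.0 / max(abs(x - a), eps) for a in data]
        x = sum(wi * a for wi, a in zip(w, data)) / sum(w)
    return x

print(round(mm_median([1, 2, 3, 4, 100]), 3))  # → 3.0, the median
```

Each surrogate lies above the objective and agrees with it at the current point, so every iteration drives the objective downhill; this descent property is what makes MM algorithms attractive when the original objective is hard to minimize directly.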

New York Times Bestseller

“Not so different in spirit from the way public intellectuals like John Kenneth Galbraith once shaped discussions of economic policy and public figures like Walter Cronkite helped sway opinion on the Vietnam War…could turn out to be one of the more momentous books of the decade.”

—New York Times Book Review

"Nate Silver's The Signal and the Noise is The Soul of a New Machine for the 21st century."

—Rachel Maddow, author of Drift

"A serious treatise about the craft of prediction—without academic mathematics—cheerily aimed at lay readers. Silver's coverage is polymathic, ranging from poker and earthquakes to climate change and terrorism."

—New York Review of Books

Nate Silver built an innovative system for predicting baseball performance, predicted the 2008 election within a hair’s breadth, and became a national sensation as a blogger—all by the time he was thirty. He solidified his standing as the nation's foremost political forecaster with his near perfect prediction of the 2012 election. Silver is the founder and editor in chief of FiveThirtyEight.com.

Drawing on his own groundbreaking work, Silver examines the world of prediction, investigating how we can distinguish a true signal from a universe of noisy data. Most predictions fail, often at great cost to society, because most of us have a poor understanding of probability and uncertainty. Both experts and laypeople mistake more confident predictions for more accurate ones. But overconfidence is often the reason for failure. If our appreciation of uncertainty improves, our predictions can get better too. This is the “prediction paradox”: The more humility we have about our ability to make predictions, the more successful we can be in planning for the future.

In keeping with his own aim to seek truth from data, Silver visits the most successful forecasters in a range of areas, from hurricanes to baseball, from the poker table to the stock market, from Capitol Hill to the NBA. He explains and evaluates how these forecasters think and what bonds they share. What lies behind their success? Are they good—or just lucky? What patterns have they unraveled? And are their forecasts really right? He explores unanticipated commonalities and exposes unexpected juxtapositions. And sometimes, it is not so much how good a prediction is in an absolute sense that matters but how good it is relative to the competition. In other cases, prediction is still a very rudimentary—and dangerous—science.

Silver observes that the most accurate forecasters tend to have a superior command of probability, and they tend to be both humble and hardworking. They distinguish the predictable from the unpredictable, and they notice a thousand little details that lead them closer to the truth. Because of their appreciation of probability, they can distinguish the signal from the noise.

With everything from the health of the global economy to our ability to fight terrorism dependent on the quality of our predictions, Nate Silver’s insights are an essential read.

For this new edition the book has been thoroughly updated throughout. There are new chapters on nonlinear interior methods and derivative-free methods for optimization, both of which are used widely in practice and the focus of much current research. Because of the emphasis on practical methods, as well as the extensive illustrations and exercises, the book is accessible to a wide audience. It can be used as a graduate text in engineering, operations research, mathematics, computer science, and business. It also serves as a handbook for researchers and practitioners in the field. The authors have strived to produce a text that is pleasant to read, informative, and rigorous - one that reveals both the beautiful nature of the discipline and its practical side.

There is a selected solutions manual for instructors for the new edition.

The Essentials For Dummies Series

Dummies is proud to present our new series, The Essentials For Dummies. Now students who are prepping for exams, preparing to study new material, or who just need a refresher can have a concise, easy-to-understand review guide that covers an entire course by concentrating solely on the most important concepts. From algebra and chemistry to grammar and Spanish, our expert authors focus on the skills students most need to succeed in a subject.

The fun and easy way to get down to business with statistics

Stymied by statistics? No fear! This friendly guide offers clear, practical explanations of statistical ideas, techniques, formulas, and calculations, with lots of examples that show you how these concepts apply to your everyday life.

Statistics For Dummies shows you how to interpret and critique graphs and charts, determine the odds with probability, guesstimate with confidence using confidence intervals, set up and carry out a hypothesis test, compute statistical formulas, and more.

• Tracks to a typical first-semester statistics course

• Updated examples resonate with today's students

• Explanations mirror teaching methods and classroom protocol

Packed with practical advice and real-world problems, Statistics For Dummies gives you everything you need to analyze and interpret data for improved classroom or on-the-job performance.
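The confidence intervals the guide teaches can be sketched in a few lines of code. The snippet below builds a large-sample 95% interval for a mean using the normal critical value 1.96; the exam scores are made up purely for illustration.

```python
import math
import statistics

def mean_confidence_interval(sample, z=1.96):
    """Large-sample 95% confidence interval for a population mean:
    x_bar +/- z * s / sqrt(n), using the normal critical value z = 1.96."""
    n = len(sample)
    x_bar = statistics.mean(sample)
    s = statistics.stdev(sample)          # sample standard deviation
    margin = z * s / math.sqrt(n)
    return (x_bar - margin, x_bar + margin)

# Forty hypothetical exam scores centered near 71.5
scores = [70, 74, 68, 75, 72, 71, 69, 73] * 5
low, high = mean_confidence_interval(scores)
```

The interval says: if we repeated the sampling many times and built an interval each time, about 95% of those intervals would cover the true mean, which is exactly the "guesstimate with confidence" idea in the blurb.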

For those who slept through Stats 101, this book is a lifesaver. Wheelan strips away the arcane and technical details and focuses on the underlying intuition that drives statistical analysis. He clarifies key concepts such as inference, correlation, and regression analysis, reveals how biased or careless parties can manipulate or misrepresent data, and shows us how brilliant and creative researchers are exploiting the valuable data from natural experiments to tackle thorny questions.

And in Wheelan’s trademark style, there’s not a dull page in sight. You’ll encounter clever Schlitz Beer marketers leveraging basic probability, an International Sausage Festival illuminating the tenets of the central limit theorem, and a head-scratching choice from the famous game show Let’s Make a Deal—and you’ll come away with insights each time. With the wit, accessibility, and sheer fun that turned Naked Economics into a bestseller, Wheelan defies the odds yet again by bringing another essential, formerly unglamorous discipline to life.
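The central limit theorem that Wheelan illustrates with the sausage festival can also be seen numerically: averages of samples from even a strongly skewed distribution cluster into a near-normal bell. A small simulation sketch (the distribution and sample sizes are chosen arbitrarily):

```python
import random
import statistics

def sample_means(n_means=2000, n_per_mean=50, seed=1):
    """Central limit theorem sketch: means of samples drawn from a
    very skewed (exponential) distribution are nearly normal, clustering
    around the true mean with spread shrinking like 1/sqrt(n)."""
    rng = random.Random(seed)
    return [statistics.mean(rng.expovariate(1.0) for _ in range(n_per_mean))
            for _ in range(n_means)]

means = sample_means()
# Exponential(1) has mean 1 and standard deviation 1, so the sample means
# should cluster around 1 with standard deviation near 1/sqrt(50) ~ 0.141.
```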

Two of the authors co-wrote The Elements of Statistical Learning (Hastie, Tibshirani and Friedman, 2nd edition 2009), a popular reference book for statistics and machine learning researchers. An Introduction to Statistical Learning covers many of the same topics, but at a level accessible to a much broader audience. This book is targeted at statisticians and non-statisticians alike who wish to use cutting-edge statistical learning techniques to analyze their data. The text assumes only a previous course in linear regression and no knowledge of matrix algebra.

After an introductory chapter introducing those system concepts that prevail throughout optimization problems of all types, the author discusses the classical theory of minima and maxima (Chapter 2). In Chapter 3, necessary and sufficient conditions for relative extrema of functionals are developed from the viewpoint of the Euler-Lagrange formalism of the calculus of variations. Chapter 4 is restricted to linear time-invariant systems for which significant results can be obtained via transform methods with a minimum of computational difficulty. In Chapter 5, emphasis is placed on applied problems which can be converted to a standard problem form for linear programming solutions, with the fundamentals of convex sets and simplex technique for solution given detailed attention. Chapter 6 examines search techniques and nonlinear programming. Chapter 7 covers Bellman's principle of optimality, and finally, Chapter 8 gives valuable insight into the maximum principle extension of the classical calculus of variations.

Designed for use in a first course in optimization for advanced undergraduates, graduate students, practicing engineers, and systems designers, this carefully written text is accessible to anyone with a background in basic differential equation theory and matrix operations. To help students grasp the material, the book contains many detailed examples and problems, and also includes reference sections for additional reading.

This new edition contains computational exercises in the form of case studies that aid understanding of optimization methods beyond their theoretical description, when it comes to actual implementation. In addition, the nonsmooth optimization part has been substantially reorganized and expanded.

These may not sound like typical questions for an economist to ask. But Steven D. Levitt is not a typical economist. He is a much-heralded scholar who studies the riddles of everyday life—from cheating and crime to sports and child-rearing—and whose conclusions turn conventional wisdom on its head.

Freakonomics is a groundbreaking collaboration between Levitt and Stephen J. Dubner, an award-winning author and journalist. They usually begin with a mountain of data and a simple question. Some of these questions concern life-and-death issues; others have an admittedly freakish quality. Thus the new field of study contained in this book: freakonomics.

Through forceful storytelling and wry insight, Levitt and Dubner show that economics is, at root, the study of incentives—how people get what they want, or need, especially when other people want or need the same thing. In Freakonomics, they explore the hidden side of . . . well, everything. The inner workings of a crack gang. The truth about real-estate agents. The myths of campaign finance. The telltale marks of a cheating schoolteacher. The secrets of the Ku Klux Klan.

What unites all these stories is a belief that the modern world, despite a great deal of complexity and downright deceit, is not impenetrable, is not unknowable, and—if the right questions are asked—is even more intriguing than we think. All it takes is a new way of looking.

Freakonomics establishes this unconventional premise: If morality represents how we would like the world to work, then economics represents how it actually does work. It is true that readers of this book will be armed with enough riddles and stories to last a thousand cocktail parties. But Freakonomics can provide more than that. It will literally redefine the way we view the modern world.

Bonus material added to the revised and expanded 2006 edition

• The original New York Times Magazine article about Steven D. Levitt by Stephen J. Dubner, which led to the creation of this book

• Seven “Freakonomics” columns written for the New York Times Magazine, published between August 2005 and April 2006

• Selected entries from the Freakonomics blog, posted between April 2005 and May 2006 at http://www.freakonomics.com/blog/

There are two approaches to the study of probability theory. One is heuristic and nonrigorous, and attempts to develop in students an intuitive feel for the subject that enables them to think probabilistically. The other approach attempts a rigorous development of probability by using the tools of measure theory. The first approach is employed in this text.

The book begins by introducing basic concepts of probability theory, such as the random variable, conditional probability, and conditional expectation. This is followed by discussions of stochastic processes, including Markov chains and Poisson processes. The remaining chapters cover queuing, reliability theory, Brownian motion, and simulation. Many examples are worked out throughout the text, along with exercises to be solved by students.
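The Poisson process covered in the book can be simulated directly from its defining property that the gaps between events are independent exponential random variables. A minimal sketch (the rate and horizon are arbitrary illustrative values):

```python
import random
import statistics

def poisson_process_arrivals(rate, horizon, seed=7):
    """Simulate arrival times of a Poisson process with the given rate
    on [0, horizon] by summing exponential inter-arrival times
    (mean 1/rate). The arrival count is then Poisson(rate * horizon)."""
    rng = random.Random(seed)
    arrivals, t = [], 0.0
    while True:
        t += rng.expovariate(rate)   # exponential gap until the next event
        if t > horizon:
            return arrivals
        arrivals.append(t)

counts = [len(poisson_process_arrivals(3.0, 10.0, seed=s)) for s in range(1000)]
# The mean arrival count should be close to rate * horizon = 30.
```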

This book will be particularly useful to those interested in learning how probability theory can be applied to the study of phenomena in fields such as engineering, computer science, management science, the physical and social sciences, and operations research. Ideally, this text would be used in a one-year course in probability models, or a one-semester course in introductory probability theory or a course in elementary stochastic processes.

New to this Edition:

• 65% new chapter material, including coverage of finite capacity queues, insurance risk models, and Markov chains

• Contains compulsory material for the new Exam 3 of the Society of Actuaries, covering several sections of the new exams

• Updated data, a list of commonly used notations and equations, and a robust ancillary package, including an ISM, SSM, and test bank

• Includes SPSS PASW Modeler and SAS JMP software packages, which are widely used in the field

Hallmark features:

• Superior writing style

• Excellent exercises and examples covering the wide breadth of probability topics

• Real-world applications in engineering, science, business, and economics

The author begins with basic characteristics of financial time series data before covering three main topics:

• Analysis and application of univariate financial time series

• The return series of multiple assets

• Bayesian inference in finance methods

Key features of the new edition include additional coverage of modern-day topics such as arbitrage, pair trading, realized volatility, and credit risk modeling; a smooth transition from S-Plus to R; and expanded empirical financial data sets.

The overall objective of the book is to provide some knowledge of financial time series, introduce some statistical tools useful for analyzing these series, and give readers experience in financial applications of various econometric methods.
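Realized volatility, one of the new topics mentioned above, has a particularly simple estimator: the square root of the sum of squared intraday log returns. A sketch with made-up five-minute prices:

```python
import math

def realized_volatility(prices):
    """Realized volatility over one period: square root of the sum of
    squared intraday log returns. With high-frequency prices this is a
    standard estimator of integrated variance (ignoring microstructure
    noise, which the literature treats separately)."""
    log_returns = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]
    return math.sqrt(sum(r * r for r in log_returns))

# Hypothetical five-minute prices over a short window
prices = [100.0, 100.2, 99.9, 100.1, 100.4, 100.3]
rv = realized_volatility(prices)
```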

1,001 Statistics Practice Problems For Dummies takes you beyond the instruction and guidance offered in Statistics For Dummies to give you a more hands-on understanding of statistics. The practice problems offered range in difficulty, including detailed explanations and walk-throughs.

In this series, every step of every solution is shown with explanations and detailed narratives to help you solve each problem. With the book purchase, you’ll also get access to practice statistics problems online. This content features 1,001 practice problems presented in multiple choice format; on-the-go access from smart phones, computers, and tablets; customizable practice sets for self-directed study; practice problems categorized as easy, medium, or hard; and a one-year subscription with book purchase.

• Offers on-the-go access to practice statistics problems

• Gives you friendly, hands-on instruction

• 1,001 statistics practice problems that range in difficulty

1,001 Statistics Practice Problems For Dummies provides ample practice opportunities for students who may have taken statistics in high school and want to review the most important concepts as they gear up for a faster-paced college class.

". . . [this book] should be on the shelf of everyone interested in . . . longitudinal data analysis."

—Journal of the American Statistical Association

Features newly developed topics and applications of the analysis of longitudinal data

Applied Longitudinal Analysis, Second Edition presents modern methods for analyzing data from longitudinal studies and now features the latest state-of-the-art techniques. The book emphasizes practical, rather than theoretical, aspects of methods for the analysis of diverse types of longitudinal data that can be applied across various fields of study, from the health and medical sciences to the social and behavioral sciences.

The authors incorporate their extensive academic and research experience along with various updates that have been made in response to reader feedback. The Second Edition features six newly added chapters that explore topics currently evolving in the field, including:

• Fixed effects and mixed effects models

• Marginal models and generalized estimating equations

• Approximate methods for generalized linear mixed effects models

• Multiple imputation and inverse probability weighted methods

• Smoothing methods for longitudinal data

• Sample size and power

Each chapter presents methods in the setting of applications to data sets drawn from the health sciences. New problem sets have been added to many chapters, and a related website features sample programs and computer output using SAS, Stata, and R, as well as data sets and supplemental slides to facilitate a complete understanding of the material.

With its strong emphasis on multidisciplinary applications and the interpretation of results, Applied Longitudinal Analysis, Second Edition is an excellent book for courses on statistics in the health and medical sciences at the upper-undergraduate and graduate levels. The book also serves as a valuable reference for researchers and professionals in the medical, public health, and pharmaceutical fields as well as those in social and behavioral sciences who would like to learn more about analyzing longitudinal data.

“This book should be an essential part of the personal library of every practicing statistician.”—Technometrics

Thoroughly revised and updated, the new edition of Nonparametric Statistical Methods includes additional modern topics and procedures, more practical data sets, and new problems from real-life situations. The book continues to emphasize the importance of nonparametric methods as a significant branch of modern statistics and equips readers with the conceptual and technical skills necessary to select and apply the appropriate procedures for any given situation.

Written by leading statisticians, Nonparametric Statistical Methods, Third Edition provides readers with crucial nonparametric techniques in a variety of settings, emphasizing the assumptions underlying the methods. The book provides an extensive array of examples that clearly illustrate how to use nonparametric approaches for handling one- or two-sample location and dispersion problems, dichotomous data, and one-way and two-way layout problems. In addition, the Third Edition features:

• The use of the freely available R software to aid in computation and simulation, including many new R programs written explicitly for this new edition

• New chapters that address density estimation, wavelets, smoothing, ranked set sampling, and Bayesian nonparametrics

• Problems that illustrate examples from agricultural science, astronomy, biology, criminology, education, engineering, environmental science, geology, home economics, medicine, oceanography, physics, psychology, sociology, and space science

Nonparametric Statistical Methods, Third Edition is an excellent reference for applied statisticians and practitioners who seek a review of nonparametric methods and their relevant applications. The book is also an ideal textbook for upper-undergraduate and first-year graduate courses in applied nonparametric statistics.

This thoroughly expanded Third Edition provides an easily accessible introduction to the logistic regression (LR) model and highlights the power of this model by examining the relationship between a dichotomous outcome and a set of covariables.

Applied Logistic Regression, Third Edition emphasizes applications in the health sciences and handpicks topics that best suit the use of modern statistical software. The book provides readers with state-of-the-art techniques for building, interpreting, and assessing the performance of LR models. New and updated features include:

• A chapter on the analysis of correlated outcome data

• A wealth of additional material for topics ranging from Bayesian methods to assessing model fit

• Rich data sets from real-world studies that demonstrate each method under discussion

• Detailed examples and interpretation of the presented results, as well as exercises throughout

Applied Logistic Regression, Third Edition is a must-have guide for professionals and researchers who need to model nominal or ordinal scaled outcome variables in public health, medicine, and the social sciences, as well as a wide range of other fields and disciplines.
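The LR model relates a dichotomous outcome to covariables through the logistic function. As a minimal sketch (the toy data and function name below are illustrative, not taken from the book), a one-covariable model can be fit by plain gradient descent:

```python
import math

def fit_logistic(xs, ys, lr=0.1, steps=5000):
    """Fit y ~ sigmoid(b0 + b1*x) by gradient descent on the
    negative log-likelihood (no external libraries)."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += (p - y)        # gradient w.r.t. the intercept
            g1 += (p - y) * x    # gradient w.r.t. the slope
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

# Toy dichotomous outcome: larger x makes y = 1 more likely.
xs = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
ys = [0,   0,   0,   1,   0,   1,   1,   1]
b0, b1 = fit_logistic(xs, ys)
print(b1 > 0)  # the fitted slope should be positive
```

Interpreting the fitted slope as a log-odds ratio is exactly the kind of state-of-the-art model assessment the book elaborates on.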

"Seamless R and C++ integration with Rcpp" is simply a wonderful book. For anyone who uses C/C++ and R, it is an indispensable resource. The writing is outstanding. A huge bonus is the section on applications. This section covers the matrix packages Armadillo and Eigen and the GNU Scientific Library as well as RInside which enables you to use R inside C++. These applications are what most of us need to know to really do scientific programming with R and C++. I love this book. -- Robert McCulloch, University of Chicago Booth School of Business

Rcpp is now considered an essential package for anybody doing serious computational research using R. Dirk's book is an excellent companion and takes the reader from a gentle introduction to more advanced applications via numerous examples and efficiency-enhancing gems. The book is packed with all you might have ever wanted to know about Rcpp, its cousins (RcppArmadillo, RcppEigen, etc.), modules, package development and sugar. Overall, this book is a must-have on your shelf. -- Sanjog Misra, UCLA Anderson School of Management

The Rcpp package represents a major leap forward for scientific computations with R. With very few lines of C++ code, one has R's data structures readily at hand for further computations in C++. Hence, high-level numerical programming can be made in C++ almost as easily as in R, but often with a substantial speed gain. Dirk is a crucial person in these developments, and his book takes the reader from the first fragile steps on to using the full Rcpp machinery. A very recommended book! -- Søren Højsgaard, Department of Mathematical Sciences, Aalborg University, Denmark

"Seamless R and C++ Integration with Rcpp" provides the first comprehensive introduction to Rcpp. Rcpp has become the most widely used language extension for R, and is deployed by over one hundred different CRAN and BioConductor packages. Rcpp permits users to pass scalars, vectors, matrices, lists, or entire R objects back and forth between R and C++ with ease. This brings the depth of the R analysis framework together with the power, speed, and efficiency of C++.

Dirk Eddelbuettel has been a contributor to CRAN for over a decade and maintains around twenty packages. He is the Debian/Ubuntu maintainer for R and other quantitative software, edits the CRAN Task Views for Finance and High-Performance Computing, is a co-founder of the annual R/Finance conference, and an editor of the Journal of Statistical Software. He holds a Ph.D. in Mathematical Economics from EHESS (Paris), and works in Chicago as a Senior Quantitative Analyst.

The text begins with examinations of the allocation problem, matrix notation for dual problems, feasibility, and theorems on duality and existence. Subsequent chapters address convex sets and boundedness, the prepared problem and boundedness and consistency, optimal points and motivation of the simplex method, and the simplex method and tableaux. The treatment concludes with explorations of the effectiveness of the simplex method and the solution of the dual problem. Two helpful Appendixes offer supplementary material.

"It is, as far as I'm concerned, among the best books in math ever written....if you are a mathematician and want to have the top reference in probability, this is it." (Amazon.com, January 2006)

A complete and comprehensive classic in probability and measure theory

Probability and Measure, Anniversary Edition by Patrick Billingsley celebrates the achievements and advancements that have made this book a classic in its field for the past 35 years. Now re-issued in a new style and format, but with the reliable content that the third edition was revered for, this Anniversary Edition builds on its strong foundation of measure theory and probability with Billingsley's unique writing style. In recognition of 35 years of publication, impacting tens of thousands of readers, this Anniversary Edition has been completely redesigned in a new, open and user-friendly way in order to appeal to university-level students.

This book adds a new foreword by Steve Lalley of the Statistics Department at the University of Chicago to underscore the many years of successful publication and worldwide popularity, and to emphasize the educational value of this book. The Anniversary Edition contains features including:

• An improved treatment of Brownian motion

• Replacement of queuing theory with ergodic theory

• Theory and applications used to illustrate real-life situations

• Over 300 problems with corresponding, intensive notes and solutions

• Updated bibliography

• An extensive supplement of additional notes on the problems and chapter commentaries

Patrick Billingsley was a first-class, world-renowned authority in probability and measure theory at a leading U.S. institution of higher education. He continued to be an influential probability theorist until his unfortunate death in 2011. Billingsley earned his bachelor's degree in engineering from the U.S. Naval Academy, where he served as an officer, and went on to receive his master's degree and doctorate in mathematics from Princeton University. Among his many professional awards was the Mathematical Association of America's Lester R. Ford Award for mathematical exposition. The achievements of his long and esteemed career solidified Patrick Billingsley's place as a leading authority in the field and are a large reason his books are regarded as classics.

This Anniversary Edition of Probability and Measure offers advanced students, scientists, and engineers an integrated introduction to measure theory and probability. Like the previous editions, this Anniversary Edition is a key resource for students of mathematics, statistics, economics, and a wide variety of disciplines that require a solid understanding of probability theory.

Machine Learning: Hands-On for Developers and Technical Professionals provides hands-on instruction and fully coded working examples for the most common machine learning techniques used by developers and technical professionals. The book contains a breakdown of each ML variant, explaining how it works and how it is used within certain industries, allowing readers to incorporate the presented techniques into their own work as they follow along. A core tenet of machine learning is a strong focus on data preparation, and a full exploration of the various types of learning algorithms illustrates how the proper tools can help any developer extract information and insights from existing data. The book includes a full complement of Instructor's Materials to facilitate use in the classroom, making this resource useful for students and as a professional reference.

At its core, machine learning is a mathematical, algorithm-based technology that forms the basis of historical data mining and modern big data science. Scientific analysis of big data requires a working knowledge of machine learning, which forms predictions based on known properties learned from training data. Machine Learning is an accessible, comprehensive guide for the non-mathematician, providing clear guidance that allows readers to:

• Learn the languages of machine learning, including Hadoop, Mahout, and Weka

• Understand decision trees, Bayesian networks, and artificial neural networks

• Implement Association Rule, Real Time, and Batch learning

• Develop a strategic plan for safe, effective, and efficient machine learning

By learning to construct a system that can learn from data, readers can increase their utility across industries. Machine learning sits at the core of deep dive data analysis and visualization, which is increasingly in demand as companies discover the goldmine hiding in their existing data. For the tech professional involved in data science, Machine Learning: Hands-On for Developers and Technical Professionals provides the skills and techniques required to dig deeper.
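The core idea of forming predictions from properties learned from training data can be illustrated in a few lines. The sketch below is a nearest-neighbour classifier in plain Python; it is an illustrative example, not code from the book:

```python
def nearest_neighbour(train, query):
    """Predict the label of `query` from labelled training data
    using the single nearest neighbour (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    _, label = min(train, key=lambda pair: dist2(pair[0], query))
    return label

# Tiny training set: two clusters labelled "a" and "b".
train = [((0.0, 0.0), "a"), ((0.2, 0.1), "a"),
         ((1.0, 1.0), "b"), ((0.9, 1.1), "b")]
print(nearest_neighbour(train, (0.1, 0.0)))  # → a
print(nearest_neighbour(train, (1.0, 0.9)))  # → b
```

Real-world pipelines differ mainly in scale and in the data preparation the book emphasizes; the learn-from-examples principle is the same.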

This second edition covers several new topics: a new section on capacity theory and elements of potential theory now includes the concepts of quasi-open sets and quasi-continuity; an increased number of examples in the areas of the linearized elasticity system, obstacle problems, convection-diffusion, and semilinear equations; a new section on mass transportation problems and the Kantorovich relaxed formulation of the Monge problem; a new subsection on stochastic homogenization establishes the mathematical tools coming from ergodic theory; and an entirely new and comprehensive chapter (17) devoted to gradient flows and the dynamical approach to equilibria.

The book is intended for Ph.D. students, researchers, and practitioners who want to approach the field of variational analysis in a systematic way.

The book consistently takes the point of view of focusing on one sample path of a stochastic process. Hence, it is devoted to providing pure sample-path arguments. With this approach it is possible to separate the issue of the validity of a relationship from issues of the existence of limits and/or the construction of a stationary framework. Generally, in many cases of interest in queueing theory, relations hold, assuming limits exist, and the proofs are elementary and intuitive. In other cases, proofs of the existence of limits require the heavy machinery of stochastic processes. The authors feel that sample-path analysis is best used to provide general results that are independent of stochastic assumptions, complemented by probabilistic arguments to carry out a more detailed analysis. This book focuses on the first part of the picture. It does, however, provide numerous examples that invoke stochastic assumptions, typically presented at the ends of the chapters.
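A canonical example of a relation that holds along a single sample path, assuming the limits exist, is Little's law L = λW. The sketch below is an illustrative check on one deterministic sample path (not code from the book): customers arrive at rate λ and each spends exactly W time units in the system, and the time-average number in system approaches λW.

```python
def sample_path_little(lam=2.0, w=1.5, horizon=1000.0):
    """One sample path: customers arrive every 1/lam time units and
    each remains in the system exactly w time units.  Returns the
    time-average number in system over [0, horizon]."""
    area = 0.0          # integral of N(t) dt over [0, horizon]
    t = 0.0
    while t < horizon:
        # each customer contributes its sojourn time, clipped to horizon
        area += min(w, horizon - t)
        t += 1.0 / lam
    return area / horizon

L = sample_path_little()
print(abs(L - 2.0 * 1.5) < 0.01)  # L ≈ lambda * W on this path
```

No stochastic assumption is needed here; the relation is an accounting identity on the path, which is exactly the style of argument the book develops.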

The next step is to determine how best to store the data across multiple servers. This problem has been widely studied in the literature on distributed and database systems. OSNs, however, represent a different class of data systems. When a user spends time on a social network, the data mostly requested is her own and that of her friends; e.g., in Facebook or Twitter, these data are the status updates she has posted as well as those posted by her friends. This so-called social locality should be taken into account when determining the server locations for these data, so that when a user issues a read request, all the relevant data can be returned quickly and efficiently. Social locality is not a design factor in traditional storage systems, where data requests are always processed independently.

Even today's OSNs do not consider social locality in their data partition schemes. These schemes rely on distributed hash tables (DHT), using consistent hashing to assign the users' data to the servers. The random nature of DHT leads to weak social locality, which has been shown to result in poor performance under heavy request loads.
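The consistent-hashing placement described above can be sketched in a few lines; the server and user names below are hypothetical. Note that placement depends only on the hash of the user id, never on the friendship graph, which is why friends tend to scatter across servers:

```python
import hashlib

def server_for(user_id, servers):
    """Consistent-hashing-style placement: hash the user id and walk
    clockwise on a ring of hashed server points.  Placement is random
    with respect to the social graph (weak social locality)."""
    h = int(hashlib.md5(user_id.encode()).hexdigest(), 16)
    ring = sorted((int(hashlib.md5(s.encode()).hexdigest(), 16), s)
                  for s in servers)
    for point, name in ring:
        if h <= point:           # first server point at or after h
            return name
    return ring[0][1]            # wrap around the ring

servers = ["s1", "s2", "s3", "s4"]
friends = ["alice", "bob", "carol", "dave"]
placements = {u: server_for(u, servers) for u in friends}
# Friends may land on different servers, so one read fans out to many machines.
print(placements)
```

A useful property of the ring is that removing one server only relocates the users that were stored on it; everyone else keeps their placement. A socially aware scheme would instead co-locate a user with most of her friends.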

Data Storage for Social Networks: A Socially Aware Approach is aimed at reviewing the current literature of data storage for online social networks and discussing new methods that take into account social awareness in designing efficient data storage.

The first two chapters provide the foundations of graph theoretic notions and algorithmic techniques. The remaining chapters discuss the topics of planarity testing, embedding, drawing, vertex- or edge-coloring, maximum independence set, subgraph listing, planar separator theorem, Hamiltonian cycles, and single- or multicommodity flows.

Suitable for a course on algorithms, graph theory, or planar graphs, the volume will also be useful for computer scientists and graph theorists at the research level. An extensive reference section is included.

Considerable coverage is given to existence and methods of finding periodic orbits and almost-periodic solutions, as well as to the description of the class of ergodic recurrent motions. There is further treatment of the perturbation method and the theory of time-independent and periodic perturbations in particular.

The theory developed here is applied to the construction and investigation of the neighbourhood of time-independent conditions for nonlinear systems of automatic control, and to the control of charged-particle beams in a magnetic field. Some other specific problems are also solved, such as systems with aftereffect and orbit quantization.

Contents:

• Preliminary Representations and Analyses of Motion Family Behavior

• On Behavior of Trajectories in the Neighborhood of a Periodic Orbit

• Natural and Forced Oscillations in Systems with Many Degrees of Freedom

• Methods for Investigation and Construction of Stationary Modes

• Oscillations in Nonlinear and Controlled Systems

• Appendix: Theory of Rated Stability

Readership: Mathematicians and physicists.

Keywords: Theory of Oscillations; Behavior of Integral Curves; Ordinary Differential Equations; Autonomous Dynamical Systems; Periodic Solutions; Almost-Periodic Solutions; Recurrent Functions; Nonlinear Oscillations; Stability of Motions

Readership: Researchers in partial differential equations, calculus of variations and optimal control, difference and functional equations.

This book provides important new insights and results by eminent researchers in the areas considered, which will be of interest to researchers and practitioners. The topics are diverse in their applications and provide contemporary approaches to the problems considered. The areas are rapidly evolving; this volume will contribute to their development and present the current state of the art in stochastic processes, analysis, filtering, and control.

Contributing authors include: H Albrecher, T Bielecki, F Dufour, M Jeanblanc, I Karatzas, H-H Kuo, A Melnikov, E Platen, G Yin, Q Zhang, C Chiarella, W Fleming, D Madan, R Mamon, J Yan, V Krishnamurthy.

Contents:

Stochastic Analysis:

• On the Connection Between Discrete and Continuous Wick Calculus with an Application to the Fractional Black-Scholes Model (C Bender and P Parczewski)

• Malliavin Differentiability of a Class of Feller-Diffusions with Relevance in Finance (C-O Ewald, Y Xiao, Y Zou and T K Siu)

• A Stochastic Integral for Adapted and Instantly Independent Stochastic Processes (H-H Kuo, A Sae-Tang and B Szozda)

• Independence of Some Multiple Poisson Stochastic Integrals with Variable-Sign Kernels (N Privault)

Differential and Stochastic Games:

• Strategies for Differential Games (W H Fleming and D Hernández-Hernández)

• BSDE Approach to Non-Zero-Sum Stochastic Differential Games of Control and Stopping (I Karatzas and Q Li)

Mathematical Finance:

• On Optimal Dividend Strategies in Insurance with a Random Time Horizon (H Albrecher and S Thonhauser)

• Counterparty Risk and the Impact of Collateralization in CDS Contracts (T R Bielecki, I Cialenco and I Iyigunler)

• A Modern View on Merton's Jump-Diffusion Model (G H L Cheang and C Chiarella)

• Hedging Portfolio Loss Derivatives with CDS's (A Cousin and M Jeanblanc)

• New Analytic Approximations for Pricing Spread Options (J van der Hoek and M W Korolkiewicz)

• On the Polynomial–Normal Model and Option Pricing (H Li and A Melnikov)

• A Functional Transformation Approach to Interest Rate Modelling (S Luo, J Yan and Q Zhang)

• S&P 500 Index Option Surface Drivers and Their Risk Neutral and Real World Quadratic Covariations (D B Madan)

• A Dynamic Portfolio Approach to Asset Markets and Monetary Policy (E Platen and W Semmler)

• Mean-Variance Portfolio Selection Under Regime-Switching Diffusion Asset Models: A Two-Time-Scale Limit (G Yin and Y Talafha)

Filtering and Control:

• Existence and Uniqueness of Solutions for a Partially Observed Stochastic Control Problem (A Bensoussan, M Çakanyildirim, M Li and S P Sethi)

• Continuous Control of Piecewise Deterministic Markov Processes with Long Run Average Cost (O L V Costa and F Dufour)

• Stochastic Linear-Quadratic Control Revisited (T E Duncan)

• Optimization of Stochastic Uncertain Systems: Entropy Rate Functionals, Minimax Games and Robustness (F Rezaei, C D Charalambous and N U Ahmed)

• Gradient Based Policy Optimization of Constrained Markov Decision Processes (V Krishnamurthy and F J Vázquez Abad)

• Parameter Estimation of a Regime-Switching Model Using an Inverse Stieltjes Moment Approach (X Xi, M R Rodrigo and R S Mamon)

• An Optimal Inventory-Price Coordination Policy (H Zhang and Q Zhang)

Readership: Researchers and professionals in stochastic processes, analysis, filtering and control.

Keywords: Stochastic Processes; Filtering; Stochastic Control; Stochastic Analysis; Mathematical Finance; Actuarial Sciences; Engineering

Key Features:

• This is a festschrift for Professor Robert J Elliott, a world leader in the areas of stochastic processes, filtering, and control, as well as their applications

• Includes contributions from many world-leading scholars in these fields

• Contains many original and fundamental results that are rare in competing titles

The book is divided into eight major sections:

* Data Mining and Text Mining

* Information Theory and Statistical Applications

* Asymptotic Behaviour of Stochastic Processes and Random Fields

* Bioinformatics and Markov Chains

* Life Table Data, Survival Analysis, and Risk in Household Insurance

* Neural Networks and Self-Organizing Maps

* Parametric and Nonparametric Statistics

* Statistical Theory and Methods

Advances in Data Analysis is a useful reference for graduate students, researchers, and practitioners in statistics, mathematics, engineering, economics, social science, bioengineering, and bioscience.

The Qlik platform was designed to provide a fast and easy data analytics tool, and QlikView Your Business is your detailed, full-color, step-by-step guide to understanding QlikView's powerful features and techniques so you can quickly start unlocking your data's potential. This expert author team brings real-world insight together with practical business analytics, so you can approach, explore, and solve business intelligence problems using the robust Qlik toolset and clearly communicate your results to stakeholders using powerful visualization features in QlikView and Qlik Sense.

This book starts at the basic level and dives deep into the most advanced QlikView techniques, delivering tangible value and knowledge to new users and experienced developers alike. As an added benefit, every topic presented in the book is enhanced with tips, tricks, and insightful recommendations that the authors accumulated through years of developing QlikView analytics.

This is the book for you:

The book covers three common business scenarios - Sales, Profitability, and Inventory Analysis. Each scenario contains four chapters, covering the four main disciplines of business analytics: Business Case, Data Modeling, Scripting, and Visualizations.

The material is organized by increasing levels of complexity. Following our comprehensive tutorial, you will learn simple and advanced QlikView and Qlik Sense concepts, including the following:

Data Modeling:

• How to use the Data Load Script language for implementing data modeling techniques

• How to build and use the QVD data layer

• Building multi-tier data architectures

• Using variables, loops, subroutines, and other script control statements

• Advanced scripting techniques for a variety of ETL solutions

Building Insightful Visualizations in QlikView:

• Introduction to QlikView sheet objects: List Boxes, Text Objects, Charts, and more

• Designing insightful Dashboards in QlikView

• Using advanced calculation techniques, such as Set Analysis and Advanced Aggregation

• Using variables for What-If Analysis, as well as for storing calculations, colors, and selection filters

• Advanced visualization techniques: normalized and non-normalized Mekko charts, Waterfall charts, Whale Tail charts, and more

Building Insightful Visualizations in Qlik Sense:

Whether you are just starting out with QlikView or are ready to dive deeper, QlikView Your Business is your comprehensive guide to sharpening your QlikView skills and unleashing the power of QlikView in your organization.

Across various industries, compensation professionals work to organize and analyze aspects of employment that deal with elements of pay, such as deciding base salary, bonus, and commission provided by an employer to its employees for work performed. Acknowledging the numerous quantitative analyses of data that are a part of this everyday work, Statistics for Compensation provides a comprehensive guide to the key statistical tools and techniques needed to perform those analyses and to help organizations make fully informed compensation decisions.

This self-contained book is the first of its kind to explore the use of various quantitative methods—from basic notions about percents to multiple linear regression—that are used in the management, design, and implementation of powerful compensation strategies. Drawing upon his extensive experience as a consultant, practitioner, and teacher of both statistics and compensation, the author focuses on the usefulness of the techniques and their immediate application to everyday compensation work, thoroughly explaining major areas such as:

Frequency distributions and histograms

Measures of location and variability

Model building

Linear models

Exponential curve models

Maturity curve models

Power models

Market models and salary survey analysis

Linear and exponential integrated market models

Job pricing market models

Throughout the book, rigorous definitions and step-by-step procedures clearly explain and demonstrate how to apply the presented statistical techniques. Each chapter concludes with a set of exercises, and various case studies showcase the topic's real-world relevance. The book also features an extensive glossary of key statistical terms and an appendix with technical details. Data for the examples and practice problems are available in the book and on a related FTP site.
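The linear and exponential market models listed above both reduce to ordinary least squares once the exponential model is fit on log salaries. The sketch below is illustrative (the helper name and the market data are hypothetical, not from the book):

```python
import math

def least_squares(xs, ys):
    """Ordinary least squares for y = a + b*x (one predictor)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical salary survey: job evaluation points vs. median salary.
points = [100, 200, 300, 400, 500]
salary = [42000, 50000, 61000, 72000, 88000]

# Linear market model: salary = a + b * points
a, b = least_squares(points, salary)

# Exponential market model: fit log(salary) = la + lb * points,
# i.e. salary = exp(la) * exp(lb * points)
la, lb = least_squares(points, [math.log(s) for s in salary])

print(a, b)  # intercept and slope of the linear market model
```

The slope b is directly interpretable as dollars per evaluation point, while exp(lb) in the exponential model gives the multiplicative salary growth per point, the kind of interpretation step the book walks through.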

Statistics for Compensation is an excellent reference for compensation professionals, human resources professionals, and other practitioners responsible for any aspect of base pay, incentive pay, sales compensation, and executive compensation in their organizations. It can also serve as a supplement for compensation courses at the upper-undergraduate and graduate levels.

"This book is . . . an excellent source of examples for regression analysis. It has been and still is readily readable and understandable."

—Journal of the American Statistical Association

Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. Regression Analysis by Example, Fifth Edition has been expanded and thoroughly updated to reflect recent advances in the field. The emphasis continues to be on exploratory data analysis rather than statistical theory. The book offers in-depth treatment of regression diagnostics, transformation, multicollinearity, logistic regression, and robust regression.

The book now includes a new chapter on the detection and correction of multicollinearity, while also showcasing the use of the discussed methods on newly added data sets from the fields of engineering, medicine, and business. The Fifth Edition also explores additional topics, including:

• Surrogate ridge regression

• Fitting nonlinear models

• Errors in variables

• ANOVA for designed experiments

Methods of regression analysis are clearly demonstrated, and examples containing the types of irregularities commonly encountered in the real world are provided. Each example isolates one or two techniques and features detailed discussions, the required assumptions, and the evaluated success of each technique. Additionally, methods described throughout the book can be carried out with most of the currently available statistical software packages, such as the software package R.
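Multicollinearity, the subject of the new chapter, is commonly detected with variance inflation factors. With exactly two predictors the R² of each on the other equals their squared correlation, so VIF = 1/(1 − r²). A minimal sketch under that two-predictor assumption (illustrative, not code from the book):

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

def vif_two_predictors(x1, x2):
    """With two predictors, VIF = 1 / (1 - r^2) for both of them.
    A VIF much larger than 1 flags multicollinearity."""
    r = pearson_r(x1, x2)
    return 1.0 / (1.0 - r ** 2)

x1 = [1, 2, 3, 4, 5, 6]
x2 = [2.1, 3.9, 6.2, 8.0, 9.8, 12.1]   # nearly 2 * x1: highly collinear
print(vif_two_predictors(x1, x2) > 10)  # large VIF signals collinearity
```

With more predictors the same idea applies, but each VIF requires regressing one predictor on all the others.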

Regression Analysis by Example, Fifth Edition is suitable for anyone with an understanding of elementary statistics.

This volume includes information on the underlying mechanisms of microbial emergence, the technology used to detect them, and the strategies available to contain them. The author describes the diseases and their causative agents that are major factors in the health of populations the world over.

The book contains up-to-date selections from infectious disease journals as well as information from the Centers for Disease Control and Prevention, the World Health Organization, MedLine Plus, and the American Society for Microbiology.

Perfect for students or those new to the field, the book contains Summary Overviews (thumbnail sketches of the basic information about the microbe and the associated disease under examination), Review Questions (testing students' knowledge of the material), and Topics for Further Discussion (encouraging a wider conversation on the implications of the disease and challenging students to think creatively to develop new solutions).

This important volume provides broad coverage of a variety of emerging infectious diseases, of which most are directly important to health practitioners in the United States.

This book is aimed at business analysts with basic programming skills who want to use R for business analytics. Note that its scope is neither statistical theory nor graduate-level research in statistics; it is written for business analytics practitioners. Business analytics (BA) refers to the field of exploration and investigation of data generated by businesses. Business intelligence (BI) is the seamless dissemination of information through the organization, primarily involving past and current business metrics for decision support. Data mining (DM) is the process of discovering new patterns in large data sets using algorithms and statistical methods. To differentiate between the three: BI mostly produces current reports, BA builds models to predict and strategize, and DM matches patterns in big data. The R statistical software is the fastest growing analytics platform in the world, and it is established in both academia and corporations for its robustness, reliability, and accuracy.

The book takes to heart Albert Einstein's famous remark that everything should be made as simple as possible, but no simpler. It will dispel any remaining doubts about using R in your business environment, and even non-technical users will enjoy the easy-to-use examples. The interviews with creators and corporate users of R make the book very readable. The author firmly believes Isaac Asimov did more to spread science than any textbook or journal author.

Wouldn't it be wonderful if studying statistics were easier? With U Can: Statistics I For Dummies, it is! This one-stop resource combines lessons, practical examples, study questions, and online practice problems to provide you with the ultimate guide to help you score higher in your statistics course. Foundational statistics skills are a must for students of many disciplines, and leveraging study materials such as this one to supplement your statistics course can be a life-saver. Because U Can: Statistics I For Dummies contains both the lessons you need to learn and the practice problems you need to put the concepts into action, you'll breeze through your scheduled study time.

Statistics is all about collecting and interpreting data, and is applicable in a wide range of subject areas—which translates into its popularity among students studying in diverse programs. So, if you feel a bit unsure in class, rest assured that there is an easy way to help you grasp the nuances of statistics!

• Understand statistical ideas, techniques, formulas, and calculations

• Interpret and critique graphs and charts, determine probability, and work with confidence intervals

• Critique and analyze data from polls and experiments

• Combine learning and applying your new knowledge with practical examples, practice problems, and expanded online resources

U Can: Statistics I For Dummies contains everything you need to score higher in your fundamental statistics course!
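Confidence intervals, mentioned above, are a good example of a calculation worth practicing by hand. A minimal normal-approximation sketch for a 95% interval around a sample mean (illustrative, not an exercise from the book; the data are made up):

```python
import math

def mean_ci(data, z=1.96):
    """Approximate 95% confidence interval for the mean, using the
    normal critical value z = 1.96 (reasonable for larger samples)."""
    n = len(data)
    mean = sum(data) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    half = z * sd / math.sqrt(n)     # margin of error
    return mean - half, mean + half

scores = [72, 85, 78, 90, 66, 81, 74, 88, 79, 83]
lo, hi = mean_ci(scores)
print(round(lo, 1), round(hi, 1))
```

For small samples a t critical value would replace 1.96, which is exactly the kind of refinement a first statistics course covers.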

The ever-growing use of derivative products makes it essential for financial industry practitioners to have a solid understanding of derivative pricing. To cope with the growing complexity, narrowing margins, and shortening life-cycle of the individual derivative product, an efficient, yet modular, implementation of the pricing algorithms is necessary. Mathematical Finance is the first book to harmonize the theory, modeling, and implementation of today's most prevalent pricing models under one convenient cover. Building a bridge from academia to practice, this self-contained text applies theoretical concepts to real-world examples and introduces state-of-the-art, object-oriented programming techniques that equip the reader with the conceptual and illustrative tools needed to understand and develop successful derivative pricing models.
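As one concrete instance of the kind of pricing model the book harmonizes, the Black-Scholes formula for a European call can be implemented in a few lines. This is a standard textbook model, not code from the book:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(s, k, r, sigma, t):
    """Black-Scholes price of a European call with spot s, strike k,
    risk-free rate r, volatility sigma, and maturity t (in years)."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

print(round(bs_call(100, 100, 0.05, 0.2, 1.0), 2))  # ≈ 10.45
```

An object-oriented implementation, as the book advocates, would wrap such closed-form formulas and numerical methods behind a common pricing interface so models can be swapped without changing client code.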

Utilizing almost twenty years of academic and industry experience, the author discusses the mathematical concepts that are the foundation of commonly used derivative pricing models, and insightful Motivation and Interpretation sections for each concept are presented to further illustrate the relationship between theory and practice. In-depth coverage of the common characteristics found amongst successful pricing models are provided in addition to key techniques and tips for the construction of these models. The opportunity to interactively explore the book's principal ideas and methodologies is made possible via a related Web site that features interactive Java experiments and exercises.

While a high standard of mathematical precision is retained, Mathematical Finance emphasizes practical motivations, interpretations, and results and is an excellent textbook for students in mathematical finance, computational finance, and derivative pricing courses at the upper undergraduate or beginning graduate level. It also serves as a valuable reference for professionals in the banking, insurance, and asset management industries.

Key features of Number Theory: Structures, Examples, and Problems:

* A rigorous exposition starts with the natural numbers and the basics.

* Important concepts are presented with an example, which may also emphasize an application. The exposition moves systematically and intuitively to uncover deeper properties.

* Topics include divisibility, unique factorization, modular arithmetic and the Chinese Remainder Theorem, Diophantine equations, quadratic residues, binomial coefficients, Fermat and Mersenne primes and other special numbers, and special sequences. Sections on mathematical induction and the pigeonhole principle, as well as a discussion of other number systems are covered.

* Unique exercises reinforce and motivate the reader, with selected solutions to some of the problems.

* Glossary, bibliography, and comprehensive index round out the text.

Written by distinguished research mathematicians and renowned teachers, this text is a clear, accessible introduction to the subject and a source of fascinating problems and puzzles for readers ranging from advanced high school students to undergraduates, their instructors, and general readers at all levels.
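Topics such as modular arithmetic and the Chinese Remainder Theorem lend themselves to short computational sketches. The following is illustrative, not an example from the book, and uses Python 3.8+'s modular inverse via three-argument pow:

```python
def crt(residues, moduli):
    """Chinese Remainder Theorem: find x with x ≡ r_i (mod m_i)
    for pairwise coprime moduli, by successive substitution."""
    x, m = 0, 1
    for r, mod in zip(residues, moduli):
        # solve x + m*k ≡ r (mod mod) for k, using the inverse of m
        k = ((r - x) * pow(m, -1, mod)) % mod
        x += m * k
        m *= mod
    return x % m

# x ≡ 2 (mod 3), x ≡ 3 (mod 5), x ≡ 2 (mod 7): the classic answer is 23
print(crt([2, 3, 2], [3, 5, 7]))  # → 23
```

Each step enlarges the modulus to the product so far, mirroring the constructive proof of the theorem.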

This book can be used as a text for a year-long graduate course in statistics, computer science, or mathematics, for self-study, and as an invaluable research reference on probability and its applications. Particularly worth mentioning are the treatments of distribution theory, asymptotics, simulation and Markov chain Monte Carlo, Markov chains and martingales, Gaussian processes, VC theory, probability metrics, large deviations, the bootstrap, the EM algorithm, confidence intervals, maximum likelihood and Bayes estimates, exponential families, kernels, and Hilbert spaces, and a self-contained, complete review of univariate probability.