Lecture Notes in Statistics

· · ·
Latest release: September 12, 2023
Series
222
Books
2
Volumes
Mathematical Statistics and Probability Theory: Proceedings, Sixth International Conference, Wisła (Poland), 1978
Book 2·Dec 2012
0.0
·
Since 1972 the Institute of Mathematics and the Committee of Mathematics of the Polish Academy of Sciences have organized annual conferences on mathematical statistics in Wisła. The 1978 conference, supported also by the University of Wrocław, was held in Wisła from December 7 to December 13 and attended by around 100 participants from 11 countries. K. Urbanik, Rector of the University of Wrocław, was the honorary chairman of the conference. Traditionally these conferences present results on mathematical statistics and related fields obtained in Poland during the year of the conference, as well as results presented by invited scholars from other countries. In 1978 invitations to present talks were accepted by 20 eminent statisticians and probabilists. The topics of the invited lectures and contributed papers included theoretical statistics with broad coverage of the theory of linear models, inference from stochastic processes, probability theory and applications to biology and medicine. These notes contain papers submitted by 30 participants of the conference. During the conference, on December 9, a special session of the Polish Mathematical Society was held on the occasion of electing Professor Jerzy Neyman an honorary member of the Polish Mathematical Society. At this session W. Orlicz, president of the Polish Mathematical Society, K. Krickeberg, president of the Bernoulli Society, R. Bartoszyński and K. Doksum gave talks on Neyman's contribution to statistics and his organizational achievements in the U.S.
Benefit-Cost Analysis of Data Used to Allocate Funds
Book 3·Dec 2012
3.0
·
This monograph treats the question of determining how much to spend on the collection and analysis of public data. This difficult problem for government statisticians and policy-makers is likely to become even more pressing in the near future. The approach taken here is to estimate and compare the benefits and costs of alternative data programs. Since data are used in many ways, the benefits are hard to measure. The strategy I have adopted focuses on the use of data to determine fund allocations, particularly in the General Revenue Sharing program. General Revenue Sharing is one of the largest allocation programs in the United States. That errors in population counts and other data cause sizable errors in allocation has been much publicized. Here we analyze whether the accuracy of the 1970 census of population and other data used by General Revenue Sharing should be improved. Of course it is too late to change the 1970 census program, but the method and techniques of analysis will apply to future data programs. In particular, benefit-cost analyses such as this are necessary for informed decisions about whether the expense of statistical programs is justified. For example, although a law authorizing a mid-decade census was enacted in 1976, there exists great doubt whether funds will be provided so a census can take place in 1985. (The President's Budget for 1981 allows no money for the mid-decade census, despite the Census Bureau's request for $1.9 million for planning purposes.)
Stationary Random Processes Associated with Point Processes
Book 5·Dec 2012
0.0
·
In this set of notes we study a notion of a random process associated with a point process. The theory presented was inspired by queueing problems; however, it seems to be of interest in other branches of applied probability, for example reliability or dam theory. Using the tools developed, we work out known as well as new results from queueing and dam theory. In particular, queues which cannot be treated by standard techniques serve as illustrations of the theory. In Chapter 1 the preliminaries are given. We acquaint the reader with the main ideas of these notes and introduce some useful notations, concepts and abbreviations. We also recall basic facts from ergodic theory, an important mathematical tool employed in these notes. Finally some basic notions from queues are reviewed. Chapter 2 deals with discrete time theory. It serves two purposes. The first is to let the reader become acquainted with the main lines of the theory needed in continuous time without being bothered by technical details. However, the discrete time theory also seems to be of interest in itself; there are examples which have no counterpart in continuous time. Chapter 3 deals with continuous time theory. It also contains many basic results from queueing and dam theory. Three applications of the continuous time theory are given in Chapter 4. We show how to use the theory to obtain some useful bounds for the stationary distribution of a random process.
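The single-server queues mentioned in the blurb are the classic setting for stationary waiting-time distributions. As a hedged illustration (not code from the notes; the M/M/1 parameter values are invented for the example), the waiting times of successive customers can be simulated with the standard Lindley recursion and compared with the exact M/M/1 mean wait:

```python
import numpy as np

rng = np.random.default_rng(0)

def lindley_waits(service, interarrival):
    """Waiting times of successive customers via the Lindley recursion
    W_{n+1} = max(0, W_n + S_n - T_n)."""
    w = np.empty(len(service))
    w[0] = 0.0
    for i in range(len(service) - 1):
        w[i + 1] = max(0.0, w[i] + service[i] - interarrival[i])
    return w

lam, mu, n = 0.5, 1.0, 500_000
S = rng.exponential(1.0 / mu, n)   # exponential service times (rate mu)
T = rng.exponential(1.0 / lam, n)  # exponential interarrival times (rate lam)
w = lindley_waits(S, T)

# For this M/M/1 queue the exact mean stationary waiting time is
# lam / (mu * (mu - lam)) = 1.0; the long-run sample mean approaches it.
mean_wait = w[n // 2:].mean()
```

Discarding the first half of the run is a crude burn-in so the sample mean reflects the stationary regime rather than the empty-queue start.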
Asymptotic Efficiency of Statistical Estimators: Concepts and Higher Order Asymptotic Efficiency
Book 7·Dec 2012
0.0
·
This monograph is a collection of results recently obtained by the authors. Most of these have been published, while others are awaiting publication. Our investigation has two main purposes. Firstly, we discuss higher order asymptotic efficiency of estimators in regular situations. In these situations it is known that the maximum likelihood estimator (MLE) is asymptotically efficient in some (not always specified) sense. However, there exists here a whole class of asymptotically efficient estimators which are thus asymptotically equivalent to the MLE. It is required to make finer distinctions among the estimators, by considering higher order terms in the expansions of their asymptotic distributions. Secondly, we discuss asymptotically efficient estimators in non-regular situations. These are situations where the MLE or other estimators are not asymptotically normally distributed, or where their order of convergence (or consistency) is not n^{1/2}, as in the regular cases. It is necessary to redefine the concept of asymptotic efficiency, together with the concept of the maximum order of consistency. Under the new definition an asymptotically efficient estimator may not always exist. We have not attempted to tell the whole story in a systematic way. The field of asymptotic theory in statistical estimation is relatively uncultivated. So, we have tried to focus attention on such aspects of our recent results as throw light on the area.
Anaesthesiologische Probleme in der Gefäßchirurgie: 2. Rheingau-Workshop
Book 8·Dec 2012
0.0
·
The First Pannonian Symposium on Mathematical Statistics was held at Bad Tatzmannsdorf (Burgenland, Austria) from September 16th to 21st, 1979. Its aim was to further and intensify scientific cooperation in the Pannonian area, which, in a broad sense, can be understood to cover Hungary, the eastern part of Austria, Czechoslovakia, and parts of Poland, Yugoslavia and Romania. The location of centers of research in mathematical statistics and probability theory in this territory has been a good reason for the geographical limitation of this meeting. About 70 researchers attended the symposium, and 49 lectures were delivered; a considerable part of the presented papers is collected in this volume. Besides the lectures, vigorous informal discussions among the participants took place, so that many problems were raised and possible ways of solution were explored. We take the opportunity to thank Dr. U. Dieter (Graz), Dr. F. Konecny (Wien), Dr. W. Krieger (Göttingen) and Dr. E. Neuwirth (Wien) for their valuable help in the refereeing work for this volume. The Pannonian Symposium could not have taken place without the support of several institutions: the Austrian Ministry for Research and Science, the State government of Burgenland, the Community Bad Tatzmannsdorf, the Kurbad Tatzmannsdorf AG, the Austrian Society for Information Science and Statistics, IBM Austria, Volksbank Oberwart, Erste Österreichische Spar-Casse and Spielbanken AG Austria. The Austrian Academy of Sciences made possible the participation of several mathematicians in the Symposium. We express our gratitude to all these institutions for their generous help.
Random Coefficient Autoregressive Models: An Introduction
Book 11·Dec 2012
4.0
·
In this monograph we have considered a class of autoregressive models whose coefficients are random. These models have special appeal among the non-linear models so far considered in the statistical literature, in that their analysis is quite tractable. It has been possible to find conditions for stationarity and stability, to derive estimates of the unknown parameters, to establish asymptotic properties of these estimates and to obtain tests of certain hypotheses of interest. We are grateful to many colleagues in both Departments of Statistics at the Australian National University and in the Department of Mathematics at the University of Wollongong. Their constructive criticism has aided the presentation of this monograph. We would also like to thank Dr M. A. Ward of the Department of Mathematics, Australian National University, whose program produced, after minor modifications, the "three dimensional" graphs of the log-likelihood functions which appear on pages 83-86. Finally we would like to thank J. Radley, H. Patrikka and D. Hewson for their contributions towards the typing of a difficult manuscript.
CONTENTS
CHAPTER 1 INTRODUCTION
1.1 Introduction
Appendix 1.1
Appendix 1.2
CHAPTER 2 STATIONARITY AND STABILITY
2.1 Introduction
2.2 Singly-Infinite Stationarity
2.3 Doubly-Infinite Stationarity
2.4 The Case of a Unit Eigenvalue
2.5 Stability of RCA Models
2.6 Strict Stationarity
Appendix 2.1
CHAPTER 3 LEAST SQUARES ESTIMATION OF SCALAR MODELS
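The stationarity conditions the blurb refers to can be illustrated with a small simulation. This is a sketch, not code from the monograph: it uses the textbook RCA(1) form x_t = (beta + b_t) x_{t-1} + e_t, for which second-order stationarity holds when beta^2 + sigma_b^2 < 1, with stationary variance sigma_e^2 / (1 - beta^2 - sigma_b^2).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_rca1(beta, sigma_b, sigma_e, n, burn=1000):
    """Simulate an RCA(1) process x_t = (beta + b_t) x_{t-1} + e_t,
    with b_t ~ N(0, sigma_b^2) and e_t ~ N(0, sigma_e^2) independent."""
    x, out = 0.0, np.empty(n)
    for t in range(n + burn):
        x = (beta + sigma_b * rng.standard_normal()) * x \
            + sigma_e * rng.standard_normal()
        if t >= burn:
            out[t - burn] = x
    return out

beta, sigma_b, sigma_e = 0.5, 0.3, 1.0
# Second-order stationarity requires beta^2 + sigma_b^2 < 1.
assert beta ** 2 + sigma_b ** 2 < 1
x = simulate_rca1(beta, sigma_b, sigma_e, n=200_000)
theory_var = sigma_e ** 2 / (1 - beta ** 2 - sigma_b ** 2)
```

The empirical variance of a long simulated path should lie close to `theory_var`; pushing beta^2 + sigma_b^2 toward 1 makes the path increasingly explosive, which is the instability the stationarity condition rules out.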
Statistical Analysis of Counting Processes
Book 12·Dec 2012
1.0
·
A first version of these lecture notes was prepared for a course given in 1980 at the University of Copenhagen to a class of graduate students in mathematical statistics. A thorough revision has led to the result presented here. The main topic of the notes is the theory of multiplicative intensity models for counting processes, first introduced by Odd Aalen in his Ph.D. thesis from Berkeley in 1975, and in a subsequent fundamental paper in the Annals of Statistics in 1978. In Copenhagen the interest in statistics on counting processes was sparked by a visit by Odd Aalen in 1976. At present the activities here are centered around Niels Keiding and his group at the Statistical Research Unit. The Aalen theory is a fine example of how advanced probability theory may be used to develop a powerful statistical technique that is very relevant for applications. Aalen's work relies quite heavily on the 'théorie générale des processus' developed primarily by the French school of probability theory. But the general theory aims at much more general and profound results than what is required to deal with objects of such a relatively simple structure as counting processes on the line. Since this process theory is also virtually inaccessible to non-probabilists, it would appear useful to have an account of what Aalen has done that includes exactly the amount of probability required to deal satisfactorily and rigorously with statistical models for counting processes.
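The best-known product of the multiplicative intensity framework is the Nelson-Aalen estimator of a cumulative hazard: at each observed event time it adds dN(t)/Y(t), the number of events divided by the number at risk. As a hedged illustration (a minimal sketch, not code from the notes; the simulated hazard rates are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

def nelson_aalen(times, events):
    """Nelson-Aalen estimator of the cumulative hazard A(t):
    the cumulative sum of dN(s) / Y(s) over observed event times s <= t,
    where Y(s) is the number of subjects still at risk just before s."""
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    at_risk = len(times) - np.arange(len(times))   # Y(s) at each ordered time
    increments = events / at_risk                  # dN(s) / Y(s), 0 if censored
    keep = events == 1
    return times[keep], np.cumsum(increments)[keep]

# Right-censored exponential data: event hazard 0.5, so A(t) = 0.5 * t.
true_times = rng.exponential(scale=2.0, size=2000)   # event times, rate 0.5
cens_times = rng.exponential(scale=4.0, size=2000)   # independent censoring
obs = np.minimum(true_times, cens_times)
d = (true_times <= cens_times).astype(int)           # 1 = event, 0 = censored
event_times, cum_hazard = nelson_aalen(obs, d)
```

Plotting `cum_hazard` against `event_times` should give a step function hugging the straight line 0.5 t, which is the sense in which the estimator recovers the integrated intensity.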
GLIM 82: Proceedings of the International Conference on Generalised Linear Models
Book 14·Dec 2012
0.0
·
This volume of Lecture Notes in Statistics consists of the published proceedings of the first international conference to be held on the topic of generalised linear models. This conference was held from 13 - 15 September 1982 at the Polytechnic of North London and marked an important stage in the development and expansion of the GLIM system. The range of the new system, tentatively named Prism, is here outlined by Bob Baker. Further sections of the volume are devoted to more detailed descriptions of the new facilities, including information on the two different numerical methods now available. Most of the data analyses in this volume are carried out using the GLIM system but this is, of course, not necessary. There are other ways of analysing generalised linear models and Peter Green here discusses the many attractive features of APL, including its ability to analyse generalised linear models. Later sections of the volume cover other invited and contributed papers on the theory and application of generalised linear models. Included amongst these is a paper by Murray Aitkin, proposing a unified approach to statistical modelling through direct likelihood inference, and a paper by Daryl Pregibon showing how GLIM can be programmed to carry out score tests. A paper by Joe Whittaker extends the recent discussion of the relationship between conditional independence and log-linear models and John Hinde considers the introduction of an independent random variable into a linear model to allow for unexplained variation in Poisson data.
Specifying Statistical Models: From Parametric to Non-Parametric, Using Bayesian or Non-Bayesian Approaches
Book 16·Dec 2012
0.0
·
During the last decades, the evolution of theoretical statistics has been marked by a considerable expansion of the number of mathematically and computationally tractable models. Faced with this inflation, applied statisticians feel more and more uncomfortable: they are often hesitant about their traditional (typically parametric) assumptions, such as normality and i.i.d., ARMA forms for time series, etc., but are at the same time afraid of venturing into the jungle of less familiar models. The problem of the justification for taking up one model rather than another is thus a crucial one, and can take different forms. (a) Specification: Do observations suggest the use of a different model from the one initially proposed (e.g. one which takes account of outliers), or do they render plausible a choice from among different proposed models (e.g. fixing or not the value of a certain parameter)? (b) Model approximation: How is it possible to compute a "distance" between a given model and a less (or more) sophisticated one, and what is the technical meaning of such a "distance"? (c) Robustness: To what extent do the qualities of a procedure, well adapted to a "small" model, deteriorate when this model is replaced by a more general one? This question can be considered not only, as usual, in a parametric framework (contamination) or in the extension from parametric to non-parametric models but also.
Asymptotic Optimal Inference for Non-ergodic Models
Book 17·Dec 2012
0.0
·
This monograph contains a comprehensive account of the recent work of the authors and other workers on large sample optimal inference for non-ergodic models. The non-ergodic family of models can be viewed as an extension of the usual Fisher-Rao model for asymptotics, referred to here as an ergodic family. The main feature of a non-ergodic model is that the sample Fisher information, appropriately normed, converges to a non-degenerate random variable rather than to a constant. Mixture experiments, growth models such as birth processes, branching processes, etc., and non-stationary diffusion processes are typical examples of non-ergodic models for which the usual asymptotics and the efficiency criteria of the Fisher-Rao-Wald type are not directly applicable. The new model necessitates a thorough review of both technical and qualitative aspects of the asymptotic theory. The general model studied includes both ergodic and non-ergodic families even though we emphasise applications of the latter type. The plan to write the monograph originally evolved through a series of lectures given by the first author in a graduate seminar course at Cornell University during the fall of 1978, and by the second author at the University of Munich during the fall of 1979. Further work during 1979-1981 on the topic has resolved many of the outstanding conceptual and technical difficulties encountered previously. While there are still some gaps remaining, it appears that the mainstream development in the area has now taken a more definite shape.
Conjugate Duality and the Exponential Fourier Spectrum
Book 18·Dec 2012
0.0
·
For some fields such as econometrics (Shore, 1980), oil prospecting (Claerbout, 1976), speech recognition (Levinson and Lieberman, 1981), satellite monitoring (Lavergnat et al., 1980), epilepsy diagnosis (Gersch and Tharp, 1977), and plasma physics (Bloomfield, 1976), there is a need to obtain an estimate of the spectral density (when it exists) in order to gain at least a crude understanding of the frequency content of time series data. An outstanding tutorial on the classical problem of spectral density estimation is given by Kay and Marple (1981). For an excellent collection of fundamental papers dealing with modern spectral density estimation as well as an extensive bibliography on other fields of application, see Childers (1978). To devise a high-performance sample spectral density estimator, one must develop a rational basis for its construction, provide a feasible algorithm, and demonstrate its performance with respect to prescribed criteria. An algorithm is certainly feasible if it can be implemented on a computer, possesses computational efficiency (as measured by computational complexity analysis), and exhibits numerical stability. An estimator shows high performance if it is insensitive to violations of its underlying assumptions (i.e., robust), consistently shows excellent frequency resolution under realistic sample sizes and signal-to-noise power ratios, possesses a demonstrable numerical rate of convergence to the true population spectral density, and/or enjoys demonstrable asymptotic statistical properties such as consistency and efficiency.
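The classical baseline against which such estimators are judged is the raw periodogram. As a hedged sketch (not from the monograph; the AR(1) test signal and smoothing window are invented for the example), it can be computed and compared with a known spectral density as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

def periodogram(x):
    """Raw periodogram I(f_k) = |sum_t x_t exp(-2 pi i f_k t)|^2 / n
    at the Fourier frequencies f_k = k / n."""
    n = len(x)
    x = np.asarray(x, float) - np.mean(x)
    I = np.abs(np.fft.rfft(x)) ** 2 / n
    freqs = np.fft.rfftfreq(n)
    return freqs[1:], I[1:]          # drop the zero-frequency (mean) ordinate

# AR(1) test signal x_t = phi * x_{t-1} + e_t, whose true spectral density
# is f(w) = sigma_e^2 / |1 - phi * exp(-2 pi i w)|^2 with sigma_e = 1.
phi, n = 0.6, 4096
e = rng.standard_normal(n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]

freqs, I = periodogram(x)
true_sd = 1.0 / np.abs(1.0 - phi * np.exp(-2j * np.pi * freqs)) ** 2

# The raw periodogram is asymptotically unbiased but not consistent: its
# variance does not shrink with n. Averaging neighbouring ordinates is the
# simplest smoothing that trades resolution for a consistent estimate.
smoothed = np.convolve(I, np.ones(31) / 31.0, mode="same")
```

The inconsistency of the raw periodogram is exactly why the monograph's concern with convergence rates and robustness matters: every practical estimator smooths, parametrizes, or otherwise regularizes it.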