Non-Regular Statistical Estimation II




Non-Regular Statistical Estimation



Author: Masafumi Akahira

Language: en

Publisher: Springer Science & Business Media

Release Date: 2012-12-06







In order to obtain many of the classical results in the theory of statistical estimation, it is usual to impose regularity conditions on the distributions under consideration. In the small-sample and large-sample theories of estimation there are well-established sets of regularity conditions, and it is worthwhile to examine what may follow if any one of these regularity conditions fails to hold. "Non-regular estimation" literally means the theory of statistical estimation when one or another of the regularity conditions fails to hold. In this monograph, the authors present a systematic study of the meaning and implications of regularity conditions, and show how the relaxation of such conditions can often lead to surprising conclusions. Their emphasis is on small-sample results and on showing how pathological examples may be treated within this broader framework.

STATISTICAL INFERENCE FOR NON REGULAR FAMILY OF DISTRIBUTIONS (UNIFIED THEORY)



Author: Milind B. Bhatt

Language: en

Publisher: Lulu.com

Release Date:







Statistical Estimation



Author: I.A. Ibragimov

Language: en

Publisher: Springer Science & Business Media

Release Date: 2013-11-11







when certain parameters in the problem tend to limiting values (for example, when the sample size increases indefinitely, the intensity of the noise approaches zero, etc.). To address the problem of asymptotically optimal estimators, consider the following important case. Let X_1, X_2, ..., X_n be independent observations with the joint probability density f(x, θ) (with respect to the Lebesgue measure on the real line), which depends on the unknown parameter θ ∈ Θ ⊂ R^1. It is required to derive the best (asymptotically) estimator θ̂(X_1, ..., X_n) of the parameter θ. The first question which arises in connection with this problem is how to compare different estimators or, equivalently, how to assess their quality: in terms of the mean square deviation from the parameter, or perhaps in some other way. The presently accepted approach to this problem, resulting from A. Wald's contributions, is as follows: introduce a nonnegative function w_n(θ_1, θ_2), θ_1, θ_2 ∈ Θ (the loss function); given two estimators θ̂_n^1 and θ̂_n^2, the estimator for which the expected loss (risk) E_θ w_n(θ̂_n^j, θ), j = 1 or 2, is smaller is called the better with respect to w_n at the point θ (here E_θ is the expectation evaluated under the assumption that the true value of the parameter is θ). Obviously, such a method of comparison is not without its defects.
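The risk comparison described above can be illustrated with a small simulation. The sketch below (not taken from the book; the choice of distribution, loss, and estimators is an assumption for illustration) compares two estimators of a location parameter θ of a N(θ, 1) sample by their Monte Carlo estimated risk E_θ w_n(θ̂_n, θ) under squared-error loss w_n(a, b) = (a − b)²:

```python
import numpy as np

# Hypothetical setup for illustration: X_1, ..., X_n i.i.d. N(theta, 1),
# squared-error loss, and two candidate estimators of theta.
rng = np.random.default_rng(0)
theta, n, reps = 2.0, 50, 20_000

samples = rng.normal(theta, 1.0, size=(reps, n))
est_mean = samples.mean(axis=1)          # estimator 1: sample mean
est_median = np.median(samples, axis=1)  # estimator 2: sample median

# Monte Carlo estimate of the risk E_theta[(estimator - theta)^2]
risk_mean = np.mean((est_mean - theta) ** 2)
risk_median = np.mean((est_median - theta) ** 2)

print(f"risk(mean)   ~ {risk_mean:.4f}")
print(f"risk(median) ~ {risk_median:.4f}")
```

At this θ the sample mean has the smaller estimated risk (about 1/n versus roughly π/(2n) for the median), so in Wald's terminology it is the better estimator with respect to this loss at this point. The passage's closing caveat is visible here too: the comparison is pointwise in θ and depends entirely on the chosen loss function.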