On The Statistical Significance Testing For Natural Language Processing



Statistical Significance Testing for Natural Language Processing



Author: Rotem Dror

language: en

Publisher: Springer Nature

Release Date: 2022-06-01



Data-driven experimental analysis has become the main evaluation tool of Natural Language Processing (NLP) algorithms. In fact, over the last decade it has become rare to see an NLP paper, particularly one that proposes a new algorithm, that does not include extensive experimental analysis, and the number of involved tasks, datasets, domains, and languages is constantly growing. This emphasis on empirical results highlights the role of statistical significance testing in NLP research: if we, as a community, rely on empirical evaluation to validate our hypotheses and reveal the correct language processing mechanisms, we had better be sure that our results are not coincidental. The goal of this book is to discuss the main aspects of statistical significance testing in NLP. Our guiding assumption throughout the book is that the basic question NLP researchers and engineers deal with is whether or not one algorithm can be considered better than another. This question drives the field forward, as it allows the constant progress of developing better technology for language processing challenges. In practice, researchers and engineers would like to draw the right conclusion from a limited set of experiments, and this conclusion should hold for other experiments with datasets they do not have at their disposal or that they cannot perform due to limited time and resources. The book hence discusses the opportunities and challenges in using statistical significance testing in NLP, from the point of view of experimental comparison between two algorithms. We cover topics such as choosing an appropriate significance test for the major NLP tasks, dealing with the unique aspects of significance testing for non-convex deep neural networks, accounting for a large number of comparisons between two NLP algorithms in a statistically valid manner (multiple hypothesis testing), and, finally, the unique challenges posed by the nature of the data and practices of the field.
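One widely used nonparametric option for the two-algorithm comparison the book centers on is the paired bootstrap test. The sketch below is illustrative only (it is not taken from the book); the function name, the one-sided variant used, and the toy per-example scores are all assumptions for the example.

```python
import random

def paired_bootstrap_pvalue(scores_a, scores_b, n_resamples=10_000, seed=0):
    """One-sided paired bootstrap test: resample test-set indices with
    replacement and count how often system B matches or beats system A.
    A small returned fraction suggests A's observed advantage on this
    test set is unlikely to be a coincidence."""
    assert len(scores_a) == len(scores_b), "scores must be paired"
    rng = random.Random(seed)
    n = len(scores_a)
    ties_or_losses = 0
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]
        # Total score difference A - B on the resampled test set.
        delta = sum(scores_a[i] - scores_b[i] for i in idx)
        if delta <= 0:
            ties_or_losses += 1
    return ties_or_losses / n_resamples

# Toy per-sentence accuracies, invented for illustration.
a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]
b = [1, 0, 1, 0, 1, 0, 0, 1, 0, 1]
p = paired_bootstrap_pvalue(a, b)
```

Other resampling variants (e.g. centering the bootstrap distribution at the observed difference) exist and are closer to what the literature formalizes; this is only the simplest counting version.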

On the Statistical Significance Testing for Natural Language Processing



Author: Haotian Zhu

language: en

Publisher:

Release Date: 2020



This thesis explores and compares statistical significance tests frequently used in comparing Natural Language Processing (NLP) system performance. We begin by establishing the fundamentals of NLP system performance comparison and formulating it as four major tasks specific to NLP. Each statistical significance test is explained in detail, with its assumptions made explicit and its testing procedure outlined. We stress the importance of verifying test assumptions before conducting a test. In addition, we examine effect size and statistical power and discuss their role in statistical significance testing for NLP. To account for potential dependencies within a test set, the block bootstrap is introduced and employed to calibrate statistical significance testing when comparing the average performance of two systems. Four case studies with both simulated and real data, in which the complexity of the data dependency varies, are presented to illustrate the process of properly applying a statistical significance test to NLP system comparison under different settings. We then discuss open issues, such as cross-domain comparison and the violation of the i.i.d. assumption, which call for further study. In conclusion, this thesis advocates the proper use of statistical significance testing in comparing NLP system performance and the reporting of comparison results with greater transparency and completeness.
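The block bootstrap mentioned above resamples contiguous blocks of examples rather than individual examples, so that local dependence (e.g. sentences from the same document) is preserved in each resample. The following is a minimal sketch of a moving-block bootstrap under assumed inputs: per-example score differences between two systems and a fixed block length; all names and numbers are invented for illustration.

```python
import random

def block_bootstrap_deltas(diffs, block_len=5, n_resamples=10_000, seed=0):
    """Moving-block bootstrap of the mean score difference between two
    systems.  Drawing whole contiguous blocks (instead of single
    examples) keeps within-block dependence intact."""
    rng = random.Random(seed)
    n = len(diffs)
    n_blocks = -(-n // block_len)        # ceil(n / block_len)
    starts = range(n - block_len + 1)    # valid block start positions
    deltas = []
    for _ in range(n_resamples):
        sample = []
        for _ in range(n_blocks):
            s = rng.choice(starts)
            sample.extend(diffs[s:s + block_len])
        deltas.append(sum(sample[:n]) / n)  # trim to the original length
    return deltas

# Toy per-sentence score differences with local correlation (invented).
diffs = [0.2, 0.3, 0.25, -0.1, -0.05, 0.0, 0.15, 0.2, 0.1, -0.02]
deltas = block_bootstrap_deltas(diffs, block_len=3, n_resamples=2000)
p_one_sided = sum(d <= 0 for d in deltas) / len(deltas)
```

Choosing the block length is itself a calibration question: blocks must be long enough to capture the dependence structure but short enough to leave many distinct blocks to resample from.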

Validity, Reliability, and Significance



Author: Stefan Riezler

language: en

Publisher: Springer Nature

Release Date: 2024-06-09



This book introduces empirical methods for machine learning with a special focus on applications in natural language processing (NLP) and data science. The authors present problems of validity, reliability, and significance and provide common solutions based on statistical methodology to solve them. The book focuses on model-based empirical methods where data annotations and model predictions are treated as training data for interpretable probabilistic models from the well-understood families of generalized additive models (GAMs) and linear mixed effects models (LMEMs). Based on the interpretable parameters of the trained GAMs or LMEMs, the book presents model-based statistical tests such as a validity test that allows for the detection of circular features that circumvent learning. Furthermore, the book discusses a reliability coefficient using variance decomposition based on random effect parameters of LMEMs. Lastly, a significance test based on the likelihood ratios of nested LMEMs trained on the performance scores of two machine learning models is shown to naturally allow the inclusion of variations in meta-parameter settings into hypothesis testing, and further facilitates a refined system comparison conditional on properties of input data. The book is self-contained with an appendix on the mathematical background of generalized additive models and linear mixed effects models as well as an accompanying webpage with the related R and Python code to replicate the presented experiments. The second edition also features a new hands-on chapter that illustrates how to use the included tools in practical applications.
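The likelihood-ratio test of nested models described above can be sketched numerically: twice the log-likelihood gap between the full and the nested model is compared to a chi-squared distribution whose degrees of freedom equal the number of extra parameters. The snippet below assumes the two log-likelihoods have already been obtained from fitted models (e.g. via an LMEM library) and handles only the one-extra-parameter case, where the chi-squared survival function has the closed form erfc(sqrt(x/2)); the log-likelihood values are invented.

```python
import math

def lr_test_pvalue_1df(loglik_nested, loglik_full):
    """Likelihood-ratio test for two nested models differing by one
    parameter (e.g. an LMEM with vs. without a 'system' fixed effect).
    The statistic 2*(ll_full - ll_nested) is asymptotically chi-squared
    with 1 degree of freedom; its survival function is erfc(sqrt(x/2))."""
    lr = 2.0 * (loglik_full - loglik_nested)
    if lr <= 0:
        # The richer model cannot fit worse at its maximum; treat
        # numerical noise as "no evidence against the nested model".
        return 1.0
    return math.erfc(math.sqrt(lr / 2.0))

# Invented log-likelihoods for illustration.
p = lr_test_pvalue_1df(loglik_nested=-1204.7, loglik_full=-1200.2)
```

For more than one extra parameter, the p-value comes from the chi-squared survival function with the matching degrees of freedom (e.g. `scipy.stats.chi2.sf`), which has no elementary closed form.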

