Investigating Different Models For Cross Language Information Retrieval From Automatic Speech Transcripts



Experimental IR Meets Multilinguality, Multimodality, and Interaction



Author: Josiane Mothe

Language: en

Publisher: Springer

Release Date: 2015-08-31







This book constitutes the refereed proceedings of the 6th International Conference of the CLEF Initiative, CLEF 2015, held in Toulouse, France, in September 2015. The 31 full papers and 20 short papers presented were carefully reviewed and selected from 68 submissions. They cover a broad range of issues in the field of multilingual and multimodal information access evaluation. Also included are reports from a set of labs and workshops designed to test different aspects of mono- and cross-language information retrieval systems.

Cross-Language Information Retrieval



Author: Jian-Yun Nie

Language: en

Publisher: Springer Nature

Release Date: 2022-05-31







The search for information is no longer limited to the native language of the user but is increasingly extended to other languages. This gives rise to the problem of cross-language information retrieval (CLIR), whose goal is to find relevant information written in a language different from that of the query. In addition to the problems of monolingual information retrieval (IR), translation is the key problem in CLIR: either the query or the documents must be translated from one language into another. This translation problem is not identical to full-text machine translation (MT), however: the goal is not to produce a human-readable translation, but a translation suitable for finding relevant documents. Specific translation methods are thus required. The goal of this book is to provide a comprehensive description of the specific problems arising in CLIR, the solutions proposed in this area, and the remaining open problems. The book starts with a general description of the monolingual IR and CLIR problems. Different classes of approaches to translation are then presented: approaches using an MT system, dictionary-based translation, and approaches based on parallel and comparable corpora. The typical retrieval effectiveness of the different approaches is also compared, showing that translation approaches specifically designed for CLIR can rival and outperform high-quality MT systems. Finally, the book offers a look into the future, drawing a strong parallel between query expansion in monolingual IR and query translation in CLIR and suggesting that many approaches developed in monolingual IR can be adapted to CLIR. The book can be used as an introduction to CLIR; advanced readers will also find more technical details and discussions of the remaining research challenges. It is suitable for new researchers who intend to carry out research on CLIR.
Table of Contents: Preface / Introduction / Using Manually Constructed Translation Systems and Resources for CLIR / Translation Based on Parallel and Comparable Corpora / Other Methods to Improve CLIR / A Look into the Future: Toward a Unified View of Monolingual IR and CLIR? / References / Author Biography

Multilingual Information Access Evaluation I - Text Retrieval Experiments



Author: Carol Peters

Language: en

Publisher: Springer

Release Date: 2010-09-03







The tenth campaign of the Cross Language Evaluation Forum (CLEF) for European languages was held from January to September 2009. There were eight main evaluation tracks in CLEF 2009 plus a pilot task. The aim, as usual, was to test the performance of a wide range of multilingual information access (MLIA) systems or system components. This year, about 150 groups, mainly but not only from academia, registered to participate in the campaign. Most of the groups were from Europe, but there was also a good contingent from North America and Asia. The results were presented at a two-and-a-half day workshop held in Corfu, Greece, September 30 to October 2, 2009, in conjunction with the European Conference on Digital Libraries. The workshop, attended by 160 researchers and system developers, provided the opportunity for all the groups that had participated in the evaluation campaign to get together, compare approaches and exchange ideas.