BURRF Catalog
Multilingual and Multimodal Information Access Evaluation : International Conference of the Cross-Language Evaluation Forum, CLEF 2010, Padua, Italy, September 20-23, 2010. Proceedings / edited by Maristella Agosti, Nicola Ferro, Carol Peters, Maarten de Rijke, Alan Smeaton.

Material type: Text
Series: Lecture Notes in Computer Science ; 6360
Publisher: Berlin, Heidelberg : Springer Berlin Heidelberg, 2010
Description: xiii, 145 pages, online resource
Content type:
  • text
Media type:
  • computer
Carrier type:
  • online resource
ISBN:
  • 9783642159985
Additional physical formats: Printed edition: No title
LoC classification:
  • P98-98.5
Online resources:
Contents:
  • Keynote Addresses
  • IR between Science and Engineering, and the Role of Experimentation
  • Retrieval Evaluation in Practice
  • Resources, Tools, and Methods
  • A Dictionary- and Corpus-Independent Statistical Lemmatizer for Information Retrieval in Low Resource Languages
  • A New Approach for Cross-Language Plagiarism Analysis
  • Creating a Persian-English Comparable Corpus
  • Experimental Collections and Datasets (1)
  • Validating Query Simulators: An Experiment Using Commercial Searches and Purchases
  • Using Parallel Corpora for Multilingual (Multi-document) Summarisation Evaluation
  • Experimental Collections and Datasets (2)
  • MapReduce for Information Retrieval Evaluation: “Let’s Quickly Test This on 12 TB of Data”
  • Which Log for Which Information? Gathering Multilingual Data from Different Log File Types
  • Evaluation Methodologies and Metrics (1)
  • Examining the Robustness of Evaluation Metrics for Patent Retrieval with Incomplete Relevance Judgements
  • On the Evaluation of Entity Profiles
  • Evaluation Methodologies and Metrics (2)
  • Evaluating Information Extraction
  • Tie-Breaking Bias: Effect of an Uncontrolled Parameter on Information Retrieval Evaluation
  • Automated Component-Level Evaluation: Present and Future
  • Panels
  • The Four Ladies of Experimental Evaluation
  • A PROMISE for Experimental Evaluation
Summary: This book constitutes the refereed proceedings of the 11th symposium of the Cross-Language Evaluation Forum, CLEF 2010, held in Padua, Italy, in September 2010 as the First International Conference on Multilingual and Multimodal Information Access Evaluation, in continuation of the popular CLEF campaigns and workshops that have run for the last decade. The 12 revised full papers, presented together with 2 keynote talks and 2 panel presentations, were carefully reviewed and selected from numerous submissions. The papers present advanced research into the evaluation of complex multimodal and multilingual information systems, in order to support the individuals, organizations, and communities who design, develop, employ, and improve such systems. The papers are organized in topical sections on resources, tools, and methods; experimental collections and datasets; and evaluation methodologies.

Springer eBooks



Access from outside UANL requires a remote-access password.

Universidad Autónoma de Nuevo León
Secretaría de Extensión y Cultura - Dirección de Bibliotecas
Powered by Koha