Literature on Information Organization
This database contains more than 40,000 documents on topics from the fields of descriptive cataloging, subject indexing, and information retrieval.
© 2015 W. Gödert, TH Köln, Institut für Informationswissenschaft
Powered by litecat, BIS Oldenburg
(As of: 28 April 2022)
Search results
Hits 1–20 of 20
-
1. Dang, E.K.F. ; Luk, R.W.P. ; Allan, J.: A context-dependent relevance model.
In: Journal of the Association for Information Science and Technology. 67(2016) no.3, S.582-593.
Abstract: Numerous past studies have demonstrated the effectiveness of the relevance model (RM) for information retrieval (IR). This approach enables relevance or pseudo-relevance feedback to be incorporated within the language modeling framework of IR. In the traditional RM, the feedback information is used to improve the estimate of the query language model. In this article, we introduce an extension of RM in the setting of relevance feedback. Our method provides an additional way to incorporate feedback via the improvement of the document language models. Specifically, we make use of the context information of known relevant and nonrelevant documents to obtain weighted counts of query terms for estimating the document language models. The context information is based on the words (unigrams or bigrams) appearing within a text window centered on query terms. Experiments on several Text REtrieval Conference (TREC) collections show that our context-dependent relevance model can improve retrieval performance over the baseline RM. Together with previous studies within the BM25 framework, our current study demonstrates that the effectiveness of our method for using context information in IR is quite general and not limited to any specific retrieval model.
Content: Cf. http://onlinelibrary.wiley.com/doi/10.1002/asi.23419/abstract.
Subject area: Retrieval studies
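The context idea in the abstract above can be sketched in a few lines: query-term occurrences are counted with a weight derived from the words in a text window around them. This is a toy illustration under assumed details (window width, using the number of co-occurring query terms as the weight), not the authors' actual estimator:

```python
from collections import Counter

def window_weighted_counts(doc_tokens, query_terms, width=2):
    """Count query-term occurrences, weighting each occurrence by how many
    query terms appear within +/- `width` tokens of it. A stand-in for the
    paper's context evidence; the real model uses unigrams/bigrams in the
    window to reweight counts for document language model estimation."""
    counts = Counter()
    for i, tok in enumerate(doc_tokens):
        if tok in query_terms:
            window = doc_tokens[max(0, i - width): i + width + 1]
            support = sum(1 for w in window if w in query_terms)
            counts[tok] += support  # occurrence weighted by its query-term context
    return counts
```

The weighted counts would then replace raw term frequencies when estimating the document language models.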
-
2. Dang, E.K.F. ; Luk, R.W.P. ; Allan, J.: Beyond bag-of-words : bigram-enhanced context-dependent term weights.
In: Journal of the Association for Information Science and Technology. 65(2014) no.6, S.1134-1148.
Abstract: While term independence is a widely held assumption in most of the established information retrieval approaches, it is clearly not true and various works in the past have investigated a relaxation of the assumption. One approach is to use n-grams in document representation instead of unigrams. However, the majority of early works on n-grams obtained only modest performance improvement. On the other hand, the use of information based on supporting terms or "contexts" of queries has been found to be promising. In particular, recent studies showed that using new context-dependent term weights improved the performance of relevance feedback (RF) retrieval compared with using traditional bag-of-words BM25 term weights. Calculation of the new term weights requires an estimation of the local probability of relevance of each query term occurrence. In previous studies, the estimation of this probability was based on unigrams that occur in the neighborhood of a query term. We explore an integration of the n-gram and context approaches by computing context-dependent term weights based on a mixture of unigrams and bigrams. Extensive experiments are performed using the title queries of the Text Retrieval Conference (TREC)-6, TREC-7, TREC-8, and TREC-2005 collections, for RF with relevance judgment of either the top 10 or top 20 documents of an initial retrieval. We identify some crucial elements needed in the use of bigrams in our methods, such as proper inverse document frequency (IDF) weighting of the bigrams and noise reduction by pruning bigrams with large document frequency values. We show that enhancing context-dependent term weights with bigrams is effective in further improving retrieval performance.
Subject area: Retrieval algorithms
Object: Bigrams
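Two of the "crucial elements" the abstract names, IDF weighting of bigrams and pruning bigrams with large document frequencies, can be illustrated directly. A minimal sketch with an assumed pruning threshold (`max_df_ratio` is illustrative, not the paper's setting):

```python
import math
from collections import Counter

def bigram_idf_weights(docs, max_df_ratio=0.5):
    """Compute IDF weights for bigrams, but drop ('prune') bigrams that
    occur in more than `max_df_ratio` of the documents, as a noise
    reduction step in the spirit of the paper."""
    n_docs = len(docs)
    df = Counter()
    for doc in docs:
        toks = doc.split()
        df.update(set(zip(toks, toks[1:])))  # document frequency per bigram
    weights = {}
    for bg, d in df.items():
        if d / n_docs > max_df_ratio:
            continue  # high-DF bigrams are treated as noise
        weights[bg] = math.log(n_docs / d)
    return weights
```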
-
3. Dang, E.K.F. ; Luk, R.W.P. ; Allan, J. ; Ho, K.S. ; Chung, K.F.L. ; Lee, D.L.: A new context-dependent term weight computed by boost and discount using relevance information.
In: Journal of the American Society for Information Science and Technology. 61(2010) no.12, S.2514-2530.
Abstract: We studied the effectiveness of a new class of context-dependent term weights for information retrieval. Unlike the traditional term frequency-inverse document frequency (TF-IDF), the new weighting of a term t in a document d depends not only on the occurrence statistics of t alone but also on the terms found within a text window (or "document-context") centered on t. We introduce a Boost and Discount (B&D) procedure which utilizes partial relevance information to compute the context-dependent term weights of query terms according to a logistic regression model. We investigate the effectiveness of the new term weights compared with the context-independent BM25 weights in the setting of relevance feedback. We performed experiments with title queries of the TREC-6, -7, -8, and 2005 collections, comparing the residual Mean Average Precision (MAP) measures obtained using B&D term weights and those obtained by a baseline using BM25 weights. Given either 10 or 20 relevance judgments of the top retrieved documents, using the new term weights yields improvement over the baseline for all collections tested. The MAP obtained with the new weights has relative improvement over the baseline by 3.3 to 15.2%, with statistical significance at the 95% confidence level across all four collections.
Subject area: Retrieval algorithms
-
4. Kumaran, G. ; Allan, J.: Adapting information retrieval systems to user queries.
In: Information processing and management. 44(2008) no.6, S.1838-1862.
Abstract: Users enter queries that are short as well as long. The aim of this work is to evaluate techniques that can enable information retrieval (IR) systems to automatically adapt to perform better on such queries. By adaptation we refer to (1) modifications to the queries via user interaction, and (2) detecting that the original query is not a good candidate for modification. We show that the former has the potential to improve mean average precision (MAP) of long and short queries by 40% and 30% respectively, and that simple user interaction can help towards this goal. We observed that after inspecting the options presented to them, users frequently did not select any. We present techniques in this paper to determine beforehand the utility of user interaction to avoid this waste of time and effort. We show that our techniques can provide IR systems with the ability to detect and avoid interaction for unpromising queries without a significant drop in overall performance.
Note: Contribution to a special issue on "Adaptive information retrieval"
-
5. Avrahami, T.T. ; Yau, L. ; Si, L. ; Callan, J.P.: The FedLemur project : federated search in the real world.
In: Journal of the American Society for Information Science and Technology. 57(2006) no.3, S.347-358.
Abstract: Federated search and distributed information retrieval systems provide a single user interface for searching multiple full-text search engines. They have been an active area of research for more than a decade, but in spite of their success as a research topic, they are still rare in operational environments. This article discusses a prototype federated search system developed for the U.S. government's FedStats Web portal, and the issues addressed in adapting research solutions to this operational environment. A series of experiments explore how well prior research results, parameter settings, and heuristics apply in the FedStats environment. The article concludes with a set of lessons learned from this technology transfer effort, including observations about search engine quality in the real world.
Subject area: Distributed bibliographic databases
-
6. Collins-Thompson, K. ; Callan, J.: Predicting reading difficulty with statistical language models.
In: Journal of the American Society for Information Science and Technology. 56(2005) no.13, S.1448-1462.
Abstract: A potentially useful feature of information retrieval systems for students is the ability to identify documents that not only are relevant to the query but also match the student's reading level. Manually obtaining an estimate of reading difficulty for each document is not feasible for very large collections, so we require an automated technique. Traditional readability measures, such as the widely used Flesch-Kincaid measure, are simple to apply but perform poorly on Web pages and other nontraditional documents. This work focuses on building a broadly applicable statistical model of text for different reading levels that works for a wide range of documents. To do this, we recast the well-studied problem of readability in terms of text categorization and use straightforward techniques from statistical language modeling. We show that with a modified form of text categorization, it is possible to build generally applicable classifiers with relatively little training data. We apply this method to the problem of classifying Web pages according to their reading difficulty level and show that by using a mixture model to interpolate evidence of a word's frequency across grades, it is possible to build a classifier that achieves an average root mean squared error of between one and two grade levels for 9 of 12 grades. Such classifiers have very efficient implementations and can be applied in many different scenarios. The models can be varied to focus on smaller or larger grade ranges or easily retrained for a variety of tasks or populations.
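The core recipe in the abstract, one unigram language model per grade level, with a text assigned to the grade whose model gives it the highest likelihood, can be sketched briefly. The mixture interpolation of word evidence across grades is replaced here by a simple probability floor; that simplification and all toy data are assumptions:

```python
import math
from collections import Counter

def train_grade_models(texts_by_grade):
    """One unigram language model (relative word frequencies) per grade."""
    models = {}
    for grade, texts in texts_by_grade.items():
        counts = Counter(w for t in texts for w in t.split())
        total = sum(counts.values())
        models[grade] = {w: c / total for w, c in counts.items()}
    return models

def predict_grade(text, models, floor=1e-6):
    """Assign the grade whose model maximizes the text's log-likelihood.
    `floor` is a crude smoothing stand-in; the paper interpolates a word's
    frequency evidence across grades with a mixture model instead."""
    def loglik(model):
        return sum(math.log(model.get(w, floor)) for w in text.split())
    return max(models, key=lambda g: loglik(models[g]))
```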
-
7. Robertson, S. ; Callan, J.: Routing and filtering.
In: TREC: experiment and evaluation in information retrieval. Ed.: E.M. Voorhees and D.K. Harman. Cambridge, MA : MIT Press, 2005. S.99-122.
Subject area: Retrieval studies
Object: TREC
-
8. Allan, J. ; Croft, W.B. ; Callan, J.: The University of Massachusetts and a dozen TRECs.
In: TREC: experiment and evaluation in information retrieval. Ed.: E.M. Voorhees and D.K. Harman. Cambridge, MA : MIT Press, 2005. S.261-286.
Subject area: Retrieval studies
Object: TREC
-
9. Callan, J.: Distributed information retrieval.
In: Advances in information retrieval: Recent research from the Center for Intelligent Information Retrieval. Ed.: W.B. Croft. Boston, MA : Kluwer Academic Publ., 2000. S.127-150.
(The Kluwer international series on information retrieval; 7)
Abstract: A multi-database model of distributed information retrieval is presented, in which people are assumed to have access to many searchable text databases. In such an environment, full-text information retrieval consists of discovering database contents, ranking databases by their expected ability to satisfy the query, searching a small number of databases, and merging results returned by different databases. This paper presents algorithms for each task. It also discusses how to reorganize conventional test collections into multi-database testbeds, and evaluation methodologies for multi-database experiments. A broad and diverse group of experimental results is presented to demonstrate that the algorithms are effective, efficient, robust, and scalable.
Subject area: Distributed bibliographic databases
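The database-ranking step described in the abstract (resource selection) can be sketched with a simplified document-frequency score. The chapter's actual algorithms (e.g. CORI) use a more elaborate belief formula; the scoring function and data here are illustrative assumptions:

```python
import math

def rank_databases(db_stats, query_terms):
    """Rank text databases by expected ability to answer a query, using a
    toy score: normalized log document frequency of each query term.
    `db_stats` maps a database name to ({term: doc frequency}, doc count)."""
    def score(name):
        stats, n_docs = db_stats[name]
        return sum(math.log(1 + stats.get(t, 0)) / math.log(1 + n_docs)
                   for t in query_terms)  # per-term evidence, normalized by size
    return sorted(db_stats, key=score, reverse=True)
```

In a full system, only the top-ranked databases would be searched and their result lists merged.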
-
10. Papka, R. ; Allan, J.: Topic detection and tracking : event clustering as a basis for first story detection.
In: Advances in information retrieval: Recent research from the Center for Intelligent Information Retrieval. Ed.: W.B. Croft. Boston, MA : Kluwer Academic Publ., 2000. S.97-126.
(The Kluwer international series on information retrieval; 7)
Abstract: Topic Detection and Tracking (TDT) is a new research area that investigates the organization of information by event rather than by subject. In this paper, we provide an overview of the TDT research program from its inception to the third phase that is now underway. We also discuss our approach to two of the TDT problems in detail. For event clustering (Detection), we show that classic Information Retrieval clustering techniques can be modified slightly to provide effective solutions. For first story detection, we show that similar methods provide satisfactory results, although substantial work remains. In both cases, we explore solutions that model the temporal relationship between news stories. We also investigate the use of phrase extraction to capture the who, what, when, and where contained in news.
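First story detection, as described above, can be reduced to a nearest-neighbor test: a story is flagged as novel if its best similarity to all earlier stories falls below a threshold. A minimal sketch using word-overlap (Jaccard) similarity as a stand-in for the cosine measures used in TDT work; the threshold value is an assumption:

```python
def first_story_detection(stories, threshold=0.2):
    """Flag each story as a 'first story' (True) if no earlier story is
    sufficiently similar to it. Jaccard word overlap is a toy similarity."""
    def jaccard(a, b):
        a, b = set(a.split()), set(b.split())
        return len(a & b) / len(a | b)
    flags, seen = [], []
    for s in stories:
        best = max((jaccard(s, t) for t in seen), default=0.0)
        flags.append(best < threshold)  # novel event if no close neighbor
        seen.append(s)
    return flags
```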
-
11. Allan, J.: Building hypertext using information retrieval.
In: Information processing and management. 33(1997) no.2, S.145-159.
Abstract: Presents entirely automatic methods for gathering documents for a hypertext, linking the set, and annotating those connections with a description of the type of the link. Document linking is based upon information retrieval similarity measures with adjustable levels of strictness. Applies an approach inspired by relationship visualization techniques and by graph simplification to show how to identify automatically tangential, revision, summary, expansion, comparison, contrast, equivalence, and aggregate links.
Note: Contribution to a special issue on methods and tools for the automatic construction of hypertext
Subject area: Hypertext
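The linking step described in the abstract, connecting document pairs whose retrieval-style similarity clears an adjustable strictness level, can be sketched directly. Link typing is not modeled; the cosine measure and threshold are ordinary IR choices, not necessarily the paper's exact settings:

```python
import math
from collections import Counter

def link_documents(docs, strictness=0.5):
    """Propose hypertext links between all document pairs whose cosine
    similarity of term-frequency vectors exceeds `strictness`."""
    def cosine(a, b):
        va, vb = Counter(a.split()), Counter(b.split())
        dot = sum(va[w] * vb[w] for w in va)
        na = math.sqrt(sum(v * v for v in va.values()))
        nb = math.sqrt(sum(v * v for v in vb.values()))
        return dot / (na * nb) if na and nb else 0.0
    return [(i, j) for i in range(len(docs)) for j in range(i + 1, len(docs))
            if cosine(docs[i], docs[j]) > strictness]
```

Raising `strictness` yields fewer, tighter links; lowering it links more tangentially related texts.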
-
12. Agosti, M. ; Allan, J.: Introduction to the special issue on methods and tools for the automatic construction of hypertext.
In: Information processing and management. 33(1997) no.2, S.129-131.
Abstract: Introduces the special issue. Discusses the problem.
Note: Contribution to a special issue on methods and tools for the automatic construction of hypertext
Subject area: Hypertext
-
13. Allan, J. ; Callan, J.P. ; Croft, W.B. ; Ballesteros, L. ; Broglio, J. ; Xu, J. ; Shu, H.: INQUERY at TREC-5.
In: The Fifth Text Retrieval Conference (TREC-5). Ed.: E.M. Voorhees and D.K. Harman. Gaithersburg, MD : National Institute of Standards and Technology, 1997. S.191-197.
(NIST special publication;)
Subject area: Retrieval studies
Object: TREC ; INQUERY
-
14. Salton, G. ; Allan, J. ; Singhal, A.: Automatic text decomposition and structuring.
In: Information processing and management. 32(1996) no.2, S.127-138.
Abstract: Sophisticated text similarity measurements are used to determine relationships between natural language texts and text excerpts. The resulting linked hypertext maps can be decomposed into text segments and text themes, and these decompositions can be used to identify different text types and text structures, leading to improved text access and utilization. Gives examples of text decomposition for expository and non-expository texts.
Subject area: Automatic indexing
-
15. Allan, J. ; Ballesteros, L. ; Callan, J.P. ; Croft, W.B. ; Lu, Z.: Recent experiments with INQUERY.
In: The Fourth Text Retrieval Conference (TREC-4). Ed.: D.K. Harman. Gaithersburg, MD : National Institute of Standards and Technology, 1996. S.49-63.
(NIST special publication; 500-236)
Object: INQUERY ; TREC
-
16. Salton, G. ; Allan, J.: Selective text utilization and text traversal.
In: International journal of human-computer studies. 43(1995) no.3, S.xxx-xxx.
-
17. Callan, J. ; Croft, W.B. ; Broglio, J.: TREC and TIPSTER experiments with INQUERY.
In: Information processing and management. 31(1995) no.3, S.327-343.
Note: Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones and P. Willett. San Francisco: Morgan Kaufmann 1997. S.436-439.
Subject area: Retrieval studies
Object: TREC ; TIPSTER ; INQUERY
-
18. Buckley, C. ; Allan, J. ; Salton, G.: Automatic routing and retrieval using Smart : TREC-2.
In: Information processing and management. 31(1995) no.3, S.315-326.
Abstract: The Smart information retrieval project emphasizes completely automatic approaches to the understanding and retrieval of large quantities of text. The work in the TREC-2 environment continues, performing both routing and ad hoc experiments. The ad hoc work extends investigations into combining global similarities, giving an overall indication of how a document matches a query, with local similarities identifying a smaller part of the document that matches the query. The performance of ad hoc runs is good, but full advantage is not yet being taken of the available local information. The routing experiments use conventional relevance feedback approaches to routing, but with a much greater degree of query expansion than was previously done. The length of a query vector is increased by a factor of 5 to 10 by adding terms found in previously seen relevant documents. This approach improves effectiveness by 30-40% over the original query.
Subject area: Automatic indexing ; Retrieval studies ; Semantic context in indexing and retrieval
Object: Smart ; TREC
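The expansion step in the abstract, growing the query vector by a factor of 5 to 10 with terms from previously seen relevant documents, can be sketched as follows. Using raw term frequency to pick expansion terms is a simplification; the Smart runs weight terms via relevance feedback formulas:

```python
from collections import Counter

def expand_query(query_terms, relevant_docs, factor=5):
    """Expand a query with the most frequent new terms from known relevant
    documents until it reaches `factor` times its original length."""
    target = factor * len(query_terms)
    counts = Counter(w for d in relevant_docs for w in d.split()
                     if w not in query_terms)  # candidate expansion terms
    expanded = list(query_terms)
    for term, _ in counts.most_common():
        if len(expanded) >= target:
            break
        expanded.append(term)
    return expanded
```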
-
19. Salton, G. ; Allan, J. ; Buckley, C. ; Singhal, A.: Automatic analysis, theme generation, and summarization of machine-readable texts.
In: Science. 264(1994), S.1421-1426.
Note: Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones and P. Willett. San Francisco: Morgan Kaufmann 1997. S.478-483.
Subject area: Automatic indexing ; Automatic abstracting
-
20. Salton, G. ; Buckley, C. ; Allan, J.: Automatic structuring of text files.
In: Electronic publishing. 5(1992) no.1, S.1-17.
Abstract: In many practical information retrieval situations, it is necessary to process heterogeneous text databases that vary greatly in scope and coverage and deal with many different subjects. In such an environment it is important to provide flexible access to individual text pieces and to structure the collection so that related text elements are identified and properly linked. Describes methods for the automatic structuring of heterogeneous text collections and the construction of browsing tools and access procedures that facilitate collection use. Illustrates these methods with searches using a large automated encyclopedia.
Subject area: Automatic indexing