Literature on Information Indexing
This database contains over 40,000 documents on topics from the fields of descriptive cataloging, subject indexing, and information retrieval.
© 2015 W. Gödert, TH Köln, Institut für Informationswissenschaft
Powered by litecat, BIS Oldenburg
(Last updated: April 28, 2022)
Search results
Hits 1–4 of 4

1. Dang, E.K.F. ; Luk, R.W.P. ; Allan, J.: A context-dependent relevance model.
In: Journal of the Association for Information Science and Technology. 67(2016) no.3, pp.582-593.
Abstract: Numerous past studies have demonstrated the effectiveness of the relevance model (RM) for information retrieval (IR). This approach enables relevance or pseudo-relevance feedback to be incorporated within the language modeling framework of IR. In the traditional RM, the feedback information is used to improve the estimate of the query language model. In this article, we introduce an extension of RM in the setting of relevance feedback. Our method provides an additional way to incorporate feedback via the improvement of the document language models. Specifically, we make use of the context information of known relevant and nonrelevant documents to obtain weighted counts of query terms for estimating the document language models. The context information is based on the words (unigrams or bigrams) appearing within a text window centered on query terms. Experiments on several Text REtrieval Conference (TREC) collections show that our context-dependent relevance model can improve retrieval performance over the baseline RM. Together with previous studies within the BM25 framework, our current study demonstrates that the effectiveness of our method for using context information in IR is quite general and not limited to any specific retrieval model.
Content: Cf. http://onlinelibrary.wiley.com/doi/10.1002/asi.23419/abstract.
Topic area: Retrieval studies
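The abstract above describes collecting words that appear within a text window centered on query-term occurrences. A minimal sketch of such window-based context counting (the window size and the exact weighting scheme here are illustrative assumptions, not the authors' estimator):

```python
from collections import Counter

def context_window_counts(doc_tokens, query_terms, window=5):
    """Count words appearing within a text window centered on each
    query-term occurrence in a tokenized document.

    Illustrative sketch only: the paper derives weighted counts for
    estimating document language models; here we simply tally the
    surrounding unigrams."""
    counts = Counter()
    qset = set(query_terms)
    for i, tok in enumerate(doc_tokens):
        if tok in qset:
            lo = max(0, i - window)
            hi = min(len(doc_tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:  # skip the query term itself
                    counts[doc_tokens[j]] += 1
    return counts
```

Such counts, gathered from known relevant and non-relevant feedback documents, could then be folded into the smoothing of each document language model.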

2. Dang, E.K.F. ; Luk, R.W.P. ; Allan, J.: Beyond bag-of-words : bigram-enhanced context-dependent term weights.
In: Journal of the Association for Information Science and Technology. 65(2014) no.6, pp.1134-1148.
Abstract: While term independence is a widely held assumption in most of the established information retrieval approaches, it is clearly not true and various works in the past have investigated a relaxation of the assumption. One approach is to use n-grams in document representation instead of unigrams. However, the majority of early works on n-grams obtained only modest performance improvement. On the other hand, the use of information based on supporting terms or "contexts" of queries has been found to be promising. In particular, recent studies showed that using new context-dependent term weights improved the performance of relevance feedback (RF) retrieval compared with using traditional bag-of-words BM25 term weights. Calculation of the new term weights requires an estimation of the local probability of relevance of each query term occurrence. In previous studies, the estimation of this probability was based on unigrams that occur in the neighborhood of a query term. We explore an integration of the n-gram and context approaches by computing context-dependent term weights based on a mixture of unigrams and bigrams. Extensive experiments are performed using the title queries of the Text Retrieval Conference TREC-6, TREC-7, TREC-8, and TREC 2005 collections, for RF with relevance judgment of either the top 10 or top 20 documents of an initial retrieval. We identify some crucial elements needed in the use of bigrams in our methods, such as proper inverse document frequency (IDF) weighting of the bigrams and noise reduction by pruning bigrams with large document frequency values. We show that enhancing context-dependent term weights with bigrams is effective in further improving retrieval performance.
Topic area: Retrieval algorithms
Object: Bigrams
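Two of the "crucial elements" the abstract names are IDF weighting of bigrams and pruning bigrams with large document frequency. A hedged sketch of both steps (the IDF variant and the pruning cutoff are assumptions for illustration; the paper's exact formulas may differ):

```python
import math

def bigram_idf(df, n_docs):
    """A standard BM25-style IDF, applied here to a bigram's
    document frequency df out of n_docs documents."""
    return math.log((n_docs - df + 0.5) / (df + 0.5))

def prune_bigrams(bigram_df, n_docs, max_df_ratio=0.1):
    """Noise reduction: drop bigrams whose document frequency
    exceeds a cutoff fraction of the collection. The 10% cutoff
    is an illustrative parameter, not taken from the paper."""
    return {bg: df for bg, df in bigram_df.items()
            if df / n_docs <= max_df_ratio}
```

High-frequency bigrams such as ("of", "the") carry little topical signal, so pruning them before weighting keeps the context mixture from being dominated by noise.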

3. Dang, E.K.F. ; Luk, R.W.P. ; Allan, J. ; Ho, K.S. ; Chung, K.F.L. ; Lee, D.L.: A new context-dependent term weight computed by boost and discount using relevance information.
In: Journal of the American Society for Information Science and Technology. 61(2010) no.12, pp.2514-2530.
Abstract: We studied the effectiveness of a new class of context-dependent term weights for information retrieval. Unlike the traditional term frequency-inverse document frequency (TF-IDF), the new weighting of a term t in a document d depends not only on the occurrence statistics of t alone but also on the terms found within a text window (or "document-context") centered on t. We introduce a Boost and Discount (B&D) procedure which utilizes partial relevance information to compute the context-dependent term weights of query terms according to a logistic regression model. We investigate the effectiveness of the new term weights compared with the context-independent BM25 weights in the setting of relevance feedback. We performed experiments with title queries of the TREC-6, -7, -8, and 2005 collections, comparing the residual Mean Average Precision (MAP) measures obtained using B&D term weights and those obtained by a baseline using BM25 weights. Given either 10 or 20 relevance judgments of the top retrieved documents, using the new term weights yields improvement over the baseline for all collections tested. The MAP obtained with the new weights has relative improvement over the baseline by 3.3 to 15.2%, with statistical significance at the 95% confidence level across all four collections.
Topic area: Retrieval algorithms
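The B&D idea in the abstract is to estimate a local probability of relevance for each query-term occurrence via logistic regression, then boost occurrences that look relevant and discount the rest. A minimal sketch of that shape (the feature set, coefficients, and the exact boost/discount combination rule are assumptions, not the paper's model):

```python
import math

def logistic(z):
    """Standard logistic (sigmoid) function."""
    return 1.0 / (1.0 + math.exp(-z))

def bd_term_weight(features, coef, intercept, boost, discount):
    """Boost-and-Discount style weight for one query-term occurrence.

    features: context features of the occurrence (e.g. counts of
    supporting words in its document-context window) -- illustrative.
    A logistic regression gives the local probability of relevance p;
    the occurrence is then boosted in proportion to p and discounted
    in proportion to (1 - p). The linear combination here is a sketch,
    not the authors' exact formula."""
    z = intercept + sum(c * f for c, f in zip(coef, features))
    p = logistic(z)
    return boost * p - discount * (1.0 - p)
```

With equal boost and discount, an occurrence whose context is uninformative (p = 0.5) contributes nothing, while strongly supportive contexts push the weight positive.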

4. Dang, E.K.F. ; Luk, R.W.P. ; Ho, K.S. ; Chan, S.C.F. ; Lee, D.L.: A new measure of clustering effectiveness : algorithms and experimental studies.
In: Journal of the American Society for Information Science and Technology. 59(2008) no.3, pp.390-406.
Abstract: We propose a new optimal clustering effectiveness measure, called CS1, based on a combination of clusters rather than selecting a single optimal cluster as in the traditional MK1 measure. For hierarchical clustering, we present an algorithm to compute CS1, defined by seeking the optimal combinations of disjoint clusters obtained by cutting the hierarchical structure at a certain similarity level. By reformulating the optimization to a 0-1 linear fractional programming problem, we demonstrate that an exact solution can be obtained by a linear-time algorithm. We further discuss how our approach can be generalized to more general problems involving overlapping clusters, and we show how optimal estimates can be obtained by greedy algorithms.
Topic area: Automatic classification
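The abstract mentions greedy estimates for cluster-combination objectives. A hedged illustration of that idea with a simple fractional objective, precision of the selected clusters (the actual CS1 measure and the paper's exact algorithms differ; this only shows the greedy-selection pattern):

```python
def greedy_cluster_combination(clusters):
    """Greedily select disjoint clusters to maximize a fractional
    objective: (total relevant docs) / (total docs selected).

    clusters: list of (n_relevant, size) pairs, one per candidate
    cluster. Illustrative stand-in objective, not the CS1 measure.
    Clusters are considered in decreasing order of their own ratio,
    and one is added only if it improves the running objective."""
    chosen, rel, tot = [], 0, 0
    for r, s in sorted(clusters, key=lambda c: c[0] / c[1], reverse=True):
        if tot == 0 or (rel + r) / (tot + s) > rel / tot:
            chosen.append((r, s))
            rel += r
            tot += s
    return chosen, (rel / tot if tot else 0.0)
```

An exact solution of the underlying 0-1 linear fractional program (as in the paper's linear-time algorithm for the disjoint case) would guarantee optimality; the greedy pass only provides an estimate.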