This database contains over 40,000 documents on topics from the areas of descriptive cataloguing – subject indexing – information retrieval.
© 2015 W. Gödert, TH Köln, Institut für Informationswissenschaft / Powered by litecat, BIS Oldenburg (as of 15 June 2019)
1. Frei, H.P. ; Stieger, D.: The use of semantic links in hypertext information retrieval.
In: Information processing and management. 31(1995) no.1, S.1-13.
Abstract: It is shown how the semantic content of hypertext links can be used for retrieval purposes. Indexing and retrieval algorithms are presented that exploit the link content in addition to the content of the nodes. The results of some retrieval experiments in a hypertext test collection are presented.
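The idea the abstract describes – letting typed links contribute to a node's retrieval score in addition to the node's own content – can be sketched roughly as follows. This is an illustrative sketch only; the data layout, the `link_weights` table, and the propagation scheme are assumptions, not the paper's actual algorithms.

```python
def score_nodes(nodes, links, query, link_weights, alpha=0.5):
    """Score hypertext nodes by own content plus semantically linked content.

    nodes: {node_id: set of index terms}
    links: list of (src, dst, link_type) triples
    link_weights: {link_type: weight} - how much each link semantics counts
    alpha: damping factor for propagated scores (assumed parameter)
    """
    def overlap(terms):
        # fraction of query terms the node's content covers
        return len(terms & query) / len(query) if query else 0.0

    base = {n: overlap(t) for n, t in nodes.items()}
    scores = dict(base)
    for src, dst, ltype in links:
        w = link_weights.get(ltype, 0.0)
        # let part of a neighbour's content score flow back along the link
        scores[src] += alpha * w * base[dst]
    return scores
```

For example, a node that matches no query term itself can still be retrieved because an "elaborates" link points to a node that does match.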
2. Qiu, Y. ; Frei, H.P.: Concept based query expansion.
In: SIGIR forum. 1993, Special issue, S.160-169.
Abstract: Presentation of a probabilistic query expansion based on an automatically constructed similarity thesaurus. Such thesauri reflect the domain knowledge of their origin collection.
Topic: Semantic context in indexing and retrieval
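Query expansion over an automatically constructed similarity thesaurus can be illustrated with a minimal sketch: represent each term by the documents it occurs in, take cosine similarity between term profiles as the thesaurus, and add the terms most similar to the query. All names and the scoring scheme here are assumptions for illustration, not the probabilistic model of the paper.

```python
from collections import defaultdict
import math

def term_vectors(docs):
    """Map each term to its document-occurrence vector (its 'profile')."""
    vecs = defaultdict(lambda: defaultdict(float))
    for d, doc in enumerate(docs):
        for t in doc:
            vecs[t][d] += 1.0
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts)."""
    num = sum(u[k] * v[k] for k in set(u) & set(v))
    den = math.sqrt(sum(x * x for x in u.values())) * \
          math.sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0

def expand_query(query, docs, k=2):
    """Add the k collection terms most similar to the query terms."""
    vecs = term_vectors(docs)
    scores = {}
    for t, v in vecs.items():
        if t in query:
            continue
        scores[t] = sum(cosine(vecs[q], v) for q in query if q in vecs)
    extra = sorted(scores, key=scores.get, reverse=True)[:k]
    return list(query) + extra
```

Because the thesaurus is built from co-occurrence in the collection itself, the expansion terms reflect the domain knowledge of that collection, as the abstract notes.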
3. Frei, H.P. ; Meienberg, S.: Evaluating weighted search terms as Boolean queries.
In: Information retrieval: GI/GMD-Workshop, Darmstadt, 23.-24.6.1991: Proceedings. Ed.: N. Fuhr. Berlin : Springer, 1991. S.11-22.
Abstract: A method for automatically generating Boolean queries from weighted search terms is described. The method is intended to exploit traditional Boolean information retrieval systems in a more user-friendly way. The principal advantage of the method we present is that its computational complexity is low in comparison to many other known methods.
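One naive way to turn weighted search terms into a Boolean query – AND the high-weight terms and OR in the weaker ones – can be sketched as follows. This is not the authors' low-complexity algorithm, only a minimal illustration of the problem the paper addresses; the threshold and the clause structure are assumptions.

```python
def weighted_to_boolean(weighted_terms, threshold=0.5):
    """Build a Boolean query string from (term, weight) pairs.

    Terms with weight >= threshold become mandatory (AND);
    the remaining terms are joined as alternatives (OR).
    """
    strong = [t for t, w in weighted_terms if w >= threshold]
    weak = [t for t, w in weighted_terms if w < threshold]
    query = " AND ".join(strong)
    if weak:
        alts = " OR ".join(weak)
        query = f"({query}) AND ({alts})" if query else alts
    return query
```

A real generator would have to choose clause boundaries so that the Boolean result set approximates the ranking the weights imply, which is where the complexity trade-off discussed in the abstract arises.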
4. Frei, H.P. ; Meienberg, S. ; Schäuble, P.: The perils of interpreting recall and precision values.
In: Information retrieval: GI/GMD-Workshop, Darmstadt, 23.-24.6.1991: Proceedings. Ed.: N. Fuhr. Berlin : Springer, 1991. S.1-10.
Abstract: The traditional recall and precision measures are inappropriate when retrieval algorithms that retrieve information from Wide Area Networks are evaluated. The principal reason is that information available in WANs is dynamic and its size is orders of magnitude greater than the size of the usual test collections. To overcome these problems, a new effectiveness measure has been developed, which we call the 'usefulness measure'.
5. Frei, H.P. ; Schäuble, P.: Determining the effectiveness of retrieval algorithms.
In: Information processing and management. 27(1991) no.2/3, S.153-164.
Abstract: A new effectiveness measure ('usefulness measure') is proposed to circumvent the problems associated with the classical recall and precision measures. It is difficult to evaluate systems that filter extremely dynamic information; the determination of all relevant documents in a real-life collection is hardly affordable, and the specification of binary relevance assessments is often problematic. The new measure relies on a statistical approach with which two retrieval algorithms are compared. In contrast to the classical recall and precision measures, the new measure requires only relative judgments, and the reply of the retrieval system is compared directly with the information need of the user rather than with the query. The new measure has the added ability to determine an error probability that indicates how stable the usefulness measure is. Using a test collection of abstracts from CACM, it is shown that our new measure is also capable of disclosing the effect of manually assigned descriptors and yields results similar to those of the traditional recall and precision measures.
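The core idea of the abstract – comparing two retrieval algorithms against relative user judgments and attaching an error probability – can be sketched as follows. This is an illustrative reading, not the paper's exact formulation: the preference pairs, the scoring, and the sign-test error probability are all assumptions made for the example.

```python
from math import comb

def usefulness(prefs, rank_a, rank_b):
    """Compare two rankings against relative user judgments.

    prefs: list of (better, worse) pairs - the user prefers 'better'
    rank_a, rank_b: ranked document lists produced by the two algorithms
    Returns (score, p): score > 0 favours rank_a; p is a two-sided
    sign-test probability over the discordant pairs (stability estimate).
    """
    pos_a = {d: i for i, d in enumerate(rank_a)}
    pos_b = {d: i for i, d in enumerate(rank_b)}
    a_only = b_only = 0
    for better, worse in prefs:
        ok_a = pos_a[better] < pos_a[worse]   # rank_a agrees with the user
        ok_b = pos_b[better] < pos_b[worse]   # rank_b agrees with the user
        if ok_a and not ok_b:
            a_only += 1
        elif ok_b and not ok_a:
            b_only += 1
    score = (a_only - b_only) / len(prefs) if prefs else 0.0
    n = a_only + b_only
    if n == 0:
        p = 1.0
    else:
        # probability of a split at least this lopsided under the null
        k = max(a_only, b_only)
        p = min(1.0, sum(comb(n, i) for i in range(k, n + 1)) / 2 ** (n - 1))
    return score, p
```

Note how only relative judgments (pairwise preferences) are needed, and how the error probability quantifies the stability of the comparison, matching the two properties the abstract highlights.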