Literatur zur Informationserschließung
This database contains over 40,000 documents on topics from the areas of descriptive cataloguing – subject indexing – information retrieval.
© 2015 W. Gödert, TH Köln, Institut für Informationswissenschaft / Powered by litecat, BIS Oldenburg
(As of: 28 April 2022)
Search results
Results 1–12 of 12
-
1. Rasmussen, E.: Access models.
In: Interactive information seeking, behaviour and retrieval. Eds.: Ruthven, I. u. D. Kelly. London : Facet Publ., 2011. S.95-111.
-
2. Kim, S. ; Rasmussen, E.: Characteristics of tissue-centric biomedical researchers using a survey and cluster analysis.
In: Journal of the American Society for Information Science and Technology. 59(2008) no.8, S.1210-1223.
Abstract: The objective of this study was to characterize the types of tissue-centric users based on tissue use, requirements, and their job or work-related variables at the University of Pittsburgh Medical Center (UPMC), Pittsburgh, PA. A self-reporting questionnaire was distributed to biomedical researchers at the UPMC. Descriptive and cluster analyses were performed to identify and characterize the complex types of tissue-based researchers. A total of 62 respondents completed the survey, and two clusters were identified based on all variables. Two distinct groups of tissue-centric users made direct use of tissue samples for their research as well as associated information, while a third group of indirect users required only the associated information. The study shows that tissue-centric users were composed of various types. These types were distinguished in terms of tissue use and data requirements, as well as by their work or research-related activities.
-
3. Choi, Y. ; Rasmussen, E.M.: Searching for images : the analysis of users' queries for image retrieval in American history.
In: Journal of the American Society for Information Science and Technology. 54(2003) no.6, S.497-510.
Abstract: Choi and Rasmussen collect queries to the Library of Congress's American Memory photo archive from 48 scholars in American history by way of interviews and pre- and post-search questionnaires. Their interest is in the types of information need common in the visual domain, and the categories of terms most often used or indicated as appropriate for the description of image contents. Each search resulted in the provision of 20 items for evaluation by the searcher. Terms in queries and acceptable retrievals were categorized by a who/what/when/where faceted classification, and queries were assigned to four needs categories: specific, general, abstract, and subjective. For all 38 requests at least two of the three analysts assigned the same one of the four categories, and in 19 cases all three agreed. General/nameable needs accounted for 60.5%, specific needs for 26.3%, general/abstract for 7.9%, and subjective needs for 5.3%. The facet analysis indicated that most content was of the form person/thing or event/condition, limited by geography or time.
Subject area: Search tactics
Form covered: Images
-
4. Choi, Y. ; Rasmussen, E.M.: Users' relevance criteria in image retrieval in American history.
In: Information processing and management. 38(2002) no.5, S.695-726.
Abstract: A large number of digital images are available and accessible due to recent advances in technology. Since image retrieval systems are designed to meet user information needs, it seems apparent that image retrieval system design and implementation should take into account user-based aspects such as information use patterns and relevance judgments. However, little is known about what criteria users employ when making relevance judgments and which textual representations of the image help them make relevance judgments in their situational context. Thus, this study attempted to investigate the criteria which image users apply when making judgments about the relevance of an image. This research was built on prior work by Barry, Schamber and others which examined relevance criteria for textual and non-textual documents, exploring the extent to which these criteria apply to visual documents and the extent to which new and different criteria apply. Data were collected from unstructured interviews and questionnaires. Quantitative statistical methods were employed to analyze the importance of relevance criteria to see how much each criterion affected the user's judgments. The study involved 38 faculty and graduate students of American history in 1999 in a local setting, using the Library of Congress American memory photo archives. The study found that the user's perception of topicality was still the most important factor across the information-seeking stages. However, the users decided on retrieved items according to a variety of criteria other than topicality. Image quality and clarity was important. Users also searched for relevant images on the basis of title, date, subject descriptors, and notes provided. The conclusions of this study will be useful in image database design to assist users in conducting image searches. This study can be helpful to future relevance studies in information system design and evaluation.
Note: Contribution in a special issue: "Issues of context in information retrieval (IR)"
Discipline: History
Form covered: Images
-
5. Rasmussen, E.M.: Indexing and retrieval for the Web.
In: Annual review of information science and technology. 37(2003), S.91-126.
Abstract: The introduction and growth of the World Wide Web (WWW, or Web) have resulted in a profound change in the way individuals and organizations access information. In terms of volume, nature, and accessibility, the characteristics of electronic information are significantly different from those of even five or six years ago. Control of, and access to, this flood of information rely heavily on automated techniques for indexing and retrieval. According to Gudivada, Raghavan, Grosky, and Kasanagottu (1997, p. 58), "The ability to search and retrieve information from the Web efficiently and effectively is an enabling technology for realizing its full potential." Almost 93 percent of those surveyed consider the Web an "indispensable" Internet technology, second only to e-mail (Graphics, Visualization & Usability Center, 1998). Although there are other ways of locating information on the Web (browsing or following directory structures), 85 percent of users identify Web pages by means of a search engine (Graphics, Visualization & Usability Center, 1998). A more recent study conducted by the Stanford Institute for the Quantitative Study of Society confirms the finding that searching for information is second only to e-mail as an Internet activity (Nie & Erbring, 2000, online). In fact, Nie and Erbring conclude, "... the Internet today is a giant public library with a decidedly commercial tilt. The most widespread use of the Internet today is as an information search utility for products, travel, hobbies, and general information. Virtually all users interviewed responded that they engaged in one or more of these information gathering activities." ; Techniques for automated indexing and information retrieval (IR) have been developed, tested, and refined over the past 40 years, and are well documented (see, for example, Agosti & Smeaton, 1996; Baeza-Yates & Ribeiro-Neto, 1999a; Frakes & Baeza-Yates, 1992; Korfhage, 1997; Salton, 1989; Witten, Moffat, & Bell, 1999). With the introduction of the Web, and the capability to index and retrieve via search engines, these techniques have been extended to a new environment. They have been adopted, altered, and in some cases extended to include new methods. "In short, search engines are indispensable for searching the Web, they employ a variety of relatively advanced IR techniques, and there are some peculiar aspects of search engines that make searching the Web different than more conventional information retrieval" (Gordon & Pathak, 1999, p. 145). The environment for information retrieval on the World Wide Web differs from that of "conventional" information retrieval in a number of fundamental ways. The collection is very large and changes continuously, with pages being added, deleted, and altered. Wide variability in the size, structure, focus, quality, and usefulness of documents makes Web documents much more heterogeneous than a typical electronic document collection. The wide variety of document types includes images, video, audio, and scripts, as well as many different document languages. Duplication of documents and sites is common. Documents are interconnected through networks of hyperlinks. Because of the size and dynamic nature of the Web, preprocessing all documents requires considerable resources and is often not feasible, certainly not on the frequent basis required to ensure currency. Query lengths are usually much shorter than in other environments (only a few words), and user behavior differs from that in other environments.
These differences make the Web a novel environment for information retrieval (Baeza-Yates & Ribeiro-Neto, 1999b; Bharat & Henzinger, 1998; Huang, 2000).
Subject area: Literature review ; Internet ; Automatic indexing
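The review above surveys automated indexing and retrieval techniques as applied to the Web. As a minimal, hedged illustration of the inverted-index idea underlying such techniques, here is a Python sketch; the toy pages, their ids, and the function names are invented for the example and are not taken from the review.

```python
from collections import defaultdict

def build_inverted_index(pages):
    """Map each term to the set of page ids that contain it."""
    index = defaultdict(set)
    for page_id, text in pages.items():
        for term in text.lower().split():
            index[term].add(page_id)
    return index

def and_query(index, terms):
    """Return page ids containing all query terms (Boolean AND)."""
    postings = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*postings) if postings else set()

# toy "crawled" pages (invented for the example)
pages = {
    "p1": "web search engines index pages",
    "p2": "information retrieval on the web",
    "p3": "email remains the most used internet service",
}
index = build_inverted_index(pages)
print(and_query(index, ["web", "retrieval"]))  # -> {'p2'}
```

A real Web-scale engine would add tokenization, stemming, ranking, and link analysis on top of this basic structure, which is the territory the review covers.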
-
6. Rasmussen, E.: In memoriam : Robert R. Korfhage.
In: Journal of the American Society for Information Science. 50(1999) no.4, S.288.
Subject area: Biographical accounts
-
7. Chen, H.-l. ; Rasmussen, E.M.: Intellectual access to images.
In: Library trends. 48(1999) no.2, S.291-302.
Abstract: Convenient image capture techniques, inexpensive storage, and widely available dissemination methods have made digital images a convenient and easily available information format. This increased availability of images is accompanied by a need for solutions to the problems inherent in indexing them for retrieval. Unfortunately, to date, very little information has been available on why users search for images, how they intend to use them, or how they pose their queries, though this situation is being remedied as a body of research begins to accumulate. New image indexing methods are also being explored. Traditional concept-based indexing uses controlled vocabulary or natural language to express what an image is or what it is about. Newly developed content-based techniques rely on a pixel-level interpretation of the data content of the image. Concept-based indexing has the advantage of providing a higher-level analysis of the image content but is expensive to implement and suffers from a lack of interindexer consistency due to the subjective nature of image interpretation. Content-based indexing is relatively inexpensive to implement but provides a relatively low level of interpretation of the image except in fairly narrow and applied domains. To date, very little is known about the usefulness of the access provided by content-based systems, and more work needs to be done on user needs and satisfaction with these systems. An examination of a number of image database systems shows the range of techniques that have been used to provide intellectual access to image collections.
Form covered: Images
-
8. Rasmussen, E.M.: Indexing images.
In: Annual review of information science and technology. 32(1997), S.169-196.
Abstract: State-of-the-art review of methods available for accessing collections of digital images by means of manual and automatic indexing. Distinguishes between concept-based indexing, in which images, and the objects they represent, are manually identified and described in terms of what they are and what they represent, and content-based indexing, in which features of images (such as colours) are automatically identified and extracted. The main discussion is arranged in 6 sections: studies of image systems and their use; approaches to indexing images; image attributes; concept-based indexing; content-based indexing; and browsing in image retrieval. The performance of current image retrieval systems is largely untested, and they still lack an extensive history and tradition of evaluation and standards for assessing performance. Concludes that there is a significant amount of research to be done before image retrieval systems can reach the state of development of text retrieval systems.
Subject area: Literature review
Form covered: Images
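The abstract above contrasts concept-based indexing with content-based indexing, in which low-level features such as colours are extracted automatically. The following Python sketch shows one such feature, a coarse colour histogram; it assumes the Pillow library is available, and the file name is a placeholder, not anything referenced in the review.

```python
from PIL import Image  # Pillow is assumed to be installed

def colour_histogram(path, bins_per_channel=4):
    """Quantize each RGB channel into a few bins and count pixels per bin,
    yielding a simple content-based feature vector for the image."""
    img = Image.open(path).convert("RGB").resize((64, 64))
    step = 256 // bins_per_channel
    hist = [0] * (bins_per_channel ** 3)
    for r, g, b in img.getdata():
        idx = (r // step) * bins_per_channel ** 2 + (g // step) * bins_per_channel + (b // step)
        hist[idx] += 1
    total = sum(hist)
    return [count / total for count in hist]  # normalized histogram

# "sample.jpg" is a placeholder file name
# features = colour_histogram("sample.jpg")
```

Histograms like this can then be compared (for example by Euclidean distance) to find visually similar images, which is the kind of access content-based systems provide.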
-
9. Beaulieu, M. ; Robertson, S. ; Rasmussen, E.: Evaluating interactive systems in TREC.
In: Journal of the American Society for Information Science. 47(1996) no.1, S.85-94.
Abstract: The TREC experiments were designed to allow large-scale laboratory testing of information retrieval techniques. As the experiments have progressed, groups within TREC have become increasingly interested in finding ways to allow user interaction without invalidating the experimental design. The development of an 'interactive track' within TREC to accommodate user interaction has required some modifications in the way the retrieval task is designed. In particular there is a need to simulate a realistic interactive searching task within a laboratory environment. Through successive interactive studies in TREC, the Okapi team at City University London has identified methodological issues relevant to this process. A diagnostic experiment was conducted as a follow-up to TREC searches which attempted to isolate the human and automatic contributions to query formulation and retrieval performance.
Subject area: Retrieval studies
Object: TREC
-
10. McLean, S. ; Spring, M.B. ; Rasmussen, E. ; Williams, J.G.: Online image databases : usability and performance.
In: Electronic library. 13(1995) no.1, S.27-41.
Abstract: The Promenade image retrieval system is described in terms of its design, development and architecture. Design, development and implementation issues are discussed in terms of efficiency and effectiveness. A preliminary usability study is presented, and the data resulting from the preliminary study are analysed and discussed. Efficiency in terms of response time due to network delays, database processing, application processing, and image characteristics and display is discussed. Response time results from 40 queries made to the image database are presented and discussed. The results of these studies demonstrate where improvements in the system need to be made in order to improve usability and response time.
Subject area: Document management
Form covered: Images
-
11. Rasmussen, E.M.: Parallel information processing.
In: Annual review of information science and technology. 27(1992), S.99-130. Medford, NJ : Learned Information, 1992.
Abstract: Focuses on the application of parallel processing to text, primarily documents and document surrogates. Research on parallel processing of text has developed in 2 areas: a hardware approach involving the development of special-purpose machines for text processing; and a software approach in which data structures and algorithms are developed for text searching using general-purpose parallel processors.
Subject area: Literature review
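The review above distinguishes a hardware approach from a software approach in which text-searching algorithms run on general-purpose parallel processors. As a rough, hedged illustration of the software approach only, the Python sketch below partitions a toy collection and scans the partitions in parallel; the documents, partition size, and worker count are invented for the example and do not reflect any specific system discussed in the review.

```python
from multiprocessing import Pool

def search_chunk(args):
    """Return ids of documents in one partition that contain the query term."""
    query, chunk = args
    return [doc_id for doc_id, text in chunk if query in text.lower()]

def parallel_search(query, documents, workers=4):
    """Split the collection into partitions and scan them in parallel."""
    items = list(documents.items())
    size = max(1, len(items) // workers)
    chunks = [items[i:i + size] for i in range(0, len(items), size)]
    with Pool(workers) as pool:
        results = pool.map(search_chunk, [(query.lower(), c) for c in chunks])
    return [doc_id for part in results for doc_id in part]

if __name__ == "__main__":
    # toy collection (invented for the example)
    docs = {
        "d1": "parallel algorithms for text searching",
        "d2": "special purpose hardware for text processing",
        "d3": "parallel processors and data structures",
        "d4": "document surrogates and retrieval",
    }
    print(parallel_search("parallel", docs))  # -> ['d1', 'd3']
```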
-
12. Rasmussen, E.: Clustering algorithms.
In: Information retrieval: data structures and algorithms. Ed.: W.B. Frakes u. R. Baeza-Yates. Englewood Cliffs, NJ : Prentice Hall, 1992. S.419-442.
Abstract: Cluster analysis is a technique for multivariate analysis that assigns items to automatically created groups based on a calculation of the degree of association between items and groups. In the information retrieval field, cluster analysis has been used to create groups of documents with the goal of improving the efficiency and effectiveness of retrieval, or to determine the structure of the literature of a field. The terms in a document collection can also be clustered to show their relationships. The two main types of cluster analysis methods are the nonhierarchical, which divide a data set of N items into M clusters, and the hierarchical, which produce a nested data set in which pairs of items or clusters are successively linked. The nonhierarchical methods, such as the single-pass and reallocation methods, are heuristic in nature and require less computation than the hierarchical methods. However, the hierarchical methods have usually been preferred for cluster-based document retrieval. The commonly used hierarchical methods, such as single link, complete link, group average link, and Ward's method, have high space and time requirements. In order to cluster the large data sets with high dimensionality that are typically found in IR applications, good algorithms (ideally O(N²) time, O(N) space) must be found. Examples are the SLINK and minimal spanning tree algorithms for the single link method, the Voorhees algorithm for group average link, and the reciprocal nearest neighbor algorithm for Ward's method.
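As a small illustration of the single-pass method mentioned in the abstract, the following Python sketch assigns toy term-frequency vectors to clusters using cosine similarity. The documents and the similarity threshold are invented for the example; production systems would use the optimized algorithms named above (SLINK, minimal spanning tree, reciprocal nearest neighbors) rather than this naive version.

```python
import math

def cosine(a, b):
    """Cosine similarity between two term-frequency dicts."""
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def single_pass_cluster(docs, threshold=0.3):
    """Naive single-pass clustering: assign each document to the most similar
    existing cluster centroid above the threshold, else start a new cluster."""
    clusters = []
    for i, doc in enumerate(docs):
        best, best_sim = None, threshold
        for c in clusters:
            sim = cosine(doc, c["centroid"])
            if sim >= best_sim:
                best, best_sim = c, sim
        if best is None:
            clusters.append({"centroid": dict(doc), "members": [i]})
        else:
            best["members"].append(i)
            # update the centroid by summing term frequencies (simplistic)
            for t, f in doc.items():
                best["centroid"][t] = best["centroid"].get(t, 0) + f
    return clusters

# toy document collection as term-frequency vectors (invented data)
docs = [
    {"cluster": 2, "retrieval": 1},
    {"cluster": 1, "analysis": 2},
    {"image": 2, "indexing": 1},
    {"image": 1, "retrieval": 2},
]
for c in single_pass_cluster(docs):
    print(c["members"])  # -> [0, 1] and [2, 3]
```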