Search (32 results, page 1 of 2)

  • theme_ss:"Literaturübersicht"
  1. Enser, P.G.B.: Visual image retrieval (2008) 0.02
    0.0243019 = product of:
      0.0486038 = sum of:
        0.0486038 = product of:
          0.0972076 = sum of:
            0.0972076 = weight(_text_:22 in 3281) [ClassicSimilarity], result of:
              0.0972076 = score(doc=3281,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.61904186 = fieldWeight in 3281, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=3281)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 1.2012 13:01:26
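    The explain trees shown with each result are Lucene ClassicSimilarity (TF-IDF) breakdowns. As a minimal sketch, assuming Lucene's classic definitions tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)), the following Python lines reproduce the 0.0243019 score of result 1 from the factors listed in its tree; the same arithmetic, with each document's own freq and fieldNorm, accounts for the other scores on this page.

      import math

      # Constants copied from the explain tree of result 1 (doc 3281).
      freq, doc_freq, max_docs = 2.0, 3622, 44218
      query_norm, field_norm = 0.044842023, 0.125

      tf = math.sqrt(freq)                             # 1.4142135 = tf(freq=2.0)
      idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 3.5018296 = idf(docFreq=3622, maxDocs=44218)
      query_weight = idf * query_norm                  # 0.15702912 = queryWeight
      field_weight = tf * idf * field_norm             # 0.61904186 = fieldWeight
      weight = query_weight * field_weight             # 0.0972076 = weight(_text_:22 in 3281)
      print(weight * 0.5 * 0.5)                        # 0.0243019 after the two coord(1/2) factors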
  2. Morris, S.A.: Mapping research specialties (2008) 0.02
    0.0243019 = product of:
      0.0486038 = sum of:
        0.0486038 = product of:
          0.0972076 = sum of:
            0.0972076 = weight(_text_:22 in 3962) [ClassicSimilarity], result of:
              0.0972076 = score(doc=3962,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.61904186 = fieldWeight in 3962, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=3962)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    13. 7.2008 9:30:22
  3. Fallis, D.: Social epistemology and information science (2006) 0.02
    0.0243019 = product of:
      0.0486038 = sum of:
        0.0486038 = product of:
          0.0972076 = sum of:
            0.0972076 = weight(_text_:22 in 4368) [ClassicSimilarity], result of:
              0.0972076 = score(doc=4368,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.61904186 = fieldWeight in 4368, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=4368)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    13. 7.2008 19:22:28
  4. Nicolaisen, J.: Citation analysis (2007) 0.02
    0.0243019 = product of:
      0.0486038 = sum of:
        0.0486038 = product of:
          0.0972076 = sum of:
            0.0972076 = weight(_text_:22 in 6091) [ClassicSimilarity], result of:
              0.0972076 = score(doc=6091,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.61904186 = fieldWeight in 6091, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=6091)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    13. 7.2008 19:53:22
  5. Metz, A.: Community service : a bibliography (1996) 0.02
    0.0243019 = product of:
      0.0486038 = sum of:
        0.0486038 = product of:
          0.0972076 = sum of:
            0.0972076 = weight(_text_:22 in 5341) [ClassicSimilarity], result of:
              0.0972076 = score(doc=5341,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.61904186 = fieldWeight in 5341, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=5341)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    17.10.1996 14:22:33
  6. Belkin, N.J.; Croft, W.B.: Retrieval techniques (1987) 0.02
    0.0243019 = product of:
      0.0486038 = sum of:
        0.0486038 = product of:
          0.0972076 = sum of:
            0.0972076 = weight(_text_:22 in 334) [ClassicSimilarity], result of:
              0.0972076 = score(doc=334,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.61904186 = fieldWeight in 334, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=334)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Annual review of information science and technology. 22(1987), S.109-145
  7. Smith, L.C.: Artificial intelligence and information retrieval (1987) 0.02
    0.0243019 = product of:
      0.0486038 = sum of:
        0.0486038 = product of:
          0.0972076 = sum of:
            0.0972076 = weight(_text_:22 in 335) [ClassicSimilarity], result of:
              0.0972076 = score(doc=335,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.61904186 = fieldWeight in 335, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=335)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Annual review of information science and technology. 22(1987), S.41-77
  8. Warner, A.J.: Natural language processing (1987) 0.02
    0.0243019 = product of:
      0.0486038 = sum of:
        0.0486038 = product of:
          0.0972076 = sum of:
            0.0972076 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.0972076 = score(doc=337,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  9. Rogers, Y.: New theoretical approaches for human-computer interaction (2003) 0.02
    0.02359629 = product of:
      0.04719258 = sum of:
        0.04719258 = product of:
          0.09438516 = sum of:
            0.09438516 = weight(_text_:e.g in 4270) [ClassicSimilarity], result of:
              0.09438516 = score(doc=4270,freq=8.0), product of:
                0.23393378 = queryWeight, product of:
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.044842023 = queryNorm
                0.40346956 = fieldWeight in 4270, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4270)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    "Theory weary, theory leery, why can't I be theory cheery?" (Erickson, 2002, p. 269). The field of human-computer interaction (HCI) is rapidly expanding. Alongside the extensive technological developments that are taking place, a profusion of new theories, methods, and concerns has been imported into the field from a range of disciplines and contexts. An extensive critique of recent theoretical developments is presented here together with an overview of HCI practice. A consequence of bringing new theories into the field has been much insightful explication of HCI phenomena and also a broadening of the field's discourse. However, these theoretically based approaches have had limited impact an the practice of interaction design. This chapter discusses why this is so and suggests that different kinds of mechanisms are needed that will enable both designers and researchers to better articulate and theoretically ground the challenges facing them today. Human-computer interaction is bursting at the seams. Its mission, goals, and methods, well established in the '80s, have all greatly expanded to the point that "HCI is now effectively a boundless domain" (Barnard, May, Duke, & Duce, 2000, p. 221). Everything is in a state of flux: The theory driving research is changing, a flurry of new concepts is emerging, the domains and type of users being studied are diversifying, many of the ways of doing design are new, and much of what is being designed is significantly different. Although potentially much is to be gained from such rapid growth, the downside is an increasing lack of direction, structure, and coherence in the field. What was originally a bounded problem space with a clear focus and a small set of methods for designing computer systems that were easier and more efficient to use by a single user is now turning into a diffuse problem space with less clarity in terms of its objects of study, design foci, and investigative methods. Instead, aspirations of overcoming the Digital Divide, by providing universal accessibility, have become major concerns (e.g., Shneiderman, 2002a). The move toward greater openness in the field means that many more topics, areas, and approaches are now considered acceptable in the worlds of research and practice.
    A problem with allowing a field to expand eclectically is that it can easily lose coherence. No one really knows what its purpose is anymore or what criteria to use in assessing its contribution and value to both knowledge and practice. For example, among the many new approaches, ideas, methods, and goals now being proposed, how do we know which are acceptable, reliable, useful, and generalizable? Moreover, how do researchers and designers know which of the many tools and techniques to use when doing design and research? To be able to address these concerns, a young field in a state of flux (as is HCI) needs to take stock and begin to reflect on the changes that are happening. The purpose of this chapter is to assess and reflect on the role of theory in contemporary HCI and the extent to which it is used in design practice. Over the last ten years, a range of new theories has been imported into the field. A key question is whether such attempts have been productive in terms of "knowledge transfer." Here knowledge transfer means the translation of research findings (e.g., theory, empirical results, descriptive accounts, cognitive models) from one discipline (e.g., cognitive psychology, sociology) into another (e.g., human-computer interaction, computer supported cooperative work).
  10. Dumais, S.T.: Latent semantic analysis (2003) 0.02
    0.022612676 = product of:
      0.045225352 = sum of:
        0.045225352 = product of:
          0.090450704 = sum of:
            0.090450704 = weight(_text_:e.g in 2462) [ClassicSimilarity], result of:
              0.090450704 = score(doc=2462,freq=10.0), product of:
                0.23393378 = queryWeight, product of:
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.044842023 = queryNorm
                0.3866509 = fieldWeight in 2462, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=2462)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Latent Semantic Analysis (LSA) was first introduced in Dumais, Furnas, Landauer, and Deerwester (1988) and Deerwester, Dumais, Furnas, Landauer, and Harshman (1990) as a technique for improving information retrieval. The key insight in LSA was to reduce the dimensionality of the information retrieval problem. Most approaches to retrieving information depend on a lexical match between words in the user's query and those in documents. Indeed, this lexical matching is the way that the popular Web and enterprise search engines work. Such systems are, however, far from ideal. We are all aware of the tremendous amount of irrelevant information that is retrieved when searching. We also fail to find much of the existing relevant material. LSA was designed to address these retrieval problems, using dimension reduction techniques. Fundamental characteristics of human word usage underlie these retrieval failures. People use a wide variety of words to describe the same object or concept (synonymy). Furnas, Landauer, Gomez, and Dumais (1987) showed that people generate the same keyword to describe well-known objects only 20 percent of the time. Poor agreement was also observed in studies of inter-indexer consistency (e.g., Chan, 1989; Tarr & Borko, 1974), in the generation of search terms (e.g., Fidel, 1985; Bates, 1986), and in the generation of hypertext links (Furner, Ellis, & Willett, 1999). Because searchers and authors often use different words, relevant materials are missed. Someone looking for documents on "human-computer interaction" will not find articles that use only the phrase "man-machine studies" or "human factors." People also use the same word to refer to different things (polysemy). Words like "saturn," "jaguar," or "chip" have several different meanings. A short query like "saturn" will thus return many irrelevant documents. The query "Saturn car" will return fewer irrelevant items, but it will miss some documents that use only the terms "Saturn automobile." In searching, there is a constant tension between being overly specific and missing relevant information, and being more general and returning irrelevant information.
    A number of approaches have been developed in information retrieval to address the problems caused by the variability in word usage. Stemming is a popular technique used to normalize some kinds of surface-level variability by converting words to their morphological root. For example, the words "retrieve," "retrieval," "retrieved," and "retrieving" would all be converted to their root form, "retrieve." The root form is used for both document and query processing. Stemming sometimes helps retrieval, although not much (Harman, 1991; Hull, 1996). And it does not address cases where related words are not morphologically related (e.g., physician and doctor). Controlled vocabularies have also been used to limit variability by requiring that query and index terms belong to a pre-defined set of terms. Documents are indexed by a specified or authorized list of subject headings or index terms, called the controlled vocabulary. Library of Congress Subject Headings, Medical Subject Headings, Association for Computing Machinery (ACM) keywords, and Yellow Pages headings are examples of controlled vocabularies. If searchers can find the right controlled vocabulary terms, they do not have to think of all the morphologically related or synonymous terms that authors might have used. However, assigning controlled vocabulary terms in a consistent and thorough manner is a time-consuming and usually manual process. A good deal of research has been published about the effectiveness of controlled vocabulary indexing compared to full text indexing (e.g., Bates, 1998; Lancaster, 1986; Svenonius, 1986). The combination of both full text and controlled vocabularies is often better than either alone, although the size of the advantage is variable (Lancaster, 1986; Markey, Atherton, & Newton, 1982; Srinivasan, 1996). Richer thesauri have also been used to provide synonyms, generalizations, and specializations of users' search terms (see Srinivasan, 1992, for a review). Controlled vocabularies and thesaurus entries can be generated either manually or by the automatic analysis of large collections of texts.
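    The suffix-stripping idea described above can be sketched in a few lines of Python; this is a deliberately crude illustration with an invented suffix list, not the Porter algorithm that real systems use:

      # Crude suffix stripper: maps "retrieve", "retrieval", "retrieved", and
      # "retrieving" to the common root "retriev", much as a real stemmer would.
      def crude_stem(word: str) -> str:
          for suffix in ("ing", "ed", "al", "es", "s"):
              if word.endswith(suffix) and len(word) - len(suffix) >= 4:
                  word = word[: -len(suffix)]
                  break
          return word[:-1] if word.endswith("e") else word

      print({crude_stem(w) for w in ["retrieve", "retrieval", "retrieved", "retrieving"]})
      # {'retriev'} -- one root form used for both document and query processing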
    With the advent of large-scale collections of full text, statistical approaches are being used more and more to analyze the relationships among terms and documents. LSA takes this approach. LSA induces knowledge about the meanings of documents and words by analyzing large collections of texts. The approach simultaneously models the relationships among documents based on their constituent words, and the relationships between words based on their occurrence in documents. By using fewer dimensions for representation than there are unique words, LSA induces similarities among terms that are useful in solving the information retrieval problems described earlier. LSA is a fully automatic statistical approach to extracting relations among words by means of their contexts of use in documents, passages, or sentences. It makes no use of natural language processing techniques for analyzing morphological, syntactic, or semantic relations. Nor does it use humanly constructed resources like dictionaries, thesauri, lexical reference systems (e.g., WordNet), semantic networks, or other knowledge representations. Its only input is large amounts of text. LSA is an unsupervised learning technique. It starts with a large collection of texts, builds a term-document matrix, and tries to uncover some similarity structures that are useful for information retrieval and related text-analysis problems. Several recent ARIST chapters have focused on text mining and discovery (Benoit, 2002; Solomon, 2002; Trybula, 2000). These chapters provide complementary coverage of the field of text analysis.
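    As a minimal sketch of the dimension-reduction idea (a toy corpus invented for illustration, not an example from the chapter), LSA amounts to a truncated SVD of the term-document matrix:

      import numpy as np

      # Toy corpus: documents 1 and 3 share no words, but each shares a word
      # with document 2; document 4 is unrelated to the rest.
      docs = ["human computer interaction",
              "human machine interface",
              "user interface design",
              "graph theory trees minors"]
      vocab = sorted({w for d in docs for w in d.split()})
      A = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

      def cos(a, b):
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

      # LSA: keep only the k largest dimensions of the SVD of A.
      U, s, Vt = np.linalg.svd(A, full_matrices=False)
      k = 2
      doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # one row per document

      print(cos(A[:, 0], A[:, 2]))             # 0.0 -- no terms in common
      print(cos(doc_vecs[0], doc_vecs[2]))     # ~1.0 -- similar in the reduced space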
  11. Grudin, J.: Human-computer interaction (2011) 0.02
    0.021264162 = product of:
      0.042528324 = sum of:
        0.042528324 = product of:
          0.08505665 = sum of:
            0.08505665 = weight(_text_:22 in 1601) [ClassicSimilarity], result of:
              0.08505665 = score(doc=1601,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.5416616 = fieldWeight in 1601, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1601)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    27.12.2014 18:54:22
  12. Thelwall, M.; Vaughan, L.; Björneborn, L.: Webometrics (2004) 0.02
    0.016854495 = product of:
      0.03370899 = sum of:
        0.03370899 = product of:
          0.06741798 = sum of:
            0.06741798 = weight(_text_:e.g in 4279) [ClassicSimilarity], result of:
              0.06741798 = score(doc=4279,freq=2.0), product of:
                0.23393378 = queryWeight, product of:
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.044842023 = queryNorm
                0.28819257 = fieldWeight in 4279, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4279)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Webometrics, the quantitative study of Web-related phenomena, emerged from the realization that methods originally designed for bibliometric analysis of scientific journal article citation patterns could be applied to the Web, with commercial search engines providing the raw data. Almind and Ingwersen (1997) defined the field and gave it its name. Other pioneers included Rodriguez Gairin (1997) and Aguillo (1998). Larson (1996) undertook exploratory link structure analysis, as did Rousseau (1997). Webometrics encompasses research from fields beyond information science such as communication studies, statistical physics, and computer science. In this review we concentrate on link analysis, but also cover other aspects of webometrics, including Web log file analysis. One theme that runs through this chapter is the messiness of Web data and the need for data cleansing heuristics. The uncontrolled Web creates numerous problems in the interpretation of results, for instance, from the automatic creation or replication of links. The loose connection between top-level domain specifications (e.g., com, edu, and org) and their actual content is also a frustrating problem. For example, many .com sites contain noncommercial content, although com is ostensibly the main commercial top-level domain. Indeed, a skeptical researcher could claim that obstacles of this kind are so great that all Web analyses lack value. As will be seen, one response to this view, a view shared by critics of evaluative bibliometrics, is to demonstrate that Web data correlate significantly with some non-Web data in order to prove that the Web data are not wholly random. A practical response has been to develop increasingly sophisticated data cleansing techniques and multiple data analysis methods.
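    One concrete instance of the bookkeeping described above is classifying link targets by top-level domain; a minimal sketch with hypothetical URLs (a real study would draw its links from crawls or search engine data):

      from urllib.parse import urlparse
      from collections import Counter

      # Hypothetical outlink sample for illustration only.
      links = ["http://example.com/paper.html", "http://cs.example.edu/home",
               "http://archive.example.org/", "http://shop.example.com/cart"]
      tlds = Counter(urlparse(u).hostname.rsplit(".", 1)[-1] for u in links)
      print(tlds)   # com: 2, edu: 1, org: 1 -- but .com content is not always commercial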
  13. Zhu, B.; Chen, H.: Information visualization (2004) 0.02
    0.016685098 = product of:
      0.033370197 = sum of:
        0.033370197 = product of:
          0.06674039 = sum of:
            0.06674039 = weight(_text_:e.g in 4276) [ClassicSimilarity], result of:
              0.06674039 = score(doc=4276,freq=4.0), product of:
                0.23393378 = queryWeight, product of:
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.044842023 = queryNorm
                0.28529608 = fieldWeight in 4276, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4276)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Advanced technology has resulted in the generation of about one million terabytes of information every year. Ninety-nine percent of this is available in digital format (Keim, 2001). More information will be generated in the next three years than was created during all of previous human history (Keim, 2001). Collecting information is no longer a problem, but extracting value from information collections has become progressively more difficult. Various search engines have been developed to make it easier to locate information of interest, but these work well only for a person who has a specific goal and who understands what and how information is stored. This usually is not the case. Visualization was commonly thought of in terms of representing human mental processes (MacEachren, 1991; Miller, 1984). The concept is now associated with the amplification of these mental processes (Card, Mackinlay, & Shneiderman, 1999). Human eyes can process visual cues rapidly, whereas advanced information analysis techniques transform the computer into a powerful means of managing digitized information. Visualization offers a link between these two potent systems, the human eye and the computer (Gershon, Eick, & Card, 1998), helping to identify patterns and to extract insights from large amounts of information. The identification of patterns is important because it may lead to a scientific discovery, an interpretation of clues to solve a crime, the prediction of catastrophic weather, a successful financial investment, or a better understanding of human behavior in a computer-mediated environment. Visualization technology shows considerable promise for increasing the value of large-scale collections of information, as evidenced by several commercial applications of TreeMap (e.g., http://www.smartmoney.com) and Hyperbolic tree (e.g., http://www.inxight.com) to visualize large-scale hierarchical structures. Although the proliferation of visualization technologies dates from the 1990s, when sophisticated hardware and software made increasingly faster generation of graphical objects possible, the role of visual aids in facilitating the construction of mental images has a long history. Visualization has been used to communicate ideas, to monitor trends implicit in data, and to explore large volumes of data for hypothesis generation. Imagine traveling to a strange place without a map, having to memorize physical and chemical properties of an element without Mendeleyev's periodic table, trying to understand the stock market without statistical diagrams, or browsing a collection of documents without interactive visual aids. A collection of information can lose its value simply because of the effort required for exhaustive exploration. Such frustrations can be overcome by visualization.
  14. Rader, H.B.: Library orientation and instruction - 1993 (1994) 0.02
    0.0151886875 = product of:
      0.030377375 = sum of:
        0.030377375 = product of:
          0.06075475 = sum of:
            0.06075475 = weight(_text_:22 in 209) [ClassicSimilarity], result of:
              0.06075475 = score(doc=209,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.38690117 = fieldWeight in 209, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=209)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Reference services review. 22(1994) no.4, S.81-
  15. Yang, K.: Information retrieval on the Web (2004) 0.01
    0.013483594 = product of:
      0.026967188 = sum of:
        0.026967188 = product of:
          0.053934377 = sum of:
            0.053934377 = weight(_text_:e.g in 4278) [ClassicSimilarity], result of:
              0.053934377 = score(doc=4278,freq=2.0), product of:
                0.23393378 = queryWeight, product of:
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.044842023 = queryNorm
                0.23055404 = fieldWeight in 4278, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4278)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    How do we find information on the Web? Although information on the Web is distributed and decentralized, the Web can be viewed as a single, virtual document collection. In that regard, the fundamental questions and approaches of traditional information retrieval (IR) research (e.g., term weighting, query expansion) are likely to be relevant in Web document retrieval. Findings from traditional IR research, however, may not always be applicable in a Web setting. The Web document collection - massive in size and diverse in content, format, purpose, and quality - challenges the validity of previous research findings that are based on relatively small and homogeneous test collections. Moreover, some traditional IR approaches, although applicable in theory, may be impossible or impractical to implement in a Web setting. For instance, the size, distribution, and dynamic nature of Web information make it extremely difficult to construct a complete and up-to-date data representation of the kind required for a model IR system. To further complicate matters, information seeking on the Web is diverse in character and unpredictable in nature. Web searchers come from all walks of life and are motivated by many kinds of information needs. The wide range of experience, knowledge, motivation, and purpose means that searchers can express diverse types of information needs in a wide variety of ways with differing criteria for satisfying those needs. Conventional evaluation measures, such as precision and recall, may no longer be appropriate for Web IR, where a representative test collection is all but impossible to construct. Finding information on the Web creates many new challenges for, and exacerbates some old problems in, IR research. At the same time, the Web is rich in new types of information not present in most IR test collections. Hyperlinks, usage statistics, document markup tags, and collections of topic hierarchies such as Yahoo! (http://www.yahoo.com) present an opportunity to leverage Web-specific document characteristics in novel ways that go beyond the term-based retrieval framework of traditional IR. Consequently, researchers in Web IR have reexamined the findings from traditional IR research.
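    The conventional measures named above reduce to simple set arithmetic; a minimal sketch, with hypothetical document-ID sets standing in for a real test collection's judgments:

      # precision = relevant retrieved / retrieved; recall = relevant retrieved / relevant
      def precision_recall(retrieved: set, relevant: set):
          hits = len(retrieved & relevant)
          precision = hits / len(retrieved) if retrieved else 0.0
          recall = hits / len(relevant) if relevant else 0.0
          return precision, recall

      print(precision_recall({1, 2, 3, 4}, {2, 4, 9}))   # (0.5, 0.666...)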
  16. Fox, E.A.; Urs, S.R.: Digital libraries (2002) 0.01
    0.013483594 = product of:
      0.026967188 = sum of:
        0.026967188 = product of:
          0.053934377 = sum of:
            0.053934377 = weight(_text_:e.g in 4299) [ClassicSimilarity], result of:
              0.053934377 = score(doc=4299,freq=2.0), product of:
                0.23393378 = queryWeight, product of:
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.044842023 = queryNorm
                0.23055404 = fieldWeight in 4299, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4299)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The emergence of digital libraries (DLs), at the interface of library and information science with computer and communication technologies, helped to expand significantly the literature in all of these areas during the late 1990s. The pace of development is reflected by the number of special issues of major journals in information science and computer science, and the increasing number of workshops and conferences on digital libraries. For example, starting in 1995, the Communications of the ACM has devoted three special issues to the topic (Fox, Akscyn, Furuta, & Leggett, 1995; Fox & Marchionini, 1998, 2001). The Journal of the American Society for Information Science devoted two issues to digital libraries (H. Chen, 2000; Fox & Lunin, 1993); Information Processing & Management and the Journal of Visual Communication and Image Representation each had one special issue (Chen & Fox, 1996; Marchionini & Fox, 1999). The domain of digital libraries, though still evolving, has matured over the last decade, as demonstrated by coverage through D-Lib (http://www.dlib.org), the International Journal on Digital Libraries (http://link.springer.de/link/service/journals/00799), and two overview works (W.Y. Arms, 2000; Lesk, 1997; both of which have also served as textbooks). Sun Microsystems published a small book to guide those planning a digital library (Noerr, 2000), and IBM has been developing commercial products for digital libraries since 1994 (IBM, 2000). A number of Web sites have extensive sets of pointers to information on DLs (D-Lib Forum, 2001; Fox, 1998a; Habing, 1998; Hein, 2000; Schwartz, 2001a, 2001b). Further, the field has attracted the attention of diverse academics, research groups, and practitioners, many of whom have attended tutorials, workshops, or conferences, e.g., the Joint Conference on Digital Libraries, which is a sequel to a separate series run by ACM and IEEE-CS. Therefore, it is timely that ARIST publishes this first review focusing specifically on digital libraries. There has been no ARIST chapter to date directly dealing with the area of DLs, though some related domains have been covered, particularly: information retrieval, user interfaces (Marchionini & Komlodi, 1998), social informatics of DLs (Bishop & Star, 1996), and scholarly communication (see Borgman and Furner's chapter in this volume). This chapter provides an overview of the diverse aspects and dimensions of DL research, practice, and literature, identifying trends and delineating research directions.
  17. Saracevic, T.: Relevance: a review of the literature and a framework for thinking on the notion in information science. Part II : nature and manifestations of relevance (2007) 0.01
    0.013483594 = product of:
      0.026967188 = sum of:
        0.026967188 = product of:
          0.053934377 = sum of:
            0.053934377 = weight(_text_:e.g in 612) [ClassicSimilarity], result of:
              0.053934377 = score(doc=612,freq=2.0), product of:
                0.23393378 = queryWeight, product of:
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.044842023 = queryNorm
                0.23055404 = fieldWeight in 612, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.03125 = fieldNorm(doc=612)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Relevant: Having significant and demonstrable bearing on the matter at hand. [Note: A version of this article has been published in 2006 as a chapter in E.G. Abels & D.A. Nitecki (Eds.), Advances in Librarianship (Vol. 30, pp. 3-71). San Diego: Academic Press (Saracevic, 2006).] Relevance: The ability (as of an information retrieval system) to retrieve material that satisfies the needs of the user. - Merriam-Webster Dictionary, 2005
  18. Hsueh, D.C.: Recon road maps : retrospective conversion literature, 1980-1990 (1992) 0.01
    0.01215095 = product of:
      0.0243019 = sum of:
        0.0243019 = product of:
          0.0486038 = sum of:
            0.0486038 = weight(_text_:22 in 2193) [ClassicSimilarity], result of:
              0.0486038 = score(doc=2193,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.30952093 = fieldWeight in 2193, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2193)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Cataloging and classification quarterly. 14(1992) nos.3/4, S.5-22
  19. Gabbard, R.: Recent literature shows accelerated growth in hypermedia tools : an annotated bibliography (1994) 0.01
    0.01215095 = product of:
      0.0243019 = sum of:
        0.0243019 = product of:
          0.0486038 = sum of:
            0.0486038 = weight(_text_:22 in 8460) [ClassicSimilarity], result of:
              0.0486038 = score(doc=8460,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.30952093 = fieldWeight in 8460, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=8460)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Reference services review. 22(1994) no.2, S.31-40
  20. Buckland, M.K.; Liu, Z.: History of information science (1995) 0.01
    0.01215095 = product of:
      0.0243019 = sum of:
        0.0243019 = product of:
          0.0486038 = sum of:
            0.0486038 = weight(_text_:22 in 4226) [ClassicSimilarity], result of:
              0.0486038 = score(doc=4226,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.30952093 = fieldWeight in 4226, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4226)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    13. 6.1996 19:22:20