Search (33 results, page 1 of 2)

  • Active filter: theme_ss:"Literaturübersicht" ("literature review")
  1. Kurtz, M.; Bollen, J.: Usage bibliometrics (2010) 0.09
    0.08763343 = product of:
      0.2629003 = sum of:
        0.2629003 = weight(_text_:usage in 4206) [ClassicSimilarity], result of:
          0.2629003 = score(doc=4206,freq=2.0), product of:
            0.26936847 = queryWeight, product of:
              5.52102 = idf(docFreq=480, maxDocs=44218)
              0.04878962 = queryNorm
            0.9759876 = fieldWeight in 4206, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.52102 = idf(docFreq=480, maxDocs=44218)
              0.125 = fieldNorm(doc=4206)
      0.33333334 = coord(1/3)
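    The breakdown above is Lucene's ClassicSimilarity (tf-idf) score explanation. As a minimal sketch, the arithmetic for result 1 can be reproduced as below, assuming the standard ClassicSimilarity formulas (idf = 1 + ln(maxDocs / (docFreq + 1)), tf = sqrt(freq)); queryNorm and fieldNorm are copied from the explanation rather than recomputed:

      import math

      # Score of doc 4206 for the query term "usage" under ClassicSimilarity:
      #   score = coord * queryWeight * fieldWeight, where
      #   queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm.
      doc_freq, max_docs = 480, 44218
      idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 5.52102
      tf = math.sqrt(2.0)                              # termFreq = 2.0 -> 1.4142135
      query_norm = 0.04878962                          # copied from the explanation
      field_norm = 0.125                               # stored length norm of the field

      query_weight = idf * query_norm                  # 0.26936847
      field_weight = tf * idf * field_norm             # 0.9759876
      coord = 1 / 3                                    # 1 of 3 query terms matched

      print(coord * query_weight * field_weight)       # ~0.08763343
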
    
  2. Corbett, L.E.: Serials: review of the literature 2000-2003 (2006) 0.07
    0.065788105 = product of:
      0.09868215 = sum of:
        0.082156345 = weight(_text_:usage in 1088) [ClassicSimilarity], result of:
          0.082156345 = score(doc=1088,freq=2.0), product of:
            0.26936847 = queryWeight, product of:
              5.52102 = idf(docFreq=480, maxDocs=44218)
              0.04878962 = queryNorm
            0.30499613 = fieldWeight in 1088, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.52102 = idf(docFreq=480, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1088)
        0.016525801 = product of:
          0.033051603 = sum of:
            0.033051603 = weight(_text_:22 in 1088) [ClassicSimilarity], result of:
              0.033051603 = score(doc=1088,freq=2.0), product of:
                0.17085294 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04878962 = queryNorm
                0.19345059 = fieldWeight in 1088, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1088)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The topic of electronic journals (e-journals) dominated the serials literature from 2000 to 2003. This review is limited to the events and issues within the broad topics of cost, management, and archiving. Coverage of cost includes such initiatives as PEAK, JACC, BioMed Central, SPARC, open access, the "Big Deal," and "going e-only." Librarians combated the continuing trend of journal price increases, fueled in part by publisher mergers, with the economies offered by bundled packages and consortial subscriptions. Serials management topics include usage statistics; core title lists; staffing needs; the "A-Z list" and other services from such companies as Serials Solutions; "deep linking"; link resolvers such as SFX; development of standards or guidelines, such as COUNTER and ERMI; tracking of license terms; vendor mergers; and the demise of integrated library systems and a subscription agent's bankruptcy. Librarians archived print volumes in storage facilities due to space shortages. Librarians and publishers struggled with electronic archiving concepts, discussing questions of who, where, and how. Projects such as LOCKSS tested potential solutions, but missing online content due to the Tasini court case and retractions posed further archiving difficulties. The serials literature captured much of the upheaval resulting from the rapid pace of change, much of it linked to the advent of e-journals.
    Date
    10. 9.2000 17:38:22
  3. Dumais, S.T.: Latent semantic analysis (2003) 0.06
    0.063636646 = product of:
      0.09545496 = sum of:
        0.069711976 = weight(_text_:usage in 2462) [ClassicSimilarity], result of:
          0.069711976 = score(doc=2462,freq=4.0), product of:
            0.26936847 = queryWeight, product of:
              5.52102 = idf(docFreq=480, maxDocs=44218)
              0.04878962 = queryNorm
            0.25879782 = fieldWeight in 2462, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.52102 = idf(docFreq=480, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2462)
        0.025742982 = product of:
          0.051485963 = sum of:
            0.051485963 = weight(_text_:mining in 2462) [ClassicSimilarity], result of:
              0.051485963 = score(doc=2462,freq=2.0), product of:
                0.2752929 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.04878962 = queryNorm
                0.18702249 = fieldWeight in 2462, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=2462)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Latent Semantic Analysis (LSA) was first introduced in Dumais, Furnas, Landauer, and Deerwester (1988) and Deerwester, Dumais, Furnas, Landauer, and Harshman (1990) as a technique for improving information retrieval. The key insight in LSA was to reduce the dimensionality of the information retrieval problem. Most approaches to retrieving information depend on a lexical match between words in the user's query and those in documents. Indeed, this lexical matching is the way that the popular Web and enterprise search engines work. Such systems are, however, far from ideal. We are all aware of the tremendous amount of irrelevant information that is retrieved when searching. We also fail to find much of the existing relevant material. LSA was designed to address these retrieval problems, using dimension reduction techniques. Fundamental characteristics of human word usage underlie these retrieval failures. People use a wide variety of words to describe the same object or concept (synonymy). Furnas, Landauer, Gomez, and Dumais (1987) showed that people generate the same keyword to describe well-known objects only 20 percent of the time. Poor agreement was also observed in studies of inter-indexer consistency (e.g., Chan, 1989; Tarr & Borko, 1974), in the generation of search terms (e.g., Fidel, 1985; Bates, 1986), and in the generation of hypertext links (Furner, Ellis, & Willett, 1999). Because searchers and authors often use different words, relevant materials are missed. Someone looking for documents on "human-computer interaction" will not find articles that use only the phrase "man-machine studies" or "human factors." People also use the same word to refer to different things (polysemy). Words like "saturn," "jaguar," or "chip" have several different meanings. A short query like "saturn" will thus return many irrelevant documents. The query "Saturn car" will return fewer irrelevant items, but it will miss some documents that use only the terms "Saturn automobile." In searching, there is a constant tension between being overly specific and missing relevant information, and being more general and returning irrelevant information.
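    The synonymy failure described above is easy to demonstrate. A toy sketch (the two-document collection is invented): exact term matching retrieves the document that shares the query's words and silently misses the synonymous one:

      # Toy lexical-match retrieval: a document is returned only if it shares
      # at least one term with the query.
      docs = {
          1: "a study of human-computer interaction in digital libraries",
          2: "man-machine studies of interface design",  # relevant, no shared term
      }

      def lexical_match(query, docs):
          q_terms = set(query.lower().split())
          return [doc_id for doc_id, text in docs.items()
                  if q_terms & set(text.lower().split())]

      print(lexical_match("human-computer interaction", docs))  # [1] -- doc 2 missed
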
    A number of approaches have been developed in information retrieval to address the problems caused by the variability in word usage. Stemming is a popular technique used to normalize some kinds of surface-level variability by converting words to their morphological root. For example, the words "retrieve," "retrieval," "retrieved," and "retrieving" would all be converted to their root form, "retrieve." The root form is used for both document and query processing. Stemming sometimes helps retrieval, although not much (Harman, 1991; Hull, 1996). And it does not address cases where related words are not morphologically related (e.g., physician and doctor). Controlled vocabularies have also been used to limit variability by requiring that query and index terms belong to a pre-defined set of terms. Documents are indexed by a specified or authorized list of subject headings or index terms, called the controlled vocabulary. Library of Congress Subject Headings, Medical Subject Headings, Association for Computing Machinery (ACM) keywords, and Yellow Pages headings are examples of controlled vocabularies. If searchers can find the right controlled vocabulary terms, they do not have to think of all the morphologically related or synonymous terms that authors might have used. However, assigning controlled vocabulary terms in a consistent and thorough manner is a time-consuming and usually manual process. A good deal of research has been published about the effectiveness of controlled vocabulary indexing compared to full text indexing (e.g., Bates, 1998; Lancaster, 1986; Svenonius, 1986). The combination of both full text and controlled vocabularies is often better than either alone, although the size of the advantage is variable (Lancaster, 1986; Markey, Atherton, & Newton, 1982; Srinivasan, 1996). Richer thesauri have also been used to provide synonyms, generalizations, and specializations of users' search terms (see Srinivasan, 1992, for a review). Controlled vocabularies and thesaurus entries can be generated either manually or by the automatic analysis of large collections of texts.
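    For illustration, a minimal stemming sketch using NLTK's Porter stemmer (assuming the nltk package is installed; note that a real stemmer's root, such as "retriev", need not be a dictionary word):

      from nltk.stem import PorterStemmer

      stemmer = PorterStemmer()
      for word in ["retrieve", "retrieval", "retrieved", "retrieving"]:
          print(word, "->", stemmer.stem(word))   # all four map to "retriev"

      # As noted above, stemming cannot unify morphologically unrelated synonyms:
      print(stemmer.stem("physician"), stemmer.stem("doctor"))  # two distinct roots
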
    With the advent of large-scale collections of full text, statistical approaches are being used more and more to analyze the relationships among terms and documents. LSA takes this approach. LSA induces knowledge about the meanings of documents and words by analyzing large collections of texts. The approach simultaneously models the relationships among documents based on their constituent words, and the relationships between words based on their occurrence in documents. By using fewer dimensions for representation than there are unique words, LSA induces similarities among terms that are useful in solving the information retrieval problems described earlier. LSA is a fully automatic statistical approach to extracting relations among words by means of their contexts of use in documents, passages, or sentences. It makes no use of natural language processing techniques for analyzing morphological, syntactic, or semantic relations. Nor does it use humanly constructed resources like dictionaries, thesauri, lexical reference systems (e.g., WordNet), semantic networks, or other knowledge representations. Its only input is large amounts of texts. LSA is an unsupervised learning technique. It starts with a large collection of texts, builds a term-document matrix, and tries to uncover some similarity structures that are useful for information retrieval and related text-analysis problems. Several recent ARIST chapters have focused on text mining and discovery (Benoit, 2002; Solomon, 2002; Trybula, 2000). These chapters provide complementary coverage of the field of text analysis.
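    A minimal sketch of the mechanism just described: build a term-document matrix, truncate its SVD, and compare documents in the reduced space. The four-document corpus is invented; document 1 bridges documents 0 and 2, so they can come out similar even though they share no terms:

      import numpy as np

      docs = [
          "human computer interaction",
          "computer interface design",       # shares "computer" with doc 0
          "man machine interface studies",   # shares "interface" with doc 1 only
          "graph theory minors",             # unrelated topic
      ]
      vocab = sorted({w for d in docs for w in d.split()})
      A = np.array([[d.split().count(w) for d in docs] for w in vocab], float)

      # Rank-k truncation of the SVD is the dimension reduction at LSA's core.
      U, s, Vt = np.linalg.svd(A, full_matrices=False)
      k = 2
      doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # documents in the latent space

      def cos(a, b):
          return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

      # Cosine in the latent space; can be well above 0 despite no shared terms.
      print(cos(doc_vecs[0], doc_vecs[2]))
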
  4. Blake, C.: Text mining (2011) 0.06
    0.056631677 = product of:
      0.16989502 = sum of:
        0.16989502 = product of:
          0.33979005 = sum of:
            0.33979005 = weight(_text_:mining in 1599) [ClassicSimilarity], result of:
              0.33979005 = score(doc=1599,freq=4.0), product of:
                0.2752929 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.04878962 = queryNorm
                1.2342855 = fieldWeight in 1599, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1599)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Theme
    Data Mining
  5. Benoit, G.: Data mining (2002) 0.05
    0.045406356 = product of:
      0.13621907 = sum of:
        0.13621907 = product of:
          0.27243814 = sum of:
            0.27243814 = weight(_text_:mining in 4296) [ClassicSimilarity], result of:
              0.27243814 = score(doc=4296,freq=14.0), product of:
                0.2752929 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.04878962 = queryNorm
                0.9896301 = fieldWeight in 4296, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4296)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Data mining (DM) is a multistaged process of extracting previously unanticipated knowledge from large databases, and applying the results to decision making. Data mining tools detect patterns from the data and infer associations and rules from them. The extracted information may then be applied to prediction or classification models by identifying relations within the data records or between databases. Those patterns and rules can then guide decision making and forecast the effects of those decisions. However, this definition may be applied equally to "knowledge discovery in databases" (KDD). Indeed, in the recent literature of DM and KDD, a source of confusion has emerged, making it difficult to determine the exact parameters of both. KDD is sometimes viewed as the broader discipline, of which data mining is merely a component, specifically its pattern extraction, evaluation, and cleansing methods (Raghavan, Deogun, & Sever, 1998, p. 397). Thurasingham (1999, p. 2) remarked that "knowledge discovery," "pattern discovery," "data dredging," "information extraction," and "knowledge mining" are all employed as synonyms for DM. Trybula, in his ARIST chapter on text mining, observed that the "existing work [in KDD] is confusing because the terminology is inconsistent and poorly defined."
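    As a concrete instance of the pattern-and-rule extraction step described above, a minimal frequent-pair sketch (transactions and thresholds are invented; real DM toolkits use algorithms such as Apriori over itemsets of all sizes):

      from itertools import combinations
      from collections import Counter

      transactions = [
          {"bread", "milk"},
          {"bread", "milk", "eggs"},
          {"milk", "eggs"},
          {"bread", "milk"},
      ]
      min_support, min_confidence = 0.5, 0.7

      # Count how often each item pair co-occurs across transactions.
      pair_counts = Counter()
      for t in transactions:
          for pair in combinations(sorted(t), 2):
              pair_counts[pair] += 1

      item_counts = Counter(item for t in transactions for item in t)
      n = len(transactions)
      for (a, b), c in pair_counts.items():
          if c / n >= min_support:
              conf = c / item_counts[a]      # confidence of the rule a -> b
              if conf >= min_confidence:
                  print(f"{a} -> {b} (support={c/n:.2f}, confidence={conf:.2f})")
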
    Theme
    Data Mining
  6. Trybula, W.J.: Data mining and knowledge discovery (1997) 0.04
    0.04004464 = product of:
      0.120133914 = sum of:
        0.120133914 = product of:
          0.24026783 = sum of:
            0.24026783 = weight(_text_:mining in 2300) [ClassicSimilarity], result of:
              0.24026783 = score(doc=2300,freq=8.0), product of:
                0.2752929 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.04878962 = queryNorm
                0.8727716 = fieldWeight in 2300, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2300)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    State-of-the-art review of the recently developed concepts of data mining (defined as the automated process of evaluating data and finding relationships) and knowledge discovery (defined as the automated process of extracting information, especially unpredicted relationships or previously unknown patterns among the data), with particular reference to numerical data. Includes: the knowledge acquisition process; data mining; evaluation methods; and knowledge discovery. Concludes that existing work in the field is confusing because the terminology is inconsistent and poorly defined. Although methods are available for analyzing and cleaning databases, better-coordinated efforts should be directed toward providing users with improved means of structuring search mechanisms to explore the data for relationships.
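    A one-screen illustration of "finding relationships" in numerical data, in the spirit of the definition above (the data are synthetic; a planted linear relationship is flagged by a naive pairwise-correlation scan):

      import numpy as np

      rng = np.random.default_rng(0)
      n = 200
      age = rng.uniform(20, 80, n)
      blood_pressure = 90 + 0.6 * age + rng.normal(0, 5, n)  # planted relationship
      income = rng.uniform(20, 120, n)                        # unrelated column

      data = {"age": age, "blood_pressure": blood_pressure, "income": income}
      cols = list(data)
      for i in range(len(cols)):
          for j in range(i + 1, len(cols)):
              r = np.corrcoef(data[cols[i]], data[cols[j]])[0, 1]
              if abs(r) > 0.8:                                # flag strong pairs
                  print(f"{cols[i]} ~ {cols[j]}: r = {r:.2f}")
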
    Theme
    Data Mining
  7. Bath, P.A.: Data mining in health and medical information (2003) 0.04
    0.03963392 = product of:
      0.11890175 = sum of:
        0.11890175 = product of:
          0.2378035 = sum of:
            0.2378035 = weight(_text_:mining in 4263) [ClassicSimilarity], result of:
              0.2378035 = score(doc=4263,freq=6.0), product of:
                0.2752929 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.04878962 = queryNorm
                0.86381996 = fieldWeight in 4263, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4263)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Data mining (DM) is part of a process by which information can be extracted from data or databases and used to inform decision making in a variety of contexts (Benoit, 2002; Michalski, Bratko & Kubat, 1997). DM includes a range of tools and methods for extracting information; their use in the commercial sector for knowledge extraction and discovery has been one of the main driving forces in their development (Adriaans & Zantinge, 1996; Benoit, 2002). DM has been developed and applied in numerous areas. This review describes its use in analyzing health and medical information.
    Theme
    Data Mining
  8. Chen, H.; Chau, M.: Web mining : machine learning for Web applications (2003) 0.02
    0.024270717 = product of:
      0.07281215 = sum of:
        0.07281215 = product of:
          0.1456243 = sum of:
            0.1456243 = weight(_text_:mining in 4242) [ClassicSimilarity], result of:
              0.1456243 = score(doc=4242,freq=4.0), product of:
                0.2752929 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.04878962 = queryNorm
                0.5289795 = fieldWeight in 4242, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4242)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Theme
    Data Mining
  9. Yang, K.: Information retrieval on the Web (2004) 0.02
    0.021908358 = product of:
      0.06572507 = sum of:
        0.06572507 = weight(_text_:usage in 4278) [ClassicSimilarity], result of:
          0.06572507 = score(doc=4278,freq=2.0), product of:
            0.26936847 = queryWeight, product of:
              5.52102 = idf(docFreq=480, maxDocs=44218)
              0.04878962 = queryNorm
            0.2439969 = fieldWeight in 4278, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.52102 = idf(docFreq=480, maxDocs=44218)
              0.03125 = fieldNorm(doc=4278)
      0.33333334 = coord(1/3)
    
    Abstract
    How do we find information on the Web? Although information on the Web is distributed and decentralized, the Web can be viewed as a single, virtual document collection. In that regard, the fundamental questions and approaches of traditional information retrieval (IR) research (e.g., term weighting, query expansion) are likely to be relevant in Web document retrieval. Findings from traditional IR research, however, may not always be applicable in a Web setting. The Web document collection - massive in size and diverse in content, format, purpose, and quality - challenges the validity of previous research findings that are based on relatively small and homogeneous test collections. Moreover, some traditional IR approaches, although applicable in theory, may be impossible or impractical to implement in a Web setting. For instance, the size, distribution, and dynamic nature of Web information make it extremely difficult to construct a complete and up-to-date data representation of the kind required for a model IR system. To further complicate matters, information seeking on the Web is diverse in character and unpredictable in nature. Web searchers come from all walks of life and are motivated by many kinds of information needs. The wide range of experience, knowledge, motivation, and purpose means that searchers can express diverse types of information needs in a wide variety of ways with differing criteria for satisfying those needs. Conventional evaluation measures, such as precision and recall, may no longer be appropriate for Web IR, where a representative test collection is all but impossible to construct. Finding information on the Web creates many new challenges for, and exacerbates some old problems in, IR research. At the same time, the Web is rich in new types of information not present in most IR test collections. Hyperlinks, usage statistics, document markup tags, and collections of topic hierarchies such as Yahoo! (http://www.yahoo.com) present an opportunity to leverage Web-specific document characteristics in novel ways that go beyond the term-based retrieval framework of traditional IR. Consequently, researchers in Web IR have reexamined the findings from traditional IR research.
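    The hyperlink signal mentioned at the end of the abstract is canonically exploited by PageRank; the abstract does not name it, so the following is only an illustrative sketch on an invented four-node link graph:

      import numpy as np

      links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}   # node -> outgoing links
      n, d = 4, 0.85                                 # graph size, damping factor

      M = np.zeros((n, n))
      for src, outs in links.items():
          for dst in outs:
              M[dst, src] = 1 / len(outs)            # column-stochastic transitions

      rank = np.full(n, 1 / n)
      for _ in range(50):                            # power iteration
          rank = (1 - d) / n + d * M @ rank

      print(rank.round(3))                           # node 2 collects the most rank
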
  10. Enser, P.G.B.: Visual image retrieval (2008) 0.02
    0.017627522 = product of:
      0.052882563 = sum of:
        0.052882563 = product of:
          0.10576513 = sum of:
            0.10576513 = weight(_text_:22 in 3281) [ClassicSimilarity], result of:
              0.10576513 = score(doc=3281,freq=2.0), product of:
                0.17085294 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04878962 = queryNorm
                0.61904186 = fieldWeight in 3281, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=3281)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 1.2012 13:01:26
  11. Morris, S.A.: Mapping research specialties (2008) 0.02
    0.017627522 = product of:
      0.052882563 = sum of:
        0.052882563 = product of:
          0.10576513 = sum of:
            0.10576513 = weight(_text_:22 in 3962) [ClassicSimilarity], result of:
              0.10576513 = score(doc=3962,freq=2.0), product of:
                0.17085294 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04878962 = queryNorm
                0.61904186 = fieldWeight in 3962, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=3962)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    13. 7.2008 9:30:22
  12. Fallis, D.: Social epistemology and information science (2006) 0.02
    0.017627522 = product of:
      0.052882563 = sum of:
        0.052882563 = product of:
          0.10576513 = sum of:
            0.10576513 = weight(_text_:22 in 4368) [ClassicSimilarity], result of:
              0.10576513 = score(doc=4368,freq=2.0), product of:
                0.17085294 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04878962 = queryNorm
                0.61904186 = fieldWeight in 4368, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=4368)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    13. 7.2008 19:22:28
  13. Nicolaisen, J.: Citation analysis (2007) 0.02
    0.017627522 = product of:
      0.052882563 = sum of:
        0.052882563 = product of:
          0.10576513 = sum of:
            0.10576513 = weight(_text_:22 in 6091) [ClassicSimilarity], result of:
              0.10576513 = score(doc=6091,freq=2.0), product of:
                0.17085294 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04878962 = queryNorm
                0.61904186 = fieldWeight in 6091, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=6091)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    13. 7.2008 19:53:22
  14. Metz, A.: Community service : a bibliography (1996) 0.02
    0.017627522 = product of:
      0.052882563 = sum of:
        0.052882563 = product of:
          0.10576513 = sum of:
            0.10576513 = weight(_text_:22 in 5341) [ClassicSimilarity], result of:
              0.10576513 = score(doc=5341,freq=2.0), product of:
                0.17085294 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04878962 = queryNorm
                0.61904186 = fieldWeight in 5341, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=5341)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    17.10.1996 14:22:33
  15. Belkin, N.J.; Croft, W.B.: Retrieval techniques (1987) 0.02
    0.017627522 = product of:
      0.052882563 = sum of:
        0.052882563 = product of:
          0.10576513 = sum of:
            0.10576513 = weight(_text_:22 in 334) [ClassicSimilarity], result of:
              0.10576513 = score(doc=334,freq=2.0), product of:
                0.17085294 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04878962 = queryNorm
                0.61904186 = fieldWeight in 334, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=334)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Annual review of information science and technology. 22(1987), S.109-145
  16. Smith, L.C.: Artificial intelligence and information retrieval (1987) 0.02
    0.017627522 = product of:
      0.052882563 = sum of:
        0.052882563 = product of:
          0.10576513 = sum of:
            0.10576513 = weight(_text_:22 in 335) [ClassicSimilarity], result of:
              0.10576513 = score(doc=335,freq=2.0), product of:
                0.17085294 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04878962 = queryNorm
                0.61904186 = fieldWeight in 335, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=335)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Annual review of information science and technology. 22(1987), S.41-77
  17. Warner, A.J.: Natural language processing (1987) 0.02
    0.017627522 = product of:
      0.052882563 = sum of:
        0.052882563 = product of:
          0.10576513 = sum of:
            0.10576513 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.10576513 = score(doc=337,freq=2.0), product of:
                0.17085294 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04878962 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  18. Grudin, J.: Human-computer interaction (2011) 0.02
    0.01542408 = product of:
      0.04627224 = sum of:
        0.04627224 = product of:
          0.09254448 = sum of:
            0.09254448 = weight(_text_:22 in 1601) [ClassicSimilarity], result of:
              0.09254448 = score(doc=1601,freq=2.0), product of:
                0.17085294 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04878962 = queryNorm
                0.5416616 = fieldWeight in 1601, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1601)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    27.12.2014 18:54:22
  19. Rader, H.B.: Library orientation and instruction - 1993 (1994) 0.01
    0.0110172015 = product of:
      0.033051603 = sum of:
        0.033051603 = product of:
          0.066103205 = sum of:
            0.066103205 = weight(_text_:22 in 209) [ClassicSimilarity], result of:
              0.066103205 = score(doc=209,freq=2.0), product of:
                0.17085294 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04878962 = queryNorm
                0.38690117 = fieldWeight in 209, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=209)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Reference services review. 22(1994) no.4, S.81-
  20. Hsueh, D.C.: Recon road maps : retrospective conversion literature, 1980-1990 (1992) 0.01
    0.008813761 = product of:
      0.026441282 = sum of:
        0.026441282 = product of:
          0.052882563 = sum of:
            0.052882563 = weight(_text_:22 in 2193) [ClassicSimilarity], result of:
              0.052882563 = score(doc=2193,freq=2.0), product of:
                0.17085294 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04878962 = queryNorm
                0.30952093 = fieldWeight in 2193, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2193)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Cataloging and classification quarterly. 14(1992) nos.3/4, S.5-22