Search (17 results, page 1 of 1)

  • theme_ss:"Literaturübersicht" ("literature review")
  • year_i:[2000 TO 2010}
  1. Corbett, L.E.: Serials: review of the literature 2000-2003 (2006) 0.07
    0.065788105 = product of:
      0.09868215 = sum of:
        0.082156345 = weight(_text_:usage in 1088) [ClassicSimilarity], result of:
          0.082156345 = score(doc=1088,freq=2.0), product of:
            0.26936847 = queryWeight, product of:
              5.52102 = idf(docFreq=480, maxDocs=44218)
              0.04878962 = queryNorm
            0.30499613 = fieldWeight in 1088, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.52102 = idf(docFreq=480, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1088)
        0.016525801 = product of:
          0.033051603 = sum of:
            0.033051603 = weight(_text_:22 in 1088) [ClassicSimilarity], result of:
              0.033051603 = score(doc=1088,freq=2.0), product of:
                0.17085294 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04878962 = queryNorm
                0.19345059 = fieldWeight in 1088, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1088)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The topic of electronic journals (e-journals) dominated the serials literature from 2000 to 2003. This review is limited to the events and issues within the broad topics of cost, management, and archiving. Coverage of cost includes such initiatives as PEAK, JACC, BioMed Central, SPARC, open access, the "Big Deal," and "going e-only." Librarians combated the continuing trend of journal price increases, fueled in part by publisher mergers, with the economies found in bundled packages and consortial subscriptions. Serials management topics include usage statistics; core title lists; staffing needs; the "A-Z list" and other services from such companies as Serials Solutions; "deep linking"; link resolvers such as SFX; development of standards or guidelines, such as COUNTER and ERMI; tracking of license terms; vendor mergers; and the demise of integrated library systems and a subscription agent's bankruptcy. Librarians archived print volumes in storage facilities due to space shortages. Librarians and publishers struggled with electronic archiving concepts, discussing questions of who, where, and how. Projects such as LOCKSS tested potential solutions, but online content missing because of the Tasini court case and retractions posed further archiving difficulties. The serials literature captured much of the upheaval resulting from the rapid pace of changes, many linked to the advent of e-journals.
    Date
    10. 9.2000 17:38:22
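    The explain trees in these results all follow the same ClassicSimilarity recipe: a term's weight is queryWeight (idf × queryNorm) times fieldWeight (sqrt(freq) × idf × fieldNorm), and the document score multiplies the summed term weights by a coordination factor. A minimal sketch that reproduces result 1's numbers from the constants shown in its tree (the function names are ours, not Lucene's):

    from math import sqrt, isclose

    def field_weight(freq, idf, field_norm):
        # ClassicSimilarity: tf(freq) = sqrt(freq); fieldWeight = tf * idf * fieldNorm
        return sqrt(freq) * idf * field_norm

    def term_score(freq, idf, query_norm, field_norm):
        # queryWeight = idf * queryNorm; term weight = queryWeight * fieldWeight
        return (idf * query_norm) * field_weight(freq, idf, field_norm)

    QUERY_NORM = 0.04878962

    # weight(_text_:usage in 1088): freq=2.0, idf=5.52102, fieldNorm=0.0390625
    usage = term_score(2.0, 5.52102, QUERY_NORM, 0.0390625)

    # weight(_text_:22 in 1088): freq=2.0, idf=3.5018296, same fieldNorm;
    # its subtree carries coord(1/2), i.e. a factor of 0.5
    t22 = term_score(2.0, 3.5018296, QUERY_NORM, 0.0390625) * 0.5

    # Top level: the sum of both products, times coord(2/3).
    score = (usage + t22) * (2.0 / 3.0)
    assert isclose(usage, 0.082156345, rel_tol=1e-6)
    assert isclose(score, 0.065788105, rel_tol=1e-6)
    print(round(score, 2))  # 0.07, the figure shown beside the hit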
  2. Dumais, S.T.: Latent semantic analysis (2003) 0.06
    0.063636646 = product of:
      0.09545496 = sum of:
        0.069711976 = weight(_text_:usage in 2462) [ClassicSimilarity], result of:
          0.069711976 = score(doc=2462,freq=4.0), product of:
            0.26936847 = queryWeight, product of:
              5.52102 = idf(docFreq=480, maxDocs=44218)
              0.04878962 = queryNorm
            0.25879782 = fieldWeight in 2462, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.52102 = idf(docFreq=480, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2462)
        0.025742982 = product of:
          0.051485963 = sum of:
            0.051485963 = weight(_text_:mining in 2462) [ClassicSimilarity], result of:
              0.051485963 = score(doc=2462,freq=2.0), product of:
                0.2752929 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.04878962 = queryNorm
                0.18702249 = fieldWeight in 2462, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=2462)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Latent Semantic Analysis (LSA) was first introduced in Dumais, Furnas, Landauer, and Deerwester (1988) and Deerwester, Dumais, Furnas, Landauer, and Harshman (1990) as a technique for improving information retrieval. The key insight in LSA was to reduce the dimensionality of the information retrieval problem. Most approaches to retrieving information depend on a lexical match between words in the user's query and those in documents. Indeed, this lexical matching is the way that the popular Web and enterprise search engines work. Such systems are, however, far from ideal. We are all aware of the tremendous amount of irrelevant information that is retrieved when searching. We also fail to find much of the existing relevant material. LSA was designed to address these retrieval problems, using dimension reduction techniques. Fundamental characteristics of human word usage underlie these retrieval failures. People use a wide variety of words to describe the same object or concept (synonymy). Furnas, Landauer, Gomez, and Dumais (1987) showed that people generate the same keyword to describe well-known objects only 20 percent of the time. Poor agreement was also observed in studies of inter-indexer consistency (e.g., Chan, 1989; Tarr & Borko, 1974), in the generation of search terms (e.g., Fidel, 1985; Bates, 1986), and in the generation of hypertext links (Furner, Ellis, & Willett, 1999). Because searchers and authors often use different words, relevant materials are missed. Someone looking for documents on "human-computer interaction" will not find articles that use only the phrase "man-machine studies" or "human factors." People also use the same word to refer to different things (polysemy). Words like "saturn," "jaguar," or "chip" have several different meanings. A short query like "saturn" will thus return many irrelevant documents. The query "Saturn car" will return fewer irrelevant items, but it will miss some documents that use only the phrase "Saturn automobile." In searching, there is a constant tension between being overly specific and missing relevant information, and being more general and returning irrelevant information.
    A number of approaches have been developed in information retrieval to address the problems caused by the variability in word usage. Stemming is a popular technique used to normalize some kinds of surface-level variability by converting words to their morphological root. For example, the words "retrieve," "retrieval," "retrieved," and "retrieving" would all be converted to their root form, "retrieve." The root form is used for both document and query processing. Stemming sometimes helps retrieval, although not much (Harman, 1991; Hull, 1996). And it does not address cases where related words are not morphologically related (e.g., physician and doctor). Controlled vocabularies have also been used to limit variability by requiring that query and index terms belong to a pre-defined set of terms. Documents are indexed by a specified or authorized list of subject headings or index terms, called the controlled vocabulary. Library of Congress Subject Headings, Medical Subject Headings, Association for Computing Machinery (ACM) keywords, and Yellow Pages headings are examples of controlled vocabularies. If searchers can find the right controlled vocabulary terms, they do not have to think of all the morphologically related or synonymous terms that authors might have used. However, assigning controlled vocabulary terms in a consistent and thorough manner is a time-consuming and usually manual process. A good deal of research has been published about the effectiveness of controlled vocabulary indexing compared to full text indexing (e.g., Bates, 1998; Lancaster, 1986; Svenonius, 1986). The combination of both full text and controlled vocabularies is often better than either alone, although the size of the advantage is variable (Lancaster, 1986; Markey, Atherton, & Newton, 1982; Srinivasan, 1996). Richer thesauri have also been used to provide synonyms, generalizations, and specializations of users' search terms (see Srinivasan, 1992, for a review). Controlled vocabularies and thesaurus entries can be generated either manually or by the automatic analysis of large collections of texts.
    With the advent of large-scale collections of full text, statistical approaches are being used more and more to analyze the relationships among terms and documents. LSA takes this approach. LSA induces knowledge about the meanings of documents and words by analyzing large collections of texts. The approach simultaneously models the relationships among documents based on their constituent words, and the relationships between words based on their occurrence in documents. By using fewer dimensions for representation than there are unique words, LSA induces similarities among terms that are useful in solving the information retrieval problems described earlier. LSA is a fully automatic statistical approach to extracting relations among words by means of their contexts of use in documents, passages, or sentences. It makes no use of natural language processing techniques for analyzing morphological, syntactic, or semantic relations. Nor does it use humanly constructed resources like dictionaries, thesauri, lexical reference systems (e.g., WordNet), semantic networks, or other knowledge representations. Its only input is large amounts of text. LSA is an unsupervised learning technique. It starts with a large collection of texts, builds a term-document matrix, and tries to uncover some similarity structures that are useful for information retrieval and related text-analysis problems. Several recent ARIST chapters have focused on text mining and discovery (Benoit, 2002; Solomon, 2002; Trybula, 2000). These chapters provide complementary coverage of the field of text analysis.
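    The pipeline this abstract describes (build a term-document matrix, then keep fewer dimensions than there are unique words) can be sketched in a few lines. A minimal NumPy illustration on a toy collection; the matrix values, the choice of k=2, and all variable names are our assumptions, not from the chapter:

    import numpy as np

    # Toy term-document matrix: rows = terms, columns = documents.
    # Entry (i, j) counts occurrences of term i in document j.
    terms = ["human", "computer", "interaction", "machine", "studies"]
    X = np.array([
        [1, 0, 1],   # human
        [1, 1, 0],   # computer
        [1, 0, 0],   # interaction
        [0, 1, 1],   # machine
        [0, 1, 1],   # studies
    ], dtype=float)

    # LSA: truncated SVD keeps k dimensions, fewer than the number of terms.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = 2
    term_vecs = U[:, :k] * s[:k]    # terms in the reduced space
    doc_vecs = Vt[:k, :].T * s[:k]  # documents in the reduced space

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # Terms that never co-occur directly can still end up near each other
    # in the reduced space, which is how LSA addresses synonymy.
    print(cosine(term_vecs[terms.index("interaction")],
                 term_vecs[terms.index("studies")]))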
  3. Benoit, G.: Data mining (2002) 0.05
    0.045406356 = product of:
      0.13621907 = sum of:
        0.13621907 = product of:
          0.27243814 = sum of:
            0.27243814 = weight(_text_:mining in 4296) [ClassicSimilarity], result of:
              0.27243814 = score(doc=4296,freq=14.0), product of:
                0.2752929 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.04878962 = queryNorm
                0.9896301 = fieldWeight in 4296, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4296)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Data mining (DM) is a multistaged process of extracting previously unanticipated knowledge from large databases, and applying the results to decision making. Data mining tools detect patterns from the data and infer associations and rules from them. The extracted information may then be applied to prediction or classification models by identifying relations within the data records or between databases. Those patterns and rules can then guide decision making and forecast the effects of those decisions. However, this definition may be applied equally to "knowledge discovery in databases" (KDD). Indeed, in the recent literature of DM and KDD, a source of confusion has emerged, making it difficult to determine the exact parameters of both. KDD is sometimes viewed as the broader discipline, of which data mining is merely a component, specifically pattern extraction, evaluation, and cleansing methods (Raghavan, Deogun, & Sever, 1998, p. 397). Thuraisingham (1999, p. 2) remarked that "knowledge discovery," "pattern discovery," "data dredging," "information extraction," and "knowledge mining" are all employed as synonyms for DM. Trybula, in his ARIST chapter on text mining, observed that the "existing work [in KDD] is confusing because the terminology is inconsistent and poorly defined."
    Theme
    Data Mining
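    The abstract's definition (detect patterns, infer association rules, apply them to decisions) maps onto a standard support/confidence computation. A minimal sketch over invented transaction records; the data, the 0.5 threshold, and all names are illustrative assumptions, not from the chapter:

    from itertools import combinations

    # Invented transaction records (e.g., items bought together).
    transactions = [
        {"bread", "milk"},
        {"bread", "butter"},
        {"bread", "milk", "butter"},
        {"milk", "butter"},
    ]

    def support(itemset):
        # Fraction of transactions containing every item in the set.
        return sum(itemset <= t for t in transactions) / len(transactions)

    # A rule X -> Y is reported if the pair is frequent enough;
    # confidence = support(X and Y) / support(X).
    items = set().union(*transactions)
    for x, y in combinations(sorted(items), 2):
        s = support({x, y})
        if s >= 0.5:
            conf = s / support({x})
            print(f"{x} -> {y}: support={s:.2f}, confidence={conf:.2f}")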
  4. Bath, P.A.: Data mining in health and medical information (2003) 0.04
    0.03963392 = product of:
      0.11890175 = sum of:
        0.11890175 = product of:
          0.2378035 = sum of:
            0.2378035 = weight(_text_:mining in 4263) [ClassicSimilarity], result of:
              0.2378035 = score(doc=4263,freq=6.0), product of:
                0.2752929 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.04878962 = queryNorm
                0.86381996 = fieldWeight in 4263, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4263)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Data mining (DM) is part of a process by which information can be extracted from data or databases and used to inform decision making in a variety of contexts (Benoit, 2002; Michalski, Bratko & Kubat, 1997). DM includes a range of tools and methods for extracting information; their use in the commercial sector for knowledge extraction and discovery has been one of the main driving forces in their development (Adriaans & Zantinge, 1996; Benoit, 2002). DM has been developed and applied in numerous areas. This review describes its use in analyzing health and medical information.
    Theme
    Data Mining
  5. Chen, H.; Chau, M.: Web mining : machine learning for Web applications (2003) 0.02
    0.024270717 = product of:
      0.07281215 = sum of:
        0.07281215 = product of:
          0.1456243 = sum of:
            0.1456243 = weight(_text_:mining in 4242) [ClassicSimilarity], result of:
              0.1456243 = score(doc=4242,freq=4.0), product of:
                0.2752929 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.04878962 = queryNorm
                0.5289795 = fieldWeight in 4242, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4242)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Theme
    Data Mining
  6. Yang, K.: Information retrieval on the Web (2004) 0.02
    0.021908358 = product of:
      0.06572507 = sum of:
        0.06572507 = weight(_text_:usage in 4278) [ClassicSimilarity], result of:
          0.06572507 = score(doc=4278,freq=2.0), product of:
            0.26936847 = queryWeight, product of:
              5.52102 = idf(docFreq=480, maxDocs=44218)
              0.04878962 = queryNorm
            0.2439969 = fieldWeight in 4278, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.52102 = idf(docFreq=480, maxDocs=44218)
              0.03125 = fieldNorm(doc=4278)
      0.33333334 = coord(1/3)
    
    Abstract
    How do we find information on the Web? Although information on the Web is distributed and decentralized, the Web can be viewed as a single, virtual document collection. In that regard, the fundamental questions and approaches of traditional information retrieval (IR) research (e.g., term weighting, query expansion) are likely to be relevant in Web document retrieval. Findings from traditional IR research, however, may not always be applicable in a Web setting. The Web document collection - massive in size and diverse in content, format, purpose, and quality - challenges the validity of previous research findings that are based on relatively small and homogeneous test collections. Moreover, some traditional IR approaches, although applicable in theory, may be impossible or impractical to implement in a Web setting. For instance, the size, distribution, and dynamic nature of Web information make it extremely difficult to construct a complete and up-to-date data representation of the kind required for a model IR system. To further complicate matters, information seeking on the Web is diverse in character and unpredictable in nature. Web searchers come from all walks of life and are motivated by many kinds of information needs. The wide range of experience, knowledge, motivation, and purpose means that searchers can express diverse types of information needs in a wide variety of ways with differing criteria for satisfying those needs. Conventional evaluation measures, such as precision and recall, may no longer be appropriate for Web IR, where a representative test collection is all but impossible to construct. Finding information on the Web creates many new challenges for, and exacerbates some old problems in, IR research. At the same time, the Web is rich in new types of information not present in most IR test collections. Hyperlinks, usage statistics, document markup tags, and collections of topic hierarchies such as Yahoo! (http://www.yahoo.com) present an opportunity to leverage Web-specific document characteristics in novel ways that go beyond the term-based retrieval framework of traditional IR. Consequently, researchers in Web IR have reexamined the findings from traditional IR research.
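    Since the abstract questions whether precision and recall transfer to the Web, it helps to see how they are computed. A minimal sketch; the document IDs are invented for illustration:

    retrieved = {"d1", "d2", "d3", "d4"}   # what the system returned
    relevant = {"d2", "d4", "d7", "d9"}    # ground-truth relevance judgments

    hits = retrieved & relevant
    precision = len(hits) / len(retrieved)  # 2/4 = 0.5
    recall = len(hits) / len(relevant)      # 2/4 = 0.5
    print(precision, recall)

    The catch the abstract points to is the denominator of recall: on the Web the full set of relevant documents is unknowable, so the measure cannot be computed as defined.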
  7. Enser, P.G.B.: Visual image retrieval (2008) 0.02
    0.017627522 = product of:
      0.052882563 = sum of:
        0.052882563 = product of:
          0.10576513 = sum of:
            0.10576513 = weight(_text_:22 in 3281) [ClassicSimilarity], result of:
              0.10576513 = score(doc=3281,freq=2.0), product of:
                0.17085294 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04878962 = queryNorm
                0.61904186 = fieldWeight in 3281, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=3281)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 1.2012 13:01:26
  8. Morris, S.A.: Mapping research specialties (2008) 0.02
    0.017627522 = product of:
      0.052882563 = sum of:
        0.052882563 = product of:
          0.10576513 = sum of:
            0.10576513 = weight(_text_:22 in 3962) [ClassicSimilarity], result of:
              0.10576513 = score(doc=3962,freq=2.0), product of:
                0.17085294 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04878962 = queryNorm
                0.61904186 = fieldWeight in 3962, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=3962)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    13. 7.2008 9:30:22
  9. Fallis, D.: Social epistemology and information science (2006) 0.02
    0.017627522 = product of:
      0.052882563 = sum of:
        0.052882563 = product of:
          0.10576513 = sum of:
            0.10576513 = weight(_text_:22 in 4368) [ClassicSimilarity], result of:
              0.10576513 = score(doc=4368,freq=2.0), product of:
                0.17085294 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04878962 = queryNorm
                0.61904186 = fieldWeight in 4368, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=4368)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    13. 7.2008 19:22:28
  10. Nicolaisen, J.: Citation analysis (2007) 0.02
    0.017627522 = product of:
      0.052882563 = sum of:
        0.052882563 = product of:
          0.10576513 = sum of:
            0.10576513 = weight(_text_:22 in 6091) [ClassicSimilarity], result of:
              0.10576513 = score(doc=6091,freq=2.0), product of:
                0.17085294 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04878962 = queryNorm
                0.61904186 = fieldWeight in 6091, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=6091)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    13. 7.2008 19:53:22
  11. Kim, K.-S.: Recent work in cataloging and classification, 2000-2002 (2003) 0.01
    0.008813761 = product of:
      0.026441282 = sum of:
        0.026441282 = product of:
          0.052882563 = sum of:
            0.052882563 = weight(_text_:22 in 152) [ClassicSimilarity], result of:
              0.052882563 = score(doc=152,freq=2.0), product of:
                0.17085294 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04878962 = queryNorm
                0.30952093 = fieldWeight in 152, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=152)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    10. 9.2000 17:38:22
  12. El-Sherbini, M.A.: Cataloging and classification : review of the literature 2005-06 (2008) 0.01
    0.008813761 = product of:
      0.026441282 = sum of:
        0.026441282 = product of:
          0.052882563 = sum of:
            0.052882563 = weight(_text_:22 in 249) [ClassicSimilarity], result of:
              0.052882563 = score(doc=249,freq=2.0), product of:
                0.17085294 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04878962 = queryNorm
                0.30952093 = fieldWeight in 249, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=249)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    10. 9.2000 17:38:22
  13. Miksa, S.D.: The challenges of change : a review of cataloging and classification literature, 2003-2004 (2007) 0.01
    0.008813761 = product of:
      0.026441282 = sum of:
        0.026441282 = product of:
          0.052882563 = sum of:
            0.052882563 = weight(_text_:22 in 266) [ClassicSimilarity], result of:
              0.052882563 = score(doc=266,freq=2.0), product of:
                0.17085294 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04878962 = queryNorm
                0.30952093 = fieldWeight in 266, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=266)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    10. 9.2000 17:38:22
  14. Khoo, S.G.; Na, J.-C.: Semantic relations in information science (2006) 0.01
    0.008580994 = product of:
      0.025742982 = sum of:
        0.025742982 = product of:
          0.051485963 = sum of:
            0.051485963 = weight(_text_:mining in 1978) [ClassicSimilarity], result of:
              0.051485963 = score(doc=1978,freq=2.0), product of:
                0.2752929 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.04878962 = queryNorm
                0.18702249 = fieldWeight in 1978, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1978)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Linguists in the structuralist tradition (e.g., Lyons, 1977; Saussure, 1959) have asserted that concepts cannot be defined on their own but only in relation to other concepts. Semantic relations appear to reflect a logical structure in the fundamental nature of thought (Caplan & Herrmann, 1993). Green, Bean, and Myaeng (2002) noted that semantic relations play a critical role in how we represent knowledge psychologically, linguistically, and computationally, and that many systems of knowledge representation start with a basic distinction between entities and relations. Green (2001, p. 3) said that "relationships are involved as we combine simple entities to form more complex entities, as we compare entities, as we group entities, as one entity performs a process on another entity, and so forth. Indeed, many things that we might initially regard as basic and elemental are revealed upon further examination to involve internal structure, or in other words, internal relationships." Concepts and relations are often expressed in language and text. Language is used not just for communicating concepts and relations, but also for representing, storing, and reasoning with concepts and relations. We shall examine the nature of semantic relations from a linguistic and psychological perspective, with an emphasis on relations expressed in text. The usefulness of semantic relations in information science, especially in ontology construction, information extraction, information retrieval, question-answering, and text summarization is discussed. Research and development in information science have focused on concepts and terms, but the focus will increasingly shift to the identification, processing, and management of relations to achieve greater effectiveness and refinement in information science techniques. Previous chapters in ARIST on natural language processing (Chowdhury, 2003), text mining (Trybula, 1999), information retrieval and the philosophy of language (Blair, 2003), and query expansion (Efthimiadis, 1996) provide a background for this discussion, as semantic relations are an important part of these applications.
  15. Nielsen, M.L.: Thesaurus construction : key issues and selected readings (2004) 0.01
    0.00771204 = product of:
      0.02313612 = sum of:
        0.02313612 = product of:
          0.04627224 = sum of:
            0.04627224 = weight(_text_:22 in 5006) [ClassicSimilarity], result of:
              0.04627224 = score(doc=5006,freq=2.0), product of:
                0.17085294 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04878962 = queryNorm
                0.2708308 = fieldWeight in 5006, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5006)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    18. 5.2006 20:06:22
  16. Weiss, A.K.; Carstens, T.V.: The year's work in cataloging, 1999 (2001) 0.01
    0.00771204 = product of:
      0.02313612 = sum of:
        0.02313612 = product of:
          0.04627224 = sum of:
            0.04627224 = weight(_text_:22 in 6084) [ClassicSimilarity], result of:
              0.04627224 = score(doc=6084,freq=2.0), product of:
                0.17085294 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04878962 = queryNorm
                0.2708308 = fieldWeight in 6084, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6084)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    10. 9.2000 17:38:22
  17. Genereux, C.: Building connections : a review of the serials literature 2004 through 2005 (2007) 0.01
    0.0066103204 = product of:
      0.01983096 = sum of:
        0.01983096 = product of:
          0.03966192 = sum of:
            0.03966192 = weight(_text_:22 in 2548) [ClassicSimilarity], result of:
              0.03966192 = score(doc=2548,freq=2.0), product of:
                0.17085294 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04878962 = queryNorm
                0.23214069 = fieldWeight in 2548, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2548)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    10. 9.2000 17:38:22