Search (59 results, page 1 of 3)

  • theme_ss:"Literaturübersicht"
  1. White, H.D.; McCain, K.W.: Visualization of literatures (1997) 0.04
    0.043642364 = product of:
      0.08728473 = sum of:
        0.00823978 = product of:
          0.03295912 = sum of:
            0.03295912 = weight(_text_:based in 2291) [ClassicSimilarity], result of:
              0.03295912 = score(doc=2291,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.23302436 = fieldWeight in 2291, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2291)
          0.25 = coord(1/4)
        0.079044946 = weight(_text_:term in 2291) [ClassicSimilarity], result of:
          0.079044946 = score(doc=2291,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.36086982 = fieldWeight in 2291, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2291)
      0.5 = coord(2/4)
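    The indented breakdown above (and in the entries that follow) is Lucene's ClassicSimilarity explain output: each leaf score is tf x idf x norms, and the clause scores are combined with coordination factors. A minimal sketch reproducing the first "_text_:based" leaf from its printed inputs; the function names are illustrative, not part of any Lucene API:

    ```python
    import math

    def classic_tf(freq: float) -> float:
        # ClassicSimilarity term frequency: square root of the raw frequency
        return math.sqrt(freq)

    def classic_idf(doc_freq: int, max_docs: int) -> float:
        # ClassicSimilarity inverse document frequency
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    # Inputs copied from the "_text_:based in 2291" leaf above
    query_norm = 0.04694356
    field_norm = 0.0546875
    idf = classic_idf(doc_freq=5906, max_docs=44218)  # ~3.0129938
    tf = classic_tf(2.0)                              # ~1.4142135

    query_weight = idf * query_norm                   # ~0.14144066
    field_weight = tf * idf * field_norm              # ~0.23302436
    print(query_weight * field_weight)                # ~0.03295912
    ```

    The coord factors then scale the combined scores: 0.03295912 x coord(1/4) = 0.00823978, and (0.00823978 + 0.079044946) x coord(2/4) = 0.043642364, the document score shown at the head of the entry.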
    
    Abstract
    State of the art review of recent models of literatures that offer visual clues to relationships among writings, often based on term occurrences and co-occurrences. Considers the advantages of two-dimensional and three-dimensional displays of relationships over other models: bibliographic models; editorial models; bibliometric models; user models; and synthetic models. Discusses online and offline visualizations and the problems of visualizing changing literatures in a static medium, such as hard copy print. Argues that insufficient attention has been paid to user-friendly visual design, with the related questions of new capabilities and scaling up to larger collections. Concludes with the hope that, in future, the same visualization interface used for bibliographic domain analysis will be used for document retrieval.
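    Most of the literature-mapping models reviewed in this entry start from term occurrence and co-occurrence counts. A minimal sketch of that first step, under the assumption of a toy tokenized document set (all data illustrative):

    ```python
    from collections import Counter
    from itertools import combinations

    # Toy "literature": one set of index terms per document
    docs = [
        {"citation", "analysis", "mapping"},
        {"citation", "visualization"},
        {"mapping", "visualization", "analysis"},
    ]

    cooc = Counter()
    for terms in docs:
        for a, b in combinations(sorted(terms), 2):
            cooc[(a, b)] += 1

    # Frequently co-occurring pairs become the strongest links in a map
    for pair, n in cooc.most_common(3):
        print(pair, n)
    ```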
  2. Yang, K.: Information retrieval on the Web (2004) 0.04
    0.035268355 = product of:
      0.07053671 = sum of:
        0.0066587473 = product of:
          0.02663499 = sum of:
            0.02663499 = weight(_text_:based in 4278) [ClassicSimilarity], result of:
              0.02663499 = score(doc=4278,freq=4.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.18831211 = fieldWeight in 4278, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4278)
          0.25 = coord(1/4)
        0.06387796 = weight(_text_:term in 4278) [ClassicSimilarity], result of:
          0.06387796 = score(doc=4278,freq=4.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.29162687 = fieldWeight in 4278, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.03125 = fieldNorm(doc=4278)
      0.5 = coord(2/4)
    
    Abstract
    How do we find information on the Web? Although information on the Web is distributed and decentralized, the Web can be viewed as a single, virtual document collection. In that regard, the fundamental questions and approaches of traditional information retrieval (IR) research (e.g., term weighting, query expansion) are likely to be relevant in Web document retrieval. Findings from traditional IR research, however, may not always be applicable in a Web setting. The Web document collection - massive in size and diverse in content, format, purpose, and quality - challenges the validity of previous research findings that are based on relatively small and homogeneous test collections. Moreover, some traditional IR approaches, although applicable in theory, may be impossible or impractical to implement in a Web setting. For instance, the size, distribution, and dynamic nature of Web information make it extremely difficult to construct a complete and up-to-date data representation of the kind required for a model IR system. To further complicate matters, information seeking on the Web is diverse in character and unpredictable in nature. Web searchers come from all walks of life and are motivated by many kinds of information needs. The wide range of experience, knowledge, motivation, and purpose means that searchers can express diverse types of information needs in a wide variety of ways with differing criteria for satisfying those needs. Conventional evaluation measures, such as precision and recall, may no longer be appropriate for Web IR, where a representative test collection is all but impossible to construct. Finding information on the Web creates many new challenges for, and exacerbates some old problems in, IR research. At the same time, the Web is rich in new types of information not present in most IR test collections. Hyperlinks, usage statistics, document markup tags, and collections of topic hierarchies such as Yahoo! (http://www.yahoo.com) present an opportunity to leverage Web-specific document characteristics in novel ways that go beyond the term-based retrieval framework of traditional IR. Consequently, researchers in Web IR have reexamined the findings from traditional IR research.
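    On the evaluation point: precision and recall are set-based measures that presuppose complete relevance judgments over the collection, which is exactly what a Web-scale collection makes infeasible. A worked toy example (hypothetical document IDs and judgments):

    ```python
    # Hypothetical: 10 documents retrieved; 8 relevant documents exist
    retrieved = {f"d{i}" for i in range(10)}          # d0 .. d9
    relevant = {"d0", "d2", "d5", "d9", "d12", "d14", "d17", "d20"}

    hits = retrieved & relevant                       # 4 relevant retrieved
    precision = len(hits) / len(retrieved)            # 4/10 = 0.40
    recall = len(hits) / len(relevant)                # 4/8  = 0.50
    print(precision, recall)
    ```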
  3. Sabourin, C.F. (Bearb.): Computational lexicology and lexicography : bibliography (1994) 0.03
    0.028230337 = product of:
      0.11292135 = sum of:
        0.11292135 = weight(_text_:term in 8871) [ClassicSimilarity], result of:
          0.11292135 = score(doc=8871,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.5155283 = fieldWeight in 8871, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.078125 = fieldNorm(doc=8871)
      0.25 = coord(1/4)
    
    Abstract
    The bibliography comprises altogether 5910 references, covering: dictionary production (1380 refs.), thesauri (680), term banks (680), analysis dictionaries (1230), transfer dictionaries (140), generation dictionaries (60), lexical databases / machine-readable dictionaries (550), lexical semantics (780), lexical grammar (119), etc.
  4. Efthimiadis, E.N.: Query expansion (1996) 0.02
    0.02258427 = product of:
      0.09033708 = sum of:
        0.09033708 = weight(_text_:term in 4847) [ClassicSimilarity], result of:
          0.09033708 = score(doc=4847,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.41242266 = fieldWeight in 4847, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0625 = fieldNorm(doc=4847)
      0.25 = coord(1/4)
    
    Abstract
    State of the art review of query expansion (or term expansion), the process of supplementing the original query with additional terms in order to improve retrieval performance. Research on the subject is presented in a highly structured way, organized according to three types of query expansion: manual, automatic, and interactive.
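    As one concrete illustration of the automatic variety, a minimal pseudo-relevance-feedback sketch: assume the top-ranked documents for the original query are relevant and add their most frequent new terms (toy data, not drawn from the review itself):

    ```python
    from collections import Counter

    def expand_query(query, top_docs, k=2):
        """Add the k most frequent non-query terms from top-ranked docs."""
        counts = Counter(t for doc in top_docs for t in doc if t not in query)
        return query + [t for t, _ in counts.most_common(k)]

    top_docs = [
        ["ship", "ocean", "voyage", "ocean"],
        ["ship", "sea", "voyage"],
    ]
    print(expand_query(["ship"], top_docs))  # ['ship', 'ocean', 'voyage']
    ```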
  5. Liu, X.; Croft, W.B.: Statistical language modeling for information retrieval (2004) 0.02
    0.022482082 = product of:
      0.08992833 = sum of:
        0.08992833 = weight(_text_:frequency in 4277) [ClassicSimilarity], result of:
          0.08992833 = score(doc=4277,freq=2.0), product of:
            0.27643865 = queryWeight, product of:
              5.888745 = idf(docFreq=332, maxDocs=44218)
              0.04694356 = queryNorm
            0.32531026 = fieldWeight in 4277, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.888745 = idf(docFreq=332, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4277)
      0.25 = coord(1/4)
    
    Abstract
    This chapter reviews research and applications in statistical language modeling for information retrieval (IR), which has emerged within the past several years as a new probabilistic framework for describing information retrieval processes. Generally speaking, statistical language modeling, or more simply language modeling (LM), involves estimating a probability distribution that captures statistical regularities of natural language use. Applied to information retrieval, language modeling refers to the problem of estimating the likelihood that a query and a document could have been generated by the same language model, given the language model of the document either with or without a language model of the query. The roots of statistical language modeling date to the beginning of the twentieth century, when Markov tried to model letter sequences in works of Russian literature (Manning & Schütze, 1999). Zipf (1929, 1932, 1949, 1965) studied the statistical properties of text and discovered that the frequency of words decays as a power function of each word's rank. However, it was Shannon's (1951) work that inspired later research in this area. In 1951, eager to explore the applications of his newly founded information theory to human language, Shannon used a prediction game involving n-grams to investigate the information content of English text. He evaluated n-gram models' performance by comparing their cross-entropy on texts with the true entropy estimated using predictions made by human subjects. For many years, statistical language models have been used primarily for automatic speech recognition. Since 1980, when the first significant language model was proposed (Rosenfeld, 2000), statistical language modeling has become a fundamental component of speech recognition, machine translation, and spelling correction.
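    In its simplest unigram form, the query-likelihood idea described here scores a document by the probability its language model assigns to the query. A minimal sketch with Jelinek-Mercer smoothing against a background collection model (illustrative data; the chapter covers far more refined estimators):

    ```python
    import math
    from collections import Counter

    def query_log_likelihood(query, doc, collection, lam=0.5):
        """log P(query | doc), unigram LM with Jelinek-Mercer smoothing."""
        doc_counts, coll_counts = Counter(doc), Counter(collection)
        dlen, clen = len(doc), len(collection)
        score = 0.0
        for term in query:
            p_doc = doc_counts[term] / dlen           # document model
            p_coll = coll_counts[term] / clen         # background model
            score += math.log(lam * p_doc + (1 - lam) * p_coll)
        return score

    collection = ["the", "cat", "sat", "the", "dog", "ran", "the", "cat"]
    print(query_log_likelihood(["cat", "sat"], ["the", "cat", "sat"], collection))
    ```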
  6. Dewey, S.H.: Foucault's toolbox : use of Foucault's writings in LIS journal literature, 1990-2016 (2020) 0.02
    0.022482082 = product of:
      0.08992833 = sum of:
        0.08992833 = weight(_text_:frequency in 5841) [ClassicSimilarity], result of:
          0.08992833 = score(doc=5841,freq=2.0), product of:
            0.27643865 = queryWeight, product of:
              5.888745 = idf(docFreq=332, maxDocs=44218)
              0.04694356 = queryNorm
            0.32531026 = fieldWeight in 5841, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.888745 = idf(docFreq=332, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5841)
      0.25 = coord(1/4)
    
    Abstract
    Purpose: To provide a close, detailed analysis of the frequency, nature, and depth of visible use of Michel Foucault's works by library and information science/studies (LIS) scholars.
    Design/methodology/approach: The study conducted extensive full-text searches in a large number of electronically available LIS journal databases to find citations of Foucault's works, then examined each cited article to evaluate the nature and depth of use.
    Findings: Most uses of Foucault are brief or in passing. In-depth explorations of Foucault's works are comparatively rare and relatively little-used by other LIS scholars. Yet the relatively brief uses of Foucault encompass a wide array of different topics spread across a wide spectrum of LIS journal literature.
    Research limitations/implications: The study was limited to articles from particular relatively prominent LIS journals. Results might vary if different journals or non-journal literature were studied. More sophisticated bibliometric techniques might reveal different relative performance among journals and might better test, confirm, or reject various patterns and relationships found here. Other research approaches, such as discourse analysis, social network analysis, or scholar interviews, might reveal patterns of use and influence not visible in this literature sample.
    Originality/value: This intensive study of both quality and quantity of citations may challenge some existing assumptions regarding citation analysis, as well as illuminating Foucault scholarship. It also indicates possible problems for future application of artificial intelligence (AI) approaches to similar depth-of-use studies.
  7. Dumais, S.T.: Latent semantic analysis (2003) 0.02
    0.019435233 = product of:
      0.038870465 = sum of:
        0.0049940604 = product of:
          0.019976242 = sum of:
            0.019976242 = weight(_text_:based in 2462) [ClassicSimilarity], result of:
              0.019976242 = score(doc=2462,freq=4.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.14123408 = fieldWeight in 2462, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=2462)
          0.25 = coord(1/4)
        0.033876404 = weight(_text_:term in 2462) [ClassicSimilarity], result of:
          0.033876404 = score(doc=2462,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.1546585 = fieldWeight in 2462, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2462)
      0.5 = coord(2/4)
    
    Abstract
    With the advent of large-scale collections of full text, statistical approaches are being used more and more to analyze the relationships among terms and documents. LSA takes this approach. LSA induces knowledge about the meanings of documents and words by analyzing large collections of texts. The approach simultaneously models the relationships among documents based on their constituent words, and the relationships between words based on their occurrence in documents. By using fewer dimensions for representation than there are unique words, LSA induces similarities among terms that are useful in solving the information retrieval problems described earlier. LSA is a fully automatic statistical approach to extracting relations among words by means of their contexts of use in documents, passages, or sentences. It makes no use of natural language processing techniques for analyzing morphological, syntactic, or semantic relations. Nor does it use humanly constructed resources like dictionaries, thesauri, lexical reference systems (e.g., WordNet), semantic networks, or other knowledge representations. Its only input is large amounts of texts. LSA is an unsupervised learning technique. It starts with a large collection of texts, builds a term-document matrix, and tries to uncover some similarity structures that are useful for information retrieval and related text-analysis problems. Several recent ARIST chapters have focused on text mining and discovery (Benoit, 2002; Solomon, 2002; Trybula, 2000). These chapters provide complementary coverage of the field of text analysis.
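    A minimal sketch of the pipeline the abstract describes: build a term-document matrix, truncate its SVD to fewer dimensions than there are unique words, and compare documents in the reduced space (toy matrix; NumPy assumed available):

    ```python
    import numpy as np

    # Toy term-document matrix: rows = terms, columns = documents
    A = np.array([
        [1, 1, 0, 0],   # "ship"
        [0, 1, 1, 0],   # "boat"
        [1, 0, 0, 1],   # "ocean"
        [0, 0, 1, 1],   # "voyage"
    ], dtype=float)

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = 2                                    # fewer dimensions than terms
    doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # documents in the latent space

    # Cosine similarity of documents 0 and 1 in the reduced space
    a, b = doc_vecs[0], doc_vecs[1]
    print(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ```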
  8. Haas, S.W.: Natural language processing : toward large-scale, robust systems (1996) 0.02
    0.017428853 = product of:
      0.034857705 = sum of:
        0.009416891 = product of:
          0.037667565 = sum of:
            0.037667565 = weight(_text_:based in 7415) [ClassicSimilarity], result of:
              0.037667565 = score(doc=7415,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.26631355 = fieldWeight in 7415, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7415)
          0.25 = coord(1/4)
        0.025440816 = product of:
          0.05088163 = sum of:
            0.05088163 = weight(_text_:22 in 7415) [ClassicSimilarity], result of:
              0.05088163 = score(doc=7415,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.30952093 = fieldWeight in 7415, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7415)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    State of the art review of natural language processing, updating an earlier review published in ARIST 22 (1987). Discusses important developments that have allowed for significant advances in the field of natural language processing: materials and resources; knowledge-based systems and statistical approaches; and a strong emphasis on evaluation. Reviews some natural language processing applications and common problems still awaiting solution. Considers closely related applications such as language generation and the generation phase of machine translation, which face the same problems as natural language processing. Covers natural language methodologies for information retrieval only briefly.
  9. Yu, N.: Readings & Web resources for faceted classification 0.02
    0.016938202 = product of:
      0.06775281 = sum of:
        0.06775281 = weight(_text_:term in 4394) [ClassicSimilarity], result of:
          0.06775281 = score(doc=4394,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.309317 = fieldWeight in 4394, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.046875 = fieldNorm(doc=4394)
      0.25 = coord(1/4)
    
    Abstract
    The term "facet" has been used in various places, while in most cases it is just a buzz word to replace what is indeed "aspect" or "category". The references below either define and explain the original concept of facet or provide guidelines for building 'real' faceted search/browse. I was interested in faceted classification because it seems to be a natural and efficient way for organizing and browsing Web collections. However, to automatically generate facets and their isolates is extremely difficult since it involves concept extraction and concept grouping, both of which are difficult problems by themselves. And it is almost impossible to achieve mutually exclusive and jointly exhaustive 'true' facets without human judgment. Nowadays, faceted search/browse widely exists, implicitly or explicitly, on a majority of retail websites due to the multi-aspects nature of the data. However, it is still rarely seen on any digital library sites. (I could be wrong since I haven't kept myself updated with this field for a while.)
  10. Galloway, P.: Preservation of digital objects (2003) 0.01
    0.014115169 = product of:
      0.056460675 = sum of:
        0.056460675 = weight(_text_:term in 4275) [ClassicSimilarity], result of:
          0.056460675 = score(doc=4275,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.25776416 = fieldWeight in 4275, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4275)
      0.25 = coord(1/4)
    
    Abstract
    The preservation of digital objects (defined here as objects in digital form that require a computer to support their existence and display) is obviously an important practical issue for the information professions, with its importance growing daily as more information objects are produced in, or converted to, digital form. Yakel's (2001) review of the field provided a much-needed introduction. At the same time, the complexity of new digital objects continues to increase, challenging existing preservation efforts (Lee, Slattery, Lu, Tang, & McCrary, 2002). The field of information science itself is beginning to pay some reflexive attention to the creation of fragile and unpreservable digital objects. But these concerns often focus on the practical problems of short-term repurposing of digital objects rather than actual preservation, by which I mean the activity of carrying digital objects from one software generation to another, undertaken for purposes beyond the original reasons for creating the objects. For preservation in this sense to be possible, information science as a discipline needs to be active in the formulation of, and advocacy for, national information policies. Such policies will need to challenge the predominant cultural expectation of planned obsolescence for information resources, and cultural artifacts in general.
  11. Hjoerland, B.: Semantics and knowledge organization (2007) 0.01
    0.014115169 = product of:
      0.056460675 = sum of:
        0.056460675 = weight(_text_:term in 1980) [ClassicSimilarity], result of:
          0.056460675 = score(doc=1980,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.25776416 = fieldWeight in 1980, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1980)
      0.25 = coord(1/4)
    
    Abstract
    The aim of this chapter is to demonstrate that semantic issues underlie all research questions within Library and Information Science (LIS, or, as hereafter, IS) and, in particular, the subfield known as Knowledge Organization (KO). Further, it seeks to show that semantics is a field influenced by conflicting views and discusses why it is important to argue for the most fruitful of these. Moreover, the chapter demonstrates that IS has not yet addressed semantic problems in a systematic fashion and examines why the field is very fragmented and without a proper theoretical basis. The focus here is on broad interdisciplinary issues and the long-term perspective. The theoretical problems involving semantics and concepts are very complicated. Therefore, this chapter starts by considering tools developed in KO for information retrieval (IR) as basically semantic tools. In this way, it establishes a specific IS focus on the relation between KO and semantics. It is well known that thesauri consist of a selection of concepts supplemented with information about their semantic relations (such as generic relations or "associative relations"). Some words in thesauri are "preferred terms" (descriptors), whereas others are "lead-in terms." The descriptors represent concepts. The difference between "a word" and "a concept" is that different words may have the same meaning and similar words may have different meanings, whereas one concept expresses one meaning.
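    The thesaurus structure described at the end of the abstract (descriptors, lead-in terms, semantic relations) maps naturally onto a small data structure; a hypothetical fragment, not drawn from the chapter:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Concept:
        preferred: str                                # descriptor for the concept
        lead_in: list = field(default_factory=list)   # non-preferred entry words
        broader: list = field(default_factory=list)   # generic (BT) relations
        related: list = field(default_factory=list)   # "associative" (RT) relations

    boat = Concept(
        preferred="boat",
        lead_in=["dinghy", "skiff"],                  # different words, same meaning
        broader=["watercraft"],
        related=["harbour"],
    )

    # Lead-in words resolve to the descriptor used for indexing and retrieval
    entry_vocabulary = {w: boat.preferred for w in boat.lead_in}
    print(entry_vocabulary["skiff"])                  # -> "boat"
    ```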
  12. Terrill, L.J.: ¬The state of cataloging research : an analysis of peer-reviewed journal literature, 2010-2014 (2016) 0.01
    0.013833362 = product of:
      0.055333447 = sum of:
        0.055333447 = product of:
          0.11066689 = sum of:
            0.11066689 = weight(_text_:assessment in 5137) [ClassicSimilarity], result of:
              0.11066689 = score(doc=5137,freq=2.0), product of:
                0.25917634 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.04694356 = queryNorm
                0.4269946 = fieldWeight in 5137, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5137)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The importance of cataloging research was highlighted by a resolution declaring 2010 as "The Year of Cataloging Research." This study of the peer-reviewed journal literature from 2010 to 2014 examined the state of cataloging literature since this proclamation. The goals were to determine the percentage of cataloging literature that can be classified as research, what research methods were used, and whether the articles contributed to the library assessment conversation. Nearly a quarter of the cataloging literature qualifies as research; however, a majority of researchers fail to make explicit connections between their work and the missions of their libraries.
  13. Enser, P.G.B.: Visual image retrieval (2008) 0.01
    0.012720408 = product of:
      0.05088163 = sum of:
        0.05088163 = product of:
          0.10176326 = sum of:
            0.10176326 = weight(_text_:22 in 3281) [ClassicSimilarity], result of:
              0.10176326 = score(doc=3281,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.61904186 = fieldWeight in 3281, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=3281)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 1.2012 13:01:26
  14. Morris, S.A.: Mapping research specialties (2008) 0.01
    0.012720408 = product of:
      0.05088163 = sum of:
        0.05088163 = product of:
          0.10176326 = sum of:
            0.10176326 = weight(_text_:22 in 3962) [ClassicSimilarity], result of:
              0.10176326 = score(doc=3962,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.61904186 = fieldWeight in 3962, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=3962)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    13. 7.2008 9:30:22
  15. Fallis, D.: Social epistemology and information science (2006) 0.01
    0.012720408 = product of:
      0.05088163 = sum of:
        0.05088163 = product of:
          0.10176326 = sum of:
            0.10176326 = weight(_text_:22 in 4368) [ClassicSimilarity], result of:
              0.10176326 = score(doc=4368,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.61904186 = fieldWeight in 4368, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=4368)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    13. 7.2008 19:22:28
  16. Nicolaisen, J.: Citation analysis (2007) 0.01
    0.012720408 = product of:
      0.05088163 = sum of:
        0.05088163 = product of:
          0.10176326 = sum of:
            0.10176326 = weight(_text_:22 in 6091) [ClassicSimilarity], result of:
              0.10176326 = score(doc=6091,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.61904186 = fieldWeight in 6091, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=6091)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    13. 7.2008 19:53:22
  17. Metz, A.: Community service : a bibliography (1996) 0.01
    0.012720408 = product of:
      0.05088163 = sum of:
        0.05088163 = product of:
          0.10176326 = sum of:
            0.10176326 = weight(_text_:22 in 5341) [ClassicSimilarity], result of:
              0.10176326 = score(doc=5341,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.61904186 = fieldWeight in 5341, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=5341)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    17.10.1996 14:22:33
  18. Belkin, N.J.; Croft, W.B.: Retrieval techniques (1987) 0.01
    0.012720408 = product of:
      0.05088163 = sum of:
        0.05088163 = product of:
          0.10176326 = sum of:
            0.10176326 = weight(_text_:22 in 334) [ClassicSimilarity], result of:
              0.10176326 = score(doc=334,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.61904186 = fieldWeight in 334, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=334)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Annual review of information science and technology. 22(1987), S.109-145
  19. Smith, L.C.: Artificial intelligence and information retrieval (1987) 0.01
    0.012720408 = product of:
      0.05088163 = sum of:
        0.05088163 = product of:
          0.10176326 = sum of:
            0.10176326 = weight(_text_:22 in 335) [ClassicSimilarity], result of:
              0.10176326 = score(doc=335,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.61904186 = fieldWeight in 335, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=335)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Annual review of information science and technology. 22(1987), S.41-77
  20. Warner, A.J.: Natural language processing (1987) 0.01
    0.012720408 = product of:
      0.05088163 = sum of:
        0.05088163 = product of:
          0.10176326 = sum of:
            0.10176326 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.10176326 = score(doc=337,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108

Languages

  • e 58
  • m 1

Types

  • a 54
  • b 12
  • el 2
  • m 1
  • r 1