Search (73 results, page 1 of 4)

  • × theme_ss:"Literaturübersicht"
  1. Warner, A.J.: Natural language processing (1987) 0.06
    0.059291773 = product of:
      0.23716709 = sum of:
        0.23716709 = sum of:
          0.13565561 = weight(_text_:processing in 337) [ClassicSimilarity], result of:
            0.13565561 = score(doc=337,freq=2.0), product of:
              0.18956426 = queryWeight, product of:
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.046827413 = queryNorm
              0.7156181 = fieldWeight in 337, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.125 = fieldNorm(doc=337)
          0.101511486 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
            0.101511486 = score(doc=337,freq=2.0), product of:
              0.16398162 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046827413 = queryNorm
              0.61904186 = fieldWeight in 337, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.125 = fieldNorm(doc=337)
      0.25 = coord(1/4)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
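The indented tree above is Lucene's ClassicSimilarity "explain" output. Each per-term weight is the product of a query-side factor (queryWeight = idf × queryNorm) and a document-side factor (fieldWeight = tf × idf × fieldNorm), with tf = √freq and idf = 1 + ln(maxDocs / (docFreq + 1)). A minimal sketch reproducing the `processing` clause of result 1 from the four reported inputs (the function and argument names are ours, not Lucene's):

```python
import math

def classic_similarity_weight(freq, doc_freq, max_docs, query_norm, field_norm):
    """Per-term weight as reported in a ClassicSimilarity explain tree."""
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 4.048147 for docFreq=2097
    tf = math.sqrt(freq)                              # 1.4142135 for freq=2
    query_weight = idf * query_norm                   # 0.18956426
    field_weight = tf * idf * field_norm              # 0.7156181
    return query_weight * field_weight

# Values taken verbatim from the explain tree for doc 337 above:
w = classic_similarity_weight(freq=2.0, doc_freq=2097, max_docs=44218,
                              query_norm=0.046827413, field_norm=0.125)
# w ≈ 0.13565561, the weight(_text_:processing in 337) line above
```

The same formula accounts for every leaf in the trees below; only freq, docFreq, and fieldNorm change from clause to clause.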
  2. Rasmussen, E.M.: Parallel information processing (1992) 0.06
    0.058608912 = product of:
      0.117217824 = sum of:
        0.04138403 = weight(_text_:data in 345) [ClassicSimilarity], result of:
          0.04138403 = score(doc=345,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2794884 = fieldWeight in 345, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=345)
        0.0758338 = product of:
          0.1516676 = sum of:
            0.1516676 = weight(_text_:processing in 345) [ClassicSimilarity], result of:
              0.1516676 = score(doc=345,freq=10.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.80008537 = fieldWeight in 345, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0625 = fieldNorm(doc=345)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Focuses on the application of parallel processing for the processing of text, primarily documents and document surrogates. Research on parallel processing of text has developed in 2 areas: a hardware approach involving the development of special purpose machines for text processing; and a software approach in which data structures and algorithms are developed for text searching using general purpose parallel processors
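The `coord(…)` lines in the tree above are ClassicSimilarity's coordination factor: a clause sum is scaled by the fraction of query clauses the document matches. Result 2 matches two of four top-level clauses (coord(2/4) = 0.5), and its `processing` weight sits inside a one-of-two disjunction (coord(1/2)). A sketch assembling the displayed total 0.058608912 from the reported clause weights (variable names are ours):

```python
def coord(overlap, max_overlap):
    # ClassicSimilarity coordination factor: rewards matching more query clauses
    return overlap / max_overlap

# Leaf weights copied from the explain tree for doc 345 above:
data_clause = 0.04138403                       # weight(_text_:data), freq=2
processing_clause = 0.1516676 * coord(1, 2)    # inner disjunction: 0.0758338
score = (data_clause + processing_clause) * coord(2, 4)
# score ≈ 0.058608912, the value shown next to the title of result 2
```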
  3. Haas, S.W.: Natural language processing : toward large-scale, robust systems (1996) 0.05
    0.050605834 = product of:
      0.20242333 = sum of:
        0.20242333 = sum of:
          0.1516676 = weight(_text_:processing in 7415) [ClassicSimilarity], result of:
            0.1516676 = score(doc=7415,freq=10.0), product of:
              0.18956426 = queryWeight, product of:
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.046827413 = queryNorm
              0.80008537 = fieldWeight in 7415, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.0625 = fieldNorm(doc=7415)
          0.050755743 = weight(_text_:22 in 7415) [ClassicSimilarity], result of:
            0.050755743 = score(doc=7415,freq=2.0), product of:
              0.16398162 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046827413 = queryNorm
              0.30952093 = fieldWeight in 7415, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=7415)
      0.25 = coord(1/4)
    
    Abstract
    State of the art review of natural language processing updating an earlier review published in ARIST 22(1987). Discusses important developments that have allowed for significant advances in the field of natural language processing: materials and resources; knowledge based systems and statistical approaches; and a strong emphasis on evaluation. Reviews some natural language processing applications and common problems still awaiting solution. Considers closely related applications such as language generation and the generation phase of machine translation which face the same problems as natural language processing. Covers natural language methodologies for information retrieval only briefly
  4. Braman, S.: Policy for the net and the Internet (1995) 0.03
    0.03466491 = product of:
      0.06932982 = sum of:
        0.043894395 = weight(_text_:data in 4544) [ClassicSimilarity], result of:
          0.043894395 = score(doc=4544,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.29644224 = fieldWeight in 4544, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=4544)
        0.025435425 = product of:
          0.05087085 = sum of:
            0.05087085 = weight(_text_:processing in 4544) [ClassicSimilarity], result of:
              0.05087085 = score(doc=4544,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.26835677 = fieldWeight in 4544, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4544)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    State of the art review of the Net (the global telecommunications network as a whole) and the Internet with particular reference to the development of a coherent policy for those using these telecommunications facilities. Policy issues discussed include: standards; intellectual property; encryption; rules for transborder data flow; and data privacy. Considers their implications for individuals as well as government and commercial institutions. The review is limited to English language publications and explores specific issues that affect the structure of government, the economy and society, as well as those involved in the design of the net, and looks at comparative and international issues. Concludes that the development of policies for the net is made difficult by the many different bodies of law that apply, by the fact that the relevant technologies are new and rapidly changing, and because the net is global. Specific characteristics of the net require new thinking on a constitutional level, since information creation, processing, flows and use are constitutive forces in society
  5. Genereux, C.: Building connections : a review of the serials literature 2004 through 2005 (2007) 0.03
    0.0314639 = product of:
      0.0629278 = sum of:
        0.043894395 = weight(_text_:data in 2548) [ClassicSimilarity], result of:
          0.043894395 = score(doc=2548,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.29644224 = fieldWeight in 2548, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2548)
        0.019033402 = product of:
          0.038066804 = sum of:
            0.038066804 = weight(_text_:22 in 2548) [ClassicSimilarity], result of:
              0.038066804 = score(doc=2548,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.23214069 = fieldWeight in 2548, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2548)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This review of 2004 and 2005 serials literature covers the themes of cost, management, and access. Interwoven through the serials literature of these two years are the importance of collaboration, communication, and linkages between scholars, publishers, subscription agents and other intermediaries, and librarians. The emphasis in the literature is on electronic serials and their impact on publishing, libraries, and vendors. In response to the crisis of escalating journal prices and libraries' dissatisfaction with the Big Deal licensing agreements, Open Access journals and publishing models were promoted. Libraries subscribed to or licensed increasing numbers of electronic serials. As a result, libraries sought ways to better manage licensing and subscription data (not handled by traditional integrated library systems) by implementing electronic resources management systems. In order to provide users with better, faster, and more current information on and access to electronic serials, libraries implemented tools and services to provide A-Z title lists, title by title coverage data, MARC records, and OpenURL link resolvers.
    Date
    10. 9.2000 17:38:22
  6. Candela, L.; Castelli, D.; Manghi, P.; Tani, A.: Data journals : a survey (2015) 0.03
    0.027977297 = product of:
      0.11190919 = sum of:
        0.11190919 = weight(_text_:data in 2156) [ClassicSimilarity], result of:
          0.11190919 = score(doc=2156,freq=26.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.75578237 = fieldWeight in 2156, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2156)
      0.25 = coord(1/4)
    
    Abstract
    Data occupy a key role in our information society. However, although the amount of published data continues to grow and terms such as data deluge and big data today characterize numerous (research) initiatives, much work is still needed in the direction of publishing data in order to make them effectively discoverable, available, and reusable by others. Several barriers hinder data publishing, from lack of attribution and rewards, vague citation practices, and quality issues to a rather general lack of a data-sharing culture. Lately, data journals have overcome some of these barriers. In this study of more than 100 currently existing data journals, we describe the approaches they promote for data set description, availability, citation, quality, and open access. We close by identifying ways to expand and strengthen the data journals approach as a means to promote data set access and exploitation.
  7. Trybula, W.J.: Data mining and knowledge discovery (1997) 0.03
    0.025605064 = product of:
      0.102420256 = sum of:
        0.102420256 = weight(_text_:data in 2300) [ClassicSimilarity], result of:
          0.102420256 = score(doc=2300,freq=16.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.69169855 = fieldWeight in 2300, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2300)
      0.25 = coord(1/4)
    
    Abstract
    State of the art review of the recently developed concepts of data mining (defined as the automated process of evaluating data and finding relationships) and knowledge discovery (defined as the automated process of extracting information, especially unpredicted relationships or previously unknown patterns among the data) with particular reference to numerical data. Includes: the knowledge acquisition process; data mining; evaluation methods; and knowledge discovery. Concludes that existing work in the field is confusing because the terminology is inconsistent and poorly defined. Although methods are available for analyzing and cleaning databases, better coordinated efforts should be directed toward providing users with improved means of structuring search mechanisms to explore the data for relationships
    Theme
    Data Mining
  8. Martin, K.E.; Mundle, K.: Positioning libraries for a new bibliographic universe (2014) 0.03
    0.025035713 = product of:
      0.050071426 = sum of:
        0.031038022 = weight(_text_:data in 2608) [ClassicSimilarity], result of:
          0.031038022 = score(doc=2608,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2096163 = fieldWeight in 2608, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2608)
        0.019033402 = product of:
          0.038066804 = sum of:
            0.038066804 = weight(_text_:22 in 2608) [ClassicSimilarity], result of:
              0.038066804 = score(doc=2608,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.23214069 = fieldWeight in 2608, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2608)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This paper surveys the English-language literature on cataloging and classification published during 2011 and 2012, covering both theory and application. A major theme of the literature centered on Resource Description and Access (RDA), as the period covered in this review includes the conclusion of the RDA test, revisions to RDA, and the implementation decision. Explorations in the theory and practical applications of the Functional Requirements for Bibliographic Records (FRBR), upon which RDA is organized, are also heavily represented. Library involvement with linked data through the creation of prototypes and vocabularies is explored further during the period. Other areas covered in the review include: classification, controlled vocabularies and name authority, evaluation and history of cataloging, special formats cataloging, cataloging and discovery services, non-AACR2/RDA metadata, cataloging workflows, and the education and careers of catalogers.
    Date
    10. 9.2000 17:38:22
  9. Benoit, G.: Data mining (2002) 0.02
    0.021947198 = product of:
      0.08778879 = sum of:
        0.08778879 = weight(_text_:data in 4296) [ClassicSimilarity], result of:
          0.08778879 = score(doc=4296,freq=16.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.5928845 = fieldWeight in 4296, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=4296)
      0.25 = coord(1/4)
    
    Abstract
    Data mining (DM) is a multistaged process of extracting previously unanticipated knowledge from large databases, and applying the results to decision making. Data mining tools detect patterns from the data and infer associations and rules from them. The extracted information may then be applied to prediction or classification models by identifying relations within the data records or between databases. Those patterns and rules can then guide decision making and forecast the effects of those decisions. However, this definition may be applied equally to "knowledge discovery in databases" (KDD). Indeed, in the recent literature of DM and KDD, a source of confusion has emerged, making it difficult to determine the exact parameters of both. KDD is sometimes viewed as the broader discipline, of which data mining is merely a component, specifically pattern extraction, evaluation, and cleansing methods (Raghavan, Deogun, & Sever, 1998, p. 397). Thurasingham (1999, p. 2) remarked that "knowledge discovery," "pattern discovery," "data dredging," "information extraction," and "knowledge mining" are all employed as synonyms for DM. Trybula, in his ARIST chapter on text mining, observed that the "existing work [in KDD] is confusing because the terminology is inconsistent and poorly defined."
    Theme
    Data Mining
  10. Bath, P.A.: Data mining in health and medical information (2003) 0.02
    0.020692015 = product of:
      0.08276806 = sum of:
        0.08276806 = weight(_text_:data in 4263) [ClassicSimilarity], result of:
          0.08276806 = score(doc=4263,freq=8.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.5589768 = fieldWeight in 4263, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=4263)
      0.25 = coord(1/4)
    
    Abstract
    Data mining (DM) is part of a process by which information can be extracted from data or databases and used to inform decision making in a variety of contexts (Benoit, 2002; Michalski, Bratka & Kubat, 1997). DM includes a range of tools and methods for extracting information; their use in the commercial sector for knowledge extraction and discovery has been one of the main driving forces in their development (Adriaans & Zantinge, 1996; Benoit, 2002). DM has been developed and applied in numerous areas. This review describes its use in analyzing health and medical information.
    Theme
    Data Mining
  11. Mostafa, J.: Digital image representation and access (1994) 0.02
    0.02024258 = product of:
      0.08097032 = sum of:
        0.08097032 = weight(_text_:data in 1102) [ClassicSimilarity], result of:
          0.08097032 = score(doc=1102,freq=10.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.5468357 = fieldWeight in 1102, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1102)
      0.25 = coord(1/4)
    
    Abstract
    State of the art review of techniques used to generate, store, and retrieve digital images. Explains basic terms and concepts related to image representation and describes the differences between bilevel, greyscale, and colour images. Introduces additional image related data, specifically colour standards, correction values, resolution parameters and lookup tables. Illustrates the use of data compression techniques and various image data formats that have been used. Identifies 4 branches of imaging research related to data indexing and modelling: verbal indexing; visual surrogates; image indexing; and data structures. Concludes with a discussion of the state of the art in networking technology with consideration of image distribution, local system requirements and data integrity
  12. Thelwall, M.; Vaughan, L.; Björneborn, L.: Webometrics (2004) 0.02
    0.01828933 = product of:
      0.07315732 = sum of:
        0.07315732 = weight(_text_:data in 4279) [ClassicSimilarity], result of:
          0.07315732 = score(doc=4279,freq=16.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.49407038 = fieldWeight in 4279, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4279)
      0.25 = coord(1/4)
    
    Abstract
    Webometrics, the quantitative study of Web-related phenomena, emerged from the realization that methods originally designed for bibliometric analysis of scientific journal article citation patterns could be applied to the Web, with commercial search engines providing the raw data. Almind and Ingwersen (1997) defined the field and gave it its name. Other pioneers included Rodriguez Gairin (1997) and Aguillo (1998). Larson (1996) undertook exploratory link structure analysis, as did Rousseau (1997). Webometrics encompasses research from fields beyond information science such as communication studies, statistical physics, and computer science. In this review we concentrate on link analysis, but also cover other aspects of webometrics, including Web log file analysis. One theme that runs through this chapter is the messiness of Web data and the need for data cleansing heuristics. The uncontrolled Web creates numerous problems in the interpretation of results, for instance, from the automatic creation or replication of links. The loose connection between top-level domain specifications (e.g., com, edu, and org) and their actual content is also a frustrating problem. For example, many .com sites contain noncommercial content, although com is ostensibly the main commercial top-level domain. Indeed, a skeptical researcher could claim that obstacles of this kind are so great that all Web analyses lack value. As will be seen, one response to this view, a view shared by critics of evaluative bibliometrics, is to demonstrate that Web data correlate significantly with some non-Web data in order to prove that the Web data are not wholly random. A practical response has been to develop increasingly sophisticated data cleansing techniques and multiple data analysis methods.
  13. Blake, C.: Text mining (2011) 0.02
    0.018105512 = product of:
      0.07242205 = sum of:
        0.07242205 = weight(_text_:data in 1599) [ClassicSimilarity], result of:
          0.07242205 = score(doc=1599,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.48910472 = fieldWeight in 1599, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.109375 = fieldNorm(doc=1599)
      0.25 = coord(1/4)
    
    Theme
    Data Mining
  14. Willett, P.: Recent trends in hierarchic document clustering : a critical review (1988) 0.02
    0.016956951 = product of:
      0.067827806 = sum of:
        0.067827806 = product of:
          0.13565561 = sum of:
            0.13565561 = weight(_text_:processing in 2604) [ClassicSimilarity], result of:
              0.13565561 = score(doc=2604,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.7156181 = fieldWeight in 2604, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.125 = fieldNorm(doc=2604)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 24(1988) no.5, S.577-597
  15. Zunde, P.: Selected bibliography on information theory applications to information science and related subject areas (1984) 0.02
    0.016956951 = product of:
      0.067827806 = sum of:
        0.067827806 = product of:
          0.13565561 = sum of:
            0.13565561 = weight(_text_:processing in 4115) [ClassicSimilarity], result of:
              0.13565561 = score(doc=4115,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.7156181 = fieldWeight in 4115, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.125 = fieldNorm(doc=4115)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 20(1984), S.417-497
  16. Simmons, R.F.: Automated language processing (1966) 0.02
    0.016956951 = product of:
      0.067827806 = sum of:
        0.067827806 = product of:
          0.13565561 = sum of:
            0.13565561 = weight(_text_:processing in 220) [ClassicSimilarity], result of:
              0.13565561 = score(doc=220,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.7156181 = fieldWeight in 220, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.125 = fieldNorm(doc=220)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  17. Bobrow, D.G.; Fraser, J.B.; Quillian, M.R.: Automated language processing (1967) 0.02
    0.016956951 = product of:
      0.067827806 = sum of:
        0.067827806 = product of:
          0.13565561 = sum of:
            0.13565561 = weight(_text_:processing in 228) [ClassicSimilarity], result of:
              0.13565561 = score(doc=228,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.7156181 = fieldWeight in 228, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.125 = fieldNorm(doc=228)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  18. Salton, G.: Automated language processing (1968) 0.02
    0.016956951 = product of:
      0.067827806 = sum of:
        0.067827806 = product of:
          0.13565561 = sum of:
            0.13565561 = weight(_text_:processing in 233) [ClassicSimilarity], result of:
              0.13565561 = score(doc=233,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.7156181 = fieldWeight in 233, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.125 = fieldNorm(doc=233)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  19. Montgomery, C.A.: Automated language processing (1969) 0.02
    0.016956951 = product of:
      0.067827806 = sum of:
        0.067827806 = product of:
          0.13565561 = sum of:
            0.13565561 = weight(_text_:processing in 240) [ClassicSimilarity], result of:
              0.13565561 = score(doc=240,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.7156181 = fieldWeight in 240, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.125 = fieldNorm(doc=240)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  20. Kay, M.; Sparck Jones, K.: Automated language processing (1971) 0.02
    0.016956951 = product of:
      0.067827806 = sum of:
        0.067827806 = product of:
          0.13565561 = sum of:
            0.13565561 = weight(_text_:processing in 250) [ClassicSimilarity], result of:
              0.13565561 = score(doc=250,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.7156181 = fieldWeight in 250, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.125 = fieldNorm(doc=250)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    

Languages

  • e 71
  • d 2

Types

  • a 71
  • b 9
  • el 1
  • r 1