Search (135 results, page 7 of 7)

  • theme_ss:"Computerlinguistik"
  • type_ss:"a"
  1. Keselman, A.; Rosemblat, G.; Kilicoglu, H.; Fiszman, M.; Jin, H.; Shin, D.; Rindflesch, T.C.: Adapting semantic natural language processing technology to address information overload in influenza epidemic management (2010) 0.00
    0.003918653 = product of:
      0.007837306 = sum of:
        0.007837306 = product of:
          0.015674612 = sum of:
            0.015674612 = weight(_text_:m in 1312) [ClassicSimilarity], result of:
              0.015674612 = score(doc=1312,freq=2.0), product of:
                0.114023164 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045820985 = queryNorm
                0.13746867 = fieldWeight in 1312, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1312)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  2. Symonds, M.; Bruza, P.; Zuccon, G.; Koopman, B.; Sitbon, L.; Turner, I.: Automatic query expansion : a structural linguistic perspective (2014) 0.00
    0.003918653 = product of:
      0.007837306 = sum of:
        0.007837306 = product of:
          0.015674612 = sum of:
            0.015674612 = weight(_text_:m in 1338) [ClassicSimilarity], result of:
              0.015674612 = score(doc=1338,freq=2.0), product of:
                0.114023164 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045820985 = queryNorm
                0.13746867 = fieldWeight in 1338, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1338)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  3. Rindflesch, T.C.; Fiszman, M.: The interaction of domain knowledge and linguistic structure in natural language processing : interpreting hypernymic propositions in biomedical text (2003) 0.00
    0.003918653 = product of:
      0.007837306 = sum of:
        0.007837306 = product of:
          0.015674612 = sum of:
            0.015674612 = weight(_text_:m in 2097) [ClassicSimilarity], result of:
              0.015674612 = score(doc=2097,freq=2.0), product of:
                0.114023164 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045820985 = queryNorm
                0.13746867 = fieldWeight in 2097, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2097)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  4. Luo, Z.; Yu, Y.; Osborne, M.; Wang, T.: Structuring tweets for improving Twitter search (2015) 0.00
    0.003918653 = product of:
      0.007837306 = sum of:
        0.007837306 = product of:
          0.015674612 = sum of:
            0.015674612 = weight(_text_:m in 2335) [ClassicSimilarity], result of:
              0.015674612 = score(doc=2335,freq=2.0), product of:
                0.114023164 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045820985 = queryNorm
                0.13746867 = fieldWeight in 2335, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2335)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  5. Brychcín, T.; Konopík, M.: HPS: High precision stemmer (2015) 0.00
    0.003918653 = product of:
      0.007837306 = sum of:
        0.007837306 = product of:
          0.015674612 = sum of:
            0.015674612 = weight(_text_:m in 2686) [ClassicSimilarity], result of:
              0.015674612 = score(doc=2686,freq=2.0), product of:
                0.114023164 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045820985 = queryNorm
                0.13746867 = fieldWeight in 2686, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2686)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  6. Lhadj, L.S.; Boughanem, M.; Amrouche, K.: Enhancing information retrieval through concept-based language modeling and semantic smoothing (2016) 0.00
    0.003918653 = product of:
      0.007837306 = sum of:
        0.007837306 = product of:
          0.015674612 = sum of:
            0.015674612 = weight(_text_:m in 3221) [ClassicSimilarity], result of:
              0.015674612 = score(doc=3221,freq=2.0), product of:
                0.114023164 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045820985 = queryNorm
                0.13746867 = fieldWeight in 3221, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3221)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  7. Järvelin, A.; Keskustalo, H.; Sormunen, E.; Saastamoinen, M.; Kettunen, K.: Information retrieval from historical newspaper collections in highly inflectional languages : a query expansion approach (2016) 0.00
    0.003918653 = product of:
      0.007837306 = sum of:
        0.007837306 = product of:
          0.015674612 = sum of:
            0.015674612 = weight(_text_:m in 3223) [ClassicSimilarity], result of:
              0.015674612 = score(doc=3223,freq=2.0), product of:
                0.114023164 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045820985 = queryNorm
                0.13746867 = fieldWeight in 3223, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3223)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  8. Gill, A.J.; Hinrichs-Krapels, S.; Blanke, T.; Grant, J.; Hedges, M.; Tanner, S.: Insight workflow : systematically combining human and computational methods to explore textual data (2017) 0.00
    0.003918653 = product of:
      0.007837306 = sum of:
        0.007837306 = product of:
          0.015674612 = sum of:
            0.015674612 = weight(_text_:m in 3682) [ClassicSimilarity], result of:
              0.015674612 = score(doc=3682,freq=2.0), product of:
                0.114023164 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045820985 = queryNorm
                0.13746867 = fieldWeight in 3682, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3682)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  9. Agarwal, B.; Ramampiaro, H.; Langseth, H.; Ruocco, M.: A deep network model for paraphrase detection in short text messages (2018) 0.00
    0.003918653 = product of:
      0.007837306 = sum of:
        0.007837306 = product of:
          0.015674612 = sum of:
            0.015674612 = weight(_text_:m in 5043) [ClassicSimilarity], result of:
              0.015674612 = score(doc=5043,freq=2.0), product of:
                0.114023164 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045820985 = queryNorm
                0.13746867 = fieldWeight in 5043, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5043)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  10. Suissa, O.; Elmalech, A.; Zhitomirsky-Geffet, M.: Text analysis using deep neural networks in digital humanities and information science (2022) 0.00
    0.003918653 = product of:
      0.007837306 = sum of:
        0.007837306 = product of:
          0.015674612 = sum of:
            0.015674612 = weight(_text_:m in 491) [ClassicSimilarity], result of:
              0.015674612 = score(doc=491,freq=2.0), product of:
                0.114023164 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045820985 = queryNorm
                0.13746867 = fieldWeight in 491, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=491)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  11. Park, J.S.; O'Brien, J.C.; Cai, C.J.; Ringel Morris, M.; Liang, P.; Bernstein, M.S.: Generative agents : interactive simulacra of human behavior (2023) 0.00
    0.003918653 = product of:
      0.007837306 = sum of:
        0.007837306 = product of:
          0.015674612 = sum of:
            0.015674612 = weight(_text_:m in 972) [ClassicSimilarity], result of:
              0.015674612 = score(doc=972,freq=2.0), product of:
                0.114023164 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045820985 = queryNorm
                0.13746867 = fieldWeight in 972, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=972)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  12. Thiel, M.: Bedingt wahrscheinliche Syntaxbäume [conditionally probable syntax trees] (2006) 0.00
    0.0031349224 = product of:
      0.0062698447 = sum of:
        0.0062698447 = product of:
          0.012539689 = sum of:
            0.012539689 = weight(_text_:m in 6069) [ClassicSimilarity], result of:
              0.012539689 = score(doc=6069,freq=2.0), product of:
                0.114023164 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045820985 = queryNorm
                0.10997493 = fieldWeight in 6069, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.03125 = fieldNorm(doc=6069)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  13. Belbachir, F.; Boughanem, M.: Using language models to improve opinion detection (2018) 0.00
    0.0031349224 = product of:
      0.0062698447 = sum of:
        0.0062698447 = product of:
          0.012539689 = sum of:
            0.012539689 = weight(_text_:m in 5044) [ClassicSimilarity], result of:
              0.012539689 = score(doc=5044,freq=2.0), product of:
                0.114023164 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045820985 = queryNorm
                0.10997493 = fieldWeight in 5044, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5044)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  14. Azpiazu, I.M.; Soledad Pera, M.: Is cross-lingual readability assessment possible? (2020) 0.00
    0.0031349224 = product of:
      0.0062698447 = sum of:
        0.0062698447 = product of:
          0.012539689 = sum of:
            0.012539689 = weight(_text_:m in 5868) [ClassicSimilarity], result of:
              0.012539689 = score(doc=5868,freq=2.0), product of:
                0.114023164 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045820985 = queryNorm
                0.10997493 = fieldWeight in 5868, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5868)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  15. Needham, R.M.; Sparck Jones, K.: Keywords and clumps (1985) 0.00
    0.002743057 = product of:
      0.005486114 = sum of:
        0.005486114 = product of:
          0.010972228 = sum of:
            0.010972228 = weight(_text_:m in 3645) [ClassicSimilarity], result of:
              0.010972228 = score(doc=3645,freq=2.0), product of:
                0.114023164 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045820985 = queryNorm
                0.09622806 = fieldWeight in 3645, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3645)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     The selection that follows was chosen as it represents "a very early paper on the possibilities allowed by computers in documentation." In the early 1960s computers were being used to provide simple automatic indexing systems wherein keywords were extracted from documents. The problem with such systems was that they lacked vocabulary control, thus documents related in subject matter were not always collocated in retrieval. To improve retrieval by improving recall is the raison d'être of vocabulary control tools such as classifications and thesauri. The question arose whether it was possible, by automatic means, to construct classes of terms which, when substituted one for another, could be used to improve retrieval performance. One of the first theoretical approaches to this question was initiated by R. M. Needham and Karen Sparck Jones at the Cambridge Language Research Institute in England. The question was later pursued using experimental methodologies by Sparck Jones, who, as a Senior Research Associate in the Computer Laboratory at the University of Cambridge, has devoted her life's work to research in information retrieval and automatic natural language processing. Based on the principles of numerical taxonomy, automatic classification techniques start from the premise that two objects are similar to the degree that they share attributes in common. When these two objects are keywords, their similarity is measured in terms of the number of documents they index in common. Step 1 in automatic classification is to compute mathematically the degree to which two terms are similar. Step 2 is to group together those terms that are "most similar" to each other, forming equivalence classes of intersubstitutable terms. The technique for forming such classes varies and is the factor that characteristically distinguishes different approaches to automatic classification. The technique used by Needham and Sparck Jones, that of clumping, is described in the selection that follows. Questions that must be asked are whether the use of automatically generated classes really does improve retrieval performance and whether there is a true economic advantage in substituting mechanical for manual labor. Several years after her work with clumping, Sparck Jones was to observe that while it was not wholly satisfactory in itself, it was valuable in that it stimulated research into automatic classification. To this it might be added that it was valuable in that it introduced to library/information science the methods of numerical taxonomy, thus stimulating us to think again about the fundamental nature and purpose of classification. In this connection it might be useful to review how automatically derived classes differ from those of manually constructed classifications: 1) the manner of their derivation is purely a posteriori, the ultimate operationalization of the principle of literary warrant; 2) the relationship between members forming such classes is essentially statistical; the members of a given class are similar to each other not because they possess the class-defining characteristic but by virtue of sharing a family resemblance; and finally, 3) automatically derived classes are not related meaningfully one to another, that is, they are not ordered in traditional hierarchical and precedence relationships.
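     A minimal, illustrative Python sketch of the two steps named above (not part of the original record): the similarity of two keywords is taken as the number of documents they index in common, and pairs whose overlap meets an assumed threshold are merged into groups by naive single-link merging. The posting lists and the threshold are invented for illustration only; the clump-finding criteria actually used by Needham and Sparck Jones are more involved than this.

         from itertools import combinations

         # Toy posting lists: keyword -> set of documents it indexes (invented data).
         postings = {
             "retrieval":      {1, 2, 3, 5},
             "indexing":       {1, 2, 3},
             "classification": {2, 3, 4},
             "syntax":         {6, 7},
             "parsing":        {6, 7, 8},
         }

         # Step 1: pairwise similarity = number of documents two keywords index in common.
         similarity = {
             (a, b): len(postings[a] & postings[b])
             for a, b in combinations(postings, 2)
         }

         # Step 2: group the "most similar" keywords; here, naive single-link merging of
         # pairs whose overlap meets an assumed threshold (a simplification of clumping).
         THRESHOLD = 2
         clumps = []
         for (a, b), overlap in similarity.items():
             if overlap < THRESHOLD:
                 continue
             hit = next((c for c in clumps if a in c or b in c), None)
             if hit is None:
                 clumps.append({a, b})
             else:
                 hit.update({a, b})

         # e.g. [{'retrieval', 'indexing', 'classification'}, {'syntax', 'parsing'}]
         # (set ordering may vary between runs)
         print(clumps)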
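     The per-result breakdowns listed above are Lucene ClassicSimilarity (TF-IDF) explain trees. As a minimal sketch, the Python snippet below reproduces the numbers of the first tree (doc 1312, term "m"); queryNorm and fieldNorm are copied from the dump because they depend on the whole query and on the indexed field length, neither of which is visible here.

         import math

         # Values read off the first explain tree above (doc=1312, term "m").
         freq       = 2.0          # termFreq of "m" in the matching field
         doc_freq   = 9980         # docFreq from the idf line
         max_docs   = 44218        # maxDocs from the idf line
         query_norm = 0.045820985  # depends on the whole query; copied from the dump
         field_norm = 0.0390625    # encoded length norm; copied from the dump
         coord      = 0.5 * 0.5    # the two coord(1/2) factors in the tree

         # ClassicSimilarity building blocks
         tf  = math.sqrt(freq)                            # ~1.4142135 (tf in the dump)
         idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # ~2.4884486 (idf in the dump)

         query_weight = idf * query_norm                  # ~0.114023164 = queryWeight
         field_weight = tf * idf * field_norm             # ~0.13746867  = fieldWeight
         term_score   = query_weight * field_weight       # ~0.015674612 = weight(_text_:m)

         print(f"{term_score * coord:.9f}")               # 0.003918653, the listed score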

Languages

  • e (English) 104
  • d (German) 30
  • ru (Russian) 1