Search (711 results, page 1 of 36)

  • theme_ss:"Computerlinguistik"
  1. Chibout, K.; Vilnat, A.: Primitive sémantiques, classification des verbes et polysémie (1999) 0.21
    0.2060942 ≈ 0.75 (coord 3/4) × [0.0092 ('a') + 0.2145 ('et') + 0.0512 ('al', coord 1/2)]
    
    Series
    Collection travaux et recherches; UL3
    Source
    Organisation des connaissances en vue de leur intégration dans les systèmes de représentation et de recherche d'information. Ed.: J. Maniez, et al
    Type
    a
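  The per-term score lines on this page are Lucene ClassicSimilarity explanations: tf-idf term weights combined by coordination factors. Below is a minimal sketch recomputing result 1's score from the reported constants; the helper names are ours, only the numbers come from the explanation tree:

    import math

    def idf(doc_freq, max_docs):
        # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1)); reproduces
        # the reported idf(docFreq=1101, maxDocs=44218) = 4.692005 for 'et'.
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def term_score(freq, idf_val, query_norm, field_norm):
        # weight = queryWeight * fieldWeight
        #        = (idf * queryNorm) * (sqrt(tf) * idf * fieldNorm)
        return (idf_val * query_norm) * (math.sqrt(freq) * idf_val * field_norm)

    QN, FN = 0.044089027, 0.078125   # queryNorm and fieldNorm for doc 6229
    a  = term_score(4.0, idf(37942, 44218), QN, FN)
    et = term_score(8.0, idf(1101, 44218), QN, FN)
    al = term_score(2.0, idf(1228, 44218), QN, FN)

    # Outer coord(3/4): three of four query clauses matched; the 'al'
    # clause additionally carries its own inner coord(1/2).
    print(round(0.75 * (a + et + 0.5 * al), 7))   # ~0.2060942, as displayed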
  2. Warner, A.J.: The role of linguistic analysis in full-text retrieval (1994) 0.17
    0.1731143 ≈ 0.75 (coord 3/4) × [0.0091 ('a') + 0.1501 ('et') + 0.0716 ('al', coord 1/2)]
    
    Source
    Challenges in indexing electronic text and images. Ed.: R. Fidel et al
    Type
    a
  3. Krause, J.: Principles of content analysis for information retrieval systems : an overview (1996) 0.17
    0.1731143 ≈ 0.75 (coord 3/4) × [0.0091 ('a') + 0.1501 ('et') + 0.0716 ('al', coord 1/2)]
    
    Source
    Text analysis and computer. Ed.: C. Züll et al
    Type
    a
  4. Vazov, N.: Identification des différentes structures temporelles dans des textes et leur rôle dans le raisonnement temporel (1999) 0.16
    0.16326582 ≈ 0.75 (coord 3/4) × [0.0052 ('a') + 0.1716 ('et') + 0.0409 ('al', coord 1/2)]
    
    Series
    Collection travaux et recherches; UL3
    Source
    Organisation des connaissances en vue de leur intégration dans les systèmes de représentation et de recherche d'information. Ed.: J. Maniez, et al
    Type
    a
  5. Ferret, O.; Grau, B.; Masson, N.: Utilisation d'un réseau de cooccurrences lexicales pour améliorer une analyse thématique fondée sur la distribution des mots (1999) 0.15
    0.14991087 ≈ 0.75 (coord 3/4) × [0.0104 ('a') + 0.1486 ('et') + 0.0409 ('al', coord 1/2)]
    
    Footnote
    Translated title: Use of a network of lexical co-occurrences to improve a thematic analysis based on distribution of words
    Series
    Collection travaux et recherches; UL3
    Source
    Organisation des connaissances en vue de leur intégration dans les systèmes de représentation et de recherche d'information. Ed.: J. Maniez, et al
    Type
    a
  6. Engerer, V.: Indexierungstheorie für Linguisten : zu einigen natürlichsprachlichen Zügen in künstlichen Indexsprachen (2014) 0.15
    0.14838368 ≈ 0.75 (coord 3/4) × [0.0078 ('a') + 0.1287 ('et') + 0.0614 ('al', coord 1/2)]
    
    Source
    Dialekte, Konzepte, Kontakte. Ergebnisse des Arbeitstreffens der Gesellschaft für Sprache und Sprachen, GeSuS e.V., 31. Mai - 1. Juni 2013 in Freiburg/Breisgau. Hrsg.: V. Schönenberger et al
    Type
    a
  7. Liddy, E.D.: Natural language processing for information retrieval and knowledge discovery (1998) 0.15
    0.14618276 ≈ 0.75 (coord 3/4) × [0.0064 ('a') + 0.0751 ('et') + 0.1134 ('al' 0.0716 + '22' 0.0418)]
    
    Abstract
    Natural language processing (NLP) is a powerful technology for the vital tasks of information retrieval (IR) and knowledge discovery (KD), which, in turn, feed the visualization systems of the present and future and enable knowledge workers to focus more of their time on the vital tasks of analysis and prediction
    Date
    22. 9.1997 19:16:05
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
    Type
    a
  8. Rahmstorf, G.: Concept structures for large vocabularies (1998) 0.13
    0.1270067 ≈ 0.75 (coord 3/4) × [0.0078 ('a') + 0.0643 ('et') + 0.0972 ('al' 0.0614 + '22' 0.0358)]
    
    Abstract
    A technology is described which supports the acquisition, visualisation and manipulation of large vocabularies with associated structures. It is used for dictionary production, terminology databases, thesauri, library classification systems, etc. Essential features of the technology are a lexicographic user interface, variable word description, unlimited list of word readings, a concept language, automatic transformations of formulas into graphic structures, structure manipulation operations and retransformation into formulas. The concept language includes notations for undefined concepts. The structure of defined concepts can be constructed interactively. The technology supports the generation of large vocabularies with structures representing word senses. Concept structures and ordering systems for indexing and retrieval can be constructed separately and connected by associating relations.
    Date
    30.12.2001 19:01:22
    Source
    Structures and relations in knowledge organization: Proceedings of the 5th International ISKO-Conference, Lille, 25.-29.8.1998. Ed.: W. Mustafa el Hadi et al
    Type
    a
  9. Ghenima, M.: A system of 'computer-aided diacritisation' using a lexical database of Arabic language (1998) 0.10
    0.101767056 ≈ 0.75 (coord 3/4) × [0.0090 ('a') + 0.0858 ('et') + 0.0409 ('al', coord 1/2)]
    
    Source
    Structures and relations in knowledge organization: Proceedings of the 5th International ISKO-Conference, Lille, 25.-29.8.1998. Ed.: W. Mustafa el Hadi et al
    Type
    a
  10. WordHoard (n.d.) 0.09
    0.09215283 ≈ 0.75 (coord 3/4) × [0.0120 ('a') + 0.0751 ('et') + 0.0358 ('al', coord 1/2)]
    
    Abstract
    WordHoard defines a multiword unit as a special type of collocate in which the component words comprise a meaningful phrase. For example, "Knight of the Round Table" is a meaningful multiword unit or phrase. WordHoard uses the notion of a pseudo-bigram to generalize the computation of bigram (two word) statistical measures to phrases (n-grams) longer than two words, and to allow comparisons of these measures for phrases with different word counts. WordHoard applies the localmaxs algorithm of Silva et al. to the pseudo-bigrams to identify potential compositional phrases that "stand out" in a text. WordHoard can also filter two and three word phrases using the word class filters suggested by Justeson and Katz.
    Type
    a
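  The pseudo-bigram idea described in the abstract above is easy to miniaturize. Below is a hedged sketch of a fair-SCP "glue" score and a LocalMaxs-style filter in the spirit of Silva et al.; the toy corpus, the freq >= 2 threshold, and all names are illustrative, not WordHoard's actual code:

    from collections import Counter

    tokens = "the knight of the round table sat at the round table of the king".split()

    def ngrams(seq, n):
        return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

    counts = Counter()
    for n in range(1, 5):
        counts.update(ngrams(tokens, n))

    def p(g):
        # Crude relative frequency; real systems use proper n-gram models.
        return counts[g] / len(tokens)

    def glue(g):
        # Fair-SCP: the n-gram is treated as a pseudo-bigram averaged over
        # every split point, making phrases of different lengths comparable.
        if len(g) == 1:
            return p(g)
        avg = sum(p(g[:i]) * p(g[i:]) for i in range(1, len(g))) / (len(g) - 1)
        return p(g) ** 2 / avg if avg else 0.0

    def is_unit(g):
        # LocalMaxs-style test: the phrase's glue must beat its contained
        # (n-1)-grams and match-or-beat every observed containing (n+1)-gram.
        subs = [g[:-1], g[1:]]
        sups = [s for s in counts if len(s) == len(g) + 1 and (s[:-1] == g or s[1:] == g)]
        return all(glue(x) < glue(g) for x in subs) and all(glue(x) <= glue(g) for x in sups)

    units = [g for g in counts if 2 <= len(g) <= 3 and counts[g] >= 2 and is_unit(g)]
    print(units)  # ('round', 'table') stands out; ('of', 'the') survives too,
                  # which is exactly what the Justeson/Katz word-class filter screens out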
  11. WordHoard: finding multiword units (20??) 0.09
    0.09215283 ≈ 0.75 (coord 3/4) × [0.0120 ('a') + 0.0751 ('et') + 0.0358 ('al', coord 1/2)]
    
    Abstract
    WordHoard defines a multiword unit as a special type of collocate in which the component words comprise a meaningful phrase. For example, "Knight of the Round Table" is a meaningful multiword unit or phrase. WordHoard uses the notion of a pseudo-bigram to generalize the computation of bigram (two word) statistical measures to phrases (n-grams) longer than two words, and to allow comparisons of these measures for phrases with different word counts. WordHoard applies the localmaxs algorithm of Silva et al. to the pseudo-bigrams to identify potential compositional phrases that "stand out" in a text. WordHoard can also filter two and three word phrases using the word class filters suggested by Justeson and Katz.
    Type
    a
  12. Green, R.: Automated identification of frame semantic relational structures (2000) 0.09
    0.08995722 ≈ 0.75 (coord 3/4) × [0.0091 ('a') + 0.0751 ('et') + 0.0358 ('al', coord 1/2)]
    
    Abstract
    Preliminary attempts to identify semantic frames and their internal structure automatically have met with a degree of success. In a first stage, clustering is used to detect 4 previously identified semantic frames (COMMERCIAL TRANSACTION, HIT, JUDGING, RISK) from verb definitions in Longman's Dictionary of Contemporary English. In a second stage, nouns used in the definitions of frame-invoking verbs or in whose definitions the frame-invoking verbs occur in certain forms are searched in WordNet to identify frame elements. Suggestions for refinement of the processes are discussed
    Source
    Dynamism and stability in knowledge organization: Proceedings of the 6th International ISKO-Conference, 10-13 July 2000, Toronto, Canada. Ed.: C. Beghtol et al
    Type
    a
  13. Ingenerf, J.: Disambiguating lexical meaning : conceptual meta-modelling as a means of controlling semantic language analysis (1994) 0.08
    0.08002055 ≈ 0.75 (coord 3/4) × [0.0117 ('a') + 0.0643 ('et') + 0.0307 ('al', coord 1/2)]
    
    Abstract
    A formal terminology consists of a set of conceptual definitions for the semantical reconstruction of a vocabulary on an intensional level of description. The marking of comparatively abstract concepts as semantic categories and their relational positioning on a meta-level is shown to be instrumental in adapting the conceptual design to domain-specific characteristics. Such a meta-model implies that concepts subsumed by categories may share their compositional possibilities as regards the construction of complex structures. Our approach to language processing leads to an automatic derivation of contextual semantic information about the linguistic expressions under review. This information is encoded by means of values of certain attributes defined in a feature-based grammatical framework. A standard process controlling grammatical analysis, the unification of feature structures, is used for its evaluation. One important example of the usefulness of this approach is the disambiguation of lexical meaning
    Source
    Information systems and data analysis: prospects - foundations - applications. Proc. of the 17th Annual Conference of the Gesellschaft für Klassifikation, Kaiserslautern, March 3-5, 1993. Ed.: H.-H. Bock et al
    Type
    a
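  The control mechanism the abstract above names, unification of feature structures, can be sketched in a few lines. A toy version over nested dicts; it shows only how a clash blocks an analysis, not the paper's actual grammar or terminology model:

    def unify(f, g):
        # Minimal unification over nested dicts: merge features recursively,
        # fail (return None) on any clash between atomic values.
        if isinstance(f, dict) and isinstance(g, dict):
            out = dict(f)
            for key, val in g.items():
                if key in out:
                    merged = unify(out[key], val)
                    if merged is None:
                        return None
                    out[key] = merged
                else:
                    out[key] = val
            return out
        return f if f == g else None

    verb = {"cat": "V", "sem": {"type": "procedure"}}
    ctx  = {"sem": {"type": "procedure", "domain": "medicine"}}
    print(unify(verb, ctx))                          # merged feature structure
    print(unify(verb, {"sem": {"type": "object"}}))  # None: semantic clash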
  14. He, Q.: A study of the strength indexes in co-word analysis (2000) 0.08
    0.078988135 ≈ 0.75 (coord 3/4) × [0.0103 ('a') + 0.0643 ('et') + 0.0307 ('al', coord 1/2)]
    
    Abstract
    Co-word analysis is a technique for detecting the knowledge structure of scientific literature and mapping the dynamics in a research field. It is used to count the co-occurrences of term pairs, compute the strength between term pairs, and map the research field by inserting terms and their linkages into a graphical structure according to the strength values. In previous co-word studies, there are two indexes used to measure the strength between term pairs in order to identify the major areas in a research field - the inclusion index (I) and the equivalence index (E). This study will conduct two co-word analysis experiments using the two indexes, respectively, and compare the results from the two experiments. The results show, due to the difference in their computation, index I is more likely to identify general subject areas in a research field while index E is more likely to identify subject areas at more specific levels
    Source
    Dynamism and stability in knowledge organization: Proceedings of the 6th International ISKO-Conference, 10-13 July 2000, Toronto, Canada. Ed.: C. Beghtol et al
    Type
    a
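  The two strength indexes named in the abstract above have standard definitions in the co-word literature; a minimal sketch, assuming the usual Callon-style forms (the paper may use variants):

    def inclusion_index(c_ij, c_i, c_j):
        # I = C_ij / min(C_i, C_j): high whenever the rarer term nearly
        # always co-occurs with the other, so it flags general parent areas.
        return c_ij / min(c_i, c_j)

    def equivalence_index(c_ij, c_i, c_j):
        # E = C_ij^2 / (C_i * C_j): symmetric and penalises frequency
        # imbalance, so it favours tighter, more specific term pairs.
        return c_ij ** 2 / (c_i * c_j)

    # Toy counts: term i occurs 40 times, term j 10 times, together 8 times.
    print(inclusion_index(8, 40, 10))    # 0.8  -> looks like a strong link
    print(equivalence_index(8, 40, 10))  # 0.16 -> weaker once imbalance counts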
  15. Mustafa el Hadi, W.: Dynamics of the linguistic paradigm in information retrieval (2000) 0.08
    0.078988135 ≈ 0.75 (coord 3/4) × [0.0103 ('a') + 0.0643 ('et') + 0.0307 ('al', coord 1/2)]
    
    Abstract
    In this paper we briefly sketch the dynamics of the linguistic paradigm in Information Retrieval (IR) and its adaptation to the Internet. The emergence of Natural Language Processing (NLP) techniques has been a major factor leading to this adaptation. These techniques and tools try to adapt to the current needs, i.e. retrieving information from documents written and indexed in a foreign language by using a native language query to express the information need. This process, known as cross-language IR (CLIR), is a field at the crossroads of both Machine Translation and IR. This field represents a real challenge to the IR community and will require solid cooperation with the NLP community.
    Source
    Dynamism and stability in knowledge organization: Proceedings of the 6th International ISKO-Conference, 10-13 July 2000, Toronto, Canada. Ed.: C. Beghtol et al
    Type
    a
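  The CLIR step described above, posing a native-language query against foreign-language documents, reduces in its simplest form to dictionary-based query translation. A deliberately naive sketch; the lexicon and names are illustrative, not from the paper:

    # Illustrative bilingual lexicon; a real CLIR system would use MT output,
    # aligned corpora, or a multilingual thesaurus instead.
    LEXICON = {
        "knowledge": ["connaissance", "savoir"],
        "organization": ["organisation"],
    }

    def translate_query(terms):
        out = []
        for t in terms:
            out.extend(LEXICON.get(t, [t]))  # pass unknown terms through
        return out

    print(translate_query(["knowledge", "organization", "ISKO"]))
    # -> ['connaissance', 'savoir', 'organisation', 'ISKO']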
  16. Zadeh, B.Q.; Handschuh, S.: The ACL RD-TEC : a dataset for benchmarking terminology extraction and classification in computational linguistics (2014) 0.08
    0.07779418 ≈ 0.75 (coord 3/4) × [0.0087 ('a') + 0.0643 ('et') + 0.0307 ('al', coord 1/2)]
    
    Abstract
    This paper introduces ACL RD-TEC: a dataset for evaluating the extraction and classification of terms from literature in the domain of computational linguistics. The dataset is derived from the Association for Computational Linguistics anthology reference corpus (ACL ARC). In its first release, the ACL RD-TEC consists of automatically segmented, part-of-speech-tagged ACL ARC documents, three lists of candidate terms, and more than 82,000 manually annotated terms. The annotated terms are marked as either valid or invalid, and valid terms are further classified as technology and non-technology terms. Technology terms signify methods, algorithms, and solutions in computational linguistics. The paper describes the dataset and reports the relevant statistics. We hope the step described in this paper encourages a collaborative effort towards building a full-fledged annotated corpus from the computational linguistics literature.
    Source
    Proceedings of the 4th International Workshop on Computational Terminology, Dublin, Ireland, August 23 2014. COLING 2014. Eds.: Patrick Drouin et al [https://www.deri.ie/sites/default/files/publications/the-acl-rd-tec.pdf]
    Type
    a
  17. Korman, D.Z.; Mack, E.; Jett, J.; Renear, A.H.: Defining textual entailment (2018) 0.08
    0.07779418 ≈ 0.75 (coord 3/4) × [0.0087 ('a') + 0.0643 ('et') + 0.0307 ('al', coord 1/2)]
    
    Abstract
    Textual entailment is a relationship that obtains between fragments of text when one fragment in some sense implies the other fragment. The automation of textual entailment recognition supports a wide variety of text-based tasks, including information retrieval, information extraction, question answering, text summarization, and machine translation. Much ingenuity has been devoted to developing algorithms for identifying textual entailments, but relatively little to saying what textual entailment actually is. This article is a review of the logical and philosophical issues involved in providing an adequate definition of textual entailment. We show that many natural definitions of textual entailment are refuted by counterexamples, including the most widely cited definition of Dagan et al. We then articulate and defend the following revised definition: T textually entails H =df typically, a human reading T would be justified in inferring the proposition expressed by H from the proposition expressed by T. We also show that textual entailment is context-sensitive, nontransitive, and nonmonotonic.
    Type
    a
  18. Rahmstorf, G.: Information retrieval using conceptual representations of phrases (1994) 0.08
    0.07710619 ≈ 0.75 (coord 3/4) × [0.0078 ('a') + 0.0643 ('et') + 0.0307 ('al', coord 1/2)]
    
    Abstract
    The information retrieval problem is described starting from an analysis of the concepts 'user's information request' and 'information offerings of texts'. It is shown that natural language phrases are a more adequate medium for expressing information requests and information offerings than character-string-based query and indexing languages complemented by Boolean operators. The phrases must be represented as concepts to reach a language-invariant level for rule-based relevance analysis. The special type of representation called advanced thesaurus is used for the semantic representation of natural language phrases and for relevance processing. The analysis of the retrieval problem leads to a symmetric system structure
    Source
    Information systems and data analysis: prospects - foundations - applications. Proc. of the 17th Annual Conference of the Gesellschaft für Klassifikation, Kaiserslautern, March 3-5, 1993. Ed.: H.-H. Bock et al
    Type
    a
  19. Nait-Baha, L.; Jackiewicz, A.; Djioua, B.; Laublet, P.: Query reformulation for information retrieval on the Web using the point of view methodology : preliminary results (2001) 0.08
    0.07710619 ≈ 0.75 (coord 3/4) × [0.0078 ('a') + 0.0643 ('et') + 0.0307 ('al', coord 1/2)]
    
    Abstract
    The work we are presenting is devoted to the information collected on the WWW. By the term collected we mean the whole process of retrieving, extracting and presenting results to the user. This research is part of the RAP (Research, Analyze, Propose) project in which we propose to combine two methods: (i) query reformulation using linguistic markers according to a given point of view; and (ii) text semantic analysis by means of contextual exploration results (Descles, 1991). The general project architecture describing the interactions between the users, the RAP system and the WWW search engines is presented in Nait-Baha et al. (1998). This paper focuses on showing how we use linguistic markers to reformulate the queries according to a given point of view
    Type
    a
  20. Mustafa el Hadi, W.: Automatic term recognition & extraction tools : examining the new interfaces and their effective communication role in LSP discourse (1998) 0.08
    0.075399004 ≈ 0.75 (coord 3/4) × [0.0055 ('a') + 0.0643 ('et') + 0.0307 ('al', coord 1/2)]
    
    Abstract
    In this paper we will discuss the possibility of reorienting NLP (Natural Language Processing) systems towards the extraction, not only of terms and their semantic relations, but also towards a variety of other uses: the storage, accessing and retrieving of Language for Special Purposes (LSP) lexical combinations, the provision of contexts and other information on terms through the integration of more interfaces to terminological data-bases, term managing systems and existing NLP systems. The aim of making such interfaces available is to increase the efficiency of the systems and improve the terminology-oriented text analysis. Since automatic term extraction is the backbone of many applications such as machine translation (MT), indexing, technical writing, thesaurus construction and knowledge representation, developments in this area will have a significant impact
    Source
    Structures and relations in knowledge organization: Proceedings of the 5th International ISKO-Conference, Lille, 25.-29.8.1998. Ed.: W. Mustafa el Hadi et al
    Type
    a

Types

  • a 629
  • el 75
  • m 42
  • s 22
  • x 9
  • p 7
  • b 1
  • d 1
  • pat 1
  • r 1
