Search (548 results, page 1 of 28)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.10
    0.10440264 = product of:
      0.23490594 = sum of:
        0.05238601 = product of:
          0.15715802 = sum of:
            0.15715802 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.15715802 = score(doc=562,freq=2.0), product of:
                0.2796316 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03298316 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.011955625 = weight(_text_:of in 562) [ClassicSimilarity], result of:
          0.011955625 = score(doc=562,freq=10.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.23179851 = fieldWeight in 562, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.15715802 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.15715802 = score(doc=562,freq=2.0), product of:
            0.2796316 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03298316 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.013406289 = product of:
          0.026812578 = sum of:
            0.026812578 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.026812578 = score(doc=562,freq=2.0), product of:
                0.11550141 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03298316 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.44444445 = coord(4/9)
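     The indented breakdown above is Lucene "explain" output for ClassicSimilarity (TF-IDF) ranking: each term clause multiplies a queryWeight (idf x queryNorm) by a fieldWeight (tf x idf x fieldNorm), and the document score is the coord-scaled sum of the matching clauses. A minimal sketch, assuming the standard ClassicSimilarity formulas, that reproduces the "_text_:3a" clause:

```python
# Reproduces one term clause of the explain tree above (values copied
# from the "_text_:3a" entry; tf(freq) = sqrt(freq) in ClassicSimilarity).
import math

def classic_term_score(freq, idf, query_norm, field_norm):
    query_weight = idf * query_norm   # 8.478011 * 0.03298316 = 0.2796316
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight

score = classic_term_score(freq=2.0, idf=8.478011,
                           query_norm=0.03298316, field_norm=0.046875)
print(score)  # ~0.15715802, matching weight(_text_:3a in 562) above
```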
    
    Abstract
     Document representations for text classification are typically based on the classical Bag-Of-Words paradigm. This approach comes with deficiencies that motivate the integration of features on a higher semantic level than single words. In this paper we propose an enhancement of the classical document representation through concepts extracted from background knowledge. Boosting is used for actual classification. Experimental evaluations on two well-known text corpora support our approach through consistent improvement of the results.
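     The pipeline described (bag-of-words features extended with concept features, then boosting of weak learners) can be illustrated with a hedged sketch; the toy corpus and the word-to-concept map below are invented for the example, and scikit-learn's AdaBoost stands in for the authors' own boosting setup:

```python
# Illustrative only: augment term features with "concept" tokens drawn
# from background knowledge, then boost weak learners over both.
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline

docs = ["the bank approved the loan", "the river bank was muddy",
        "interest rates rose again", "fish swam near the shore"]
labels = [1, 0, 1, 0]  # 1 = finance, 0 = nature (toy labels)

# Hypothetical concept map; the paper derives concepts from background
# knowledge resources rather than a hand-written dictionary.
concept_map = {"loan": "C_FINANCE", "rates": "C_FINANCE",
               "river": "C_NATURE", "shore": "C_NATURE"}

def add_concepts(text):
    # Append concept tokens so they become features alongside the terms.
    extra = [concept_map[w] for w in text.split() if w in concept_map]
    return text + " " + " ".join(extra)

model = make_pipeline(CountVectorizer(preprocessor=add_concepts),
                      AdaBoostClassifier(n_estimators=50))
model.fit(docs, labels)
print(model.predict(["the loan and the interest"]))  # expect [1]
```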
    Content
     See: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
    Source
    Proceedings of the 4th IEEE International Conference on Data Mining (ICDM 2004), 1-4 November 2004, Brighton, UK
  2. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.07
    0.07383322 = product of:
      0.22149965 = sum of:
        0.05238601 = product of:
          0.15715802 = sum of:
            0.15715802 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.15715802 = score(doc=862,freq=2.0), product of:
                0.2796316 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03298316 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
        0.011955625 = weight(_text_:of in 862) [ClassicSimilarity], result of:
          0.011955625 = score(doc=862,freq=10.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.23179851 = fieldWeight in 862, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
        0.15715802 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.15715802 = score(doc=862,freq=2.0), product of:
            0.2796316 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03298316 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
      0.33333334 = coord(3/9)
    
    Abstract
     This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges - summary and question answering - prompt ChatGPT to produce original content (98-99%) from a single text entry and sequential questions initially posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and simple grammatical set for understanding the writing mechanics of chatbots in evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
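     As a hedged illustration of the detection step described above: the GPT-2 Output Detector is a RoBERTa-based classifier that is publicly available; the Hugging Face model id below is an assumption (the community mirror of OpenAI's checkpoint), and any comparable detector checkpoint would serve:

```python
# Sketch: score a passage with a GPT-2 output detector. The model id is
# an assumption; this checkpoint labels text "Real" vs. "Fake".
from transformers import pipeline

detector = pipeline("text-classification",
                    model="openai-community/roberta-base-openai-detector")
print(detector("Machines take me by surprise with great frequency."))
# e.g. [{'label': 'Real', 'score': 0.98}] -- human-written text
```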
    Source
     https://arxiv.org/abs/2212.06721
  3. Chou, C.; Chu, T.: An analysis of BERT (NLP) for assisted subject indexing for Project Gutenberg (2022) 0.07
    0.0707307 = product of:
      0.15914407 = sum of:
        0.030001212 = product of:
          0.060002424 = sum of:
            0.060002424 = weight(_text_:headings in 1139) [ClassicSimilarity], result of:
              0.060002424 = score(doc=1139,freq=2.0), product of:
                0.15996648 = queryWeight, product of:
                  4.849944 = idf(docFreq=940, maxDocs=44218)
                  0.03298316 = queryNorm
                0.37509373 = fieldWeight in 1139, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.849944 = idf(docFreq=940, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1139)
          0.5 = coord(1/2)
        0.030546555 = weight(_text_:library in 1139) [ClassicSimilarity], result of:
          0.030546555 = score(doc=1139,freq=6.0), product of:
            0.08672522 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03298316 = queryNorm
            0.3522223 = fieldWeight in 1139, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1139)
        0.016503768 = weight(_text_:of in 1139) [ClassicSimilarity], result of:
          0.016503768 = score(doc=1139,freq=14.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.31997898 = fieldWeight in 1139, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1139)
        0.08209253 = weight(_text_:congress in 1139) [ClassicSimilarity], result of:
          0.08209253 = score(doc=1139,freq=4.0), product of:
            0.15733992 = queryWeight, product of:
              4.7703104 = idf(docFreq=1018, maxDocs=44218)
              0.03298316 = queryNorm
            0.5217527 = fieldWeight in 1139, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.7703104 = idf(docFreq=1018, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1139)
      0.44444445 = coord(4/9)
    
    Abstract
    In light of AI (Artificial Intelligence) and NLP (Natural language processing) technologies, this article examines the feasibility of using AI/NLP models to enhance the subject indexing of digital resources. While BERT (Bidirectional Encoder Representations from Transformers) models are widely used in scholarly communities, the authors assess whether BERT models can be used in machine-assisted indexing in the Project Gutenberg collection, through suggesting Library of Congress subject headings filtered by certain Library of Congress Classification subclass labels. The findings of this study are informative for further research on BERT models to assist with automatic subject indexing for digital library collections.
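     A hedged sketch of one way such machine-assisted suggestion can work: embed the text and a candidate set of LCSH headings with a BERT-family encoder and rank headings by similarity. The model name and the candidate headings below are assumptions for illustration, not the authors' pipeline:

```python
# Illustrative ranking of candidate subject headings by embedding
# similarity (sentence-transformers; the model choice is an assumption).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
text = "A treatise on the cultivation of orchids in temperate climates."
headings = ["Orchid culture", "Naval history", "Computer programming"]

text_emb = model.encode(text, convert_to_tensor=True)
head_embs = model.encode(headings, convert_to_tensor=True)
scores = util.cos_sim(text_emb, head_embs)[0]
for h, s in sorted(zip(headings, scores), key=lambda p: -float(p[1])):
    print(f"{h}: {float(s):.3f}")  # highest-scoring heading is suggested
```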
  4. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.06
    0.061895706 = product of:
      0.18568711 = sum of:
        0.015122802 = weight(_text_:of in 563) [ClassicSimilarity], result of:
          0.015122802 = score(doc=563,freq=16.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.2932045 = fieldWeight in 563, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.15715802 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.15715802 = score(doc=563,freq=2.0), product of:
            0.2796316 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03298316 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.013406289 = product of:
          0.026812578 = sum of:
            0.026812578 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
              0.026812578 = score(doc=563,freq=2.0), product of:
                0.11550141 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03298316 = queryNorm
                0.23214069 = fieldWeight in 563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=563)
          0.5 = coord(1/2)
      0.33333334 = coord(3/9)
    
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language and domain independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human written summaries in a large collection of web-pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the aligning process from a training set and focuses on selecting high quality multi-word terms from human written summaries to generate suitable results for web-page summarization.
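     The thesis pairs word association measures with the LocalMaxs selection algorithm. A minimal sketch of the association step, using the symmetric conditional probability (SCP) "glue" commonly used with LocalMaxs (the toy text is invented, and the local-maximum filtering over n-gram neighbourhoods is omitted):

```python
# Score bigrams by SCP glue; high-glue bigrams are multi-word term
# candidates. LocalMaxs would additionally keep only local maxima.
from collections import Counter

text = ("information retrieval systems support information retrieval "
        "tasks such as document classification").split()

n = len(text)
unigrams = Counter(text)
bigrams = Counter(zip(text, text[1:]))

def scp(w1, w2):
    # p(w1 w2)^2 / (p(w1) * p(w2))
    p12 = bigrams[(w1, w2)] / (n - 1)
    return p12 ** 2 / ((unigrams[w1] / n) * (unigrams[w2] / n))

for bigram in sorted(bigrams, key=lambda b: -scp(*b))[:3]:
    print(" ".join(bigram), round(scp(*bigram), 3))
```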
    Content
     A thesis presented to The University of Guelph in partial fulfilment of requirements for the degree of Master of Science in Computer Science. See: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
    Imprint
    Guelph, Ontario : University of Guelph
  5. Rahmstorf, G.: Concept structures for large vocabularies (1998) 0.04
    0.04007451 = product of:
      0.12022352 = sum of:
        0.015116624 = weight(_text_:library in 75) [ClassicSimilarity], result of:
          0.015116624 = score(doc=75,freq=2.0), product of:
            0.08672522 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03298316 = queryNorm
            0.17430481 = fieldWeight in 75, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=75)
        0.014146087 = weight(_text_:of in 75) [ClassicSimilarity], result of:
          0.014146087 = score(doc=75,freq=14.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.2742677 = fieldWeight in 75, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=75)
        0.09096081 = sum of:
          0.06414823 = weight(_text_:etc in 75) [ClassicSimilarity], result of:
            0.06414823 = score(doc=75,freq=2.0), product of:
              0.17865302 = queryWeight, product of:
                5.4164915 = idf(docFreq=533, maxDocs=44218)
                0.03298316 = queryNorm
              0.35906604 = fieldWeight in 75, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.4164915 = idf(docFreq=533, maxDocs=44218)
                0.046875 = fieldNorm(doc=75)
          0.026812578 = weight(_text_:22 in 75) [ClassicSimilarity], result of:
            0.026812578 = score(doc=75,freq=2.0), product of:
              0.11550141 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03298316 = queryNorm
              0.23214069 = fieldWeight in 75, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=75)
      0.33333334 = coord(3/9)
    
    Abstract
    A technology is described which supports the acquisition, visualisation and manipulation of large vocabularies with associated structures. It is used for dictionary production, terminology data bases, thesauri, library classification systems etc. Essential features of the technology are a lexicographic user interface, variable word description, unlimited list of word readings, a concept language, automatic transformations of formulas into graphic structures, structure manipulation operations and retransformation into formulas. The concept language includes notations for undefined concepts. The structure of defined concepts can be constructed interactively. The technology supports the generation of large vocabularies with structures representing word senses. Concept structures and ordering systems for indexing and retrieval can be constructed separately and connected by associating relations.
    Date
    30.12.2001 19:01:22
    Source
    Structures and relations in knowledge organization: Proceedings of the 5th International ISKO-Conference, Lille, 25.-29.8.1998. Ed.: W. Mustafa el Hadi et al
  6. Franke-Maier, M.: Computerlinguistik und Bibliotheken : Editorial (2016) 0.04
    0.035531204 = product of:
      0.07994521 = sum of:
        0.021429438 = product of:
          0.042858876 = sum of:
            0.042858876 = weight(_text_:headings in 3206) [ClassicSimilarity], result of:
              0.042858876 = score(doc=3206,freq=2.0), product of:
                0.15996648 = queryWeight, product of:
                  4.849944 = idf(docFreq=940, maxDocs=44218)
                  0.03298316 = queryNorm
                0.2679241 = fieldWeight in 3206, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.849944 = idf(docFreq=940, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3206)
          0.5 = coord(1/2)
        0.0125971865 = weight(_text_:library in 3206) [ClassicSimilarity], result of:
          0.0125971865 = score(doc=3206,freq=2.0), product of:
            0.08672522 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03298316 = queryNorm
            0.14525402 = fieldWeight in 3206, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3206)
        0.0044555985 = weight(_text_:of in 3206) [ClassicSimilarity], result of:
          0.0044555985 = score(doc=3206,freq=2.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.086386204 = fieldWeight in 3206, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3206)
        0.041462988 = weight(_text_:congress in 3206) [ClassicSimilarity], result of:
          0.041462988 = score(doc=3206,freq=2.0), product of:
            0.15733992 = queryWeight, product of:
              4.7703104 = idf(docFreq=1018, maxDocs=44218)
              0.03298316 = queryNorm
            0.26352492 = fieldWeight in 3206, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7703104 = idf(docFreq=1018, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3206)
      0.44444445 = coord(4/9)
    
    Abstract
     Fifty years ago, in February 1966, Floyd M. Cammack drew attention to the connection between "Linguistics and Libraries". His starting point was the entry for "Linguistics" in the 1957 Library of Congress Subject Headings (LCSH), which carried the reference "See Language and Languages; Philology; Philology, Comparative". Eight years later, additions such as "language data processing", "automatic indexing", "machine translation" and "psycholinguistics" appeared under the heading "Language and Languages". For Cammack this revealed a web of complex interrelations that should be brought together under the concept "Linguistics", a system with an important influence on everyone concerned with collecting, organizing, storing and retrieving information (Cammack 1966:73). In that spirit, the present issue deals with computational linguistic methods in libraries. Ultimately it is about bringing objectivity to the discussion, about the standing of subject indexing and the recalibration of its appreciation in times of mega-indexes and big data. The current contradiction between the desire for relevant result sets in search interfaces and the actual experience of relevance ranking needs to be resolved. This explicitly includes the question of how often the latter has disappointed us and what must be done to bring the relationship between recall and precision back into an appropriate balance. Our users will thank us.
  7. Mustafa el Hadi, W.; Jouis, C.: Evaluating natural language processing systems as a tool for building terminological databases (1996) 0.03
    0.030321253 = product of:
      0.09096376 = sum of:
        0.01763606 = weight(_text_:library in 5191) [ClassicSimilarity], result of:
          0.01763606 = score(doc=5191,freq=2.0), product of:
            0.08672522 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03298316 = queryNorm
            0.20335563 = fieldWeight in 5191, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5191)
        0.015279518 = weight(_text_:of in 5191) [ClassicSimilarity], result of:
          0.015279518 = score(doc=5191,freq=12.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.29624295 = fieldWeight in 5191, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5191)
        0.05804818 = weight(_text_:congress in 5191) [ClassicSimilarity], result of:
          0.05804818 = score(doc=5191,freq=2.0), product of:
            0.15733992 = queryWeight, product of:
              4.7703104 = idf(docFreq=1018, maxDocs=44218)
              0.03298316 = queryNorm
            0.36893487 = fieldWeight in 5191, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7703104 = idf(docFreq=1018, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5191)
      0.33333334 = coord(3/9)
    
    Abstract
    Natural language processing systems use various modules in order to identify terms or concept names and the logico-semantic relations they entertain. The approaches involved in corpus analysis are either based on morpho-syntactic analysis, statistical analysis, semantic analysis, recent connexionist models or any combination of 2 or more of these approaches. This paper will examine the capacity of natural language processing systems to create databases from extensive textual data. We are endeavouring to evaluate the contribution of these systems, their advantages and their shortcomings
    Source
    Knowledge organization and change: Proceedings of the Fourth International ISKO Conference, 15-18 July 1996, Library of Congress, Washington, DC. Ed.: R. Green
  8. Ruge, G.: A spreading activation network for automatic generation of thesaurus relationships (1991) 0.03
    0.026343048 = product of:
      0.07902914 = sum of:
        0.03527212 = weight(_text_:library in 4506) [ClassicSimilarity], result of:
          0.03527212 = score(doc=4506,freq=2.0), product of:
            0.08672522 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03298316 = queryNorm
            0.40671125 = fieldWeight in 4506, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.109375 = fieldNorm(doc=4506)
        0.012475675 = weight(_text_:of in 4506) [ClassicSimilarity], result of:
          0.012475675 = score(doc=4506,freq=2.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.24188137 = fieldWeight in 4506, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.109375 = fieldNorm(doc=4506)
        0.03128134 = product of:
          0.06256268 = sum of:
            0.06256268 = weight(_text_:22 in 4506) [ClassicSimilarity], result of:
              0.06256268 = score(doc=4506,freq=2.0), product of:
                0.11550141 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03298316 = queryNorm
                0.5416616 = fieldWeight in 4506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4506)
          0.5 = coord(1/2)
      0.33333334 = coord(3/9)
    
    Date
    8.10.2000 11:52:22
    Source
    Library science with a slant to documentation. 28(1991) no.4, S.125-130
  9. Haas, S.W.: Natural language processing : toward large-scale, robust systems (1996) 0.02
    0.022978272 = product of:
      0.06893481 = sum of:
        0.015940834 = weight(_text_:of in 7415) [ClassicSimilarity], result of:
          0.015940834 = score(doc=7415,freq=10.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.3090647 = fieldWeight in 7415, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=7415)
        0.03511893 = product of:
          0.07023786 = sum of:
            0.07023786 = weight(_text_:problems in 7415) [ClassicSimilarity], result of:
              0.07023786 = score(doc=7415,freq=4.0), product of:
                0.13613719 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.03298316 = queryNorm
                0.5159344 = fieldWeight in 7415, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7415)
          0.5 = coord(1/2)
        0.017875053 = product of:
          0.035750106 = sum of:
            0.035750106 = weight(_text_:22 in 7415) [ClassicSimilarity], result of:
              0.035750106 = score(doc=7415,freq=2.0), product of:
                0.11550141 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03298316 = queryNorm
                0.30952093 = fieldWeight in 7415, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7415)
          0.5 = coord(1/2)
      0.33333334 = coord(3/9)
    
    Abstract
     State-of-the-art review of natural language processing updating an earlier review published in ARIST 22(1987). Discusses important developments that have allowed for significant advances in the field of natural language processing: materials and resources; knowledge-based systems and statistical approaches; and a strong emphasis on evaluation. Reviews some natural language processing applications and common problems still awaiting solution. Considers closely related applications such as language generation and the generation phase of machine translation, which face the same problems as natural language processing. Covers natural language methodologies for information retrieval only briefly.
    Source
    Annual review of information science and technology. 31(1996), S.83-119
  10. Liddy, E.D.: Natural language processing for information retrieval and knowledge discovery (1998) 0.02
    0.021276832 = product of:
      0.063830495 = sum of:
        0.030546555 = weight(_text_:library in 2345) [ClassicSimilarity], result of:
          0.030546555 = score(doc=2345,freq=6.0), product of:
            0.08672522 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03298316 = queryNorm
            0.3522223 = fieldWeight in 2345, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2345)
        0.01764327 = weight(_text_:of in 2345) [ClassicSimilarity], result of:
          0.01764327 = score(doc=2345,freq=16.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.34207192 = fieldWeight in 2345, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2345)
        0.01564067 = product of:
          0.03128134 = sum of:
            0.03128134 = weight(_text_:22 in 2345) [ClassicSimilarity], result of:
              0.03128134 = score(doc=2345,freq=2.0), product of:
                0.11550141 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03298316 = queryNorm
                0.2708308 = fieldWeight in 2345, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2345)
          0.5 = coord(1/2)
      0.33333334 = coord(3/9)
    
    Abstract
    Natural language processing (NLP) is a powerful technology for the vital tasks of information retrieval (IR) and knowledge discovery (KD) which, in turn, feed the visualization systems of the present and future and enable knowledge workers to focus more of their time on the vital tasks of analysis and prediction
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : Illinois University at Urbana-Champaign, Graduate School of Library and Information Science
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  11. Malone, L.C.; Driscoll, J.R.; Pepe, J.W.: Modeling the performance of an automated keywording system (1991) 0.02
    0.019127883 = product of:
      0.08607547 = sum of:
        0.012347717 = weight(_text_:of in 6682) [ClassicSimilarity], result of:
          0.012347717 = score(doc=6682,freq=6.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.23940048 = fieldWeight in 6682, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=6682)
        0.07372776 = product of:
          0.14745551 = sum of:
            0.14745551 = weight(_text_:exercises in 6682) [ClassicSimilarity], result of:
              0.14745551 = score(doc=6682,freq=2.0), product of:
                0.2345736 = queryWeight, product of:
                  7.11192 = idf(docFreq=97, maxDocs=44218)
                  0.03298316 = queryNorm
                0.62861085 = fieldWeight in 6682, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.11192 = idf(docFreq=97, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6682)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
     Presents a model for predicting the performance of a computerised keyword assigning and indexing system. Statistical procedures were investigated in order to protect against incorrect keywording by the system, which behaves as an expert system designed to mimic the behaviour of human keyword indexers and represents lessons learned from military exercises and operations.
  12. Morris, V.: Automated language identification of bibliographic resources (2020) 0.02
    0.01849762 = product of:
      0.05549286 = sum of:
        0.020155499 = weight(_text_:library in 5749) [ClassicSimilarity], result of:
          0.020155499 = score(doc=5749,freq=2.0), product of:
            0.08672522 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03298316 = queryNorm
            0.23240642 = fieldWeight in 5749, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0625 = fieldNorm(doc=5749)
        0.017462308 = weight(_text_:of in 5749) [ClassicSimilarity], result of:
          0.017462308 = score(doc=5749,freq=12.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.33856338 = fieldWeight in 5749, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=5749)
        0.017875053 = product of:
          0.035750106 = sum of:
            0.035750106 = weight(_text_:22 in 5749) [ClassicSimilarity], result of:
              0.035750106 = score(doc=5749,freq=2.0), product of:
                0.11550141 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03298316 = queryNorm
                0.30952093 = fieldWeight in 5749, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5749)
          0.5 = coord(1/2)
      0.33333334 = coord(3/9)
    
    Abstract
    This article describes experiments in the use of machine learning techniques at the British Library to assign language codes to catalog records, in order to provide information about the language of content of the resources described. In the first phase of the project, language codes were assigned to 1.15 million records with 99.7% confidence. The automated language identification tools developed will be used to contribute to future enhancement of over 4 million legacy records.
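     A hedged sketch of the general technique (not the British Library's actual tooling, which the article does not publish as code): a character n-gram classifier assigning MARC language codes to short catalog strings:

```python
# Character n-grams work well for short bibliographic strings where
# word-level features are too sparse. The training data here is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

titles = ["the history of england", "histoire de la france",
          "geschichte der deutschen literatur", "the art of war"]
codes = ["eng", "fre", "ger", "eng"]  # MARC language codes

clf = make_pipeline(TfidfVectorizer(analyzer="char", ngram_range=(1, 3)),
                    MultinomialNB())
clf.fit(titles, codes)
print(clf.predict(["nouvelle histoire de paris"]))  # expect ['fre']
```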
    Date
    2. 3.2020 19:04:22
  13. Yang, C.C.; Luk, J.: Automatic generation of English/Chinese thesaurus based on a parallel corpus in laws (2003) 0.02
    0.018104525 = product of:
      0.04073518 = sum of:
        0.00881803 = weight(_text_:library in 1616) [ClassicSimilarity], result of:
          0.00881803 = score(doc=1616,freq=2.0), product of:
            0.08672522 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03298316 = queryNorm
            0.10167781 = fieldWeight in 1616, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1616)
        0.013232451 = weight(_text_:of in 1616) [ClassicSimilarity], result of:
          0.013232451 = score(doc=1616,freq=36.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.25655392 = fieldWeight in 1616, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1616)
        0.010864365 = product of:
          0.02172873 = sum of:
            0.02172873 = weight(_text_:problems in 1616) [ClassicSimilarity], result of:
              0.02172873 = score(doc=1616,freq=2.0), product of:
                0.13613719 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.03298316 = queryNorm
                0.15960906 = fieldWeight in 1616, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1616)
          0.5 = coord(1/2)
        0.007820335 = product of:
          0.01564067 = sum of:
            0.01564067 = weight(_text_:22 in 1616) [ClassicSimilarity], result of:
              0.01564067 = score(doc=1616,freq=2.0), product of:
                0.11550141 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03298316 = queryNorm
                0.1354154 = fieldWeight in 1616, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1616)
          0.5 = coord(1/2)
      0.44444445 = coord(4/9)
    
    Abstract
     The information available in languages other than English on the World Wide Web is increasing significantly. According to a report from Computer Economics in 1999, 54% of Internet users are English speakers ("English Will Dominate Web for Only Three More Years," Computer Economics, July 9, 1999, http://www.computereconomics.com/new4/pr/pr990610.html). However, it is predicted that there will be only a 60% increase in Internet users among English speakers versus 150% growth among non-English speakers over the next five years. By 2005, 57% of Internet users will be non-English speakers. A report by CNN.com in 2000 showed that the number of Internet users in China had increased from 8.9 million to 16.9 million from January to June 2000 ("Report: China Internet users double to 17 million," CNN.com, July, 2000, http://cnn.org/2000/TECH/computing/07/27/china.internet.reut/index.html). According to Nielsen/NetRatings, there was a dramatic leap from 22.5 million to 56.6 million Internet users from 2001 to 2002, and China became the second largest at-home Internet population in 2002 (the US Internet population was 166 million) (Robyn Greenspan, "China Pulls Ahead of Japan," Internet.com, April 22, 2002, http://cyberatlas.internet.com/big-picture/geographics/article/0,,5911_1013841,00.html). All of this evidence reveals the importance of cross-lingual research to satisfy needs in the near future. Digital library research has in the past focused on structural and semantic interoperability. Searching and retrieving objects across variations in protocols, formats and disciplines have been widely explored (Schatz, B., & Chen, H. (1999). Digital libraries: technological advances and social impacts. IEEE Computer, Special Issue on Digital Libraries, February, 32(2), 45-50; Chen, H., Yen, J., & Yang, C.C. (1999). International activities: development of Asian digital libraries. IEEE Computer, Special Issue on Digital Libraries, 32(2), 48-49). However, research on crossing language boundaries, especially between European and Oriental languages, is still at an initial stage. In this proposal, we focus on cross-lingual semantic interoperability by developing automatic generation of a cross-lingual thesaurus based on an English/Chinese parallel corpus. When searchers encounter retrieval problems, professional librarians usually consult the thesaurus to identify other relevant vocabularies. For the problem of searching across language boundaries, a cross-lingual thesaurus, generated by co-occurrence analysis and a Hopfield network, can be used to generate additional semantically relevant terms that cannot be obtained from a dictionary. In particular, the automatically generated cross-lingual thesaurus is able to capture unknown words that do not exist in a dictionary, such as names of persons, organizations, and events. Due to Hong Kong's unique historical background, both English and Chinese are used as official languages in all legal documents. English/Chinese cross-lingual information retrieval is therefore critical for applications in the courts and the government. In this paper, we develop an automatic thesaurus by means of a Hopfield network based on a parallel corpus collected from the Web site of the Department of Justice of the Hong Kong Special Administrative Region (HKSAR) Government. Experiments are conducted to measure the precision and recall of the automatically generated English/Chinese thesaurus. The results show that such a thesaurus is a promising tool for retrieving relevant terms, especially in the language that differs from that of the input term; the direct translation of the input term can also be retrieved in most cases.
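     A hedged sketch of the co-occurrence step only (the Hopfield-network refinement the paper applies on top is omitted, and the aligned pairs below are invented):

```python
# Count English/Chinese term co-occurrence over sentence-aligned units
# and rank cross-lingual associates by a simple overlap weight.
from collections import Counter

aligned = [(["court", "judgment"], ["法院", "判决"]),
           (["court", "hearing"], ["法院", "聆讯"]),
           (["contract", "law"], ["合同", "法律"])]

pairs, en, zh = Counter(), Counter(), Counter()
for en_terms, zh_terms in aligned:
    en.update(en_terms)
    zh.update(zh_terms)
    pairs.update((e, z) for e in en_terms for z in zh_terms)

def assoc(e, z):
    # Jaccard-style weight; cf. the cluster weights fed to the network.
    return pairs[(e, z)] / (en[e] + zh[z] - pairs[(e, z)])

best = max(zh, key=lambda z: assoc("court", z))
print(best, assoc("court", best))  # strongest Chinese associate of "court"
```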
    Source
     Journal of the American Society for Information Science and Technology. 54(2003) no.7, S.671-682
  14. Kay, M.: The proper place of men and machines in language translation (1997) 0.02
    0.017957723 = product of:
      0.053873166 = sum of:
        0.016503768 = weight(_text_:of in 1178) [ClassicSimilarity], result of:
          0.016503768 = score(doc=1178,freq=14.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.31997898 = fieldWeight in 1178, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1178)
        0.02172873 = product of:
          0.04345746 = sum of:
            0.04345746 = weight(_text_:problems in 1178) [ClassicSimilarity], result of:
              0.04345746 = score(doc=1178,freq=2.0), product of:
                0.13613719 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.03298316 = queryNorm
                0.31921813 = fieldWeight in 1178, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1178)
          0.5 = coord(1/2)
        0.01564067 = product of:
          0.03128134 = sum of:
            0.03128134 = weight(_text_:22 in 1178) [ClassicSimilarity], result of:
              0.03128134 = score(doc=1178,freq=2.0), product of:
                0.11550141 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03298316 = queryNorm
                0.2708308 = fieldWeight in 1178, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1178)
          0.5 = coord(1/2)
      0.33333334 = coord(3/9)
    
    Abstract
     Machine translation stands no chance of filling actual needs for translation because, although there has been progress in relevant areas of computer science, advances in linguistics have not touched the core problems. Cooperative man-machine systems need to be developed. Proposes a translator's amanuensis, incorporating into a word processor some simple facilities peculiar to translation. Gradual enhancements of such a system could lead to the original goal of machine translation.
    Content
    Reprint of a Xerox PARC Working Paper which appeared in 1980
    Date
    31. 7.1996 9:22:19
    Footnote
    Contribution to a special issue devoted to the theme of new tools for human translators
  15. Schneider, J.W.; Borlund, P.: ¬A bibliometric-based semiautomatic approach to identification of candidate thesaurus terms : parsing and filtering of noun phrases from citation contexts (2005) 0.02
    0.016185416 = product of:
      0.048556246 = sum of:
        0.01763606 = weight(_text_:library in 156) [ClassicSimilarity], result of:
          0.01763606 = score(doc=156,freq=2.0), product of:
            0.08672522 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03298316 = queryNorm
            0.20335563 = fieldWeight in 156, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=156)
        0.015279518 = weight(_text_:of in 156) [ClassicSimilarity], result of:
          0.015279518 = score(doc=156,freq=12.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.29624295 = fieldWeight in 156, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=156)
        0.01564067 = product of:
          0.03128134 = sum of:
            0.03128134 = weight(_text_:22 in 156) [ClassicSimilarity], result of:
              0.03128134 = score(doc=156,freq=2.0), product of:
                0.11550141 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03298316 = queryNorm
                0.2708308 = fieldWeight in 156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=156)
          0.5 = coord(1/2)
      0.33333334 = coord(3/9)
    
    Abstract
     The present study investigates the ability of a bibliometric-based semi-automatic method to select candidate thesaurus terms from citation contexts. The method consists of document co-citation analysis, citation context analysis, and noun phrase parsing. The investigation is carried out within the specialty area of periodontology. The results clearly demonstrate that the method is able to select important candidate thesaurus terms within the chosen specialty area.
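     A hedged sketch of the noun-phrase parsing step (spaCy is an assumption here; the study does not prescribe a particular parser, and the citation context is invented):

```python
# Extract noun phrases from a citation context as candidate thesaurus
# terms. Requires: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
context = ("Guided tissue regeneration improves clinical attachment "
           "levels in periodontal defects.")
candidates = [chunk.text.lower() for chunk in nlp(context).noun_chunks]
print(candidates)
# the candidates would then be filtered against co-citation clusters
```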
    Date
    8. 3.2007 19:55:22
    Source
     Context: nature, impact and role. 5th International Conference on Conceptions of Library and Information Sciences, CoLIS 2005, Glasgow, UK, June 2005. Ed. by F. Crestani and I. Ruthven
  16. Paolillo, J.C.: Linguistics and the information sciences (2009) 0.02
    0.015741654 = product of:
      0.04722496 = sum of:
        0.01763606 = weight(_text_:library in 3840) [ClassicSimilarity], result of:
          0.01763606 = score(doc=3840,freq=2.0), product of:
            0.08672522 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03298316 = queryNorm
            0.20335563 = fieldWeight in 3840, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3840)
        0.01394823 = weight(_text_:of in 3840) [ClassicSimilarity], result of:
          0.01394823 = score(doc=3840,freq=10.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.2704316 = fieldWeight in 3840, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3840)
        0.01564067 = product of:
          0.03128134 = sum of:
            0.03128134 = weight(_text_:22 in 3840) [ClassicSimilarity], result of:
              0.03128134 = score(doc=3840,freq=2.0), product of:
                0.11550141 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03298316 = queryNorm
                0.2708308 = fieldWeight in 3840, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3840)
          0.5 = coord(1/2)
      0.33333334 = coord(3/9)
    
    Abstract
    Linguistics is the scientific study of language which emphasizes language spoken in everyday settings by human beings. It has a long history of interdisciplinarity, both internally and in contribution to other fields, including information science. A linguistic perspective is beneficial in many ways in information science, since it examines the relationship between the forms of meaningful expressions and their social, cognitive, institutional, and communicative context, these being two perspectives on information that are actively studied, to different degrees, in information science. Examples of issues relevant to information science are presented for which the approach taken under a linguistic perspective is illustrated.
    Date
    27. 8.2011 14:22:33
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
  17. Mustafa el Hadi, W.; Jouis, C.: Natural language processing-based systems for terminological construction and their contribution to information retrieval (1996) 0.02
    0.015671968 = product of:
      0.07052386 = sum of:
        0.012475675 = weight(_text_:of in 6331) [ClassicSimilarity], result of:
          0.012475675 = score(doc=6331,freq=8.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.24188137 = fieldWeight in 6331, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6331)
        0.05804818 = weight(_text_:congress in 6331) [ClassicSimilarity], result of:
          0.05804818 = score(doc=6331,freq=2.0), product of:
            0.15733992 = queryWeight, product of:
              4.7703104 = idf(docFreq=1018, maxDocs=44218)
              0.03298316 = queryNorm
            0.36893487 = fieldWeight in 6331, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7703104 = idf(docFreq=1018, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6331)
      0.22222222 = coord(2/9)
    
    Abstract
     This paper will survey the capacity of natural language processing (NLP) systems to identify terms or concept names related to a specific field of knowledge (construction of a reference terminology) and the logico-semantic relations they entertain. The scope of our study will be limited to French-language NLP systems whose purpose is automatic term identification, with textual area-grounded terms providing access keys to information.
    Source
    TKE'96: Terminology and knowledge engineering. Proceedings 4th International Congress on Terminology and Knowledge Engineering, 26.-28.8.1996, Wien. Ed.: C. Galinski u. K.-D. Schmitz
  18. Doszkocs, T.E.; Zamora, A.: Dictionary services and spelling aids for Web searching (2004) 0.01
    0.014820512 = product of:
      0.044461537 = sum of:
        0.0125971865 = weight(_text_:library in 2541) [ClassicSimilarity], result of:
          0.0125971865 = score(doc=2541,freq=2.0), product of:
            0.08672522 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03298316 = queryNorm
            0.14525402 = fieldWeight in 2541, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2541)
        0.016064888 = weight(_text_:of in 2541) [ClassicSimilarity], result of:
          0.016064888 = score(doc=2541,freq=26.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.31146988 = fieldWeight in 2541, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2541)
        0.015799463 = product of:
          0.031598926 = sum of:
            0.031598926 = weight(_text_:22 in 2541) [ClassicSimilarity], result of:
              0.031598926 = score(doc=2541,freq=4.0), product of:
                0.11550141 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03298316 = queryNorm
                0.27358043 = fieldWeight in 2541, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2541)
          0.5 = coord(1/2)
      0.33333334 = coord(3/9)
    
    Abstract
    The Specialized Information Services Division (SIS) of the National Library of Medicine (NLM) provides Web access to more than a dozen scientific databases on toxicology and the environment on TOXNET . Search queries on TOXNET often include misspelled or variant English words, medical and scientific jargon and chemical names. Following the example of search engines like Google and ClinicalTrials.gov, we set out to develop a spelling "suggestion" system for increased recall and precision in TOXNET searching. This paper describes development of dictionary technology that can be used in a variety of applications such as orthographic verification, writing aid, natural language processing, and information storage and retrieval. The design of the technology allows building complex applications using the components developed in the earlier phases of the work in a modular fashion without extensive rewriting of computer code. Since many of the potential applications envisioned for this work have on-line or web-based interfaces, the dictionaries and other computer components must have fast response, and must be adaptable to open-ended database vocabularies, including chemical nomenclature. The dictionary vocabulary for this work was derived from SIS and other databases and specialized resources, such as NLM's Unified Medical Language Systems (UMLS) . The resulting technology, A-Z Dictionary (AZdict), has three major constituents: 1) the vocabulary list, 2) the word attributes that define part of speech and morphological relationships between words in the list, and 3) a set of programs that implements the retrieval of words and their attributes, and determines similarity between words (ChemSpell). These three components can be used in various applications such as spelling verification, spelling aid, part-of-speech tagging, paraphrasing, and many other natural language processing functions.
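     A minimal sketch of the suggestion lookup in the spirit of ChemSpell/AZdict, assuming a stand-in vocabulary (the real system draws on NLM resources such as the UMLS) and approximate string matching in place of the system's own similarity programs:

```python
# Rank vocabulary entries by string similarity to a misspelled query.
import difflib

vocabulary = ["acetaminophen", "ibuprofen", "naproxen", "toxicology"]

def suggest(query, n=3, cutoff=0.6):
    # difflib uses a similarity ratio rather than raw edit distance,
    # which is adequate for this illustrative lexicon.
    return difflib.get_close_matches(query.lower(), vocabulary, n, cutoff)

print(suggest("acetominophen"))  # -> ['acetaminophen']
```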
    Date
    14. 8.2004 17:22:56
    Source
    Online. 28(2004) no.3, S.22-29
  19. Chandrasekar, R.; Bangalore, S.: Glean : using syntactic information in document filtering (2002) 0.01
    0.014298419 = product of:
      0.042895254 = sum of:
        0.0125971865 = weight(_text_:library in 4257) [ClassicSimilarity], result of:
          0.0125971865 = score(doc=4257,freq=2.0), product of:
            0.08672522 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03298316 = queryNorm
            0.14525402 = fieldWeight in 4257, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4257)
        0.014777548 = weight(_text_:of in 4257) [ClassicSimilarity], result of:
          0.014777548 = score(doc=4257,freq=22.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.28651062 = fieldWeight in 4257, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4257)
        0.0155205205 = product of:
          0.031041041 = sum of:
            0.031041041 = weight(_text_:problems in 4257) [ClassicSimilarity], result of:
              0.031041041 = score(doc=4257,freq=2.0), product of:
                0.13613719 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.03298316 = queryNorm
                0.22801295 = fieldWeight in 4257, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4257)
          0.5 = coord(1/2)
      0.33333334 = coord(3/9)
    
    Abstract
     In today's networked world, a huge amount of data is available in machine-processable form. Likewise, there are any number of search engines and specialized information retrieval (IR) programs that seek to extract relevant information from these data repositories. Most IR systems and Web search engines have been designed for speed and tend to maximize the quantity of information (recall) rather than the relevance of the information (precision) to the query. As a result, search engine users get inundated with information for practically any query, and are forced to scan a large number of potentially relevant items to get to the information of interest. The Holy Grail of IR is to somehow retrieve those and only those documents pertinent to the user's query. Polysemy and synonymy - the fact that there are often several meanings for a word or phrase and, likewise, many ways to express a concept - make this a very hard task. While conventional IR systems provide usable solutions, there are a number of open problems to be solved, in areas such as syntactic processing, semantic analysis, and user modeling, before we develop systems that "understand" user queries and text collections. Meanwhile, we can use tools and techniques available today to improve the precision of retrieval. In particular, using the approach described in this article, we can approximate understanding by using the syntactic structure and patterns of language use latent in documents to make IR more effective.
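     A hedged sketch of syntax-aware filtering in the spirit described (spaCy stands in for Glean's own syntactic machinery, which the abstract does not specify):

```python
# Keep only documents where the query word heads a noun phrase,
# rather than merely co-occurring somewhere in the text.
# Requires: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def syntactic_match(doc_text, query_lemma):
    return any(chunk.root.lemma_ == query_lemma
               for chunk in nlp(doc_text).noun_chunks)

docs = ["The bank raised interest rates.",
        "We rated the riverside walk highly."]
print([d for d in docs if syntactic_match(d, "rate")])
# only the first document survives: "rates" heads a noun phrase there
```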
    Source
    Encyclopedia of library and information science. Vol.71, [=Suppl.34]
  20. Smeaton, A.F.: Natural language processing used in information retrieval tasks : an overview of achievements to date (1995) 0.01
    0.011758976 = product of:
      0.05291539 = sum of:
        0.03527212 = weight(_text_:library in 1265) [ClassicSimilarity], result of:
          0.03527212 = score(doc=1265,freq=2.0), product of:
            0.08672522 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03298316 = queryNorm
            0.40671125 = fieldWeight in 1265, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.109375 = fieldNorm(doc=1265)
        0.01764327 = weight(_text_:of in 1265) [ClassicSimilarity], result of:
          0.01764327 = score(doc=1265,freq=4.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.34207192 = fieldWeight in 1265, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.109375 = fieldNorm(doc=1265)
      0.22222222 = coord(2/9)
    
    Source
    Encyclopedia of library and information science. Vol.55, [=Suppl.18]

Types

  • a 463
  • el 57
  • m 40
  • s 21
  • x 13
  • p 7
  • b 2
  • d 1
  • n 1
  • r 1
