Search (715 results, page 1 of 36)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.16
    0.15702711 = product of:
      0.27479744 = sum of:
        0.062927395 = product of:
          0.18878219 = sum of:
            0.18878219 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.18878219 = score(doc=562,freq=2.0), product of:
                0.33590057 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03962021 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.18878219 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.18878219 = score(doc=562,freq=2.0), product of:
            0.33590057 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03962021 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.006983884 = weight(_text_:a in 562) [ClassicSimilarity], result of:
          0.006983884 = score(doc=562,freq=8.0), product of:
            0.04568396 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03962021 = queryNorm
            0.15287387 = fieldWeight in 562, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.016103974 = product of:
          0.032207947 = sum of:
            0.032207947 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.032207947 = score(doc=562,freq=2.0), product of:
                0.13874322 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03962021 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.5714286 = coord(4/7)
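The arithmetic in the explain tree above can be reproduced directly. A minimal sketch, assuming Lucene ClassicSimilarity's standard formulas (tf = sqrt(freq), idf = ln(maxDocs/(docFreq+1)) + 1, fieldWeight = tf * idf * fieldNorm); all inputs are taken from the tree itself, and queryNorm is used as reported since it depends on the full query and index:

```python
import math

# Reproduce the "_text_:3a in 562" branch of the explain tree above.
freq       = 2.0         # termFreq=2.0
doc_freq   = 24          # from idf(docFreq=24, maxDocs=44218)
max_docs   = 44218
query_norm = 0.03962021  # queryNorm, taken as given
field_norm = 0.046875    # fieldNorm(doc=562)

tf  = math.sqrt(freq)                          # 1.4142135
idf = math.log(max_docs / (doc_freq + 1)) + 1  # 8.478011
query_weight = idf * query_norm                # 0.33590057 = queryWeight
field_weight = tf * idf * field_norm           # 0.56201804 = fieldWeight
term_score   = query_weight * field_weight     # 0.18878219

# The document score is the sum over its matching clauses, scaled by the
# coordination factor (here 4 of 7 query clauses matched):
doc_score = 0.27479744 * (4 / 7)               # 0.15702711
```

The same composition explains every `weight(...)` branch in the result list; only freq, docFreq, and fieldNorm change per term and document.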
    
    Abstract
     Document representations for text classification are typically based on the classical Bag-Of-Words paradigm. This approach comes with deficiencies that motivate the integration of features on a higher semantic level than single words. In this paper we propose an enhancement of the classical document representation through concepts extracted from background knowledge. Boosting is used for actual classification. Experimental evaluations on two well-known text corpora support our approach through consistent improvement of the results.
    Content
     Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
    Type
    a
  2. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.11
    0.11122191 = product of:
      0.2595178 = sum of:
        0.062927395 = product of:
          0.18878219 = sum of:
            0.18878219 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.18878219 = score(doc=862,freq=2.0), product of:
                0.33590057 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03962021 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
        0.18878219 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.18878219 = score(doc=862,freq=2.0), product of:
            0.33590057 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03962021 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
        0.00780822 = weight(_text_:a in 862) [ClassicSimilarity], result of:
          0.00780822 = score(doc=862,freq=10.0), product of:
            0.04568396 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03962021 = queryNorm
            0.1709182 = fieldWeight in 862, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
      0.42857143 = coord(3/7)
    
    Abstract
     This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges - summary and question answering - prompt ChatGPT to produce original content (98-99%) from a single text entry and sequential questions initially posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and simple grammatical set for understanding the writing mechanics of chatbots in evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
    Source
     https://arxiv.org/abs/2212.06721
    Type
    a
  3. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.09
    0.09040044 = product of:
      0.21093437 = sum of:
        0.18878219 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.18878219 = score(doc=563,freq=2.0), product of:
            0.33590057 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03962021 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.006048221 = weight(_text_:a in 563) [ClassicSimilarity], result of:
          0.006048221 = score(doc=563,freq=6.0), product of:
            0.04568396 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03962021 = queryNorm
            0.13239266 = fieldWeight in 563, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.016103974 = product of:
          0.032207947 = sum of:
            0.032207947 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
              0.032207947 = score(doc=563,freq=2.0), product of:
                0.13874322 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03962021 = queryNorm
                0.23214069 = fieldWeight in 563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=563)
          0.5 = coord(1/2)
      0.42857143 = coord(3/7)
    
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language and domain independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human written summaries in a large collection of web-pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the aligning process from a training set and focuses on selecting high quality multi-word terms from human written summaries to generate suitable results for web-page summarization.
    Content
     A Thesis presented to The University of Guelph in partial fulfilment of requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
  4. Zimmermann, H.H.: Wortrelationierung in der Sprachtechnik : Stilhilfen, Retrievalhilfen, Übersetzungshilfen (1992) 0.07
    0.074350044 = product of:
      0.26022515 = sum of:
        0.006983884 = weight(_text_:a in 1372) [ClassicSimilarity], result of:
          0.006983884 = score(doc=1372,freq=2.0), product of:
            0.04568396 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03962021 = queryNorm
            0.15287387 = fieldWeight in 1372, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=1372)
        0.25324127 = weight(_text_:287 in 1372) [ClassicSimilarity], result of:
          0.25324127 = score(doc=1372,freq=2.0), product of:
            0.27509487 = queryWeight, product of:
              6.943297 = idf(docFreq=115, maxDocs=44218)
              0.03962021 = queryNorm
            0.92055976 = fieldWeight in 1372, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.943297 = idf(docFreq=115, maxDocs=44218)
              0.09375 = fieldNorm(doc=1372)
      0.2857143 = coord(2/7)
    
    Pages
    S.287-296
    Type
    a
  5. Lein, H.: Aspekte der Realisierung des semantischen Retrievals (1994) 0.07
    0.06989849 = product of:
      0.24464472 = sum of:
        0.23649685 = weight(_text_:europa in 4323) [ClassicSimilarity], result of:
          0.23649685 = score(doc=4323,freq=2.0), product of:
            0.24612433 = queryWeight, product of:
              6.2120905 = idf(docFreq=240, maxDocs=44218)
              0.03962021 = queryNorm
            0.9608837 = fieldWeight in 4323, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.2120905 = idf(docFreq=240, maxDocs=44218)
              0.109375 = fieldNorm(doc=4323)
        0.008147865 = weight(_text_:a in 4323) [ClassicSimilarity], result of:
          0.008147865 = score(doc=4323,freq=2.0), product of:
            0.04568396 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03962021 = queryNorm
            0.17835285 = fieldWeight in 4323, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=4323)
      0.2857143 = coord(2/7)
    
    Source
    Blick Europa! Informations- und Dokumentenmanagement. Deutscher Dokumentartag 1994, Universität Trier, 27.-30.9.1994. Hrsg.: W. Neubauer
    Type
    a
  6. Rahmstorf, G.: Semantisches Information Retrieval (1994) 0.07
    0.06989849 = product of:
      0.24464472 = sum of:
        0.23649685 = weight(_text_:europa in 8879) [ClassicSimilarity], result of:
          0.23649685 = score(doc=8879,freq=2.0), product of:
            0.24612433 = queryWeight, product of:
              6.2120905 = idf(docFreq=240, maxDocs=44218)
              0.03962021 = queryNorm
            0.9608837 = fieldWeight in 8879, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.2120905 = idf(docFreq=240, maxDocs=44218)
              0.109375 = fieldNorm(doc=8879)
        0.008147865 = weight(_text_:a in 8879) [ClassicSimilarity], result of:
          0.008147865 = score(doc=8879,freq=2.0), product of:
            0.04568396 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03962021 = queryNorm
            0.17835285 = fieldWeight in 8879, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=8879)
      0.2857143 = coord(2/7)
    
    Source
    Blick Europa! Informations- und Dokumentenmanagement. Deutscher Dokumentartag 1994, Universität Trier, 27.-30.9.1994. Hrsg.: W. Neubauer
    Type
    a
  7. Ohly, H.P.; Binder, G.: Semantisches Retrieval mit sozialwissenschaftlichen Dokumenten : erste Erfahrungen mit RELATIO/IR (1994) 0.06
    0.059912995 = product of:
      0.20969547 = sum of:
        0.20271158 = weight(_text_:europa in 346) [ClassicSimilarity], result of:
          0.20271158 = score(doc=346,freq=2.0), product of:
            0.24612433 = queryWeight, product of:
              6.2120905 = idf(docFreq=240, maxDocs=44218)
              0.03962021 = queryNorm
            0.8236146 = fieldWeight in 346, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.2120905 = idf(docFreq=240, maxDocs=44218)
              0.09375 = fieldNorm(doc=346)
        0.006983884 = weight(_text_:a in 346) [ClassicSimilarity], result of:
          0.006983884 = score(doc=346,freq=2.0), product of:
            0.04568396 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03962021 = queryNorm
            0.15287387 = fieldWeight in 346, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=346)
      0.2857143 = coord(2/7)
    
    Source
    Blick Europa! Informations- und Dokumentenmanagement. Deutscher Dokumentartag 1994, Universität Trier, 27.-30.9.1994. Hrsg.: W. Neubauer
    Type
    a
  8. Rolland, M.T.: Grammatikstandardisierung im Bereich der Sprachverarbeitung (1996) 0.05
    0.0495667 = product of:
      0.17348345 = sum of:
        0.0046559228 = weight(_text_:a in 5356) [ClassicSimilarity], result of:
          0.0046559228 = score(doc=5356,freq=2.0), product of:
            0.04568396 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03962021 = queryNorm
            0.10191591 = fieldWeight in 5356, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=5356)
        0.16882752 = weight(_text_:287 in 5356) [ClassicSimilarity], result of:
          0.16882752 = score(doc=5356,freq=2.0), product of:
            0.27509487 = queryWeight, product of:
              6.943297 = idf(docFreq=115, maxDocs=44218)
              0.03962021 = queryNorm
            0.6137065 = fieldWeight in 5356, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.943297 = idf(docFreq=115, maxDocs=44218)
              0.0625 = fieldNorm(doc=5356)
      0.2857143 = coord(2/7)
    
    Source
    Nachrichten für Dokumentation. 47(1996) H.5, S.287-292
    Type
    a
  9. Dahlgren, K.: Naive semantics for natural language understanding (19??) 0.04
    0.04220688 = product of:
      0.29544815 = sum of:
        0.29544815 = weight(_text_:287 in 5302) [ClassicSimilarity], result of:
          0.29544815 = score(doc=5302,freq=2.0), product of:
            0.27509487 = queryWeight, product of:
              6.943297 = idf(docFreq=115, maxDocs=44218)
              0.03962021 = queryNorm
            1.0739864 = fieldWeight in 5302, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.943297 = idf(docFreq=115, maxDocs=44218)
              0.109375 = fieldNorm(doc=5302)
      0.14285715 = coord(1/7)
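The single-clause tree above isolates the idf component. As a hedged sketch, assuming ClassicSimilarity's documented idf formula, the same expression reproduces every `idf(docFreq=..., maxDocs=44218)` line recurring throughout these explain trees:

```python
import math

# ClassicSimilarity: idf = ln(maxDocs / (docFreq + 1)) + 1
def idf(doc_freq: int, max_docs: int) -> float:
    return math.log(max_docs / (doc_freq + 1)) + 1

# (docFreq, idf reported in the trees), all with maxDocs=44218:
reported = [
    (24,    8.478011),   # "_text_:3a", "_text_:2f"
    (115,   6.943297),   # "_text_:287"
    (240,   6.2120905),  # "_text_:europa"
    (3622,  3.5018296),  # "_text_:22"
    (37942, 1.153047),   # "_text_:a"
]
for doc_freq, want in reported:
    assert abs(idf(doc_freq, 44218) - want) < 1e-4
```

Rare terms (low docFreq) thus dominate the scores, which is why the URL-fragment tokens "3a" and "2f" outweigh everything else in the top-ranked entries.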
    
    Isbn
    0-89838-287-4
  10. Suissa, O.; Elmalech, A.; Zhitomirsky-Geffet, M.: Text analysis using deep neural networks in digital humanities and information science (2022) 0.03
    0.032184314 = product of:
      0.1126451 = sum of:
        0.007127897 = weight(_text_:a in 491) [ClassicSimilarity], result of:
          0.007127897 = score(doc=491,freq=12.0), product of:
            0.04568396 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03962021 = queryNorm
            0.15602624 = fieldWeight in 491, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=491)
        0.1055172 = weight(_text_:287 in 491) [ClassicSimilarity], result of:
          0.1055172 = score(doc=491,freq=2.0), product of:
            0.27509487 = queryWeight, product of:
              6.943297 = idf(docFreq=115, maxDocs=44218)
              0.03962021 = queryNorm
            0.3835666 = fieldWeight in 491, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.943297 = idf(docFreq=115, maxDocs=44218)
              0.0390625 = fieldNorm(doc=491)
      0.2857143 = coord(2/7)
    
    Abstract
     Combining computational technologies and humanities is an ongoing effort aimed at making resources such as texts, images, audio, video, and other artifacts digitally available, searchable, and analyzable. In recent years, deep neural networks (DNN) have come to dominate the field of automatic text analysis and natural language processing (NLP), in some cases presenting super-human performance. DNNs are the state-of-the-art machine learning algorithms solving many NLP tasks that are relevant for Digital Humanities (DH) research, such as spell checking, language detection, entity extraction, author detection, question answering, and other tasks. These supervised algorithms learn patterns from a large number of "right" and "wrong" examples and apply them to new examples. However, using DNNs for analyzing the text resources in DH research presents two main challenges: (un)availability of training data and a need for domain adaptation. This paper explores these challenges by analyzing multiple use-cases of DH studies in recent literature and their possible solutions, and lays out a practical decision model for DH experts on when and how to choose the appropriate deep learning approaches for their research. Moreover, in this paper, we aim to raise awareness of the benefits of utilizing deep learning models in the DH community.
    Source
    Journal of the Association for Information Science and Technology. 73(2022) no.2, S.268-287
    Type
    a
  11. Carter-Sigglow, J.: ¬Die Rolle der Sprache bei der Informationsvermittlung (2001) 0.03
    0.029956497 = product of:
      0.10484774 = sum of:
        0.10135579 = weight(_text_:europa in 5882) [ClassicSimilarity], result of:
          0.10135579 = score(doc=5882,freq=2.0), product of:
            0.24612433 = queryWeight, product of:
              6.2120905 = idf(docFreq=240, maxDocs=44218)
              0.03962021 = queryNorm
            0.4118073 = fieldWeight in 5882, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.2120905 = idf(docFreq=240, maxDocs=44218)
              0.046875 = fieldNorm(doc=5882)
        0.003491942 = weight(_text_:a in 5882) [ClassicSimilarity], result of:
          0.003491942 = score(doc=5882,freq=2.0), product of:
            0.04568396 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03962021 = queryNorm
            0.07643694 = fieldWeight in 5882, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=5882)
      0.2857143 = coord(2/7)
    
    Abstract
     In the age of the Internet and e-commerce, German information professionals too must offer - and even design - their services in English in order to reach the international community. On the other hand, the linguistic identity of the individual nations plays a major role precisely in the European knowledge market. Information intermediaries work in this field of tension between globalization and localization, supported in this by language specialists. One must be aware that every language - including English, commonly held to be international - constitutes a language community. Drawing on current examples, this paper shows that language must be not only grammatically and terminologically correct; it should also meet the linguistic expectations of its recipients so as not to overstep the boundaries of the language world. The role of the language specialists therefore consists in making the mediation of information between these worlds frictionless.
    Type
    a
  12. Informationslinguistische Texterschließung (1986) 0.02
    0.01751827 = product of:
      0.12262789 = sum of:
        0.12262789 = weight(_text_:bib in 186) [ClassicSimilarity], result of:
          0.12262789 = score(doc=186,freq=4.0), product of:
            0.24937792 = queryWeight, product of:
              6.29421 = idf(docFreq=221, maxDocs=44218)
              0.03962021 = queryNorm
            0.49173516 = fieldWeight in 186, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.29421 = idf(docFreq=221, maxDocs=44218)
              0.0390625 = fieldNorm(doc=186)
      0.14285715 = coord(1/7)
    
    Classification
    Bib D 68 / Sprachverarbeitung
    SBB
    Bib D 68 / Sprachverarbeitung
  13. Warner, A.J.: Natural language processing (1987) 0.01
    0.014930223 = product of:
      0.05225578 = sum of:
        0.0093118455 = weight(_text_:a in 337) [ClassicSimilarity], result of:
          0.0093118455 = score(doc=337,freq=2.0), product of:
            0.04568396 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03962021 = queryNorm
            0.20383182 = fieldWeight in 337, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.125 = fieldNorm(doc=337)
        0.042943932 = product of:
          0.085887864 = sum of:
            0.085887864 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.085887864 = score(doc=337,freq=2.0), product of:
                0.13874322 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03962021 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
    Type
    a
  14. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.01
    0.014768131 = product of:
      0.051688455 = sum of:
        0.014112516 = weight(_text_:a in 4506) [ClassicSimilarity], result of:
          0.014112516 = score(doc=4506,freq=6.0), product of:
            0.04568396 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03962021 = queryNorm
            0.3089162 = fieldWeight in 4506, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=4506)
        0.037575938 = product of:
          0.075151876 = sum of:
            0.075151876 = weight(_text_:22 in 4506) [ClassicSimilarity], result of:
              0.075151876 = score(doc=4506,freq=2.0), product of:
                0.13874322 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03962021 = queryNorm
                0.5416616 = fieldWeight in 4506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4506)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Date
    8.10.2000 11:52:22
    Source
    Library science with a slant to documentation. 28(1991) no.4, S.125-130
    Type
    a
  15. Experimentelles und praktisches Information Retrieval : Festschrift für Gerhard Lustig (1992) 0.01
    0.0144794 = product of:
      0.10135579 = sum of:
        0.10135579 = weight(_text_:europa in 4) [ClassicSimilarity], result of:
          0.10135579 = score(doc=4,freq=2.0), product of:
            0.24612433 = queryWeight, product of:
              6.2120905 = idf(docFreq=240, maxDocs=44218)
              0.03962021 = queryNorm
            0.4118073 = fieldWeight in 4, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.2120905 = idf(docFreq=240, maxDocs=44218)
              0.046875 = fieldNorm(doc=4)
      0.14285715 = coord(1/7)
    
    Content
     Contains the contributions: SALTON, G.: Effective text understanding in information retrieval; KRAUSE, J.: Intelligentes Information retrieval; FUHR, N.: Konzepte zur Gestaltung zukünftiger Information-Retrieval-Systeme; HÜTHER, H.: Überlegungen zu einem mathematischen Modell für die Type-Token-, die Grundform-Token und die Grundform-Type-Relation; KNORZ, G.: Automatische Generierung inferentieller Links in und zwischen Hyperdokumenten; KONRAD, E.: Zur Effektivitätsbewertung von Information-Retrieval-Systemen; HENRICHS, N.: Retrievalunterstützung durch automatisch generierte Wortfelder; LÜCK, W., W. RITTBERGER u. M. SCHWANTNER: Der Einsatz des Automatischen Indexierungs- und Retrieval-System (AIR) im Fachinformationszentrum Karlsruhe; REIMER, U.: Verfahren der Automatischen Indexierung. Benötigtes Vorwissen und Ansätze zu seiner automatischen Akquisition: Ein Überblick; ENDRES-NIGGEMEYER, B.: Dokumentrepräsentation: Ein individuelles prozedurales Modell des Abstracting, des Indexierens und Klassifizierens; SEELBACH, D.: Zur Entwicklung von zwei- und mehrsprachigen lexikalischen Datenbanken und Terminologiedatenbanken; ZIMMERMANN, H.: Der Einfluß der Sprachbarrieren in Europa und Möglichkeiten zu ihrer Minderung; LENDERS, W.: Wörter zwischen Welt und Wissen; PANYR, J.: Frames, Thesauri und automatische Klassifikation (Clusteranalyse); HAHN, U.: Forschungsstrategien und Erkenntnisinteressen in der anwendungsorientierten automatischen Sprachverarbeitung. Überlegungen zu einer ingenieurorientierten Computerlinguistik; KUHLEN, R.: Hypertext und Information Retrieval - mehr als Browsing und Suche.
  16. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatic generated word hierarchies (1996) 0.01
    0.013063944 = product of:
      0.045723803 = sum of:
        0.008147865 = weight(_text_:a in 3164) [ClassicSimilarity], result of:
          0.008147865 = score(doc=3164,freq=2.0), product of:
            0.04568396 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03962021 = queryNorm
            0.17835285 = fieldWeight in 3164, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=3164)
        0.037575938 = product of:
          0.075151876 = sum of:
            0.075151876 = weight(_text_:22 in 3164) [ClassicSimilarity], result of:
              0.075151876 = score(doc=3164,freq=2.0), product of:
                0.13874322 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03962021 = queryNorm
                0.5416616 = fieldWeight in 3164, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3164)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
    Type
    a
  17. Somers, H.: Example-based machine translation : Review article (1999) 0.01
    0.013063944 = product of:
      0.045723803 = sum of:
        0.008147865 = weight(_text_:a in 6672) [ClassicSimilarity], result of:
          0.008147865 = score(doc=6672,freq=2.0), product of:
            0.04568396 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03962021 = queryNorm
            0.17835285 = fieldWeight in 6672, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=6672)
        0.037575938 = product of:
          0.075151876 = sum of:
            0.075151876 = weight(_text_:22 in 6672) [ClassicSimilarity], result of:
              0.075151876 = score(doc=6672,freq=2.0), product of:
                0.13874322 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03962021 = queryNorm
                0.5416616 = fieldWeight in 6672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6672)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Date
    31. 7.1996 9:22:19
    Type
    a
  18. New tools for human translators (1997) 0.01
    Abstract
    A special issue devoted to the theme of new tools for human translators
    Date
    31. 7.1996 9:22:19
  19. Baayen, R.H.; Lieber, H.: Word frequency distributions and lexical semantics (1997) 0.01
    Date
    28. 2.1999 10:48:22
    Type
    a
  20. ¬Der Student aus dem Computer (2023) 0.01
    Date
    27. 1.2023 16:22:55
    Type
    a

Types

  • a 629
  • el 75
  • m 45
  • s 25
  • x 9
  • p 7
  • b 1
  • d 1
  • pat 1
  • r 1
