Search (581 results, page 1 of 30)

  • Filter: theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.26
    0.25665343 = product of:
      0.38498014 = sum of:
        0.06688428 = product of:
          0.20065282 = sum of:
            0.20065282 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.20065282 = score(doc=562,freq=2.0), product of:
                0.35702205 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.042111535 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.20065282 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.20065282 = score(doc=562,freq=2.0), product of:
            0.35702205 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.042111535 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.10032641 = product of:
          0.20065282 = sum of:
            0.20065282 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.20065282 = score(doc=562,freq=2.0), product of:
                0.35702205 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.042111535 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
        0.017116595 = product of:
          0.03423319 = sum of:
            0.03423319 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.03423319 = score(doc=562,freq=2.0), product of:
                0.14746742 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042111535 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.6666667 = coord(4/6)
    
    Content
    Vgl.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
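The explain tree shown for result 1 is Lucene's ClassicSimilarity (TF-IDF) score breakdown. As a minimal sketch, the leaf weight for the term `3a` in document 562 can be reproduced from the values in the tree (freq=2.0, docFreq=24, maxDocs=44218, queryNorm=0.042111535, fieldNorm=0.046875). The helper functions below are illustrative, not Lucene's actual API:

```python
import math

def idf(doc_freq: int, max_docs: int) -> float:
    # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_weight(freq: float, doc_freq: int, max_docs: int,
                query_norm: float, field_norm: float) -> float:
    # tf = sqrt(freq); queryWeight = idf * queryNorm
    # fieldWeight = tf * idf * fieldNorm
    # leaf score = queryWeight * fieldWeight
    tf = math.sqrt(freq)
    i = idf(doc_freq, max_docs)
    query_weight = i * query_norm
    field_weight = tf * i * field_norm
    return query_weight * field_weight

# Values taken from the explain tree for term "3a" in doc 562:
score = term_weight(freq=2.0, doc_freq=24, max_docs=44218,
                    query_norm=0.042111535, field_norm=0.046875)
print(round(score, 6))  # → 0.200653, matching the 0.20065282 leaf above
```

The outer `sum of` / `coord(n/m)` nodes then combine such leaf scores, scaling by the fraction of query clauses that matched.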
  2. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.18
    
    Source
    https://arxiv.org/abs/2212.06721
  3. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.11
    
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language and domain independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human written summaries in a large collection of web-pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the aligning process from a training set and focuses on selecting high quality multi-word terms from human written summaries to generate suitable results for web-page summarization.
    Content
    A thesis presented to The University of Guelph in partial fulfilment of requirements for the degree of Master of Science in Computer Science. Vgl. unter: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
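The abstract of result 3 describes ranking adjacent word pairs with word association measures to find multi-word terms. The thesis's own measures are not given here; as an illustrative stand-in, the Dice coefficient is one standard association measure of the kind such extractors (e.g. LocalMaxs-based ones) rank candidates by:

```python
from collections import Counter

def dice_scores(tokens):
    # Dice association for adjacent word pairs:
    #   dice(x, y) = 2 * count(x y) / (count(x) + count(y))
    # Pairs whose words rarely occur apart score near 1.0.
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    return {pair: 2 * n / (unigrams[pair[0]] + unigrams[pair[1]])
            for pair, n in bigrams.items()}

text = ("information retrieval systems rank documents and "
        "information retrieval models score terms while "
        "information retrieval research and retrieval of documents evolve")
scores = dice_scores(text.split())

# "information retrieval" sticks together (all 3 occurrences of
# "information" are followed by "retrieval"); "retrieval of" is incidental.
print(round(scores[("information", "retrieval")], 3))  # → 0.857
print(scores[("retrieval", "of")])                     # → 0.4
```

A language- and domain-independent extractor would compute such scores over a large corpus and keep locally maximal, high-scoring n-grams as term candidates.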
  4. Semantik, Lexikographie und Computeranwendungen : Workshop ... (Bonn) : 1995.01.27-28 (1996) 0.11
    
    BK
    18.00 Einzelne Sprachen und Literaturen allgemein
    Classification
    ES 940 Allgemeine und vergleichende Sprach- und Literaturwissenschaft. Indogermanistik. Außereuropäische Sprachen und Literaturen / Spezialbereiche der allgemeinen Sprachwissenschaft / Datenverarbeitung und Sprachwissenschaft. Computerlinguistik / Maschinelle Sprachanalyse
    ET 400 Allgemeine und vergleichende Sprach- und Literaturwissenschaft. Indogermanistik. Außereuropäische Sprachen und Literaturen / Einzelgebiete der Sprachwissenschaft, Sprachbeschreibung / Semantik und Lexikologie / Allgemeines
    ES 945 Allgemeine und vergleichende Sprach- und Literaturwissenschaft. Indogermanistik. Außereuropäische Sprachen und Literaturen / Spezialbereiche der allgemeinen Sprachwissenschaft / Datenverarbeitung und Sprachwissenschaft. Computerlinguistik / Spracherkennung
    ET 580 Allgemeine und vergleichende Sprach- und Literaturwissenschaft. Indogermanistik. Außereuropäische Sprachen und Literaturen / Einzelgebiete der Sprachwissenschaft, Sprachbeschreibung / Semantik und Lexikologie / Lexikologie (diachrone und synchrone) / Lexikographie
    Date
    14. 4.2007 10:04:22
    Series
    Sprache und Information ; 33
  5. Seelbach, D.: Computerlinguistik und Dokumentation : keyphrases in Dokumentationsprozessen (1975) 0.08
    
    Classification
    ES 950 Allgemeine und vergleichende Sprach- und Literaturwissenschaft. Indogermanistik. Außereuropäische Sprachen und Literaturen / Spezialbereiche der allgemeinen Sprachwissenschaft / Datenverarbeitung und Sprachwissenschaft. Computerlinguistik / Maschinelle Redeanalyse
    ES 955 Allgemeine und vergleichende Sprach- und Literaturwissenschaft. Indogermanistik. Außereuropäische Sprachen und Literaturen / Spezialbereiche der allgemeinen Sprachwissenschaft / Datenverarbeitung und Sprachwissenschaft. Computerlinguistik / Maschinelle Referatherstellung, linguistische Dokumentation und Information
    Imprint
    München : Verlag Dokumentation
    RSWK
    Dokumentation (BVB)
    Linguistische Datenverarbeitung / Dokumentation (BVB)
  6. Lustig, G.: ¬Das Projekt WAI : Wörterbuchentwicklung für automatisches Indexing (1982) 0.06
    
    Source
    Deutscher Dokumentartag 1981, Mainz, 5.-8.10.1981: Kleincomputer in Information und Dokumentation. Bearb.: H. Strohl-Goebel
  7. Gombocz, W.L.: Stichwort oder Schlagwort versus Textwort : Grazer und Düsseldorfer Philosophie-Dokumentation und -Information nach bzw. gemäß Norbert Henrichs (2000) 0.06
    
    Imprint
    Düsseldorf : Universitäts- und Landesbibliothek
    Series
    Schriften der Universitäts- und Landesbibliothek Düsseldorf; 32
    Source
    Auf dem Weg zur Informationskultur: Wa(h)re Information? Festschrift für Norbert Henrichs zum 65. Geburtstag, Hrsg.: T.A. Schröder
  8. Werner, H.: Indexierung auf linguistischer Grundlage am Beispiel von JUDO-DS(1) (1982) 0.05
    
    Source
    Deutscher Dokumentartag 1981, Mainz, 5.-8.10.1981: Kleincomputer in Information und Dokumentation. Bearb.: H. Strohl-Goebel
  9. Zimmermann, H.H.: Linguistisch-technische Aspekte der maschinellen Übersetzung (1990) 0.05
    
    Source
    Grundlagen der praktischen Information und Dokumentation: ein Handbuch zur Einführung in die fachliche Informationsarbeit. 3. Aufl. Hrsg.: M. Buder u.a. Bd.1
  10. Hahn, U.: Methodische Grundlagen der Informationslinguistik (2013) 0.05
    
    Source
    Grundlagen der praktischen Information und Dokumentation. Handbuch zur Einführung in die Informationswissenschaft und -praxis. 6., völlig neu gefaßte Ausgabe. Hrsg. von R. Kuhlen, W. Semar u. D. Strauch. Begründet von Klaus Laisiepen, Ernst Lutterbeck, Karl-Heinrich Meyer-Uhlenried
  11. Kuhlen, R.: Experimentelle Morphologie in der Informationswissenschaft (1977) 0.05
    
    Content
    Zugl.: Regensburg, Univ., Diss. u.d.T.: Kuhlen, Rainer: Flexine und Derivative in der maschinellen Verarbeitung englischer Texte
    Imprint
    München : Verlag Dokumentation
    LCSH
    Information storage and retrieval systems
    RSWK
    Automatische Sprachanalyse / Dokumentation
  12. Byrne, C.C.; McCracken, S.A.: ¬An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.04
    
    Date
    15. 3.2000 10:22:37
    Source
    Journal of information science. 25(1999) no.2, S.113-131
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  13. Schwarz, C.: Freitextrecherche: Grenzen und Möglichkeiten (1982) 0.04
    Source
    Nachrichten für Dokumentation. 33(1982), S.228-236
  14. Schwarz, C.: Linguistische Hilfsmittel beim Information Retrieval (1984) 0.03
    Source
    Nachrichten für Dokumentation. 35(1984), S.179-182
  15. Winiwarter, W.: Bewältigung der Informationsflut : Stand der Computerlinguistik (1996) 0.03
    Abstract
In many areas of computational linguistics, an initial euphoric mood of optimism has given way to a phase of resigned stagnation. At the same time, however, this has made room for a more realistic view of things, one that bids farewell to 'toy systems' and turns to practical questions. One of the most pressing problems here is the efficient handling of a flood of information that grows from day to day. This paper gives an up-to-date overview of the techniques currently available. The emphasis is placed on information extraction systems, which, building on international evaluation programmes and generally available linguistic resources, have already achieved considerable success.
    Source
    Nachrichten für Dokumentation. 47(1996) H.3, S.131-150
  16. Zimmermann, H.H.: Maschinelle und Computergestützte Übersetzung (2004) 0.03
    Abstract
Machine Translation (MT) is understood in the following as the fully automatic translation of a text in one natural language into another natural language. Human Translation (HT) is understood as the intellectual translation of a text, with or without machine lexical aids and with or without word processing. Computer-aided translation (CAT) denotes, on the one hand, an intellectual translation that builds on a machine draft translation (MT), which is subsequently revised intellectually (post-editing); on the other hand, it denotes an intellectual translation in which a translation memory and/or a terminology bank is used before or during the intellectual translation process. ICAT is understood as a special variant of CAT in which a user without (sufficient) knowledge of the target language is supported in translating from his or her native language in such a way that the target-language equivalent is relatively error-free.
    Source
    Grundlagen der praktischen Information und Dokumentation. 5., völlig neu gefaßte Ausgabe. 2 Bde. Hrsg. von R. Kuhlen, Th. Seeger u. D. Strauch. Begründet von Klaus Laisiepen, Ernst Lutterbeck, Karl-Heinrich Meyer-Uhlenried. Bd.1: Handbuch zur Einführung in die Informationswissenschaft und -praxis
  17. Lorenz, S.: Konzeption und prototypische Realisierung einer begriffsbasierten Texterschließung (2006) 0.03
    Abstract
This thesis develops an approach that overcomes the fixation on the word and the weaknesses associated with it. It permits the extraction of information on the basis of the concepts represented and thus forms the basis of a content-oriented indexing of texts. The subsequent prototype implementation serves to verify the design and to assess and evaluate its possibilities and limits. Work on information extraction is devoted almost exclusively to English, where very good results have been achieved in particular for named entities. The results for less regular languages such as German are markedly worse. For this reason, and for practical considerations, in particular the author's familiarity with it, German is the primary subject of the investigations. Moving away from a narrow term orientation while emphasising the concepts represented suggests that not only the words used become secondary, but also the language used. In order not to exceed the scope of this thesis, the examination of this point focuses above all on the difficulties and peculiarities associated with different languages.
    Content
Doctoral dissertation, Universität Trier, Fachbereich IV, submitted for the degree of Doctor of Economic and Social Sciences. See: http://ubt.opus.hbz-nrw.de/volltexte/2006/377/pdf/LorenzSaschaDiss.pdf.
    Date
    22. 3.2015 9:17:30
  18. Seelbach, H.E.: Von der Stichwortliste zum halbautomatisch kontrollierten Wortschatz (1977) 0.03
    Abstract
In full-text retrieval systems, any character string in the text can be adopted as a potential search term. For large document collections this leads, despite stop words, to extensive word lists that can no longer be surveyed. A procedure is presented, consisting of programs and the rational, intellectual effort of documentalists, that semi-automatically converts such keyword lists into a controlled vocabulary with preferred terms, compound expressions and thesaurus relations.
    Source
    Nachrichten für Dokumentation. 28(1977), S.159-164
  19. Stock, M.; Stock, W.G.: Literaturnachweis- und Terminologiedatenbank : die Erfassung von Fachliteratur und Fachterminologie eines Fachgebiets in einer kombinierten Datenbank (1991) 0.03
    Abstract
In specialised scientific fields, a terminology database can be built up at the same time as a literature database. The text-word method with a translation relation is a suitable documentation method. Print-layout programs built with the software package LBase allow bibliographies and dictionaries to be generated.
    Source
    Nachrichten für Dokumentation. 42(1991) H.1, S.35-41
  20. Stock, W.G.: Textwortmethode : Norbert Henrichs zum 65. (3) (2000) 0.03
    Abstract
Only a few documentation methods are associated with the names of their developers. Exceptions are Melvil Dewey (DDC), S.R. Ranganathan (Colon Classification) and Norbert Henrichs. His text-word method enables the indexing and retrieval of literature from fields that have no generally accepted specialist terminology, i.e. many of the social sciences and humanities, above all philosophy. Henrichs designed the text-word method in the late 1960s for use in electronic philosophy documentation. This makes him not only one of the pioneers of applying electronic data processing in information practice, but also the pioneer of documenting terminologically non-rigid specialist languages.
