Search (92 results, page 1 of 5)

  • theme_ss:"Computerlinguistik"
  1. Lezius, W.: Morphy - Morphologie und Tagging für das Deutsche (2013) 0.06
    0.06480524 = product of:
      0.12961048 = sum of:
        0.12961048 = product of:
          0.19441572 = sum of:
            0.14706495 = weight(_text_:d.h in 1490) [ClassicSimilarity], result of:
              0.14706495 = score(doc=1490,freq=2.0), product of:
                0.26960507 = queryWeight, product of:
                  6.1714344 = idf(docFreq=250, maxDocs=44218)
                  0.043685965 = queryNorm
                0.5454829 = fieldWeight in 1490, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.1714344 = idf(docFreq=250, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1490)
            0.04735076 = weight(_text_:22 in 1490) [ClassicSimilarity], result of:
              0.04735076 = score(doc=1490,freq=2.0), product of:
                0.1529808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043685965 = queryNorm
                0.30952093 = fieldWeight in 1490, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1490)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
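
    Note
    The indented tree above is the Lucene/Solr "explain" breakdown for the classic TF-IDF similarity: each term weight is the queryWeight (idf x queryNorm) times the fieldWeight (tf = sqrt(freq), times idf, times fieldNorm), and the coord factors scale the sum by the share of query clauses that matched. A minimal Python sketch of that arithmetic for this first hit (illustrative only, not the search engine's own code):

      import math

      def classic_weight(freq, idf, query_norm, field_norm):
          """Recompute one weight(...) branch as ClassicSimilarity reports it."""
          tf = math.sqrt(freq)                  # 1.4142135 = tf(freq=2.0)
          query_weight = idf * query_norm       # e.g. 0.26960507 = queryWeight
          field_weight = tf * idf * field_norm  # e.g. 0.5454829  = fieldWeight
          return query_weight * field_weight

      w_dh = classic_weight(freq=2.0, idf=6.1714344, query_norm=0.043685965, field_norm=0.0625)
      w_22 = classic_weight(freq=2.0, idf=3.5018296, query_norm=0.043685965, field_norm=0.0625)

      # coord(2/3) and coord(1/2) scale the sum by the fraction of matching clauses
      score = (w_dh + w_22) * (2 / 3) * (1 / 2)
      print(round(score, 8))                    # ~0.0648052, the score shown for this hit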
    
    Abstract
    Morphy is a freely available software package for the morphological analysis and synthesis of German and for context-sensitive part-of-speech tagging. Its use is not subject to any restrictions. Since further development has been discontinued, Morphy is provided as is, i.e. at your own risk, without any liability or warranty and, above all, without support. Morphy is available only for the Windows platform and runs only on standalone PCs.
    Date
    22. 3.2015 9:30:24
  2. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.06
    0.06387635 = sum of:
      0.052038666 = product of:
        0.20815466 = sum of:
          0.20815466 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.20815466 = score(doc=562,freq=2.0), product of:
              0.3703701 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.043685965 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.25 = coord(1/4)
      0.01183769 = product of:
        0.03551307 = sum of:
          0.03551307 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.03551307 = score(doc=562,freq=2.0), product of:
              0.1529808 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043685965 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf
    Date
    8. 1.2013 10:22:32
  3. Wanner, L.: Lexical choice in text generation and machine translation (1996) 0.04
    0.036117177 = product of:
      0.072234355 = sum of:
        0.072234355 = product of:
          0.10835153 = sum of:
            0.061000764 = weight(_text_:l in 8521) [ClassicSimilarity], result of:
              0.061000764 = score(doc=8521,freq=2.0), product of:
                0.17363653 = queryWeight, product of:
                  3.9746525 = idf(docFreq=2257, maxDocs=44218)
                  0.043685965 = queryNorm
                0.35131297 = fieldWeight in 8521, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9746525 = idf(docFreq=2257, maxDocs=44218)
                  0.0625 = fieldNorm(doc=8521)
            0.04735076 = weight(_text_:22 in 8521) [ClassicSimilarity], result of:
              0.04735076 = score(doc=8521,freq=2.0), product of:
                0.1529808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043685965 = queryNorm
                0.30952093 = fieldWeight in 8521, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=8521)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  4. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.03
    0.026019333 = product of:
      0.052038666 = sum of:
        0.052038666 = product of:
          0.20815466 = sum of:
            0.20815466 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.20815466 = score(doc=862,freq=2.0), product of:
                0.3703701 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.043685965 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Source
    https://arxiv.org/abs/2212.06721
  5. Luo, L.; Ju, J.; Li, Y.-F.; Haffari, G.; Xiong, B.; Pan, S.: ChatRule: mining logical rules with large language models for knowledge graph reasoning (2023) 0.02
    0.022573236 = product of:
      0.045146473 = sum of:
        0.045146473 = product of:
          0.067719705 = sum of:
            0.038125478 = weight(_text_:l in 1171) [ClassicSimilarity], result of:
              0.038125478 = score(doc=1171,freq=2.0), product of:
                0.17363653 = queryWeight, product of:
                  3.9746525 = idf(docFreq=2257, maxDocs=44218)
                  0.043685965 = queryNorm
                0.2195706 = fieldWeight in 1171, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9746525 = idf(docFreq=2257, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1171)
            0.029594226 = weight(_text_:22 in 1171) [ClassicSimilarity], result of:
              0.029594226 = score(doc=1171,freq=2.0), product of:
                0.1529808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043685965 = queryNorm
                0.19345059 = fieldWeight in 1171, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1171)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    23.11.2023 19:07:22
  6. Navarretta, C.; Pedersen, B.S.; Hansen, D.H.: Language technology in knowledge-organization systems (2006) 0.02
    0.01838312 = product of:
      0.03676624 = sum of:
        0.03676624 = product of:
          0.11029871 = sum of:
            0.11029871 = weight(_text_:d.h in 5706) [ClassicSimilarity], result of:
              0.11029871 = score(doc=5706,freq=2.0), product of:
                0.26960507 = queryWeight, product of:
                  6.1714344 = idf(docFreq=250, maxDocs=44218)
                  0.043685965 = queryNorm
                0.40911216 = fieldWeight in 5706, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.1714344 = idf(docFreq=250, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5706)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  7. Guthrie, L.; Pustejovsky, J.; Wilks, Y.; Slator, B.M.: ¬The role of lexicons in natural language processing (1996) 0.02
    0.01779189 = product of:
      0.03558378 = sum of:
        0.03558378 = product of:
          0.10675134 = sum of:
            0.10675134 = weight(_text_:l in 6825) [ClassicSimilarity], result of:
              0.10675134 = score(doc=6825,freq=2.0), product of:
                0.17363653 = queryWeight, product of:
                  3.9746525 = idf(docFreq=2257, maxDocs=44218)
                  0.043685965 = queryNorm
                0.6147977 = fieldWeight in 6825, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9746525 = idf(docFreq=2257, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6825)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  8. Warner, A.J.: Natural language processing (1987) 0.02
    0.015783587 = product of:
      0.031567175 = sum of:
        0.031567175 = product of:
          0.09470152 = sum of:
            0.09470152 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.09470152 = score(doc=337,freq=2.0), product of:
                0.1529808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043685965 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  9. Larroche-Boutet, V.; Pöhl, K.: ¬Das Nominalsyntagma : über die Nutzbarmachung eines logico-semantischen Konzeptes für dokumentarische Fragestellungen (1993) 0.02
    0.015319265 = product of:
      0.03063853 = sum of:
        0.03063853 = product of:
          0.09191559 = sum of:
            0.09191559 = weight(_text_:d.h in 5282) [ClassicSimilarity], result of:
              0.09191559 = score(doc=5282,freq=2.0), product of:
                0.26960507 = queryWeight, product of:
                  6.1714344 = idf(docFreq=250, maxDocs=44218)
                  0.043685965 = queryNorm
                0.3409268 = fieldWeight in 5282, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.1714344 = idf(docFreq=250, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5282)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    The paper begins by setting out the strategic decisions required for indexing large volumes of text: both an indexing method (human or automatic indexing) and an indexing language (free, controlled or natural language) have to be chosen. The SYDO-LYON research group opted for automatic full indexing in natural language. Starting from the distinction between predicative and referential parts of a text, the nominal syntagma (noun phrase) is defined as the smallest referential text unit, the phenomenon of actualization, which is decisive for the constitution of a nominal syntagma, is explained, and the morphological means for recognizing nominal syntagmas are pointed out. All nominal syntagmas of a text are extracted as its potential descriptors, and aids for the users of a database that works with this indexing method are presented. The concept of anaphora (i.e. the resumption of nominal syntagmas by pronouns) is also briefly defined, its use for weighting descriptor terms (by counting their frequency in the text) is shown, and morphological and syntactic rules for automatically determining the nominal syntagma taken up by an anaphoric pronoun are formulated. Before the aims and limits of the work are discussed in conclusion, a further difference between nominal syntagma and descriptor term is noted: the nominal syntagma refers to an object, which may be an individual object or a class, whereas the descriptor term always refers to a class.
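    Note
    The weighting step described in the abstract (counting how often a noun phrase is resumed by anaphoric pronouns) can be sketched as follows; the noun-phrase list and the pronoun resolution are invented for illustration and are not taken from the paper:

      from collections import Counter

      # Hypothetical input: extracted noun phrases and already-resolved anaphora.
      noun_phrases = ["the nominal syntagma", "the indexing language",
                      "the nominal syntagma", "the database"]
      resolved_anaphora = {"it": "the nominal syntagma", "this": "the indexing language"}

      weights = Counter(noun_phrases)
      for pronoun, antecedent in resolved_anaphora.items():
          weights[antecedent] += 1      # each anaphoric resumption adds to the weight

      for descriptor, weight in weights.most_common():
          print(descriptor, weight)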
  10. Witschel, H.F.: Text, Wörter, Morpheme : Möglichkeiten einer automatischen Terminologie-Extraktion (2004) 0.02
    0.015319265 = product of:
      0.03063853 = sum of:
        0.03063853 = product of:
          0.09191559 = sum of:
            0.09191559 = weight(_text_:d.h in 126) [ClassicSimilarity], result of:
              0.09191559 = score(doc=126,freq=2.0), product of:
                0.26960507 = queryWeight, product of:
                  6.1714344 = idf(docFreq=250, maxDocs=44218)
                  0.043685965 = queryNorm
                0.3409268 = fieldWeight in 126, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.1714344 = idf(docFreq=250, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=126)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    This thesis deals with a subfield of text mining: it attempts to extract information, in this case technical terminology, from natural-language text. Its underlying thesis is that in many areas of text mining a combination of different methods can be useful in order to do justice to the many facets of natural language. The methods applied to terminology extraction are statistical and linguistic (or pattern-based) in nature. To derive them, several properties of technical terms that are relevant to their extraction were worked out: the fact that many technical terms are noun phrases of a particular form, for example, can be exploited directly in a search for certain POS patterns, while the distribution of terms in technical texts led to a statistical approach, difference analysis. Together with a few further ones, these approaches were integrated into a procedure that is able to learn from user feedback and to refine the search for terminology in several iterations. Several parameters of the procedure were left adjustable, i.e. the user can tune them as needed. An examination of the results on two technical texts from different domains showed that although the individual methods complement each other well, the optimal values of the adjustable parameters, and even the choice of methods to apply, are text- and domain-dependent.
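    Note
    The statistical component mentioned in the abstract, difference analysis, ranks a candidate term by how much more frequent it is in the domain text than in a general reference corpus. A rough Python sketch of that idea (corpora, tokenization and scoring formula are illustrative assumptions, not the thesis' implementation):

      from collections import Counter

      def relative_freq(tokens):
          counts = Counter(tokens)
          total = sum(counts.values())
          return {term: count / total for term, count in counts.items()}

      def difference_analysis(domain_tokens, reference_tokens, floor=1e-6):
          # terms that are much more frequent in the domain text score high
          dom, ref = relative_freq(domain_tokens), relative_freq(reference_tokens)
          scores = {term: freq / max(ref.get(term, 0.0), floor) for term, freq in dom.items()}
          return sorted(scores.items(), key=lambda item: item[1], reverse=True)

      domain = "morphological analysis and tagging improve analysis of terminology".split()
      reference = "the cat sat on the mat and the dog sat too".split()
      print(difference_analysis(domain, reference)[:3])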
  11. Alonge, A.; Calzolari, N.; Vossen, P.; Bloksma, L.; Castellon, I.; Marti, M.A.; Peters, W.: ¬The linguistic design of the EuroWordNet database (1998) 0.02
    0.01525019 = product of:
      0.03050038 = sum of:
        0.03050038 = product of:
          0.09150114 = sum of:
            0.09150114 = weight(_text_:l in 6440) [ClassicSimilarity], result of:
              0.09150114 = score(doc=6440,freq=2.0), product of:
                0.17363653 = queryWeight, product of:
                  3.9746525 = idf(docFreq=2257, maxDocs=44218)
                  0.043685965 = queryNorm
                0.52696943 = fieldWeight in 6440, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9746525 = idf(docFreq=2257, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6440)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  12. Rodriguez, H.; Climent, S.; Vossen, P.; Bloksma, L.; Peters, W.; Alonge, A.; Bertagna, F.; Roventini, A.: ¬The top-down strategy for building EuroWordNet : vocabulary coverage, base concept and top ontology (1998) 0.02
    0.01525019 = product of:
      0.03050038 = sum of:
        0.03050038 = product of:
          0.09150114 = sum of:
            0.09150114 = weight(_text_:l in 6441) [ClassicSimilarity], result of:
              0.09150114 = score(doc=6441,freq=2.0), product of:
                0.17363653 = queryWeight, product of:
                  3.9746525 = idf(docFreq=2257, maxDocs=44218)
                  0.043685965 = queryNorm
                0.52696943 = fieldWeight in 6441, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9746525 = idf(docFreq=2257, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6441)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  13. Vossen, P.; Bloksma, L.; Alonge, A.; Marinai, E.; Peters, C.; Castellon, I.; Marti, M.A.; Rigau, G.: Compatibility in interpretation of relations in EuroWordNet (1998) 0.02
    0.01525019 = product of:
      0.03050038 = sum of:
        0.03050038 = product of:
          0.09150114 = sum of:
            0.09150114 = weight(_text_:l in 6442) [ClassicSimilarity], result of:
              0.09150114 = score(doc=6442,freq=2.0), product of:
                0.17363653 = queryWeight, product of:
                  3.9746525 = idf(docFreq=2257, maxDocs=44218)
                  0.043685965 = queryNorm
                0.52696943 = fieldWeight in 6442, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9746525 = idf(docFreq=2257, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6442)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  14. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatically generated word hierarchies (1996) 0.01
    0.013810638 = product of:
      0.027621277 = sum of:
        0.027621277 = product of:
          0.08286383 = sum of:
            0.08286383 = weight(_text_:22 in 3164) [ClassicSimilarity], result of:
              0.08286383 = score(doc=3164,freq=2.0), product of:
                0.1529808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043685965 = queryNorm
                0.5416616 = fieldWeight in 3164, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3164)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  15. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.01
    0.013810638 = product of:
      0.027621277 = sum of:
        0.027621277 = product of:
          0.08286383 = sum of:
            0.08286383 = weight(_text_:22 in 4506) [ClassicSimilarity], result of:
              0.08286383 = score(doc=4506,freq=2.0), product of:
                0.1529808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043685965 = queryNorm
                0.5416616 = fieldWeight in 4506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4506)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    8.10.2000 11:52:22
  16. Somers, H.: Example-based machine translation : Review article (1999) 0.01
    0.013810638 = product of:
      0.027621277 = sum of:
        0.027621277 = product of:
          0.08286383 = sum of:
            0.08286383 = weight(_text_:22 in 6672) [ClassicSimilarity], result of:
              0.08286383 = score(doc=6672,freq=2.0), product of:
                0.1529808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043685965 = queryNorm
                0.5416616 = fieldWeight in 6672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6672)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  17. New tools for human translators (1997) 0.01
    0.013810638 = product of:
      0.027621277 = sum of:
        0.027621277 = product of:
          0.08286383 = sum of:
            0.08286383 = weight(_text_:22 in 1179) [ClassicSimilarity], result of:
              0.08286383 = score(doc=1179,freq=2.0), product of:
                0.1529808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043685965 = queryNorm
                0.5416616 = fieldWeight in 1179, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1179)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  18. Baayen, R.H.; Lieber, H.: Word frequency distributions and lexical semantics (1997) 0.01
    0.013810638 = product of:
      0.027621277 = sum of:
        0.027621277 = product of:
          0.08286383 = sum of:
            0.08286383 = weight(_text_:22 in 3117) [ClassicSimilarity], result of:
              0.08286383 = score(doc=3117,freq=2.0), product of:
                0.1529808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043685965 = queryNorm
                0.5416616 = fieldWeight in 3117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3117)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    28. 2.1999 10:48:22
  19. ¬Der Student aus dem Computer (2023) 0.01
    0.013810638 = product of:
      0.027621277 = sum of:
        0.027621277 = product of:
          0.08286383 = sum of:
            0.08286383 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
              0.08286383 = score(doc=1079,freq=2.0), product of:
                0.1529808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043685965 = queryNorm
                0.5416616 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1079)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    27. 1.2023 16:22:55
  20. Ou, S.; Khoo, C.; Goh, D.H.; Heng, H.-Y.: Automatic discourse parsing of sociology dissertation abstracts as sentence categorization (2004) 0.01
    0.012255413 = product of:
      0.024510827 = sum of:
        0.024510827 = product of:
          0.07353248 = sum of:
            0.07353248 = weight(_text_:d.h in 2676) [ClassicSimilarity], result of:
              0.07353248 = score(doc=2676,freq=2.0), product of:
                0.26960507 = queryWeight, product of:
                  6.1714344 = idf(docFreq=250, maxDocs=44218)
                  0.043685965 = queryNorm
                0.27274144 = fieldWeight in 2676, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.1714344 = idf(docFreq=250, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2676)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    

Languages

  • e 69
  • d 22

Types

  • a 71
  • el 9
  • m 9
  • s 7
  • x 5
  • p 2
  • d 1