Search (52 results, page 1 of 3)

  • × theme_ss:"Computerlinguistik"
  • × year_i:[2010 TO 2020}
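The two active filters above use Solr filter-query syntax; note the range facet's mixed brackets: `[2010 TO 2020}` includes 2010 but excludes 2020. A minimal sketch of how such a filtered request could be assembled (the search term and row count are hypothetical; only the two `fq` values come from the facet list above):

```python
from urllib.parse import urlencode

params = {
    "q": "linguistisch",                    # hypothetical search term
    "fq": [                                 # the two active filters shown above
        'theme_ss:"Computerlinguistik"',    # string-field facet, exact match
        "year_i:[2010 TO 2020}",            # range: 2010 inclusive, 2020 exclusive
    ],
    "rows": 20,                             # hypothetical page size
}
# doseq=True repeats fq= once per filter, as Solr expects
query_string = urlencode(params, doseq=True)
```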
  1. Ludwig, B.; Reischer, J.: Informationslinguistik in Regensburg (2012) 0.04
    0.03841573 = product of:
      0.096039325 = sum of:
        0.051108688 = weight(_text_:j in 555) [ClassicSimilarity], result of:
          0.051108688 = score(doc=555,freq=4.0), product of:
            0.12867662 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.04049623 = queryNorm
            0.39718705 = fieldWeight in 555, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.0625 = fieldNorm(doc=555)
        0.044930637 = weight(_text_:b in 555) [ClassicSimilarity], result of:
          0.044930637 = score(doc=555,freq=2.0), product of:
            0.1434766 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.04049623 = queryNorm
            0.31315655 = fieldWeight in 555, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.0625 = fieldNorm(doc=555)
      0.4 = coord(2/5)
    
    Content
    Cf.: http://www.degruyter.com/view/j/iwp.2012.63.issue-5/iwp-2012-0065/iwp-2012-0065.xml?format=INT.
  2. Sprachtechnologie : ein Überblick (2012) 0.03
    0.032059874 = product of:
      0.08014969 = sum of:
        0.023986388 = weight(_text_:u in 1750) [ClassicSimilarity], result of:
          0.023986388 = score(doc=1750,freq=2.0), product of:
            0.13260265 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.04049623 = queryNorm
            0.1808892 = fieldWeight in 1750, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1750)
        0.0561633 = weight(_text_:b in 1750) [ClassicSimilarity], result of:
          0.0561633 = score(doc=1750,freq=8.0), product of:
            0.1434766 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.04049623 = queryNorm
            0.3914457 = fieldWeight in 1750, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1750)
      0.4 = coord(2/5)
    
    Abstract
    For more than half a century there have been serious and credible attempts to process human language by machine. Machine translation and "natural" dialogue with computers were among the first ideas that staked out the field later known as computational linguistics or language technology and guided its endeavors. Today this field, also called natural language processing (NLP), is highly diversified: thanks to the rapid development of computer science, much that was once unimaginable has become reality (e.g. automated telephone information services), and some things formerly impossible have at least become feasible (e.g. handhelds with speech input and output serving as personal digital (information) assistants). There are various applications of computational linguistics, some of which have made the leap into commercial use (e.g. dictation systems, text classification, machine translation). Research on natural language systems (NLS) of the most varied functionality (e.g. answering arbitrary questions or generating complex texts) remains intensive, even though the ambitious goals of the early days are far from being reached (and have accordingly been scaled back). Where natural language processing stands today, however, is neither obvious nor easy to find out, given the manifold activities in computational linguistics and language technology (for students of the field, and even more so for laypersons). One aim of this book is to improve the current state of the literature in this respect by compiling the system-related aspects of computational linguistics into an overview of language technology.
    Theme
    Grundlagen u. Einführungen: Allgemeine Literatur
  3. Endres-Niggemeyer, B.: Thinkie: Lautes Denken mit Spracherkennung (mobil) (2013) 0.03
    0.029904246 = product of:
      0.074760616 = sum of:
        0.027104476 = weight(_text_:j in 1145) [ClassicSimilarity], result of:
          0.027104476 = score(doc=1145,freq=2.0), product of:
            0.12867662 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.04049623 = queryNorm
            0.21064025 = fieldWeight in 1145, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.046875 = fieldNorm(doc=1145)
        0.04765614 = weight(_text_:b in 1145) [ClassicSimilarity], result of:
          0.04765614 = score(doc=1145,freq=4.0), product of:
            0.1434766 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.04049623 = queryNorm
            0.3321527 = fieldWeight in 1145, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.046875 = fieldNorm(doc=1145)
      0.4 = coord(2/5)
    
    Abstract
    Thinking aloud is a proven method for investigating cognitive processes. It is used in many disciplines, e.g. to reveal what experiences users have when interacting with computer interfaces. After a brief explanation of thinking aloud, the app Thinkie is presented. Thinkie is a mobile solution for thinking aloud on iPhone and iPad. The test person records audio on the iPhone; the speech recognition software Siri (http://www.apple.com/de/ios/siri/) transcribes it. In parallel, video is recorded on the iPad or another device. On the iPad, with the video in view, the transcript can then be revised and interpreted. Thinkie transfers the text files via a cloud collection; the videos are transferred with iTunes. Thinkie is not yet ready for practical use: the sequences Siri can process are still too short. That will change.
    Content
    Cf.: http://www.degruyter.com/view/j/iwp.2013.64.issue-6/iwp-2013-004/iwp-2013-004.xml?format=INT.
  4. Becks, D.; Schulz, J.M.: Domänenübergreifende Phrasenextraktion mithilfe einer lexikonunabhängigen Analysekomponente (2010) 0.03
    0.029807007 = product of:
      0.07451752 = sum of:
        0.036139302 = weight(_text_:j in 4661) [ClassicSimilarity], result of:
          0.036139302 = score(doc=4661,freq=2.0), product of:
            0.12867662 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.04049623 = queryNorm
            0.28085366 = fieldWeight in 4661, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.0625 = fieldNorm(doc=4661)
        0.03837822 = weight(_text_:u in 4661) [ClassicSimilarity], result of:
          0.03837822 = score(doc=4661,freq=2.0), product of:
            0.13260265 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.04049623 = queryNorm
            0.28942272 = fieldWeight in 4661, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0625 = fieldNorm(doc=4661)
      0.4 = coord(2/5)
    
    Source
    Information und Wissen: global, sozial und frei? Proceedings of the 12th International Symposium on Information Science (ISI 2011), Hildesheim, 9-11 March 2011. Eds.: J. Griesbaum, T. Mandl and C. Womser-Hacker
  5. Perovsek, M.; Kranjc, J.; Erjavec, T.; Cestnik, B.; Lavrac, N.: TextFlows : a visual programming platform for text mining and natural language processing (2016) 0.02
    0.02432098 = product of:
      0.060802452 = sum of:
        0.027104476 = weight(_text_:j in 2697) [ClassicSimilarity], result of:
          0.027104476 = score(doc=2697,freq=2.0), product of:
            0.12867662 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.04049623 = queryNorm
            0.21064025 = fieldWeight in 2697, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.046875 = fieldNorm(doc=2697)
        0.033697978 = weight(_text_:b in 2697) [ClassicSimilarity], result of:
          0.033697978 = score(doc=2697,freq=2.0), product of:
            0.1434766 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.04049623 = queryNorm
            0.23486741 = fieldWeight in 2697, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.046875 = fieldNorm(doc=2697)
      0.4 = coord(2/5)
    
  6. Lu, C.; Bu, Y.; Wang, J.; Ding, Y.; Torvik, V.; Schnaars, M.; Zhang, C.: Examining scientific writing styles from the perspective of linguistic complexity : a cross-level moderation model (2019) 0.02
    0.02432098 = product of:
      0.060802452 = sum of:
        0.027104476 = weight(_text_:j in 5219) [ClassicSimilarity], result of:
          0.027104476 = score(doc=5219,freq=2.0), product of:
            0.12867662 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.04049623 = queryNorm
            0.21064025 = fieldWeight in 5219, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.046875 = fieldNorm(doc=5219)
        0.033697978 = weight(_text_:b in 5219) [ClassicSimilarity], result of:
          0.033697978 = score(doc=5219,freq=2.0), product of:
            0.1434766 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.04049623 = queryNorm
            0.23486741 = fieldWeight in 5219, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.046875 = fieldNorm(doc=5219)
      0.4 = coord(2/5)
    
    Abstract
    Publishing articles in high-impact English journals is difficult for scholars around the world, especially for non-native English-speaking scholars (NNESs), most of whom struggle with proficiency in English. To uncover the differences in English scientific writing between native English-speaking scholars (NESs) and NNESs, we collected a large-scale data set containing more than 150,000 full-text articles published in PLoS between 2006 and 2015. We divided these articles into three groups according to the ethnic backgrounds of the first and corresponding authors, obtained by Ethnea, and examined the scientific writing styles in English from a two-fold perspective of linguistic complexity: (a) syntactic complexity, including measurements of sentence length and sentence complexity; and (b) lexical complexity, including measurements of lexical diversity, lexical density, and lexical sophistication. The observations suggest marginal differences between groups in syntactical and lexical complexity.
  7. Li, N.; Sun, J.: Improving Chinese term association from the linguistic perspective (2017) 0.02
    0.022355257 = product of:
      0.05588814 = sum of:
        0.027104476 = weight(_text_:j in 3381) [ClassicSimilarity], result of:
          0.027104476 = score(doc=3381,freq=2.0), product of:
            0.12867662 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.04049623 = queryNorm
            0.21064025 = fieldWeight in 3381, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.046875 = fieldNorm(doc=3381)
        0.028783662 = weight(_text_:u in 3381) [ClassicSimilarity], result of:
          0.028783662 = score(doc=3381,freq=2.0), product of:
            0.13260265 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.04049623 = queryNorm
            0.21706703 = fieldWeight in 3381, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.046875 = fieldNorm(doc=3381)
      0.4 = coord(2/5)
    
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  8. Symonds, M.; Bruza, P.; Zuccon, G.; Koopman, B.; Sitbon, L.; Turner, I.: Automatic query expansion : a structural linguistic perspective (2014) 0.02
    0.020827217 = product of:
      0.05206804 = sum of:
        0.023986388 = weight(_text_:u in 1338) [ClassicSimilarity], result of:
          0.023986388 = score(doc=1338,freq=2.0), product of:
            0.13260265 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.04049623 = queryNorm
            0.1808892 = fieldWeight in 1338, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1338)
        0.02808165 = weight(_text_:b in 1338) [ClassicSimilarity], result of:
          0.02808165 = score(doc=1338,freq=2.0), product of:
            0.1434766 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.04049623 = queryNorm
            0.19572285 = fieldWeight in 1338, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1338)
      0.4 = coord(2/5)
    
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  9. Malo, P.; Sinha, A.; Korhonen, P.; Wallenius, J.; Takala, P.: Good debt or bad debt : detecting semantic orientations in economic texts (2014) 0.02
    0.020267485 = product of:
      0.050668713 = sum of:
        0.022587063 = weight(_text_:j in 1226) [ClassicSimilarity], result of:
          0.022587063 = score(doc=1226,freq=2.0), product of:
            0.12867662 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.04049623 = queryNorm
            0.17553353 = fieldWeight in 1226, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1226)
        0.02808165 = weight(_text_:b in 1226) [ClassicSimilarity], result of:
          0.02808165 = score(doc=1226,freq=2.0), product of:
            0.1434766 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.04049623 = queryNorm
            0.19572285 = fieldWeight in 1226, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1226)
      0.4 = coord(2/5)
    
    Abstract
    The use of robo-readers to analyze news texts is an emerging technology trend in computational finance. Recent research has developed sophisticated financial polarity lexicons for investigating how financial sentiments relate to future company performance. However, based on experience from fields that commonly analyze sentiment, it is well known that the overall semantic orientation of a sentence may differ from that of individual words. This article investigates how semantic orientations can be better detected in financial and economic news by accommodating the overall phrase-structure information and domain-specific use of language. Our three main contributions are the following: (a) a human-annotated finance phrase bank that can be used for training and evaluating alternative models; (b) a technique to enhance financial lexicons with attributes that help to identify expected direction of events that affect sentiment; and (c) a linearized phrase-structure model for detecting contextual semantic orientations in economic texts. The relevance of the newly added lexicon features and the benefit of using the proposed learning algorithm are demonstrated in a comparative study against general sentiment models as well as the popular word frequency models used in recent financial studies. The proposed framework is parsimonious and avoids the explosion in feature space caused by the use of conventional n-gram features.
  10. Reyes Ayala, B.; Knudson, R.; Chen, J.; Cao, G.; Wang, X.: Metadata records machine translation combining multi-engine outputs with limited parallel data (2018) 0.02
    0.020267485 = product of:
      0.050668713 = sum of:
        0.022587063 = weight(_text_:j in 4010) [ClassicSimilarity], result of:
          0.022587063 = score(doc=4010,freq=2.0), product of:
            0.12867662 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.04049623 = queryNorm
            0.17553353 = fieldWeight in 4010, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4010)
        0.02808165 = weight(_text_:b in 4010) [ClassicSimilarity], result of:
          0.02808165 = score(doc=4010,freq=2.0), product of:
            0.1434766 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.04049623 = queryNorm
            0.19572285 = fieldWeight in 4010, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4010)
      0.4 = coord(2/5)
    
  11. Lawrie, D.; Mayfield, J.; McNamee, P.; Oard, D.W.: Cross-language person-entity linking from 20 languages (2015) 0.02
    0.017425805 = product of:
      0.043564513 = sum of:
        0.027104476 = weight(_text_:j in 1848) [ClassicSimilarity], result of:
          0.027104476 = score(doc=1848,freq=2.0), product of:
            0.12867662 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.04049623 = queryNorm
            0.21064025 = fieldWeight in 1848, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.046875 = fieldNorm(doc=1848)
        0.016460039 = product of:
          0.032920077 = sum of:
            0.032920077 = weight(_text_:22 in 1848) [ClassicSimilarity], result of:
              0.032920077 = score(doc=1848,freq=2.0), product of:
                0.1418109 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04049623 = queryNorm
                0.23214069 = fieldWeight in 1848, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1848)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The goal of entity linking is to associate references to an entity that is found in unstructured natural language content to an authoritative inventory of known entities. This article describes the construction of 6 test collections for cross-language person-entity linking that together span 22 languages. Fully automated components were used together with 2 crowdsourced validation stages to affordably generate ground-truth annotations with an accuracy comparable to that of a completely manual process. The resulting test collections each contain between 642 (Arabic) and 2,361 (Romanian) person references in non-English texts for which the correct resolution in English Wikipedia is known, plus a similar number of references for which no correct resolution into English Wikipedia is believed to exist. Fully automated cross-language person-name linking experiments with 20 non-English languages yielded a resolution accuracy of between 0.84 (Serbian) and 0.98 (Romanian), which compares favorably with previously reported cross-language entity linking results for Spanish.
  12. Hahn, U.: Methodische Grundlagen der Informationslinguistik (2013) 0.01
    0.013568749 = product of:
      0.06784374 = sum of:
        0.06784374 = weight(_text_:u in 719) [ClassicSimilarity], result of:
          0.06784374 = score(doc=719,freq=4.0), product of:
            0.13260265 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.04049623 = queryNorm
            0.5116319 = fieldWeight in 719, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.078125 = fieldNorm(doc=719)
      0.2 = coord(1/5)
    
    Source
    Grundlagen der praktischen Information und Dokumentation. Handbuch zur Einführung in die Informationswissenschaft und -praxis. 6th, completely revised edition. Ed. by R. Kuhlen, W. Semar and D. Strauch. Founded by Klaus Laisiepen, Ernst Lutterbeck, Karl-Heinrich Meyer-Uhlenried
  13. Snajder, J.: Distributional semantics of multi-word expressions (2013) 0.01
    0.012777172 = product of:
      0.06388586 = sum of:
        0.06388586 = weight(_text_:j in 2868) [ClassicSimilarity], result of:
          0.06388586 = score(doc=2868,freq=4.0), product of:
            0.12867662 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.04049623 = queryNorm
            0.4964838 = fieldWeight in 2868, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.078125 = fieldNorm(doc=2868)
      0.2 = coord(1/5)
    
    Content
    Slides of a presentation given at the COST Action IC1207 PARSEME meeting, Warsaw, September 16, 2013. Cf. the paper: Snajder, J., P. Almic: Modeling semantic compositionality of Croatian multiword expressions. In: Informatica 39(2015), no. 3, pp. 301-309.
  14. Rettinger, A.; Schumilin, A.; Thoma, S.; Ell, B.: Learning a cross-lingual semantic representation of relations expressed in text (2015) 0.01
    0.01123266 = product of:
      0.0561633 = sum of:
        0.0561633 = weight(_text_:b in 2027) [ClassicSimilarity], result of:
          0.0561633 = score(doc=2027,freq=2.0), product of:
            0.1434766 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.04049623 = queryNorm
            0.3914457 = fieldWeight in 2027, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.078125 = fieldNorm(doc=2027)
      0.2 = coord(1/5)
    
  15. Kocijan, K.: Visualizing natural language resources (2015) 0.01
    0.009594555 = product of:
      0.047972776 = sum of:
        0.047972776 = weight(_text_:u in 2995) [ClassicSimilarity], result of:
          0.047972776 = score(doc=2995,freq=2.0), product of:
            0.13260265 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.04049623 = queryNorm
            0.3617784 = fieldWeight in 2995, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.078125 = fieldNorm(doc=2995)
      0.2 = coord(1/5)
    
    Source
    Re:inventing information science in the networked society: Proceedings of the 14th International Symposium on Information Science, Zadar/Croatia, 19th-21st May 2015. Eds.: F. Pehar, C. Schloegl and C. Wolff
  16. Radev, D.R.; Joseph, M.T.; Gibson, B.; Muthukrishnan, P.: ¬A bibliometric and network analysis of the field of computational linguistics (2016) 0.01
    0.007862861 = product of:
      0.039314307 = sum of:
        0.039314307 = weight(_text_:b in 2764) [ClassicSimilarity], result of:
          0.039314307 = score(doc=2764,freq=2.0), product of:
            0.1434766 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.04049623 = queryNorm
            0.27401197 = fieldWeight in 2764, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2764)
      0.2 = coord(1/5)
    
  17. Liu, P.J.; Saleh, M.; Pot, E.; Goodrich, B.; Sepassi, R.; Kaiser, L.; Shazeer, N.: Generating Wikipedia by summarizing long sequences (2018) 0.01
    0.007862861 = product of:
      0.039314307 = sum of:
        0.039314307 = weight(_text_:b in 773) [ClassicSimilarity], result of:
          0.039314307 = score(doc=773,freq=2.0), product of:
            0.1434766 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.04049623 = queryNorm
            0.27401197 = fieldWeight in 773, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.0546875 = fieldNorm(doc=773)
      0.2 = coord(1/5)
    
  18. Heid, U.: Computerlinguistik zwischen Informationswissenschaft und multilingualer Kommunikation (2010) 0.01
    0.007675644 = product of:
      0.03837822 = sum of:
        0.03837822 = weight(_text_:u in 4018) [ClassicSimilarity], result of:
          0.03837822 = score(doc=4018,freq=2.0), product of:
            0.13260265 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.04049623 = queryNorm
            0.28942272 = fieldWeight in 4018, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.0625 = fieldNorm(doc=4018)
      0.2 = coord(1/5)
    
  19. Soo, J.; Frieder, O.: On searching misspelled collections (2015) 0.01
    0.0072278604 = product of:
      0.036139302 = sum of:
        0.036139302 = weight(_text_:j in 1862) [ClassicSimilarity], result of:
          0.036139302 = score(doc=1862,freq=2.0), product of:
            0.12867662 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.04049623 = queryNorm
            0.28085366 = fieldWeight in 1862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.0625 = fieldNorm(doc=1862)
      0.2 = coord(1/5)
    
  20. Al-Shawakfa, E.; Al-Badarneh, A.; Shatnawi, S.; Al-Rabab'ah, K.; Bani-Ismail, B.: ¬A comparison study of some Arabic root finding algorithms (2010) 0.01
    0.0067395954 = product of:
      0.033697978 = sum of:
        0.033697978 = weight(_text_:b in 3457) [ClassicSimilarity], result of:
          0.033697978 = score(doc=3457,freq=2.0), product of:
            0.1434766 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.04049623 = queryNorm
            0.23486741 = fieldWeight in 3457, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.046875 = fieldNorm(doc=3457)
      0.2 = coord(1/5)
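The score breakdowns shown with each hit follow Lucene's ClassicSimilarity TF-IDF formula. The first hit's explanation, for example, can be reproduced numerically (a minimal sketch using only values printed in that explain output: per-term score = queryWeight × fieldWeight, summed over matching terms and scaled by the coord factor):

```python
import math

def term_score(freq, idf, query_norm, field_norm):
    # ClassicSimilarity: score = queryWeight * fieldWeight, where
    #   queryWeight = idf * queryNorm
    #   fieldWeight = tf(freq) * idf * fieldNorm, with tf(freq) = sqrt(freq)
    tf = math.sqrt(freq)
    return (idf * query_norm) * (tf * idf * field_norm)

QUERY_NORM = 0.04049623  # queryNorm printed in the explain output

# Hit 1 (doc 555): terms _text_:j and _text_:b, fieldNorm = 0.0625
s_j = term_score(freq=4.0, idf=3.1774964, query_norm=QUERY_NORM, field_norm=0.0625)
s_b = term_score(freq=2.0, idf=3.542962, query_norm=QUERY_NORM, field_norm=0.0625)

# coord(2/5): only 2 of the 5 query terms matched this document
total = (s_j + s_b) * (2 / 5)
```

Rounded to two decimals, `total` gives the 0.04 displayed next to the first hit.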
    

Languages

  • e 37
  • d 14

Types

  • a 41
  • el 8
  • m 4
  • x 3
  • s 1