Search (22 results, page 1 of 2)

  • × year_i:[2010 TO 2020}
  • × theme_ss:"Computerlinguistik"
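  The two facet chips above restrict the hits to records whose year_i lies in [2010 TO 2020} (Solr range syntax: 2010 inclusive, 2020 exclusive) and whose theme_ss is "Computerlinguistik". Below is a minimal sketch of the kind of request behind this page, assuming a standard Solr select handler; host, core name, page size and the debug flag are assumptions, and the actual query terms are not visible here:

    import requests

    params = {
        "q": "...",                            # the search terms are not shown on this page
        "fq": [
            "year_i:[2010 TO 2020}",           # 2010 inclusive, 2020 exclusive
            'theme_ss:"Computerlinguistik"',
        ],
        "rows": 20,                            # assumed page size (22 hits -> "page 1 of 2")
        "debugQuery": "true",                  # yields per-document score explanations
        "wt": "json",
    }
    resp = requests.get("http://localhost:8983/solr/literature/select", params=params)
    print(resp.json()["response"]["numFound"])  # expected: 22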
  1. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.19
    0.18629304 = product of:
      0.37258607 = sum of:
        0.18114229 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.18114229 = score(doc=563,freq=2.0), product of:
            0.3223069 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.038016807 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.18114229 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.18114229 = score(doc=563,freq=2.0), product of:
            0.3223069 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.038016807 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.010301504 = product of:
          0.030904513 = sum of:
            0.030904513 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
              0.030904513 = score(doc=563,freq=2.0), product of:
                0.13312837 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038016807 = queryNorm
                0.23214069 = fieldWeight in 563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=563)
          0.33333334 = coord(1/3)
      0.5 = coord(3/6)
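    The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output: each matching clause contributes queryWeight x fieldWeight, with queryWeight = idf x queryNorm and fieldWeight = tf x idf x fieldNorm, and coord() scales sums by the fraction of query clauses that matched. The sketch below reproduces the displayed total from the numbers shown for this record; the constants are taken from the explain output, the helper name is invented, and the idf/tf formulas are ClassicSimilarity's defaults, which match the displayed values:

      import math

      def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
          # ClassicSimilarity pieces, matching the explain output above:
          #   idf         = log(maxDocs / (docFreq + 1)) + 1
          #   tf          = sqrt(freq)
          #   queryWeight = idf * queryNorm
          #   fieldWeight = tf * idf * fieldNorm
          idf = math.log(max_docs / (doc_freq + 1)) + 1.0
          tf = math.sqrt(freq)
          return (idf * query_norm) * (tf * idf * field_norm)

      QUERY_NORM, FIELD_NORM, MAX_DOCS = 0.038016807, 0.046875, 44218

      w_2f = term_score(2.0, 24, MAX_DOCS, QUERY_NORM, FIELD_NORM)    # ~0.18114 per clause
      w_22 = term_score(2.0, 3622, MAX_DOCS, QUERY_NORM, FIELD_NORM)  # ~0.03090

      # Two "_text_:2f" clauses plus the "_text_:22" clause (itself scaled by
      # coord(1/3)), then the outer coord(3/6) factor:
      total = (w_2f + w_2f + w_22 / 3.0) * (3.0 / 6.0)
      print(round(total, 8))  # ~0.18629304, the value shown at the top of this entry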
    
    Content
    A Thesis presented to The University of Guelph in partial fulfilment of requirements for the degree of Master of Science in Computer Science. See: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
  2. Engerer, V.: Informationswissenschaft und Linguistik : kurze Geschichte eines fruchtbaren interdisziplinären Verhältnisses in drei Akten (2012) 0.04
    0.03740205 = product of:
      0.11220614 = sum of:
        0.09488111 = weight(_text_:geschichte in 3376) [ClassicSimilarity], result of:
          0.09488111 = score(doc=3376,freq=2.0), product of:
            0.18068628 = queryWeight, product of:
              4.7528 = idf(docFreq=1036, maxDocs=44218)
              0.038016807 = queryNorm
            0.5251152 = fieldWeight in 3376, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7528 = idf(docFreq=1036, maxDocs=44218)
              0.078125 = fieldNorm(doc=3376)
        0.017325027 = product of:
          0.05197508 = sum of:
            0.05197508 = weight(_text_:29 in 3376) [ClassicSimilarity], result of:
              0.05197508 = score(doc=3376,freq=2.0), product of:
                0.13373125 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.038016807 = queryNorm
                0.38865322 = fieldWeight in 3376, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3376)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Date
    19. 2.2017 13:29:08
  3. Babik, W.: Keywords as linguistic tools in information and knowledge organization (2017) 0.02
    0.022272889 = product of:
      0.06681866 = sum of:
        0.054691143 = weight(_text_:wissen in 3510) [ClassicSimilarity], result of:
          0.054691143 = score(doc=3510,freq=2.0), product of:
            0.1639626 = queryWeight, product of:
              4.3128977 = idf(docFreq=1609, maxDocs=44218)
              0.038016807 = queryNorm
            0.33355865 = fieldWeight in 3510, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3128977 = idf(docFreq=1609, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3510)
        0.012127518 = product of:
          0.036382552 = sum of:
            0.036382552 = weight(_text_:29 in 3510) [ClassicSimilarity], result of:
              0.036382552 = score(doc=3510,freq=2.0), product of:
                0.13373125 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.038016807 = queryNorm
                0.27205724 = fieldWeight in 3510, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3510)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Source
    Theorie, Semantik und Organisation von Wissen: Proceedings der 13. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und dem 13. Internationalen Symposium der Informationswissenschaft der Higher Education Association for Information Science (HI) Potsdam (19.-20.03.2013): 'Theory, Information and Organization of Knowledge' / Proceedings der 14. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und Natural Language & Information Systems (NLDB) Passau (16.06.2015): 'Lexical Resources for Knowledge Organization' / Proceedings des Workshops der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) auf der SEMANTICS Leipzig (1.09.2014): 'Knowledge Organization and Semantic Web' / Proceedings des Workshops der Polnischen und Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) Cottbus (29.-30.09.2011): 'Economics of Knowledge Production and Organization'. Hrsg. von W. Babik, H.P. Ohly u. K. Weber
  4. Wolfangel, E.: Ich verstehe (2017) 0.01
    0.0130217 = product of:
      0.0781302 = sum of:
        0.0781302 = weight(_text_:wissen in 3976) [ClassicSimilarity], result of:
          0.0781302 = score(doc=3976,freq=2.0), product of:
            0.1639626 = queryWeight, product of:
              4.3128977 = idf(docFreq=1609, maxDocs=44218)
              0.038016807 = queryNorm
            0.47651234 = fieldWeight in 3976, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3128977 = idf(docFreq=1609, maxDocs=44218)
              0.078125 = fieldNorm(doc=3976)
      0.16666667 = coord(1/6)
    
    Series
    Wissen: Technik, Forschung, Umwelt, Mensch
  5. Rötzer, F.: KI-Programm besser als Menschen im Verständnis natürlicher Sprache (2018) 0.01
    0.012706584 = product of:
      0.038119752 = sum of:
        0.031252082 = weight(_text_:wissen in 4217) [ClassicSimilarity], result of:
          0.031252082 = score(doc=4217,freq=2.0), product of:
            0.1639626 = queryWeight, product of:
              4.3128977 = idf(docFreq=1609, maxDocs=44218)
              0.038016807 = queryNorm
            0.19060494 = fieldWeight in 4217, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3128977 = idf(docFreq=1609, maxDocs=44218)
              0.03125 = fieldNorm(doc=4217)
        0.0068676695 = product of:
          0.020603009 = sum of:
            0.020603009 = weight(_text_:22 in 4217) [ClassicSimilarity], result of:
              0.020603009 = score(doc=4217,freq=2.0), product of:
                0.13312837 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038016807 = queryNorm
                0.15476047 = fieldWeight in 4217, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4217)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Now things seem to be getting serious. An AI program developed by the Chinese Alibaba Group was able, for the first time, to beat humans at answering questions and understanding text. The Chinese government wants to make the country the leader in the development of artificial intelligence and has drawn up a national strategy for this. To that end, the Ministry of Science and Technology appointed the internet corporations Baidu, Alibaba and Tencent, together with iFlyTek, as the first national team for developing the next generation of AI technology. Baidu is responsible for developing autonomous vehicles, Alibaba for developing clouds for "city brains" (smart cities are to adapt to their inhabitants and their surroundings), Tencent for developing computer vision for medical applications, and iFlyTek for "voice intelligence". The four corporations are to build open platforms that other companies and start-ups can use as well. In addition, a technology park for AI development is being built near Beijing for one billion US dollars. This is, of course, not only about civilian applications but also about military ones. The USA still has more AI companies, but China already ranks second. The Pentagon is worried. China is evidently making rapid progress: at the end of 2017 the AI company iFlyTek, which had initially specialised in voice recognition and digital assistants, presented a robot that had successfully passed the written test of the national medical examination. The robot had not only been fed immense knowledge from 53 medical textbooks, 2 million medical records and 400,000 medical texts and reports; it is also said to have taken over clinical experience and case diagnoses from medical experts. It is to be deployed (China suffers from a shortage of doctors, above all in rural areas) as an assistant that produces a first diagnosis by automatically evaluating patient data and otherwise supports doctors with suggestions.
    Date
    22. 1.2018 11:32:44
  6. Becks, D.; Schulz, J.M.: Domänenübergreifende Phrasenextraktion mithilfe einer lexikonunabhängigen Analysekomponente (2010) 0.01
    0.010417361 = product of:
      0.062504165 = sum of:
        0.062504165 = weight(_text_:wissen in 4661) [ClassicSimilarity], result of:
          0.062504165 = score(doc=4661,freq=2.0), product of:
            0.1639626 = queryWeight, product of:
              4.3128977 = idf(docFreq=1609, maxDocs=44218)
              0.038016807 = queryNorm
            0.38120988 = fieldWeight in 4661, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3128977 = idf(docFreq=1609, maxDocs=44218)
              0.0625 = fieldNorm(doc=4661)
      0.16666667 = coord(1/6)
    
    Source
    Information und Wissen: global, sozial und frei? Proceedings des 12. Internationalen Symposiums für Informationswissenschaft (ISI 2011) ; Hildesheim, 9. - 11. März 2011. Hrsg.: J. Griesbaum, T. Mandl u. C. Womser-Hacker
  7. Engerer, V.: Exploring interdisciplinary relationships between linguistics and information retrieval from the 1960s to today (2017) 0.01
    0.009488111 = product of:
      0.056928664 = sum of:
        0.056928664 = weight(_text_:geschichte in 3434) [ClassicSimilarity], result of:
          0.056928664 = score(doc=3434,freq=2.0), product of:
            0.18068628 = queryWeight, product of:
              4.7528 = idf(docFreq=1036, maxDocs=44218)
              0.038016807 = queryNorm
            0.3150691 = fieldWeight in 3434, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7528 = idf(docFreq=1036, maxDocs=44218)
              0.046875 = fieldNorm(doc=3434)
      0.16666667 = coord(1/6)
    
    Theme
    Geschichte der Sacherschließung
  8. Mengel, T.: Wie viel Terminologiearbeit steckt in der Übersetzung der Dewey-Dezimalklassifikation? (2019) 0.01
    0.007813022 = product of:
      0.046878126 = sum of:
        0.046878126 = weight(_text_:wissen in 5603) [ClassicSimilarity], result of:
          0.046878126 = score(doc=5603,freq=2.0), product of:
            0.1639626 = queryWeight, product of:
              4.3128977 = idf(docFreq=1609, maxDocs=44218)
              0.038016807 = queryNorm
            0.28590742 = fieldWeight in 5603, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3128977 = idf(docFreq=1609, maxDocs=44218)
              0.046875 = fieldNorm(doc=5603)
      0.16666667 = coord(1/6)
    
    Abstract
    Libraries worldwide use the Dewey Decimal Classification (DDC) as a shelf arrangement scheme and/or for catalogue searching. Translations of the DDC exist in more than 30 languages. As a comprehensive system for organising knowledge, consisting of numerical notations and verbal class captions, the DDC offers terminologists a wide field of work and research. But how do terminology work and translation interact when, as in this case, the terminology itself is the object of the translation? The article cannot treat all topics exhaustively, but it presents characteristics of the DDC for the first time from the perspective of DDC translation work, and it raises the question of whether the terminological aspect of DDC translation has in fact received sufficient attention so far.
  9. RWI/PH: Auf der Suche nach dem entscheidenden Wort : die Häufung bestimmter Wörter innerhalb eines Textes macht diese zu Schlüsselwörtern (2012) 0.00
    0.0047440557 = product of:
      0.028464332 = sum of:
        0.028464332 = weight(_text_:geschichte in 331) [ClassicSimilarity], result of:
          0.028464332 = score(doc=331,freq=2.0), product of:
            0.18068628 = queryWeight, product of:
              4.7528 = idf(docFreq=1036, maxDocs=44218)
              0.038016807 = queryNorm
            0.15753455 = fieldWeight in 331, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7528 = idf(docFreq=1036, maxDocs=44218)
              0.0234375 = fieldNorm(doc=331)
      0.16666667 = coord(1/6)
    
    Content
    Statistical text analysis works independently of the language. While both letters and words are long-range correlated, letters only rarely occur clustered at particular points of a text. "A letter is simply only very rarely as closely tied to a topic as the word to which it contributes a part. Letters are, so to speak, more flexibly usable," says Altmann. An "a", for example, can contribute to a whole series of words that are not connected with the same topic. Using the statistical analysis of texts, the researchers have succeeded in determining the defining words of a text in a simple way. "It is completely irrelevant in which language a text is written. All that matters is the story, not language-specific rules," says Altmann. The results could in future contribute to improving internet search engines, but could also help with text analyses and the search for plagiarism.
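    One way to make the clustering idea above concrete is to score each word by how unevenly its occurrences are spread through the text. The sketch below is a hypothetical illustration of that signal (function name and threshold are invented), not the method used in the study:

      from collections import defaultdict
      import statistics

      def burstiness(tokens, min_count=5):
          """Rank words by how unevenly their occurrences cluster in the text."""
          positions = defaultdict(list)
          for i, tok in enumerate(tokens):
              positions[tok.lower()].append(i)
          scores = {}
          for word, pos in positions.items():
              if len(pos) < min_count:
                  continue
              gaps = [b - a for a, b in zip(pos, pos[1:])]
              # Coefficient of variation of the gaps between occurrences:
              # close to 1 for words scattered at random, clearly larger for
              # words that occur in bursts around "their" topic passages.
              scores[word] = statistics.stdev(gaps) / statistics.mean(gaps)
          return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

      # Example: burstiness(open("text.txt", encoding="utf-8").read().split())[:20]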
  10. Snajder, J.: Distributional semantics of multi-word expressions (2013) 0.00
    0.0028875046 = product of:
      0.017325027 = sum of:
        0.017325027 = product of:
          0.05197508 = sum of:
            0.05197508 = weight(_text_:29 in 2868) [ClassicSimilarity], result of:
              0.05197508 = score(doc=2868,freq=2.0), product of:
                0.13373125 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.038016807 = queryNorm
                0.38865322 = fieldWeight in 2868, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2868)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    29. 4.2016 12:04:50
  11. Clark, M.; Kim, Y.; Kruschwitz, U.; Song, D.; Albakour, D.; Dignum, S.; Beresi, U.C.; Fasli, M.; De Roeck, A.: Automatically structuring domain knowledge from text : an overview of current research (2012) 0.00
    0.002450129 = product of:
      0.014700773 = sum of:
        0.014700773 = product of:
          0.04410232 = sum of:
            0.04410232 = weight(_text_:29 in 2738) [ClassicSimilarity], result of:
              0.04410232 = score(doc=2738,freq=4.0), product of:
                0.13373125 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.038016807 = queryNorm
                0.3297832 = fieldWeight in 2738, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2738)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    29. 1.2016 18:29:51
  12. Lezius, W.: Morphy - Morphologie und Tagging für das Deutsche (2013) 0.00
    0.0022892233 = product of:
      0.013735339 = sum of:
        0.013735339 = product of:
          0.041206017 = sum of:
            0.041206017 = weight(_text_:22 in 1490) [ClassicSimilarity], result of:
              0.041206017 = score(doc=1490,freq=2.0), product of:
                0.13312837 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038016807 = queryNorm
                0.30952093 = fieldWeight in 1490, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1490)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    22. 3.2015 9:30:24
  13. Stoykova, V.; Petkova, E.: Automatic extraction of mathematical terms for precalculus (2012) 0.00
    0.002021253 = product of:
      0.012127518 = sum of:
        0.012127518 = product of:
          0.036382552 = sum of:
            0.036382552 = weight(_text_:29 in 156) [ClassicSimilarity], result of:
              0.036382552 = score(doc=156,freq=2.0), product of:
                0.13373125 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.038016807 = queryNorm
                0.27205724 = fieldWeight in 156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=156)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    29. 5.2012 10:17:08
  14. Rayson, P.; Piao, S.; Sharoff, S.; Evert, S.; Moiron, B.V.: Multiword expressions : hard going or plain sailing? (2015) 0.00
    0.002021253 = product of:
      0.012127518 = sum of:
        0.012127518 = product of:
          0.036382552 = sum of:
            0.036382552 = weight(_text_:29 in 2918) [ClassicSimilarity], result of:
              0.036382552 = score(doc=2918,freq=2.0), product of:
                0.13373125 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.038016807 = queryNorm
                0.27205724 = fieldWeight in 2918, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2918)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    29. 4.2016 12:05:56
  15. Schöneberg, U.; Sperber, W.: POS tagging and its applications for mathematics (2014) 0.00
    0.0017325026 = product of:
      0.010395016 = sum of:
        0.010395016 = product of:
          0.031185046 = sum of:
            0.031185046 = weight(_text_:29 in 1748) [ClassicSimilarity], result of:
              0.031185046 = score(doc=1748,freq=2.0), product of:
                0.13373125 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.038016807 = queryNorm
                0.23319192 = fieldWeight in 1748, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1748)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    29. 3.2015 19:34:37
  16. Snajder, J.; Almic, P.: Modeling semantic compositionality of Croatian multiword expressions (2015) 0.00
    0.0017325026 = product of:
      0.010395016 = sum of:
        0.010395016 = product of:
          0.031185046 = sum of:
            0.031185046 = weight(_text_:29 in 2920) [ClassicSimilarity], result of:
              0.031185046 = score(doc=2920,freq=2.0), product of:
                0.13373125 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.038016807 = queryNorm
                0.23319192 = fieldWeight in 2920, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2920)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    29. 4.2016 12:42:17
  17. Geißler, S.: Maschinelles Lernen und NLP : Reif für die industrielle Anwendung! (2019) 0.00
    0.0017325026 = product of:
      0.010395016 = sum of:
        0.010395016 = product of:
          0.031185046 = sum of:
            0.031185046 = weight(_text_:29 in 3547) [ClassicSimilarity], result of:
              0.031185046 = score(doc=3547,freq=2.0), product of:
                0.13373125 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.038016807 = queryNorm
                0.23319192 = fieldWeight in 3547, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3547)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    2. 9.2019 19:29:24
  18. Lawrie, D.; Mayfield, J.; McNamee, P.; Oard, D.W.: Cross-language person-entity linking from 20 languages (2015) 0.00
    0.0017169174 = product of:
      0.010301504 = sum of:
        0.010301504 = product of:
          0.030904513 = sum of:
            0.030904513 = weight(_text_:22 in 1848) [ClassicSimilarity], result of:
              0.030904513 = score(doc=1848,freq=2.0), product of:
                0.13312837 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038016807 = queryNorm
                0.23214069 = fieldWeight in 1848, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1848)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Abstract
    The goal of entity linking is to associate references to an entity that is found in unstructured natural language content to an authoritative inventory of known entities. This article describes the construction of 6 test collections for cross-language person-entity linking that together span 22 languages. Fully automated components were used together with 2 crowdsourced validation stages to affordably generate ground-truth annotations with an accuracy comparable to that of a completely manual process. The resulting test collections each contain between 642 (Arabic) and 2,361 (Romanian) person references in non-English texts for which the correct resolution in English Wikipedia is known, plus a similar number of references for which no correct resolution into English Wikipedia is believed to exist. Fully automated cross-language person-name linking experiments with 20 non-English languages yielded a resolution accuracy of between 0.84 (Serbian) and 0.98 (Romanian), which compares favorably with previously reported cross-language entity linking results for Spanish.
  19. Gill, A.J.; Hinrichs-Krapels, S.; Blanke, T.; Grant, J.; Hedges, M.; Tanner, S.: Insight workflow : systematically combining human and computational methods to explore textual data (2017) 0.00
    0.0014437523 = product of:
      0.008662513 = sum of:
        0.008662513 = product of:
          0.02598754 = sum of:
            0.02598754 = weight(_text_:29 in 3682) [ClassicSimilarity], result of:
              0.02598754 = score(doc=3682,freq=2.0), product of:
                0.13373125 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.038016807 = queryNorm
                0.19432661 = fieldWeight in 3682, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3682)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    16.11.2017 14:00:29
  20. Fóris, A.: Network theory and terminology (2013) 0.00
    0.0014307647 = product of:
      0.008584588 = sum of:
        0.008584588 = product of:
          0.025753763 = sum of:
            0.025753763 = weight(_text_:22 in 1365) [ClassicSimilarity], result of:
              0.025753763 = score(doc=1365,freq=2.0), product of:
                0.13312837 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038016807 = queryNorm
                0.19345059 = fieldWeight in 1365, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1365)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    2. 9.2014 21:22:48