Search (75 results, page 1 of 4)

  • Filter: type_ss:"a"
  • Filter: type_ss:"el"
  1. Pany, T.: Konfusion in der Medienrepublik : Der Überraschungseffekt der Youtuber (2019) 0.08
    0.08095287 = product of:
      0.16190574 = sum of:
        0.16190574 = sum of:
          0.11367553 = weight(_text_:90 in 5244) [ClassicSimilarity], result of:
            0.11367553 = score(doc=5244,freq=2.0), product of:
              0.2733978 = queryWeight, product of:
                5.376119 = idf(docFreq=555, maxDocs=44218)
                0.050854117 = queryNorm
              0.415788 = fieldWeight in 5244, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.376119 = idf(docFreq=555, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5244)
          0.048230216 = weight(_text_:22 in 5244) [ClassicSimilarity], result of:
            0.048230216 = score(doc=5244,freq=2.0), product of:
              0.17808245 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050854117 = queryNorm
              0.2708308 = fieldWeight in 5244, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5244)
      0.5 = coord(1/2)
    
    Abstract
    Before the EU election, 90 "web stars" publish a voting recommendation: "Don't vote for the CDU/CSU, don't vote for the SPD, and certainly not for the AfD". The reactions are the real controversy. Refers to: https://youtu.be/4Y1lZQsyuSQ and https://youtu.be/Xpg84NjCr9c.
    Content
    See also: Dörner, S.: "CDU-Zerstörer" Rezo: Es kamen "Diskreditierung, Lügen, Trump-Wordings und keine inhaltliche Auseinandersetzung" [22 May 2019]. Interview with Rezo. At: https://www.heise.de/tp/features/CDU-Zerstoerer-Rezo-Es-kamen-Diskreditierung-Luegen-Trump-Wordings-und-keine-inhaltliche-4428522.html?view=print [http://www.heise.de/-4428522].
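The score breakdown above is Lucene's ClassicSimilarity "explain" output. A minimal sketch reproducing it (the function name is ours; all constants are taken from the tree itself):

```python
import math

def explain_weight(freq, doc_freq, max_docs, field_norm, query_norm):
    """Recompute one weight(...) node of a ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                               # tf(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # idf(docFreq, maxDocs)
    query_weight = idf * query_norm                    # queryWeight
    field_weight = tf * idf * field_norm               # fieldWeight
    return query_weight * field_weight                 # score(doc, freq)

# weight(_text_:90 in 5244): freq=2.0, docFreq=555, fieldNorm=0.0546875
w90 = explain_weight(2.0, 555, 44218, 0.0546875, 0.050854117)   # ~ 0.11367553
# weight(_text_:22 in 5244): freq=2.0, docFreq=3622, same norms
w22 = explain_weight(2.0, 3622, 44218, 0.0546875, 0.050854117)  # ~ 0.048230216
# coord(1/2) halves the sum, giving the listed document score ~ 0.08095287
total = (w90 + w22) * 0.5
```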
  2. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.05
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  3. Graphic details : a scientific study of the importance of diagrams to science (2016) 0.03
    Content
    Bill Howe and his colleagues at the University of Washington, in Seattle, decided to find out. First, they trained a computer algorithm to distinguish between various sorts of figures, which they defined as diagrams, equations, photographs, plots (such as bar charts and scatter graphs) and tables. They exposed their algorithm to between 400 and 600 images of each of these types of figure until it could distinguish them with an accuracy greater than 90%. Then they set it loose on the more than 650,000 papers (containing more than 10m figures) stored on PubMed Central, an online archive of biomedical-research articles. To measure each paper's influence, they calculated its article-level Eigenfactor score, a modified version of the PageRank algorithm Google uses to provide the most relevant results for internet searches. Eigenfactor scoring gives a better measure than simply noting the number of times a paper is cited elsewhere, because it weights citations by their influence: a citation in a paper that is itself highly cited is worth more than one in a paper that is not.
    As the team describe in a paper posted on arXiv (http://arxiv.org/abs/1605.04951), they found that figures did indeed matter, but not all in the same way. An average paper in PubMed Central has about one diagram for every three pages and gets 1.67 citations. Papers with more diagrams per page and, to a lesser extent, plots per page tended to be more influential (on average, a paper accrued two more citations for every extra diagram per page, and one more for every extra plot per page). By contrast, including photographs and equations seemed to decrease the chances of a paper being cited by others. That agrees with a study from 2012, whose authors counted (by hand) the number of mathematical expressions in over 600 biology papers and found that each additional equation per page reduced the number of citations a paper received by 22%. This does not mean that researchers should rush to include more diagrams in their next paper. Dr Howe has not shown what is behind the effect, which may merely be one of correlation rather than causation. It could, for example, be that papers with lots of diagrams tend to be those that illustrate new concepts, and thus start a whole new field of inquiry. Such papers will certainly be cited a lot. On the other hand, the presence of equations really might reduce citations. Biologists (as are most of those who write and read the papers in PubMed Central) are notoriously maths-averse. If that is the case, looking in a physics archive would probably produce a different result.
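The article-level Eigenfactor described above is, as the text says, a modified PageRank over the citation graph. A toy sketch of the underlying idea, with a damping factor and a four-paper graph that are purely illustrative, not the study's actual parameters:

```python
# Influence-weighted citation scoring in the spirit of PageRank/Eigenfactor.
def pagerank(citations, damping=0.85, iters=100):
    """citations[p] lists the papers that p cites."""
    papers = list(citations)
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in papers}
        for p, cited in citations.items():
            if cited:
                share = damping * rank[p] / len(cited)
                for q in cited:
                    new[q] += share
            else:  # a paper citing nothing spreads its rank evenly
                for q in papers:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# A cites B, B cites C, D cites C; C cites nothing.
ranks = pagerank({"A": ["B"], "B": ["C"], "C": [], "D": ["C"]})
# C gathers the most influence: it is cited both directly and via the
# influential chain through B, illustrating why a citation from a
# highly cited paper is worth more than one from an obscure paper.
```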
  4. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.02
    Date
    22. 7.2006 15:22:28
  5. Wolchover, N.: Wie ein Aufsehen erregender Beweis kaum Beachtung fand (2017) 0.02
    Date
    22. 4.2017 10:42:05
    22. 4.2017 10:48:38
  6. Chowdhury, A.; Mccabe, M.C.: Improving information retrieval systems using part of speech tagging (1993) 0.02
    Abstract
    The object of information retrieval is to retrieve all relevant documents for a user query, and only those relevant documents. Much research has focused on achieving this objective with little regard for storage overhead or performance. In this paper we evaluate the use of part-of-speech tagging to improve the index storage overhead and the general speed of the system, with only a minimal reduction in precision/recall measurements. We tagged 500 MB of the Los Angeles Times 1990 and 1989 document collection provided by TREC for parts of speech. We then experimented to find the most relevant parts of speech to index. We show that 90% of precision/recall is achieved with 40% of the document collection's terms. We also show that this is an improvement in overhead, with only a 1% reduction in precision/recall.
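The indexing strategy the abstract describes can be sketched in a few lines: keep only terms whose word class is deemed retrieval-relevant (here, nouns). The toy tagger lexicon and sentence are illustrative; the paper tagged the TREC LA Times collection with a real POS tagger.

```python
# Toy part-of-speech lexicon (illustrative, not a real tagger).
TOY_POS = {"the": "DT", "fast": "JJ", "retrieval": "NN", "system": "NN",
           "improves": "VB", "index": "NN"}

def index_terms(doc_id, text, keep_tags=("NN",)):
    """Return (term, doc_id) postings only for terms with a kept POS tag."""
    return [(w, doc_id) for w in text.lower().split()
            if TOY_POS.get(w) in keep_tags]

postings = index_terms(1, "the fast retrieval system improves the index")
# postings -> [("retrieval", 1), ("system", 1), ("index", 1)]
```

Dropping determiners, adjectives and verbs shrinks the posting lists while keeping the terms users are most likely to search for, which is the storage/precision trade-off the paper measures.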
  7. Dunning, A.: Do we still need search engines? (1999) 0.02
    Source
    Ariadne. 1999, no.22
  8. Qin, J.; Paling, S.: Converting a controlled vocabulary into an ontology : the case of GEM (2001) 0.02
    Date
    24. 8.2005 19:20:22
  9. Jaeger, L.: Wissenschaftler versus Wissenschaft (2020) 0.02
    Date
    2. 3.2020 14:08:22
  10. Carbonaro, A.; Santandrea, L.: ¬A general Semantic Web approach for data analysis on graduates statistics 0.02
    Abstract
    Currently, several datasets released in Linked Open Data format are available at national and international level, but the lack of shared strategies for defining concepts in the statistical publishing community makes it difficult to compare facts drawn from different data sources. To guarantee a shared representation framework for the dissemination of statistical concepts about graduates, we developed SW4AL, an ontology-based system for the graduates' surveys domain. The system transforms low-level data into an enriched information model and is based on the AlmaLaurea surveys, which cover more than 90% of Italian graduates. SW4AL: i) semantically describes the different peculiarities of the graduates; ii) promotes the structured definition of the AlmaLaurea data and its subsequent publication in the Linked Open Data context; iii) provides for their reuse in the open data scope; iv) enables logical reasoning about knowledge representation. SW4AL establishes a common semantics for the graduates' surveys domain by proposing a SPARQL endpoint and a Web-based interface for querying and visualizing the structured data.
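A SPARQL endpoint like the one SW4AL proposes would be queried over HTTP. A hedged sketch of how a client builds such a request; the endpoint URL, prefix, and property names below are invented for illustration and the real SW4AL vocabulary may differ:

```python
from urllib.parse import urlencode

ENDPOINT = "https://example.org/sw4al/sparql"  # hypothetical endpoint

# Hypothetical query: count graduates grouped by degree.
query = """
PREFIX al: <https://example.org/sw4al#>
SELECT ?degree (COUNT(?g) AS ?n)
WHERE { ?g a al:Graduate ; al:degree ?degree . }
GROUP BY ?degree
"""

# SPARQL 1.1 Protocol: a GET request carries the query as a URL parameter.
params = urlencode({"query": query,
                    "format": "application/sparql-results+json"})
request_url = f"{ENDPOINT}?{params}"
```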
  11. Guidi, F.; Sacerdoti Coen, C.: ¬A survey on retrieval of mathematical knowledge (2015) 0.02
    Date
    22. 2.2017 12:51:57
  12. Wagner, E.: Über Impfstoffe zur digitalen Identität? (2020) 0.02
    Date
    4. 5.2020 17:22:40
  13. Engel, B.: Corona-Gesundheitszertifikat als Exitstrategie (2020) 0.02
    Date
    4. 5.2020 17:22:28
  14. Arndt, O.: Totale Telematik (2020) 0.02
    Date
    22. 6.2020 19:11:24
  15. Arndt, O.: Erosion der bürgerlichen Freiheiten (2020) 0.02
    Date
    22. 6.2020 19:16:24
  16. Baecker, D.: ¬Der Frosch, die Fliege und der Mensch : zum Tod von Humberto Maturana (2021) 0.02
    Date
    7. 5.2021 22:10:24
  17. Eyert, F.: Mathematische Wissenschaftskommunikation in der digitalen Gesellschaft (2023) 0.02
    Source
    Mitteilungen der Deutschen Mathematiker-Vereinigung. 2023, H.1, S.22-25
  18. Sojka, P.; Liska, M.: ¬The art of mathematics retrieval (2011) 0.02
    Content
    Vgl.: DocEng2011, September 19-22, 2011, Mountain View, California, USA Copyright 2011 ACM 978-1-4503-0863-2/11/09
    Date
    22. 2.2017 13:00:42
  19. Assem, M. van: Converting and integrating vocabularies for the Semantic Web (2010) 0.02
    Isbn
    978-90-8659-483-2
  20. Menge-Sonnentag, R.: Google veröffentlicht einen Parser für natürliche Sprache (2016) 0.02
    Content
    SyntaxNet uses neural networks for these decisions and tries to assign the dependencies correctly. The parser thereby "learns" that sunflower seeds are difficult to use for cutting, and so are more likely an ingredient of the bread than a tool. The analysis is limited to the sentence itself, however; the model does not take semantic context into account. Some ambiguities are resolved by context: if Alice in the example above packed the binoculars when leaving the house, she will presumably use them. Hit rate, human vs. machine: according to the blog post, Parsey McParseface achieves an accuracy of a good 94 percent on sentences from the Penn Treebank Project, while linguists put the human rate at 96 to 97 percent. However, the post also points out that the test sentences are well-formed texts. In a test with Google's WebTreebank the parser achieves an accuracy of just under 90 percent.
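The accuracy percentages quoted above are, in essence, attachment scores: the share of tokens whose syntactic head the parser predicts correctly. A toy sketch with an invented sentence and trees:

```python
# token -> predicted/gold syntactic head (illustrative dependency trees).
gold = {"sie": "packte", "packte": "ROOT", "das": "Fernglas",
        "Fernglas": "packte", "ein": "packte"}
pred = {"sie": "packte", "packte": "ROOT", "das": "Fernglas",
        "Fernglas": "ein", "ein": "packte"}

def attachment_score(gold, pred):
    """Fraction of tokens whose head was predicted correctly."""
    correct = sum(pred[tok] == head for tok, head in gold.items())
    return correct / len(gold)

score = attachment_score(gold, pred)  # 4 of 5 heads correct -> 0.8
```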

Languages

  • d 42
  • e 32
  • a 1