Search (67 results, page 1 of 4)

  • type_ss:"el"
  • year_i:[2010 TO 2020}
  1. Pany, T.: Konfusion in der Medienrepublik : Der Überraschungseffekt der Youtuber (2019) 0.08
    0.08095287 = product of:
      0.16190574 = sum of:
        0.16190574 = sum of:
          0.11367553 = weight(_text_:90 in 5244) [ClassicSimilarity], result of:
            0.11367553 = score(doc=5244,freq=2.0), product of:
              0.2733978 = queryWeight, product of:
                5.376119 = idf(docFreq=555, maxDocs=44218)
                0.050854117 = queryNorm
              0.415788 = fieldWeight in 5244, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.376119 = idf(docFreq=555, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5244)
          0.048230216 = weight(_text_:22 in 5244) [ClassicSimilarity], result of:
            0.048230216 = score(doc=5244,freq=2.0), product of:
              0.17808245 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050854117 = queryNorm
              0.2708308 = fieldWeight in 5244, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5244)
      0.5 = coord(1/2)
    
    Abstract
    Ahead of the EU election, 90 "web stars" publish a voting recommendation: "Don't vote for the CDU/CSU, don't vote for the SPD, and certainly not for the AfD". The reactions are the real source of outrage. Refers to: https://youtu.be/4Y1lZQsyuSQ and https://youtu.be/Xpg84NjCr9c.
    Content
    See also: Dörner, S.: "CDU-Zerstörer" Rezo: Es kamen "Diskreditierung, Lügen, Trump-Wordings und keine inhaltliche Auseinandersetzung" [22 May 2019]. Interview with Rezo. At: https://www.heise.de/tp/features/CDU-Zerstoerer-Rezo-Es-kamen-Diskreditierung-Luegen-Trump-Wordings-und-keine-inhaltliche-4428522.html?view=print [http://www.heise.de/-4428522].
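    The relevance breakdowns shown with each hit are Lucene/Solr ClassicSimilarity (TF-IDF) explain trees. As an illustration of how the numbers combine, the sketch below recomputes the weight(_text_:90 in 5244) leaf of the first hit from its components; the variable names are ours, and only the numeric values are taken from the explain output above.

      # Recompute one leaf of the ClassicSimilarity explain tree shown above.
      # score = queryWeight * fieldWeight, where
      #   queryWeight = idf * queryNorm
      #   fieldWeight = tf * idf * fieldNorm, with tf = sqrt(termFreq)
      #   idf = 1 + ln(maxDocs / (docFreq + 1))
      from math import sqrt, log, isclose

      max_docs, doc_freq = 44218, 555
      query_norm = 0.050854117          # queryNorm from the explain tree
      field_norm = 0.0546875            # fieldNorm(doc=5244)
      term_freq = 2.0                   # termFreq of "90" in this document

      idf = 1 + log(max_docs / (doc_freq + 1))     # ~5.376119
      tf = sqrt(term_freq)                         # ~1.4142135
      query_weight = idf * query_norm              # ~0.2733978
      field_weight = tf * idf * field_norm         # ~0.415788
      score = query_weight * field_weight          # ~0.11367553

      assert isclose(score, 0.11367553, rel_tol=1e-4)
      print(score)

    The remaining lines of each tree (sum, product, coord) just combine such per-term leaves and scale by the fraction of query clauses that matched.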
  2. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.07
    0.06730819 = product of:
      0.13461637 = sum of:
        0.13461637 = product of:
          0.4038491 = sum of:
            0.4038491 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.4038491 = score(doc=1826,freq=2.0), product of:
                0.43114176 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050854117 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja
  3. Borchers, D.: Missing Link : Wenn der Kasten denkt - Niklas Luhmann und die Folgen (2017) 0.04
    0.040598404 = product of:
      0.08119681 = sum of:
        0.08119681 = product of:
          0.16239361 = sum of:
            0.16239361 = weight(_text_:90 in 2358) [ClassicSimilarity], result of:
              0.16239361 = score(doc=2358,freq=2.0), product of:
                0.2733978 = queryWeight, product of:
                  5.376119 = idf(docFreq=555, maxDocs=44218)
                  0.050854117 = queryNorm
                0.5939829 = fieldWeight in 2358, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.376119 = idf(docFreq=555, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2358)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Sociologists have just celebrated the 90th birthday of the systems theorist Niklas Luhmann. Computer scientists, meanwhile, are in the middle of a demanding digitization project: turning his box of thoughts, his "wooden private internet", into data.
  4. Graphic details : a scientific study of the importance of diagrams to science (2016) 0.03
    0.034694087 = product of:
      0.06938817 = sum of:
        0.06938817 = sum of:
          0.04871808 = weight(_text_:90 in 3035) [ClassicSimilarity], result of:
            0.04871808 = score(doc=3035,freq=2.0), product of:
              0.2733978 = queryWeight, product of:
                5.376119 = idf(docFreq=555, maxDocs=44218)
                0.050854117 = queryNorm
              0.17819485 = fieldWeight in 3035, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.376119 = idf(docFreq=555, maxDocs=44218)
                0.0234375 = fieldNorm(doc=3035)
          0.020670092 = weight(_text_:22 in 3035) [ClassicSimilarity], result of:
            0.020670092 = score(doc=3035,freq=2.0), product of:
              0.17808245 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050854117 = queryNorm
              0.116070345 = fieldWeight in 3035, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=3035)
      0.5 = coord(1/2)
    
    Content
    Bill Howe and his colleagues at the University of Washington, in Seattle, decided to find out. First, they trained a computer algorithm to distinguish between various sorts of figures, which they defined as diagrams, equations, photographs, plots (such as bar charts and scatter graphs) and tables. They exposed their algorithm to between 400 and 600 images of each of these types of figure until it could distinguish them with an accuracy greater than 90%. Then they set it loose on the more than 650,000 papers (containing more than 10m figures) stored on PubMed Central, an online archive of biomedical-research articles. To measure each paper's influence, they calculated its article-level Eigenfactor score, a modified version of the PageRank algorithm Google uses to provide the most relevant results for internet searches. Eigenfactor scoring gives a better measure than simply noting the number of times a paper is cited elsewhere, because it weights citations by their influence. A citation in a paper that is itself highly cited is worth more than one in a paper that is not.
    As the team describe in a paper posted (http://arxiv.org/abs/1605.04951) on arXiv, they found that figures did indeed matter, but not all in the same way. An average paper in PubMed Central has about one diagram for every three pages and gets 1.67 citations. Papers with more diagrams per page and, to a lesser extent, plots per page tended to be more influential (on average, a paper accrued two more citations for every extra diagram per page, and one more for every extra plot per page). By contrast, including photographs and equations seemed to decrease the chances of a paper being cited by others. That agrees with a study from 2012, whose authors counted (by hand) the number of mathematical expressions in over 600 biology papers and found that each additional equation per page reduced the number of citations a paper received by 22%. This does not mean that researchers should rush to include more diagrams in their next paper. Dr Howe has not shown what is behind the effect, which may merely be one of correlation, rather than causation. It could, for example, be that papers with lots of diagrams tend to be those that illustrate new concepts, and thus start a whole new field of inquiry. Such papers will certainly be cited a lot. On the other hand, the presence of equations really might reduce citations. Biologists (as are most of those who write and read the papers in PubMed Central) are notoriously maths-averse. If that is the case, looking in a physics archive would probably produce a different result.
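    The article-level Eigenfactor score referred to above is, like PageRank, computed by iterating over the citation graph so that citations from influential papers count for more than citations from obscure ones. A minimal sketch of that idea (generic PageRank-style power iteration on a toy citation matrix, not the actual Eigenfactor implementation) could look like this:

      import numpy as np

      # Toy citation graph: A[i, j] = 1 if paper j cites paper i.
      A = np.array([[0, 1, 1, 0],
                    [0, 0, 1, 1],
                    [1, 0, 0, 1],
                    [0, 0, 0, 0]], dtype=float)

      # Column-normalise so every citing paper hands out one unit of influence.
      col_sums = A.sum(axis=0)
      col_sums[col_sums == 0] = 1.0              # papers that cite nothing
      M = A / col_sums

      n = A.shape[0]
      damping = 0.85
      score = np.full(n, 1.0 / n)
      for _ in range(100):                       # power iteration
          score = (1 - damping) / n + damping * M @ score

      print(score / score.sum())                 # influence-weighted scores

    A citation from a high-scoring paper raises the cited paper's score more than a citation from a low-scoring one, which is exactly the property the article contrasts with raw citation counts.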
  5. Wolf, S.: Neuer Meilenstein für BASE : 90 Millionen Dokumente (2016) 0.03
    0.034448884 = product of:
      0.06889777 = sum of:
        0.06889777 = product of:
          0.13779554 = sum of:
            0.13779554 = weight(_text_:90 in 2872) [ClassicSimilarity], result of:
              0.13779554 = score(doc=2872,freq=4.0), product of:
                0.2733978 = queryWeight, product of:
                  5.376119 = idf(docFreq=555, maxDocs=44218)
                  0.050854117 = queryNorm
                0.50401115 = fieldWeight in 2872, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.376119 = idf(docFreq=555, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2872)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Since the beginning of April, BASE (https://www.base-search.net) has offered search across more than 90 million documents whose metadata are supplied by more than 4,200 document servers (repositories) of academic institutions worldwide. This makes BASE the largest search engine for freely available scholarly documents on the internet after Google Scholar. For more than 30 million documents in BASE, an open-access status can be indicated on the basis of information in the metadata; overall, we currently estimate the open-access share at 60%. A boosting mechanism ranks records for open-access documents higher, and targeted searching that takes various licence and rights statements into account is also possible. Via several interfaces, the BASE index is available for reuse by numerous other commercial and non-commercial discovery systems, search engines, database providers, special libraries and developers. BASE thus contributes substantially to the use of content held on document servers. Further information: https://www.base-search.net/
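    The preferential ranking of open-access records described here is the kind of behaviour that, in a Solr/Lucene index such as the one producing the explain output in this result list, can be expressed with a boost query. The sketch below only illustrates that general mechanism; the host, core name and field names (oa, dctype) are invented and are not BASE's actual schema or API.

      # Hypothetical Solr (edismax) request that prefers open-access records.
      import urllib.parse

      params = {
          "q": "knowledge organization",
          "defType": "edismax",
          "bq": "oa:1^10",           # boost query: rank open-access hits higher
          "fq": "dctype:article",    # optional filter query
          "rows": 10,
      }
      print("http://localhost:8983/solr/base/select?" + urllib.parse.urlencode(params))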
  6. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.03
    0.033654094 = product of:
      0.06730819 = sum of:
        0.06730819 = product of:
          0.20192455 = sum of:
            0.20192455 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
              0.20192455 = score(doc=4388,freq=2.0), product of:
                0.43114176 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050854117 = queryNorm
                0.46834838 = fieldWeight in 4388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4388)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Footnote
    See: https://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=2ahUKEwizweHljdbcAhVS16QKHXcFD9QQFjABegQICRAB&url=https%3A%2F%2Fwww.researchgate.net%2Fpublication%2F271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls&usg=AOvVaw06orrdJmFF2xbCCp_hL26q.
  7. Miller, C.C.: Google alters search to handle more complex queries (2013) 0.03
    0.028418882 = product of:
      0.056837764 = sum of:
        0.056837764 = product of:
          0.11367553 = sum of:
            0.11367553 = weight(_text_:90 in 2519) [ClassicSimilarity], result of:
              0.11367553 = score(doc=2519,freq=2.0), product of:
                0.2733978 = queryWeight, product of:
                  5.376119 = idf(docFreq=555, maxDocs=44218)
                  0.050854117 = queryNorm
                0.415788 = fieldWeight in 2519, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.376119 = idf(docFreq=555, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2519)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Google on Thursday announced one of the biggest changes to its search engine, a rewriting of its algorithm to handle more complex queries that affects 90 percent of all searches. The change, which represents a new approach to search for Google, required the biggest changes to the company's search algorithm since 2000. Now, Google, the world's most popular search engine, will focus more on trying to understand the meanings of and relationships among things, as opposed to its original strategy of matching keywords.
  8. Kopp, O.: Google Hummingbird-Algorithmus-Update : Infos & Hintergründe (2013) 0.03
    0.028418882 = product of:
      0.056837764 = sum of:
        0.056837764 = product of:
          0.11367553 = sum of:
            0.11367553 = weight(_text_:90 in 2522) [ClassicSimilarity], result of:
              0.11367553 = score(doc=2522,freq=2.0), product of:
                0.2733978 = queryWeight, product of:
                  5.376119 = idf(docFreq=555, maxDocs=44218)
                  0.050854117 = queryNorm
                0.415788 = fieldWeight in 2522, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.376119 = idf(docFreq=555, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2522)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Right on time for the 15th birthday of Google Search, Google announced yesterday at a press conference in the "founding garage" that the most significant Google update since the Caffeine update in 2010, and the largest algorithm update since 2001, has already been live for about a month. The current update is called Hummingbird ("Kolibri" in German). It is said to affect around 90% of all search queries and, unlike Caffeine, to be a genuine algorithm update. It is meant to help interpret more complex queries, to recognize even better the actual search intent or question behind a query, and to offer matching documents. At the document level, too, the actual intent behind the content is to be matched more closely with the query.
  9. Wolchover, N.: Wie ein Aufsehen erregender Beweis kaum Beachtung fand (2017) 0.02
    0.024359938 = product of:
      0.048719876 = sum of:
        0.048719876 = product of:
          0.09743975 = sum of:
            0.09743975 = weight(_text_:22 in 3582) [ClassicSimilarity], result of:
              0.09743975 = score(doc=3582,freq=4.0), product of:
                0.17808245 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050854117 = queryNorm
                0.54716086 = fieldWeight in 3582, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3582)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 4.2017 10:42:05
    22. 4.2017 10:48:38
  10. Hummingbird : Neuer Suchalgorithmus bei Google (2013) 0.02
    0.02435904 = product of:
      0.04871808 = sum of:
        0.04871808 = product of:
          0.09743616 = sum of:
            0.09743616 = weight(_text_:90 in 2520) [ClassicSimilarity], result of:
              0.09743616 = score(doc=2520,freq=2.0), product of:
                0.2733978 = queryWeight, product of:
                  5.376119 = idf(docFreq=555, maxDocs=44218)
                  0.050854117 = queryNorm
                0.3563897 = fieldWeight in 2520, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.376119 = idf(docFreq=555, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2520)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    With "Hummingbird", Google has developed and already deployed a new search algorithm. According to Google, it is one of the biggest changes to the search engine, affecting around 90 percent of all search queries. At a small event marking the search engine's 15th birthday, Google invited guests into the garage where the company was founded. There, Google revealed one of the biggest changes to the search engine so far: without users noticing anything, Google swapped out its search algorithm about a month ago. The new algorithm, codenamed "Hummingbird", is intended to let Google better understand queries and the relationships between things. That should put the search engine in a position to process more complex queries, which users pose more and more often, not least because more and more users access Google by voice input on their smartphones. In the past, Google merely tried to find the keywords of a query in web pages. For quite some time, however, Google has been working on understanding queries better in order to show better search results.
  11. Hafner, R.; Schelling, B.: Automatisierung der Sacherschließung mit Semantic Web Technologie (2015) 0.02
    0.024115108 = product of:
      0.048230216 = sum of:
        0.048230216 = product of:
          0.09646043 = sum of:
            0.09646043 = weight(_text_:22 in 8365) [ClassicSimilarity], result of:
              0.09646043 = score(doc=8365,freq=2.0), product of:
                0.17808245 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050854117 = queryNorm
                0.5416616 = fieldWeight in 8365, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=8365)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2015 16:08:38
  12. Röthler, D.: "Lehrautomaten" oder die MOOC-Vision der späten 60er Jahre (2014) 0.02
    0.020670092 = product of:
      0.041340183 = sum of:
        0.041340183 = product of:
          0.08268037 = sum of:
            0.08268037 = weight(_text_:22 in 1552) [ClassicSimilarity], result of:
              0.08268037 = score(doc=1552,freq=2.0), product of:
                0.17808245 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050854117 = queryNorm
                0.46428138 = fieldWeight in 1552, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1552)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2018 11:04:35
  13. Schultz, S.: ¬Die eine App für alles : Mobile Zukunft in China (2016) 0.02
    0.019487951 = product of:
      0.038975902 = sum of:
        0.038975902 = product of:
          0.077951804 = sum of:
            0.077951804 = weight(_text_:22 in 4313) [ClassicSimilarity], result of:
              0.077951804 = score(doc=4313,freq=4.0), product of:
                0.17808245 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050854117 = queryNorm
                0.4377287 = fieldWeight in 4313, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4313)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2018 14:22:02
  14. Guidi, F.; Sacerdoti Coen, C.: ¬A survey on retrieval of mathematical knowledge (2015) 0.02
    0.017225077 = product of:
      0.034450155 = sum of:
        0.034450155 = product of:
          0.06890031 = sum of:
            0.06890031 = weight(_text_:22 in 5865) [ClassicSimilarity], result of:
              0.06890031 = score(doc=5865,freq=2.0), product of:
                0.17808245 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050854117 = queryNorm
                0.38690117 = fieldWeight in 5865, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5865)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 2.2017 12:51:57
  15. Drewer, P.; Massion, F.; Pulitano, D.: Was haben Wissensmodellierung, Wissensstrukturierung, künstliche Intelligenz und Terminologie miteinander zu tun? (2017) 0.02
    0.017225077 = product of:
      0.034450155 = sum of:
        0.034450155 = product of:
          0.06890031 = sum of:
            0.06890031 = weight(_text_:22 in 5576) [ClassicSimilarity], result of:
              0.06890031 = score(doc=5576,freq=2.0), product of:
                0.17808245 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050854117 = queryNorm
                0.38690117 = fieldWeight in 5576, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5576)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    13.12.2017 14:17:22
  16. Schönherr, M.: Bestechend brillant : die Schönheit der Algorithmen (2016) 0.02
    0.017225077 = product of:
      0.034450155 = sum of:
        0.034450155 = product of:
          0.06890031 = sum of:
            0.06890031 = weight(_text_:22 in 2762) [ClassicSimilarity], result of:
              0.06890031 = score(doc=2762,freq=2.0), product of:
                0.17808245 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050854117 = queryNorm
                0.38690117 = fieldWeight in 2762, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2762)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    10. 2.2016 17:22:23
  17. Häring, N.; Hensinger, P.: "Digitale Bildung" : Der abschüssige Weg zur Konditionierungsanstalt (2019) 0.02
    0.017225077 = product of:
      0.034450155 = sum of:
        0.034450155 = product of:
          0.06890031 = sum of:
            0.06890031 = weight(_text_:22 in 4999) [ClassicSimilarity], result of:
              0.06890031 = score(doc=4999,freq=2.0), product of:
                0.17808245 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050854117 = queryNorm
                0.38690117 = fieldWeight in 4999, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4999)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 2.2019 11:45:19
  18. Sojka, P.; Liska, M.: ¬The art of mathematics retrieval (2011) 0.02
    0.017051958 = product of:
      0.034103915 = sum of:
        0.034103915 = product of:
          0.06820783 = sum of:
            0.06820783 = weight(_text_:22 in 3450) [ClassicSimilarity], result of:
              0.06820783 = score(doc=3450,freq=4.0), product of:
                0.17808245 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050854117 = queryNorm
                0.38301262 = fieldWeight in 3450, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3450)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Cf.: DocEng2011, September 19-22, 2011, Mountain View, California, USA. Copyright 2011 ACM 978-1-4503-0863-2/11/09
    Date
    22. 2.2017 13:00:42
  19. Assem, M. van: Converting and integrating vocabularies for the Semantic Web (2010) 0.02
    0.016239362 = product of:
      0.032478724 = sum of:
        0.032478724 = product of:
          0.06495745 = sum of:
            0.06495745 = weight(_text_:90 in 4639) [ClassicSimilarity], result of:
              0.06495745 = score(doc=4639,freq=2.0), product of:
                0.2733978 = queryWeight, product of:
                  5.376119 = idf(docFreq=555, maxDocs=44218)
                  0.050854117 = queryNorm
                0.23759314 = fieldWeight in 4639, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.376119 = idf(docFreq=555, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4639)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Isbn
    978-90-8659-483-2
  20. Menge-Sonnentag, R.: Google veröffentlicht einen Parser für natürliche Sprache (2016) 0.02
    0.016239362 = product of:
      0.032478724 = sum of:
        0.032478724 = product of:
          0.06495745 = sum of:
            0.06495745 = weight(_text_:90 in 2941) [ClassicSimilarity], result of:
              0.06495745 = score(doc=2941,freq=2.0), product of:
                0.2733978 = queryWeight, product of:
                  5.376119 = idf(docFreq=555, maxDocs=44218)
                  0.050854117 = queryNorm
                0.23759314 = fieldWeight in 2941, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.376119 = idf(docFreq=555, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2941)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    SyntaxNet uses neural networks to make its decisions and tries to assign the dependencies correctly. In this way the parser "learns" that sunflower seeds are difficult to use for cutting and are therefore more likely part of the bread than a tool. The analysis is limited to the sentence itself, however; the model does not take semantic context into account. Some ambiguities are resolved by context: if, in the example above, Alice packed the binoculars when leaving the house, she will presumably use them. On human versus machine accuracy, the blog post reports that Parsey McParseface reaches an accuracy of a good 94 percent on sentences from the Penn Treebank Project, while linguists put the human rate at 96 to 97 percent. The post also points out, however, that these test sentences are well-formed text. In a test with Google's WebTreebank, the parser reaches an accuracy of just under 90 percent.
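    Dependency parsing of the kind SyntaxNet and Parsey McParseface perform can be tried with any modern NLP toolkit. The sketch below uses spaCy instead of SyntaxNet (a substitution made purely because it is easy to install); it prints the dependency label and head for each token of a sentence with an attachment ambiguity, which is exactly the kind of decision discussed above.

      # Dependency-parse a sentence with a prepositional-attachment ambiguity.
      # Uses spaCy, not SyntaxNet; requires:
      #   pip install spacy && python -m spacy download en_core_web_sm
      import spacy

      nlp = spacy.load("en_core_web_sm")
      doc = nlp("Alice saw the man with the binoculars.")

      for token in doc:
          # token.dep_ is the dependency label, token.head the governing word
          print(f"{token.text:12} {token.dep_:10} <- {token.head.text}")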

Languages

  • d 47
  • e 18
  • a 1

Types

  • a 38
  • r 2
  • m 1
  • s 1
  • x 1