Search (27 results, page 1 of 2)

  • × theme_ss:"Computerlinguistik"
  • × type_ss:"a"
  • × year_i:[2000 TO 2010}
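  The three active filters above correspond to Solr filter queries; the year range uses Solr's mixed-bracket syntax (inclusive at 2000, exclusive at 2010). A minimal sketch of how such a request might be assembled, assuming a standard select handler — the endpoint path and rows value are assumptions, only the field names and values are taken from the facet labels:

      from urllib.parse import urlencode

      # Hypothetical request parameters reproducing the active facets above.
      params = [
          ("q", "*:*"),
          ("fq", 'theme_ss:"Computerlinguistik"'),
          ("fq", 'type_ss:"a"'),
          ("fq", "year_i:[2000 TO 2010}"),  # [ = inclusive lower bound, } = exclusive upper bound
          ("rows", "20"),
      ]
      print("select?" + urlencode(params))
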
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.07
    0.07260546 = sum of:
      0.05413397 = product of:
        0.21653588 = sum of:
          0.21653588 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.21653588 = score(doc=562,freq=2.0), product of:
              0.38528278 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.04544495 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.25 = coord(1/4)
      0.018471489 = product of:
        0.036942977 = sum of:
          0.036942977 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.036942977 = score(doc=562,freq=2.0), product of:
              0.15914047 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04544495 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
     Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
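  For readers unfamiliar with the Lucene explain output above, the score of result 1 can be reproduced by hand from the numbers in the tree: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and each contribution is scaled by its coord factor. A small sketch, with illustrative helper names and the constants copied from the tree:

      import math

      def term_score(freq, idf, query_norm, field_norm):
          query_weight = idf * query_norm                     # idf * queryNorm
          field_weight = math.sqrt(freq) * idf * field_norm   # tf * idf * fieldNorm
          return query_weight * field_weight

      query_norm = 0.04544495
      s_3a = term_score(2.0, 8.478011, query_norm, 0.046875)   # ~0.21653588 (weight of _text_:3a)
      s_22 = term_score(2.0, 3.5018296, query_norm, 0.046875)  # ~0.036942977 (weight of _text_:22)
      total = 0.25 * s_3a + 0.5 * s_22                         # coord(1/4) and coord(1/2)
      print(total)                                             # ~0.07260546, displayed as 0.07
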
  2. Hammwöhner, R.: TransRouter revisited : Decision support in the routing of translation projects (2000) 0.05
    0.046341278 = product of:
      0.092682555 = sum of:
        0.092682555 = sum of:
          0.049582418 = weight(_text_:g in 5483) [ClassicSimilarity], result of:
            0.049582418 = score(doc=5483,freq=2.0), product of:
              0.17068884 = queryWeight, product of:
                3.7559474 = idf(docFreq=2809, maxDocs=44218)
                0.04544495 = queryNorm
              0.29048425 = fieldWeight in 5483, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.7559474 = idf(docFreq=2809, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5483)
          0.04310014 = weight(_text_:22 in 5483) [ClassicSimilarity], result of:
            0.04310014 = score(doc=5483,freq=2.0), product of:
              0.15914047 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04544495 = queryNorm
              0.2708308 = fieldWeight in 5483, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5483)
      0.5 = coord(1/2)
    
    Date
    10.12.2000 18:22:35
    Source
    Informationskompetenz - Basiskompetenz in der Informationsgesellschaft: Proceedings des 7. Internationalen Symposiums für Informationswissenschaft (ISI 2000), Hrsg.: G. Knorz u. R. Kuhlen
  3. Bian, G.-W.; Chen, H.-H.: Cross-language information access to multilingual collections on the Internet (2000) 0.04
    0.039721094 = product of:
      0.07944219 = sum of:
        0.07944219 = sum of:
          0.042499214 = weight(_text_:g in 4436) [ClassicSimilarity], result of:
            0.042499214 = score(doc=4436,freq=2.0), product of:
              0.17068884 = queryWeight, product of:
                3.7559474 = idf(docFreq=2809, maxDocs=44218)
                0.04544495 = queryNorm
              0.24898648 = fieldWeight in 4436, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.7559474 = idf(docFreq=2809, maxDocs=44218)
                0.046875 = fieldNorm(doc=4436)
          0.036942977 = weight(_text_:22 in 4436) [ClassicSimilarity], result of:
            0.036942977 = score(doc=4436,freq=2.0), product of:
              0.15914047 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04544495 = queryNorm
              0.23214069 = fieldWeight in 4436, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4436)
      0.5 = coord(1/2)
    
    Date
    16. 2.2000 14:22:39
  4. Humphreys, K.; Demetriou, G.; Gaizauskas, R.: Bioinformatics applications of information extraction from scientific journal articles (2000) 0.02
    0.024791209 = product of:
      0.049582418 = sum of:
        0.049582418 = product of:
          0.099164836 = sum of:
            0.099164836 = weight(_text_:g in 4545) [ClassicSimilarity], result of:
              0.099164836 = score(doc=4545,freq=2.0), product of:
                0.17068884 = queryWeight, product of:
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.04544495 = queryNorm
                0.5809685 = fieldWeight in 4545, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4545)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  5. Hull, D.; Ait-Mokhtar, S.; Chuat, M.; Eisele, A.; Gaussier, E.; Grefenstette, G.; Isabelle, P.; Samuelsson, C.; Segond, F.: Language technologies and patent search and classification (2001) 0.02
    0.021249607 = product of:
      0.042499214 = sum of:
        0.042499214 = product of:
          0.08499843 = sum of:
            0.08499843 = weight(_text_:g in 6318) [ClassicSimilarity], result of:
              0.08499843 = score(doc=6318,freq=2.0), product of:
                0.17068884 = queryWeight, product of:
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.04544495 = queryNorm
                0.49797297 = fieldWeight in 6318, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6318)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  6. Rapke, K.: Automatische Indexierung von Volltexten für die Gruner+Jahr Pressedatenbank (2001) 0.02
    0.021249607 = product of:
      0.042499214 = sum of:
        0.042499214 = product of:
          0.08499843 = sum of:
            0.08499843 = weight(_text_:g in 6386) [ClassicSimilarity], result of:
              0.08499843 = score(doc=6386,freq=8.0), product of:
                0.17068884 = queryWeight, product of:
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.04544495 = queryNorm
                0.49797297 = fieldWeight in 6386, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6386)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Retrieval tests are the most widely accepted method for justifying new subject-indexing approaches against traditional ones. As part of a diploma thesis, two fundamentally different systems for automatic subject indexing were tested and evaluated on the press database of the publishing house Gruner + Jahr (G+J). The study compared natural-language retrieval with Boolean retrieval. The two systems are Autonomy from Autonomy Inc. and DocCat, which IBM adapted to the database structure of the G+J press database. The former is a probabilistic system based on natural-language retrieval; DocCat, by contrast, is based on Boolean retrieval and is a learning system that indexes on the basis of an intellectually created training template. Methodologically, the evaluation starts from the real application context of the G+J text documentation department. The tests are assessed from both statistical and qualitative points of view. One result of the tests is that DocCat shows some shortcomings compared with intellectual subject indexing that still need to be remedied, while Autonomy's natural-language retrieval, in this setting and for the specific requirements of the G+J text documentation, is not usable as it stands
  7. Monnerjahn, P.: Vorsprung ohne Technik : Übersetzen: Computer und Qualität (2000) 0.02
    0.018471489 = product of:
      0.036942977 = sum of:
        0.036942977 = product of:
          0.073885955 = sum of:
            0.073885955 = weight(_text_:22 in 5429) [ClassicSimilarity], result of:
              0.073885955 = score(doc=5429,freq=2.0), product of:
                0.15914047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04544495 = queryNorm
                0.46428138 = fieldWeight in 5429, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5429)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    c't. 2000, H.22, S.230-231
  8. Rapke, K.: Automatische Indexierung von Volltexten für die Gruner+Jahr Pressedatenbank (2001) 0.02
    0.017708007 = product of:
      0.035416014 = sum of:
        0.035416014 = product of:
          0.07083203 = sum of:
            0.07083203 = weight(_text_:g in 5863) [ClassicSimilarity], result of:
              0.07083203 = score(doc=5863,freq=8.0), product of:
                0.17068884 = queryWeight, product of:
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.04544495 = queryNorm
                0.4149775 = fieldWeight in 5863, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5863)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Retrieval tests are the most widely accepted method for justifying new subject-indexing approaches against traditional ones. As part of a diploma thesis, two fundamentally different systems for automatic subject indexing were tested and evaluated on the press database of the publishing house Gruner + Jahr (G+J). The study compared natural-language retrieval with Boolean retrieval. The two systems are Autonomy from Autonomy Inc. and DocCat, which IBM adapted to the database structure of the G+J press database. The former is a probabilistic system based on natural-language retrieval; DocCat, by contrast, is based on Boolean retrieval and is a learning system that indexes on the basis of an intellectually created training template. Methodologically, the evaluation starts from the real application context of the G+J text documentation department. The tests are assessed from both statistical and qualitative points of view. One result of the tests is that DocCat shows some shortcomings compared with intellectual subject indexing that still need to be remedied, while Autonomy's natural-language retrieval, in this setting and for the specific requirements of the G+J text documentation, is not usable as it stands
  9. Rahmstorf, G.: Wortmodell und Begriffssprache als Basis des semantischen Retrievals (2000) 0.02
    0.017530032 = product of:
      0.035060063 = sum of:
        0.035060063 = product of:
          0.070120126 = sum of:
            0.070120126 = weight(_text_:g in 5484) [ClassicSimilarity], result of:
              0.070120126 = score(doc=5484,freq=4.0), product of:
                0.17068884 = queryWeight, product of:
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.04544495 = queryNorm
                0.41080675 = fieldWeight in 5484, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5484)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Informationskompetenz - Basiskompetenz in der Informationsgesellschaft: Proceedings des 7. Internationalen Symposiums für Informationswissenschaft (ISI 2000), Hrsg.: G. Knorz u. R. Kuhlen
  10. Kuhlmann, U.; Monnerjahn, P.: Sprache auf Knopfdruck : Sieben automatische Übersetzungsprogramme im Test (2000) 0.02
    0.015392908 = product of:
      0.030785816 = sum of:
        0.030785816 = product of:
          0.06157163 = sum of:
            0.06157163 = weight(_text_:22 in 5428) [ClassicSimilarity], result of:
              0.06157163 = score(doc=5428,freq=2.0), product of:
                0.15914047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04544495 = queryNorm
                0.38690117 = fieldWeight in 5428, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5428)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    c't. 2000, H.22, S.220-229
  11. Rahmstorf, G.: Rückkehr von Ordnung in die Informationstechnik? (2000) 0.02
    0.015025741 = product of:
      0.030051483 = sum of:
        0.030051483 = product of:
          0.060102966 = sum of:
            0.060102966 = weight(_text_:g in 5504) [ClassicSimilarity], result of:
              0.060102966 = score(doc=5504,freq=4.0), product of:
                0.17068884 = queryWeight, product of:
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.04544495 = queryNorm
                0.35212007 = fieldWeight in 5504, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5504)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Information und Öffentlichkeit: 1. Gemeinsamer Kongress der Bundesvereinigung Deutscher Bibliotheksverbände e.V. (BDB) und der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis e.V. (DGI), Leipzig, 20.-23.3.2000. Zugleich 90. Deutscher Bibliothekartag, 52. Jahrestagung der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis e.V. (DGI). Hrsg.: G. Ruppelt u. H. Neißer
  12. Benoit, G.: Data discretization for novel relationship discovery in information retrieval (2002) 0.01
    0.0141664045 = product of:
      0.028332809 = sum of:
        0.028332809 = product of:
          0.056665618 = sum of:
            0.056665618 = weight(_text_:g in 5197) [ClassicSimilarity], result of:
              0.056665618 = score(doc=5197,freq=2.0), product of:
                0.17068884 = queryWeight, product of:
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.04544495 = queryNorm
                0.331982 = fieldWeight in 5197, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5197)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  13. Doszkocs, T.E.; Zamora, A.: Dictionary services and spelling aids for Web searching (2004) 0.01
    0.010884429 = product of:
      0.021768859 = sum of:
        0.021768859 = product of:
          0.043537717 = sum of:
            0.043537717 = weight(_text_:22 in 2541) [ClassicSimilarity], result of:
              0.043537717 = score(doc=2541,freq=4.0), product of:
                0.15914047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04544495 = queryNorm
                0.27358043 = fieldWeight in 2541, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2541)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    14. 8.2004 17:22:56
    Source
    Online. 28(2004) no.3, S.22-29
  14. Schneider, J.W.; Borlund, P.: ¬A bibliometric-based semiautomatic approach to identification of candidate thesaurus terms : parsing and filtering of noun phrases from citation contexts (2005) 0.01
    0.010775035 = product of:
      0.02155007 = sum of:
        0.02155007 = product of:
          0.04310014 = sum of:
            0.04310014 = weight(_text_:22 in 156) [ClassicSimilarity], result of:
              0.04310014 = score(doc=156,freq=2.0), product of:
                0.15914047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04544495 = queryNorm
                0.2708308 = fieldWeight in 156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=156)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    8. 3.2007 19:55:22
  15. Paolillo, J.C.: Linguistics and the information sciences (2009) 0.01
    0.010775035 = product of:
      0.02155007 = sum of:
        0.02155007 = product of:
          0.04310014 = sum of:
            0.04310014 = weight(_text_:22 in 3840) [ClassicSimilarity], result of:
              0.04310014 = score(doc=3840,freq=2.0), product of:
                0.15914047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04544495 = queryNorm
                0.2708308 = fieldWeight in 3840, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3840)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    27. 8.2011 14:22:33
  16. Schneider, R.: Web 3.0 ante portas? : Integration von Social Web und Semantic Web (2008) 0.01
    0.010775035 = product of:
      0.02155007 = sum of:
        0.02155007 = product of:
          0.04310014 = sum of:
            0.04310014 = weight(_text_:22 in 4184) [ClassicSimilarity], result of:
              0.04310014 = score(doc=4184,freq=2.0), product of:
                0.15914047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04544495 = queryNorm
                0.2708308 = fieldWeight in 4184, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4184)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 1.2011 10:38:28
  17. Goller, C.; Löning, J.; Will, T.; Wolff, W.: Automatic document classification : a thorough evaluation of various methods (2000) 0.01
    0.010624804 = product of:
      0.021249607 = sum of:
        0.021249607 = product of:
          0.042499214 = sum of:
            0.042499214 = weight(_text_:g in 5480) [ClassicSimilarity], result of:
              0.042499214 = score(doc=5480,freq=2.0), product of:
                0.17068884 = queryWeight, product of:
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.04544495 = queryNorm
                0.24898648 = fieldWeight in 5480, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5480)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Informationskompetenz - Basiskompetenz in der Informationsgesellschaft: Proceedings des 7. Internationalen Symposiums für Informationswissenschaft (ISI 2000), Hrsg.: G. Knorz u. R. Kuhlen
  18. Heyer, G.; Läuter, M.; Quasthoff, U.; Wolff, C.: Texttechnologische Anwendungen am Beispiel Text Mining (2000) 0.01
    0.010624804 = product of:
      0.021249607 = sum of:
        0.021249607 = product of:
          0.042499214 = sum of:
            0.042499214 = weight(_text_:g in 5565) [ClassicSimilarity], result of:
              0.042499214 = score(doc=5565,freq=2.0), product of:
                0.17068884 = queryWeight, product of:
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.04544495 = queryNorm
                0.24898648 = fieldWeight in 5565, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5565)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  19. Ferret, O.; Grau, B.; Hurault-Plantet, M.; Illouz, G.; Jacquemin, C.; Monceaux, L.; Robba, I.; Vilnat, A.: How NLP can improve question answering (2002) 0.01
    0.010624804 = product of:
      0.021249607 = sum of:
        0.021249607 = product of:
          0.042499214 = sum of:
            0.042499214 = weight(_text_:g in 1850) [ClassicSimilarity], result of:
              0.042499214 = score(doc=1850,freq=2.0), product of:
                0.17068884 = queryWeight, product of:
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.04544495 = queryNorm
                0.24898648 = fieldWeight in 1850, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1850)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  20. Erbach, G.: Sprachdialogsysteme für Telefondienste : Stand der Technik und zukünftige Entwicklungen (2000) 0.01
    0.008854004 = product of:
      0.017708007 = sum of:
        0.017708007 = product of:
          0.035416014 = sum of:
            0.035416014 = weight(_text_:g in 5556) [ClassicSimilarity], result of:
              0.035416014 = score(doc=5556,freq=2.0), product of:
                0.17068884 = queryWeight, product of:
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.04544495 = queryNorm
                0.20748875 = fieldWeight in 5556, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5556)
          0.5 = coord(1/2)
      0.5 = coord(1/2)