Search (13 results, page 1 of 1)

  • theme_ss:"Information"
  • type_ss:"el"
  1. Crane, G.; Jones, A.: Text, information, knowledge and the evolving record of humanity (2006) 0.02
    0.018231558 = product of:
      0.036463115 = sum of:
        0.02586502 = weight(_text_:data in 1182) [ClassicSimilarity], result of:
          0.02586502 = score(doc=1182,freq=8.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.17468026 = fieldWeight in 1182, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1182)
        0.010598094 = product of:
          0.021196188 = sum of:
            0.021196188 = weight(_text_:processing in 1182) [ClassicSimilarity], result of:
              0.021196188 = score(doc=1182,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.111815326 = fieldWeight in 1182, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1182)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
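    The explanation tree above is standard Lucene "explain" output for ClassicSimilarity (TF-IDF) scoring. As a minimal illustration only, and not part of the retrieval system itself, the following Python sketch reproduces the arithmetic for this first result; the constants are copied from the explanation above and the function name is ours.

    import math

    def classic_similarity(freq, idf, query_norm, field_norm):
        # One term's contribution: queryWeight * fieldWeight,
        # where queryWeight = idf * queryNorm and fieldWeight = sqrt(freq) * idf * fieldNorm.
        query_weight = idf * query_norm
        field_weight = math.sqrt(freq) * idf * field_norm
        return query_weight * field_weight

    # Constants copied from the explanation of result 1 (doc 1182).
    query_norm = 0.046827413
    score_data = classic_similarity(8.0, 3.1620505, query_norm, 0.01953125)       # ~0.02586502
    score_processing = classic_similarity(2.0, 4.048147, query_norm, 0.01953125)  # ~0.021196188

    # coord() factors scale by the fraction of query clauses that matched.
    total = (score_data + score_processing * 0.5) * 0.5
    print(total)  # ~0.018231558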
    
    Abstract
    Consider a sentence such as "the current price of tea in China is 35 cents per pound." In a library with millions of books we might find many statements of the above form that we could capture today with relatively simple rules: rather than pursuing every variation of a statement, programs can wait, like predators at a water hole, for their informational prey to reappear in a standard linguistic pattern. We can make inferences from sentences such as "NAME1 born at NAME2 in DATE" that NAME1 more likely than not represents a person and NAME2 a place, and then convert the statement into a proposition about a person born at a given place and time. The changing price of tea in China, pedestrian birth and death dates, or other basic statements may not be truth and beauty in the Phaedrus, but a digital library that could plot the prices of various commodities in different markets over time, plot the various lifetimes of individuals, or extract and classify many events would be very useful.
    Services such as the Syllabus Finder and H-Bot (which Dan Cohen describes elsewhere in this issue of D-Lib) represent examples of information extraction already in use. H-Bot, in particular, builds on our evolving ability to extract information from very large corpora such as the billions of web pages available through the Google API. Aside from identifying higher-order statements, however, users also want to search and browse named entities: they want to read about "C. P. E. Bach" rather than his father "Johann Sebastian", or about "Cambridge, Maryland" without hearing about "Cambridge, Massachusetts", Cambridge in the UK, or any of the other Cambridges scattered around the world. Named entity identification is a well-established area with an ongoing literature. The Natural Language Processing Research Group at the University of Sheffield has developed its open source General Architecture for Text Engineering (GATE) for years, while IBM's Unstructured Information Management Architecture (UIMA) is "available as open source software to provide a common foundation for industry and academia." Powerful tools are thus freely available, and more demanding users can draw upon the published literature to develop their own systems. Major search engines such as Google and Yahoo also integrate increasingly sophisticated tools to categorize and identify places. The software resources are rich and expanding.
    The reference works on which these systems depend, however, are ill-suited for historical analysis. First, simple gazetteers and similar authority lists quickly grow too big for useful information extraction. They provide us with potential entities against which to match textual references, but existing electronic reference works assume that human readers can use their knowledge of geography and of the immediate context to pick the right Boston from the Bostons in the Getty Thesaurus of Geographic Names (TGN). With the crucial exception of geographic location, however, the TGN records do not provide any machine-readable clues: we cannot tell which Bostons are large or small. If we are analyzing a document published in 1818, we cannot filter out those places that did not yet exist or that had different names: "Jefferson Davis" is not the name of a parish in Louisiana (tgn,2000880) or a county in Mississippi (tgn,2001118) until after the Civil War.
    Although the Alexandria Digital Library provides far richer data than the TGN (5.9 vs. 1.3 million names), its added size lowers, rather than increases, the accuracy of most geographic name identification systems for historical documents: most of the extra 4.6 million names cover low-frequency entities that rarely occur in any particular corpus. The TGN is sufficiently comprehensive to provide quite enough noise: we find place names that are used over and over (there are almost one hundred Washingtons) and that are semantically ambiguous (e.g., is Washington a person or a place?). Comprehensive knowledge sources emphasize recall but lower precision. We need data with which to determine which "Tribune" or "John Brown" a particular passage denotes.
    Second, and paradoxically, our reference works may not be comprehensive enough. Human actors come and go over time. Organizations appear and vanish. Even places can change their names or vanish. The TGN does associate the obsolete name Siam with the nation of Thailand (tgn,1000142) - but also with towns named Siam in Iowa (tgn,2035651), Tennessee (tgn,2101519), and Ohio (tgn,2662003). Prussia appears, but only as a general region (tgn,7016786), with no indication when or whether it was a sovereign nation. And even if places do point to the same object over time, that object may have very different significance at different times: in the foundational works of Western historiography, Herodotus reminds us that the great cities of the past may be small today, and the small cities of today great tomorrow (Hdt. 1.5), while Thucydides stresses that we cannot estimate the past significance of a place by its appearance today (Thuc. 1.10). In other words, we need to know the population figures for the various Washingtons in 1870 if we are analyzing documents from 1870.
    The foundations have been laid for reference works that provide machine-actionable information about entities at particular times in history. The Alexandria Digital Library Gazetteer Content Standard represents a sophisticated framework with which to create such resources: places can be associated with temporal information about their foundation (e.g., Washington, DC, founded on 16 July 1790), changes in names for the same location (e.g., Saint Petersburg to Leningrad and back again), population figures at various times, and similar historically contingent data. But while we have the software and the data structures, we do not yet have substantial amounts of historical content such as plentiful digital gazetteers, encyclopedias, lexica, grammars and other reference works to illustrate many periods, and, even where such content exists, those resources may not be in a useful form: raw OCR output of a complex lexicon or gazetteer may have so many errors and capture so little of the underlying structure that the digital resource is useless as a knowledge base. Put another way, human beings are still much better at reading and interpreting the contents of page images than machines are. While people, places, and dates are probably the most important core entities, we will find a growing set of objects that we need to identify and track across collections, and each of these categories of objects will require its own knowledge sources. The following section enumerates and briefly describes some existing categories of documents that we need to mine for knowledge. This brief survey focuses on the format of print sources (e.g., highly structured textual "database" vs. unstructured text) to illustrate some of the challenges involved in converting our published knowledge into semantically annotated, machine-actionable form.
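    The "predators at a water hole" strategy sketched in the abstract above is, at bottom, pattern matching over text. Purely as a hedged illustration (not the authors' actual system, and far cruder than tools such as GATE or UIMA), a minimal sketch of the idea might look like the following; the pattern and the example sentence are our own inventions.

    import re

    # A crude "NAME born at PLACE in DATE" pattern of the kind the abstract describes.
    BORN_PATTERN = re.compile(
        r"(?P<person>[A-Z][\w.\s]+?) (?:was )?born (?:at|in) (?P<place>[A-Z][\w\s]+?) in (?P<date>\d{4})"
    )

    def extract_birth_facts(text):
        # Turn matching sentences into (person, place, year) propositions.
        return [(m.group("person"), m.group("place"), m.group("date"))
                for m in BORN_PATTERN.finditer(text)]

    # Invented example sentence.
    print(extract_birth_facts("Carl Philipp Emanuel Bach was born at Weimar in 1714."))
    # -> [('Carl Philipp Emanuel Bach', 'Weimar', '1714')]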
  2. Maguire, P.; Maguire, R.: Consciousness is data compression (2010) 0.02
    0.017919812 = product of:
      0.07167925 = sum of:
        0.07167925 = weight(_text_:data in 4972) [ClassicSimilarity], result of:
          0.07167925 = score(doc=4972,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.48408815 = fieldWeight in 4972, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=4972)
      0.25 = coord(1/4)
    
    Abstract
    In this article we advance the conjecture that conscious awareness is equivalent to data compression. Algorithmic information theory supports the assertion that all forms of understanding are contingent on compression (Chaitin, 2007). Here, we argue that the experience people refer to as consciousness is the particular form of understanding that the brain provides. We therefore propose that the degree of consciousness of a system can be measured in terms of the amount of data compression it carries out.
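    The closing sentence proposes measuring a system's degree of consciousness by the amount of data compression it carries out. As a loose illustration only (not the authors' metric), a compression ratio computed with an off-the-shelf compressor gives one crude operational reading of "amount of compression":

    import os
    import zlib

    def compression_ratio(data: bytes) -> float:
        # Crude proxy: original size divided by compressed size.
        return len(data) / len(zlib.compress(data, 9))

    print(compression_ratio(b"abab" * 1000))    # highly regular input: large ratio
    print(compression_ratio(os.urandom(4000)))  # incompressible input: ratio near 1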
  3. Jörs, B.: Über den Grundbegriff der "Information" ist weiter zu reden und über die Existenzberechtigung der Disziplin auch : Wie man mit "Information" umgehen sollte: Das Beispiel der Medienforschung (2020) 0.01
    0.013439858 = product of:
      0.053759433 = sum of:
        0.053759433 = weight(_text_:data in 5911) [ClassicSimilarity], result of:
          0.053759433 = score(doc=5911,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.3630661 = fieldWeight in 5911, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=5911)
      0.25 = coord(1/4)
    
    Content
    Continuation of: Über den Grundbegriff der "Information" ist weiter zu reden und über die Existenzberechtigung der Disziplin auch: die Kapitulation der Informationswissenschaft vor dem eigenen Basisbegriff. At: Open Password. 2020, Nr.759 vom 25. Mai 2020 [https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=Wzk0LCIxODU1NzhmZDQ2ZDAiLDAsMCw4NSwxXQ]. Further continued as: Informationskompetenz in den Bibliotheken und in der Informationswissenschaft - Das Verlangen nach einer verständlichen Wissenschaftssprache. In: Open Password. 2020, Nr.784 vom 09. Juli 2020. At: https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=WzExNiwiM2Y4YjgwNDBiM2QxIiwwLDAsMTA2LDFd.
    Source
    Open Password. 2020, Nr.777 vom 29. Juni 2020 [https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=Wzk0LCIxODU1NzhmZDQ2ZDAiLDAsMCw4NSwxXQ]
  4. Jörs, B.: Über den Grundbegriff der "Information" ist weiter zu reden und über die Existenzberechtigung der Disziplin auch : die Kapitulation der Informationswissenschaft vor dem eigenen Basisbegriff (2020) 0.01
    0.012802532 = product of:
      0.051210128 = sum of:
        0.051210128 = weight(_text_:data in 326) [ClassicSimilarity], result of:
          0.051210128 = score(doc=326,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.34584928 = fieldWeight in 326, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=326)
      0.25 = coord(1/4)
    
    Content
    Continued as: Wie man mit Information umgehen sollte - Das Beispiel der Medienforschung. At: Open Password. 2020, Nr.777 vom 29. Juni 2020 [https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=Wzk0LCIxODU1NzhmZDQ2ZDAiLDAsMCw4NSwxXQ].
    Source
    Open Password. 2020, Nr.759 vom 25. Mai 2020 [https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=Wzk0LCIxODU1NzhmZDQ2ZDAiLDAsMCw4NSwxXQ]
  5. Jörs, B.: Über den Grundbegriff der "Information" ist weiter zu reden und über die Existenzberechtigung der Disziplin auch : das Verlangen nach einer verständlichen Wissenschaftssprache (2020) 0.01
    0.010973599 = product of:
      0.043894395 = sum of:
        0.043894395 = weight(_text_:data in 5684) [ClassicSimilarity], result of:
          0.043894395 = score(doc=5684,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.29644224 = fieldWeight in 5684, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=5684)
      0.25 = coord(1/4)
    
    Content
    Continuation of: Über den Grundbegriff der "Information" ist weiter zu reden und über die Existenzberechtigung der Disziplin auch: Wie man mit "Information" umgehen sollte: Das Beispiel der Medienforschung. At: Open Password. 2020, Nr.777 vom 29. Juni 2020 [https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=Wzk0LCIxODU1NzhmZDQ2ZDAiLDAsMCw4NSwxXQ].
    Source
    Open Password. 2020, Nr.784 vom 09. Juli 2020 [https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=WzExNiwiM2Y4YjgwNDBiM2QxIiwwLDAsMTA2LDFd]
  6. Bawden, D.; Robinson, L.: Information and the gaining of understanding (2015) 0.01
    0.009052756 = product of:
      0.036211025 = sum of:
        0.036211025 = weight(_text_:data in 893) [ClassicSimilarity], result of:
          0.036211025 = score(doc=893,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24455236 = fieldWeight in 893, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=893)
      0.25 = coord(1/4)
    
    Abstract
    It is suggested that, in addition to data, information and knowledge, the information sciences should focus on understanding, understood as a higher-order knowledge, with coherent and explanatory potential. The limited ways in which understanding has been addressed in the design of information systems, in studies of information behaviour, in formulations of information literacy and in impact studies are briefly reviewed, and future prospects considered. The paper is an extended version of a keynote presentation given at the i3 conference in June 2015.
  7. Jörs, B.: ¬Die Informationswissenschaft ist tot, es lebe die Datenwissenschaft (2019) 0.01
    0.008959906 = product of:
      0.035839625 = sum of:
        0.035839625 = weight(_text_:data in 5879) [ClassicSimilarity], result of:
          0.035839625 = score(doc=5879,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24204408 = fieldWeight in 5879, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=5879)
      0.25 = coord(1/4)
    
    Abstract
    "Haben die "Daten" bzw. die "Datenwissenschaft" (Data Science) die "Information" bzw. die Informationswissenschaft obsolet gemacht? Hat die "Data Science" mit ihren KI-gestützten Instrumenten die ökonomische und technische Herrschaft über die "Daten" und damit auch über die "Informationen" und das "Wissen" übernommen? Die meist in der Informatik/Mathematik beheimatete "Data Science" hat die wissenschaftliche Führungsrolle übernommen, "Daten" in "Informationen" und "Wissen" zu transferieren." "Der Wandel von analoger zu digitaler Informationsverarbeitung hat die Informationswissenschaft im Grunde obsolet gemacht. Heute steht die Befassung mit der Kategorie "Daten" und deren kausaler Zusammenhang mit der "Wissens"-Generierung (Erkennung von Mustern und Zusammenhängen, Prognosefähigkeit usw.) und neuronalen Verarbeitung und Speicherung im Zentrum der Forschung." "Wäre die Wissenstreppe nach North auch für die Informationswissenschaft gültig, würde sie erkennen, dass die Befassung mit "Daten" und die durch Vorwissen ermöglichte Interpretation von "Daten" erst die Voraussetzungen schaffen, "Informationen" als "kontextualisierte Daten" zu verstehen, um "Informationen" strukturieren, darstellen, erzeugen und suchen zu können."
  8. Gödert, W.; Lepsky, K.: Reception of externalized knowledge : a constructivistic model based on Popper's Three Worlds and Searle's Collective Intentionality (2019) 0.01
    0.008478476 = product of:
      0.033913903 = sum of:
        0.033913903 = product of:
          0.067827806 = sum of:
            0.067827806 = weight(_text_:processing in 5205) [ClassicSimilarity], result of:
              0.067827806 = score(doc=5205,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.35780904 = fieldWeight in 5205, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5205)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    We provide a model for the reception of knowledge from externalized information sources. The model is based on a cognitive understanding of information processing and draws on ideas of an exchange of information in communication processes. Karl Popper's three-world theory, with its orientation toward falsifiable scientific knowledge, is extended by John Searle's concept of collective intentionality. This allows a consistent description of the externalization and reception of knowledge, including scientific knowledge as well as everyday knowledge.
  9. Hapke, T.: Zu einer ganzheitlichen Informationskompetenz gehört eine kritische Wissenschaftskompetenz : Informationskompetenz und Demokratie (2020) 0.01
    0.0077595054 = product of:
      0.031038022 = sum of:
        0.031038022 = weight(_text_:data in 5685) [ClassicSimilarity], result of:
          0.031038022 = score(doc=5685,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2096163 = fieldWeight in 5685, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=5685)
      0.25 = coord(1/4)
    
    Source
    Open Password. 2020, Nr.715 vom 03. März 2020 [https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=WzQzLCJkMTY5ZDA2ZGRhMDgiLDYyNjQsIjEyMXR1ZWJudW5zMGtrZ2djZ3d3ZzQ4MHc4ODBrNHNjIiwzNiwwXQ]
  10. Schmid, F.: »Information« ist Syntax, nicht Sinn : Quasisakrale Weltformel (2019) 0.01
    0.0054867994 = product of:
      0.021947198 = sum of:
        0.021947198 = weight(_text_:data in 5289) [ClassicSimilarity], result of:
          0.021947198 = score(doc=5289,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.14822112 = fieldWeight in 5289, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0234375 = fieldNorm(doc=5289)
      0.25 = coord(1/4)
    
    Content
    Yet although this has been played out a thousand times in science fiction, there is still no computer that can imitate humans. Computing power keeps growing, but "the computers are nevertheless as dumb as before (...) They process information according to a particular syntax entered by humans. They can do nothing with meaning, with semantics, with the results," writes Feustel. The classic example is the android Data in the series "Star Trek", who does not understand jokes no matter how hard he tries - this is how the culture industry staged this reservation 30 years ago. Today, by contrast, plots like that of Luc Besson's film "Lucy" predominate, in which human and machine, as two kinds of information flow, are compatible in principle. In view of big-data streams and the "deep learning" processes of the much-invoked algorithms, the hope - or fear - is voiced everywhere that an autonomous intelligence could suddenly emerge on the net, one that no longer merely processes syntactically but develops "semantic" consciousness. Information could, as it were, come alive and rise like the genie from the bottle.
  11. Atran, S.; Medin, D.L.; Ross, N.: Evolution and devolution of knowledge : a tale of two biologies (2004) 0.00
    0.0047583506 = product of:
      0.019033402 = sum of:
        0.019033402 = product of:
          0.038066804 = sum of:
            0.038066804 = weight(_text_:22 in 479) [ClassicSimilarity], result of:
              0.038066804 = score(doc=479,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.23214069 = fieldWeight in 479, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=479)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    23. 1.2022 10:22:18
  12. Harnett, K.: Machine learning confronts the elephant in the room : a visual prank exposes an Achilles' heel of computer vision systems: Unlike humans, they can't do a double take (2018) 0.00
    0.004239238 = product of:
      0.016956951 = sum of:
        0.016956951 = product of:
          0.033913903 = sum of:
            0.033913903 = weight(_text_:processing in 4449) [ClassicSimilarity], result of:
              0.033913903 = score(doc=4449,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.17890452 = fieldWeight in 4449, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4449)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    In a new study, computer scientists found that artificial intelligence systems fail a vision test a child could accomplish with ease. "It's a clever and important study that reminds us that 'deep learning' isn't really that deep," said Gary Marcus, a neuroscientist at New York University who was not affiliated with the work. The result takes place in the field of computer vision, where artificial intelligence systems attempt to detect and categorize objects. They might try to find all the pedestrians in a street scene, or just distinguish a bird from a bicycle (which is a notoriously difficult task). The stakes are high: As computers take over critical tasks like automated surveillance and autonomous driving, we'll want their visual processing to be at least as good as the human eyes they're replacing. It won't be easy. The new work accentuates the sophistication of human vision - and the challenge of building systems that mimic it. In the study, the researchers presented a computer vision system with a living room scene. The system processed it well. It correctly identified a chair, a person, books on a shelf. Then the researchers introduced an anomalous object into the scene - an image of an elephant. The elephant's mere presence caused the system to forget itself: Suddenly it started calling a chair a couch and the elephant a chair, while turning completely blind to other objects it had previously seen. Researchers are still trying to understand exactly why computer vision systems get tripped up so easily, but they have a good guess. It has to do with an ability humans have that AI lacks: the ability to understand when a scene is confusing and thus go back for a second glance.
  13. Freyberg, L.: ¬Die Lesbarkeit der Welt : Rezension zu 'The Concept of Information in Library and Information Science. A Field in Search of Its Boundaries: 8 Short Comments Concerning Information'. In: Cybernetics and Human Knowing. Vol. 22 (2015), 1, 57-80. Kurzartikel von Luciano Floridi, Søren Brier, Torkild Thellefsen, Martin Thellefsen, Bent Sørensen, Birger Hjørland, Brenda Dervin, Ken Herold, Per Hasle und Michael Buckland (2016) 0.00
    0.003172234 = product of:
      0.012688936 = sum of:
        0.012688936 = product of:
          0.025377871 = sum of:
            0.025377871 = weight(_text_:22 in 3335) [ClassicSimilarity], result of:
              0.025377871 = score(doc=3335,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.15476047 = fieldWeight in 3335, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3335)
          0.5 = coord(1/2)
      0.25 = coord(1/4)