Search (10 results, page 1 of 1)

  Active filters:
  • language_ss:"e"
  • theme_ss:"Information"
  • type_ss:"a"
  • type_ss:"el"
  1. Atran, S.; Medin, D.L.; Ross, N.: Evolution and devolution of knowledge : a tale of two biologies (2004) 0.02
    0.021857034 = product of:
      0.04371407 = sum of:
        0.04371407 = sum of:
          0.006302999 = weight(_text_:e in 479) [ClassicSimilarity], result of:
            0.006302999 = score(doc=479,freq=2.0), product of:
              0.06614887 = queryWeight, product of:
                1.43737 = idf(docFreq=28552, maxDocs=44218)
                0.04602077 = queryNorm
              0.09528506 = fieldWeight in 479, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.43737 = idf(docFreq=28552, maxDocs=44218)
                0.046875 = fieldNorm(doc=479)
          0.03741107 = weight(_text_:22 in 479) [ClassicSimilarity], result of:
            0.03741107 = score(doc=479,freq=2.0), product of:
              0.1611569 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04602077 = queryNorm
              0.23214069 = fieldWeight in 479, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=479)
      0.5 = coord(1/2)
    
    Date
    23.01.2022 10:22:18
    Language
    e
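
  The score breakdowns in this listing follow Lucene's ClassicSimilarity explain format: for each matching term, queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm are multiplied, the per-term contributions are summed, and coord(matching clauses / total clauses) scales the sum. Below is a minimal sketch that recomputes result 1's score from the factors reported above; the variable and function names are illustrative, not part of Lucene's API.

      # Recompute result 1's score from the ClassicSimilarity explain tree above.
      import math

      query_norm = 0.04602077            # queryNorm reported above
      field_norm = 0.046875              # fieldNorm(doc=479)
      tf = math.sqrt(2.0)                # tf = sqrt(termFreq), with termFreq = 2.0

      def term_contribution(idf):
          query_weight = idf * query_norm        # e.g. 0.06614887 for _text_:e
          field_weight = tf * idf * field_norm   # e.g. 0.09528506 for _text_:e
          return query_weight * field_weight

      idf_e, idf_22 = 1.43737, 3.5018296         # idf values reported above
      score = (term_contribution(idf_e) + term_contribution(idf_22)) * 0.5  # coord(1/2)
      print(f"{score:.9f}")                      # ~0.021857034, as reported for result 1

  The remaining results follow the same scheme with their own tf, idf, and fieldNorm factors.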
  2. Atran, S.: Basic conceptual domains (1989) 0.00
    0.0031514994 = product of:
      0.006302999 = sum of:
        0.006302999 = product of:
          0.012605998 = sum of:
            0.012605998 = weight(_text_:e in 478) [ClassicSimilarity], result of:
              0.012605998 = score(doc=478,freq=2.0), product of:
                0.06614887 = queryWeight, product of:
                  1.43737 = idf(docFreq=28552, maxDocs=44218)
                  0.04602077 = queryNorm
                0.19057012 = fieldWeight in 478, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.43737 = idf(docFreq=28552, maxDocs=44218)
                  0.09375 = fieldNorm(doc=478)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Language
    e
  3. Maguire, P.; Maguire, R.: Consciousness is data compression (2010) 0.00
    0.0021009997 = product of:
      0.0042019994 = sum of:
        0.0042019994 = product of:
          0.008403999 = sum of:
            0.008403999 = weight(_text_:e in 4972) [ClassicSimilarity], result of:
              0.008403999 = score(doc=4972,freq=2.0), product of:
                0.06614887 = queryWeight, product of:
                  1.43737 = idf(docFreq=28552, maxDocs=44218)
                  0.04602077 = queryNorm
                0.12704675 = fieldWeight in 4972, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.43737 = idf(docFreq=28552, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4972)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Language
    e
  4. Gödert, W.; Lepsky, K.: Reception of externalized knowledge : a constructivistic model based on Popper's Three Worlds and Searle's Collective Intentionality (2019) 0.00
    0.0021009997 = product of:
      0.0042019994 = sum of:
        0.0042019994 = product of:
          0.008403999 = sum of:
            0.008403999 = weight(_text_:e in 5205) [ClassicSimilarity], result of:
              0.008403999 = score(doc=5205,freq=2.0), product of:
                0.06614887 = queryWeight, product of:
                  1.43737 = idf(docFreq=28552, maxDocs=44218)
                  0.04602077 = queryNorm
                0.12704675 = fieldWeight in 5205, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.43737 = idf(docFreq=28552, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5205)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Language
    e
  5. Jackson, R.: Information Literacy and its relationship to cognitive development and reflective judgment (2008) 0.00
    0.0021009997 = product of:
      0.0042019994 = sum of:
        0.0042019994 = product of:
          0.008403999 = sum of:
            0.008403999 = weight(_text_:e in 111) [ClassicSimilarity], result of:
              0.008403999 = score(doc=111,freq=2.0), product of:
                0.06614887 = queryWeight, product of:
                  1.43737 = idf(docFreq=28552, maxDocs=44218)
                  0.04602077 = queryNorm
                0.12704675 = fieldWeight in 111, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.43737 = idf(docFreq=28552, maxDocs=44218)
                  0.0625 = fieldNorm(doc=111)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Language
    e
  6. Bawden, D.; Robinson, L.: Information and the gaining of understanding (2015) 0.00
    0.0018383748 = product of:
      0.0036767495 = sum of:
        0.0036767495 = product of:
          0.007353499 = sum of:
            0.007353499 = weight(_text_:e in 893) [ClassicSimilarity], result of:
              0.007353499 = score(doc=893,freq=2.0), product of:
                0.06614887 = queryWeight, product of:
                  1.43737 = idf(docFreq=28552, maxDocs=44218)
                  0.04602077 = queryNorm
                0.1111659 = fieldWeight in 893, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.43737 = idf(docFreq=28552, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=893)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Language
    e
  7. Kaser, R.T.: If information wants to be free . . . then who's going to pay for it? (2000) 0.00
    0.0013131249 = product of:
      0.0026262498 = sum of:
        0.0026262498 = product of:
          0.0052524996 = sum of:
            0.0052524996 = weight(_text_:e in 1234) [ClassicSimilarity], result of:
              0.0052524996 = score(doc=1234,freq=2.0), product of:
                0.06614887 = queryWeight, product of:
                  1.43737 = idf(docFreq=28552, maxDocs=44218)
                  0.04602077 = queryNorm
                0.07940422 = fieldWeight in 1234, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.43737 = idf(docFreq=28552, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1234)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Language
    e
  8. Standage, T.: Information overload is nothing new (2018) 0.00
    0.0013131249 = product of:
      0.0026262498 = sum of:
        0.0026262498 = product of:
          0.0052524996 = sum of:
            0.0052524996 = weight(_text_:e in 4473) [ClassicSimilarity], result of:
              0.0052524996 = score(doc=4473,freq=2.0), product of:
                0.06614887 = queryWeight, product of:
                  1.43737 = idf(docFreq=28552, maxDocs=44218)
                  0.04602077 = queryNorm
                0.07940422 = fieldWeight in 4473, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.43737 = idf(docFreq=28552, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4473)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Language
    e
  9. Harnett, K.: Machine learning confronts the elephant in the room : a visual prank exposes an Achilles' heel of computer vision systems: Unlike humans, they can't do a double take (2018) 0.00
    0.0010504998 = product of:
      0.0021009997 = sum of:
        0.0021009997 = product of:
          0.0042019994 = sum of:
            0.0042019994 = weight(_text_:e in 4449) [ClassicSimilarity], result of:
              0.0042019994 = score(doc=4449,freq=2.0), product of:
                0.06614887 = queryWeight, product of:
                  1.43737 = idf(docFreq=28552, maxDocs=44218)
                  0.04602077 = queryNorm
                0.063523374 = fieldWeight in 4449, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.43737 = idf(docFreq=28552, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4449)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Language
    e
  10. Crane, G.; Jones, A.: Text, information, knowledge and the evolving record of humanity (2006) 0.00
    9.2851947E-4 = product of:
      0.0018570389 = sum of:
        0.0018570389 = product of:
          0.0037140779 = sum of:
            0.0037140779 = weight(_text_:e in 1182) [ClassicSimilarity], result of:
              0.0037140779 = score(doc=1182,freq=4.0), product of:
                0.06614887 = queryWeight, product of:
                  1.43737 = idf(docFreq=28552, maxDocs=44218)
                  0.04602077 = queryNorm
                0.056147262 = fieldWeight in 1182, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.43737 = idf(docFreq=28552, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1182)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Consider a sentence such as "the current price of tea in China is 35 cents per pound." In a library with millions of books we might find many statements of the above form that we could capture today with relatively simple rules: rather than pursuing every variation of a statement, programs can wait, like predators at a water hole, for their informational prey to reappear in a standard linguistic pattern. We can make inferences from sentences such as "NAME1 born at NAME2 in DATE" that NAME1 more likely than not represents a person and NAME2 a place, and then convert the statement into a proposition about a person born at a given place and time. The changing price of tea in China, pedestrian birth and death dates, or other basic statements may not be truth and beauty in the Phaedrus, but a digital library that could plot the prices of various commodities in different markets over time, plot the various lifetimes of individuals, or extract and classify many events would be very useful. Services such as the Syllabus Finder and H-Bot (which Dan Cohen describes elsewhere in this issue of D-Lib) represent examples of information extraction already in use. H-Bot, in particular, builds on our evolving ability to extract information from very large corpora such as the billions of web pages available through the Google API.
     Aside from identifying higher-order statements, however, users also want to search and browse named entities: they want to read about "C. P. E. Bach" rather than his father "Johann Sebastian", or about "Cambridge, Maryland" without hearing about "Cambridge, Massachusetts", Cambridge in the UK, or any of the other Cambridges scattered around the world. Named entity identification is a well-established area with an ongoing literature. The Natural Language Processing Research Group at the University of Sheffield has developed its open source General Architecture for Text Engineering (GATE) for years, while IBM's Unstructured Information Management Architecture (UIMA) is "available as open source software to provide a common foundation for industry and academia." Powerful tools are thus freely available, and more demanding users can draw upon the published literature to develop their own systems. Major search engines such as Google and Yahoo also integrate increasingly sophisticated tools to categorize and identify places. The software resources are rich and expanding.
     The reference works on which these systems depend, however, are ill-suited for historical analysis. First, simple gazetteers and similar authority lists quickly grow too big for useful information extraction. They provide us with potential entities against which to match textual references, but existing electronic reference works assume that human readers can use their knowledge of geography and of the immediate context to pick the right Boston from the Bostons in the Getty Thesaurus of Geographic Names (TGN); with the crucial exception of geographic location, however, the TGN records do not provide any machine-readable clues: we cannot tell which Bostons are large or small. If we are analyzing a document published in 1818, we cannot filter out those places that did not yet exist or that had different names: "Jefferson Davis" was not the name of a parish in Louisiana (tgn,2000880) or of a county in Mississippi (tgn,2001118) until after the Civil War.
    Language
    e
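
  The extraction approach described in the abstract of result 10 (waiting for statements in a standard linguistic form such as "NAME1 born at NAME2 in DATE" and converting them into propositions about people, places, and dates) can be illustrated in a few lines. This is a minimal sketch under assumed patterns, not the Syllabus Finder or H-Bot implementation; the regular expression and field names are illustrative only.

      # Pattern-based extraction sketch: match "NAME1 (was) born at/in NAME2 in YEAR"
      # and turn each hit into a structured proposition.
      import re

      BORN_AT = re.compile(
          r"(?P<person>[A-Z][\w.'-]+(?: [A-Z][\w.'-]+)*)"   # NAME1: run of capitalized words
          r" (?:was )?born (?:at|in) "
          r"(?P<place>[A-Z][\w.'-]+(?: [A-Z][\w.'-]+)*)"    # NAME2: run of capitalized words
          r" (?:in|on) (?P<year>\d{3,4})"                   # DATE: a bare year, for simplicity
      )

      def extract_birth_facts(text):
          """Return person/birthplace/year propositions found in free text."""
          return [
              {"person": m["person"], "birthplace": m["place"], "year": int(m["year"])}
              for m in BORN_AT.finditer(text)
          ]

      sample = "Johann Sebastian Bach was born at Eisenach in 1685."
      print(extract_birth_facts(sample))
      # [{'person': 'Johann Sebastian Bach', 'birthplace': 'Eisenach', 'year': 1685}]

  A real service would need many such patterns, tolerance for variation, and a gazetteer or authority file to disambiguate the extracted names; the abstract's point is that even this crude wait-at-the-water-hole strategy yields useful structured data at scale.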