Search (6 results, page 1 of 1)

  • language_ss:"e"
  • theme_ss:"Information"
  • type_ss:"el"
  1. Gigliotti, C.: What children and animals know that we don't (1995) 0.04
    0.043190803 = product of:
      0.08638161 = sum of:
        0.08638161 = product of:
          0.1295724 = sum of:
            0.07055833 = weight(_text_:i in 3290) [ClassicSimilarity], result of:
              0.07055833 = score(doc=3290,freq=2.0), product of:
                0.16931784 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.044891298 = queryNorm
                0.41672117 = fieldWeight in 3290, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3290)
            0.05901407 = weight(_text_:c in 3290) [ClassicSimilarity], result of:
              0.05901407 = score(doc=3290,freq=2.0), product of:
                0.15484828 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.044891298 = queryNorm
                0.381109 = fieldWeight in 3290, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3290)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    "In this essay, I offer several significant examples of research that deal with animals' and children's perception. These examples come from social science, cognitive ethology, and several camps in cognitive science"
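The score breakdown above follows Lucene's ClassicSimilarity (TF-IDF). As a minimal sketch, the reported factors for the top result can be recombined like this; `classic_score` is a hypothetical helper, not Lucene's actual API:

```python
import math

def classic_score(terms, coord_inner, coord_outer):
    """Recombine a ClassicSimilarity explain tree.

    Each term contributes queryWeight * fieldWeight, where
    queryWeight = idf * queryNorm and
    fieldWeight = sqrt(freq) * idf * fieldNorm.
    The per-term sum is then scaled by the coord factors.
    """
    total = 0.0
    for freq, idf, query_norm, field_norm in terms:
        query_weight = idf * query_norm
        field_weight = math.sqrt(freq) * idf * field_norm
        total += query_weight * field_weight
    return total * coord_inner * coord_outer

# Factors reported for result 1 (doc 3290): terms _text_:i and _text_:c
terms = [
    (2.0, 3.7717297, 0.044891298, 0.078125),  # _text_:i
    (2.0, 3.4494052, 0.044891298, 0.078125),  # _text_:c
]
score = classic_score(terms, coord_inner=2 / 3, coord_outer=0.5)
```

The result agrees with the reported 0.043190803 up to float32 rounding inside Lucene.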
  2. Kaser, R.T.: If information wants to be free . . . then who's going to pay for it? (2000) 0.02
    0.019501295 = product of:
      0.03900259 = sum of:
        0.03900259 = product of:
          0.11700776 = sum of:
            0.11700776 = weight(_text_:i in 1234) [ClassicSimilarity], result of:
              0.11700776 = score(doc=1234,freq=22.0), product of:
                0.16931784 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.044891298 = queryNorm
                0.6910539 = fieldWeight in 1234, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1234)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    I have become "brutally honest" of late, at least according to one listener who heard my remarks during a recent whistle-stop speaking tour of publishing conventions. This comment caught me a little off guard. Not that I haven't always been frank, but I do try never to be brutal. The truth, I guess, can be painful, even if the intention of the teller is simply objectivity. This paper is based on a "brutally honest" talk I have been giving to publishers, first, in February, to the Association of American Publishers' Professional and Scholarly Publishing Division, at which point I was calling the piece, "The Illusion of Free Information." It was this initial rendition that led to the invitation to publish something here. Since then I've been working on the talk. I gave a second version of it in March to the assembly of the American Society of Information Dissemination Centers, where I called it, "When Sectors Clash: Public Access vs. Private Interest." And, most recently, I gave yet a third version of it to the governing board of the American Institute of Physics. This time I called it: "The Future of Society Publishing." The notion of free information, our government's proper role in distributing free information, and the future of scholarly publishing in a world of free information . . . these are the issues that are floating around in my head. My goal here is to tell you where my thinking is only at this moment, for I reserve the right to continue thinking and developing new permutations on this mentally challenging theme.
  3. Atran, S.; Medin, D.L.; Ross, N.: Evolution and devolution of knowledge : a tale of two biologies (2004) 0.01
    0.006082151 = product of:
      0.012164302 = sum of:
        0.012164302 = product of:
          0.036492907 = sum of:
            0.036492907 = weight(_text_:22 in 479) [ClassicSimilarity], result of:
              0.036492907 = score(doc=479,freq=2.0), product of:
                0.15720168 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044891298 = queryNorm
                0.23214069 = fieldWeight in 479, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=479)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    23.1.2022 10:22:18
  4. Enmark, R.: The non-existent point : on the subject of defining library and information science and the concept of information (1998) 0.01
    0.0058798613 = product of:
      0.011759723 = sum of:
        0.011759723 = product of:
          0.035279166 = sum of:
            0.035279166 = weight(_text_:i in 2027) [ClassicSimilarity], result of:
              0.035279166 = score(doc=2027,freq=2.0), product of:
                0.16931784 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.044891298 = queryNorm
                0.20836058 = fieldWeight in 2027, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2027)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    The primary purpose of this essay is the following: to criticise a discipline-defining concept of information that has its point of departure in the uncomplicated cognitive metaphor's 'subject/object relationship'. In my understanding, the cognitive channel metaphor is equal to the sender/receiver model, with the addition of the receiver's understanding, as both physical and mental aspects are used in one and the same metaphor: the 'subject', so to speak, meets the 'object'. In this essay I will argue: (1) that the point at which the 'subject' specifically meets the 'object' does not exist; (2) that the study of what the non-existent point symbolises is impossible to describe on a general level without becoming trivial; (3) that it is not possible to find an obvious relationship between the sender's statement and the receiver's understanding; and (4) that the study of the 'subject' and the study of the 'object' exist in different methodological and theoretical dimensions. This leads to the conclusion that the cognitive channel metaphor's definition of the discipline of library and information science should preferably be abandoned, and that this should take place in such a way (1) that consideration is given to the empirical research that is carried out in library and information science and (2) that the research distances itself from the profession's legitimate ambitions for usefulness.
  5. Allo, P.; Baumgaertner, B.; D'Alfonso, S.; Fresco, N.; Gobbo, F.; Grubaugh, C.; Iliadis, A.; Illari, P.; Kerr, E.; Primiero, G.; Russo, F.; Schulz, C.; Taddeo, M.; Turilli, M.; Vakarelov, O.; Zenil, H.: The philosophy of information : an introduction (2013) 0.00
    0.0041729254 = product of:
      0.008345851 = sum of:
        0.008345851 = product of:
          0.025037551 = sum of:
            0.025037551 = weight(_text_:c in 3380) [ClassicSimilarity], result of:
              0.025037551 = score(doc=3380,freq=4.0), product of:
                0.15484828 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.044891298 = queryNorm
                0.16169086 = fieldWeight in 3380, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3380)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  6. Crane, G.; Jones, A.: Text, information, knowledge and the evolving record of humanity (2006) 0.00
    0.0024589198 = product of:
      0.0049178395 = sum of:
        0.0049178395 = product of:
          0.014753518 = sum of:
            0.014753518 = weight(_text_:c in 1182) [ClassicSimilarity], result of:
              0.014753518 = score(doc=1182,freq=2.0), product of:
                0.15484828 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.044891298 = queryNorm
                0.09527725 = fieldWeight in 1182, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1182)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    Consider a sentence such as "the current price of tea in China is 35 cents per pound." In a library with millions of books we might find many statements of the above form that we could capture today with relatively simple rules: rather than pursuing every variation of a statement, programs can wait, like predators at a water hole, for their informational prey to reappear in a standard linguistic pattern. We can make inferences from sentences such as "NAME1 born at NAME2 in DATE" that NAME1 more likely than not represents a person and NAME2 a place and then convert the statement into a proposition about a person born at a given place and time. The changing price of tea in China, pedestrian birth and death dates, or other basic statements may not be truth and beauty in the Phaedrus, but a digital library that could plot the prices of various commodities in different markets over time, plot the various lifetimes of individuals, or extract and classify many events would be very useful. Services such as the Syllabus Finder and H-Bot (which Dan Cohen describes elsewhere in this issue of D-Lib) represent examples of information extraction already in use. H-Bot, in particular, builds on our evolving ability to extract information from very large corpora such as the billions of web pages available through the Google API. Aside from identifying higher order statements, however, users also want to search and browse named entities: they want to read about "C. P. E. Bach" rather than his father "Johann Sebastian" or about "Cambridge, Maryland", without hearing about "Cambridge, Massachusetts", Cambridge in the UK or any of the other Cambridges scattered around the world. Named entity identification is a well-established area with an ongoing literature.
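The "predators at a water hole" idea can be sketched with a single regular expression for the "NAME1 born at NAME2 in DATE" pattern the abstract mentions. The pattern and helper below are illustrative only, not the systems the authors describe:

```python
import re

# Illustrative pattern for statements like "NAME1 born at NAME2 in DATE";
# capitalized word runs stand in for names, a four-digit year for the date.
BIRTH_PATTERN = re.compile(
    r"(?P<person>[A-Z][a-z]+(?: [A-Z][a-z]+)*) born at "
    r"(?P<place>[A-Z][a-z]+(?: [A-Z][a-z]+)*) in (?P<date>\d{4})"
)

def extract_births(text):
    """Convert matching sentences into (person, place, year) propositions."""
    return [
        (m.group("person"), m.group("place"), int(m.group("date")))
        for m in BIRTH_PATTERN.finditer(text)
    ]

facts = extract_births("Johann Sebastian Bach born at Eisenach in 1685.")
```

A real system would of course need many such patterns plus disambiguation, which is exactly the named-entity problem the paragraph goes on to raise.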
The Natural Language Processing Research Group at the University of Sheffield has developed its open source General Architecture for Text Engineering (GATE) for years, while IBM's Unstructured Information Management Architecture (UIMA) is "available as open source software to provide a common foundation for industry and academia." Powerful tools are thus freely available and more demanding users can draw upon published literature to develop their own systems. Major search engines such as Google and Yahoo also integrate increasingly sophisticated tools to categorize and identify places. The software resources are rich and expanding. The reference works on which these systems depend, however, are ill-suited for historical analysis. First, simple gazetteers and similar authority lists quickly grow too big for useful information extraction. They provide us with potential entities against which to match textual references, but existing electronic reference works assume that human readers can use their knowledge of geography and of the immediate context to pick the right Boston from the Bostons in the Getty Thesaurus of Geographic Names (TGN); with the crucial exception of geographic location, the TGN records do not provide any machine readable clues: we cannot tell which Bostons are large or small. If we are analyzing a document published in 1818, we cannot filter out those places that did not yet exist or that had different names: "Jefferson Davis" is not the name of a parish in Louisiana (tgn,2000880) or a county in Mississippi (tgn,2001118) until after the Civil War.
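The temporal filtering the authors find missing could look like the sketch below. The `founded` field and the record values are invented for illustration; as the abstract notes, real TGN records carry no such machine-readable dates:

```python
from dataclasses import dataclass

@dataclass
class Place:
    name: str
    tgn_id: str
    founded: int  # year the name came into use; hypothetical field

# Invented gazetteer entries; the Jefferson Davis TGN id comes from the
# abstract, the Boston id and both dates are made up for the example.
GAZETTEER = [
    Place("Jefferson Davis Parish", "tgn,2000880", 1912),
    Place("Boston", "tgn,7013445", 1630),
]

def candidates(name, document_year):
    """Keep only places whose name already existed when the document was written."""
    return [
        p for p in GAZETTEER
        if p.name.startswith(name) and p.founded <= document_year
    ]

# A document published in 1818 cannot refer to Jefferson Davis Parish
hits = candidates("Jefferson Davis", 1818)
```

With such dates attached, the 1818 document from the abstract would simply never match the post-Civil War parish.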