Search (6 results, page 1 of 1)

  • language_ss:"e"
  • theme_ss:"Information"
  • type_ss:"el"
  1. Allo, P.; Baumgaertner, B.; D'Alfonso, S.; Fresco, N.; Gobbo, F.; Grubaugh, C.; Iliadis, A.; Illari, P.; Kerr, E.; Primiero, G.; Russo, F.; Schulz, C.; Taddeo, M.; Turilli, M.; Vakarelov, O.; Zenil, H.: The philosophy of information : an introduction (2013) 0.03
    0.028360605 = sum of:
      0.008989646 = product of:
        0.035958584 = sum of:
          0.035958584 = weight(_text_:authors in 3380) [ClassicSimilarity], result of:
            0.035958584 = score(doc=3380,freq=2.0), product of:
              0.23797122 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052200247 = queryNorm
              0.15110476 = fieldWeight in 3380, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0234375 = fieldNorm(doc=3380)
        0.25 = coord(1/4)
      0.01937096 = product of:
        0.03874192 = sum of:
          0.03874192 = weight(_text_:p in 3380) [ClassicSimilarity], result of:
            0.03874192 = score(doc=3380,freq=6.0), product of:
              0.18768665 = queryWeight, product of:
                3.5955126 = idf(docFreq=3298, maxDocs=44218)
                0.052200247 = queryNorm
              0.2064181 = fieldWeight in 3380, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.5955126 = idf(docFreq=3298, maxDocs=44218)
                0.0234375 = fieldNorm(doc=3380)
        0.5 = coord(1/2)
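The score breakdowns shown for each hit follow Lucene's ClassicSimilarity TF-IDF formula. As a minimal sketch (the function and parameter names are mine, read off the explain output above), the first clause of result 1 can be reproduced like this:

```python
import math

def classic_similarity(freq, doc_freq, max_docs, field_norm, query_norm, coord):
    """Reproduce one clause of a Lucene ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                            # tf(freq=2.0) = 1.4142135
    idf = 1 + math.log(max_docs / (doc_freq + 1))   # idf(docFreq=1258, maxDocs=44218) = 4.558814
    query_weight = idf * query_norm                 # 0.23797122
    field_weight = tf * idf * field_norm            # 0.15110476
    return coord * query_weight * field_weight

# Clause for _text_:authors in doc 3380, with coord(1/4) = 0.25:
score = classic_similarity(freq=2.0, doc_freq=1258, max_docs=44218,
                           field_norm=0.0234375,
                           query_norm=0.052200247, coord=0.25)
print(score)  # ≈ 0.008989646, matching the explain output
```

Each nested line of the explain tree corresponds to one factor in this product; the outer `sum of:` nodes simply add the per-term clauses.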
    
    Abstract
    Philosophy "done informationally" has been around a long time, but PI as a discipline is quite new. PI takes age-old philosophical debates and engages them with up-to-the minute conceptual issues generated by our ever-changing, information-laden world. This alters the philosophical debates, and makes them interesting to many more people - including many philosophically-minded people who aren't subscribing philosophers. We, the authors, are young researchers who think of our work as part of PI, taking this engaged approach. We're excited by it and want to teach it. Students are excited by it and want to study it. Writing a traditional textbook takes a while, and PI is moving quickly. A traditional textbook doesn't seem like the right approach for the philosophy of the information age. So we got together to take a new approach, team-writing this electronic text to make it available more rapidly and openly.
    Content
    See also http://www.socphilinfo.org/teaching/book-pi-intro: "This book serves as the main reference for an undergraduate course on Philosophy of Information. The book is written to be accessible to the typical undergraduate student of Philosophy and does not require propaedeutic courses in Logic, Epistemology or Ethics. Each chapter includes a rich collection of references for the student interested in furthering her understanding of the topics reviewed in the book. The book covers all the main topics of the Philosophy of Information and it should be considered an overview and not a comprehensive, in-depth analysis of a philosophical area. As a consequence, 'The Philosophy of Information: a Simple Introduction' does not contain research material as it is not aimed at graduate students or researchers. The book is available for free in multiple formats and it is updated every twelve months by the team of the π Research Network: Patrick Allo, Bert Baumgaertner, Anthony Beavers, Simon D'Alfonso, Penny Driscoll, Luciano Floridi, Nir Fresco, Carson Grubaugh, Phyllis Illari, Eric Kerr, Giuseppe Primiero, Federica Russo, Christoph Schulz, Mariarosaria Taddeo, Matteo Turilli, Orlin Vakarelov. The version for 2013 is now available as a pdf. The content of this version will soon be integrated in the redesign of the teaching section. The beta version from last year will provisionally remain accessible through the Table of Contents on this page."
  2. Maguire, P.; Maguire, R.: Consciousness is data compression (2010) 0.01
    0.014911771 = product of:
      0.029823542 = sum of:
        0.029823542 = product of:
          0.059647083 = sum of:
            0.059647083 = weight(_text_:p in 4972) [ClassicSimilarity], result of:
              0.059647083 = score(doc=4972,freq=2.0), product of:
                0.18768665 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.052200247 = queryNorm
                0.31780142 = fieldWeight in 4972, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4972)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  3. Jackson, R.: Information Literacy and its relationship to cognitive development and reflective judgment (2008) 0.01
    0.014911771 = product of:
      0.029823542 = sum of:
        0.029823542 = product of:
          0.059647083 = sum of:
            0.059647083 = weight(_text_:p in 111) [ClassicSimilarity], result of:
              0.059647083 = score(doc=111,freq=2.0), product of:
                0.18768665 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.052200247 = queryNorm
                0.31780142 = fieldWeight in 111, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.0625 = fieldNorm(doc=111)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    New directions for teaching and learning, 2008, no.114, p.47-61 [https://dr.lib.iastate.edu/handle/20.500.12876/62405]
  4. Atran, S.; Medin, D.L.; Ross, N.: Evolution and devolution of knowledge : a tale of two biologies (2004) 0.01
    0.010608619 = product of:
      0.021217238 = sum of:
        0.021217238 = product of:
          0.042434476 = sum of:
            0.042434476 = weight(_text_:22 in 479) [ClassicSimilarity], result of:
              0.042434476 = score(doc=479,freq=2.0), product of:
                0.18279637 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052200247 = queryNorm
                0.23214069 = fieldWeight in 479, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=479)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    23.01.2022 10:22:18
  5. Mayes, T.: Hypermedia and cognitive tools (1995) 0.01
    0.010487921 = product of:
      0.020975841 = sum of:
        0.020975841 = product of:
          0.083903365 = sum of:
            0.083903365 = weight(_text_:authors in 3289) [ClassicSimilarity], result of:
              0.083903365 = score(doc=3289,freq=2.0), product of:
                0.23797122 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.052200247 = queryNorm
                0.35257778 = fieldWeight in 3289, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3289)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
    Hypermedia and multimedia have been placed rather uncritically at the centre of current developments in learning technology. This paper seeks to ask some fundamental questions about how learning is best supported by hypermedia, and concludes that the most successful aspects are not those normally emphasized. A striking observation is that the best learning experience is enjoyed by hypermedia courseware authors rather than students. This is understandable from a constructivist view of learning, in which the key aim is to engage the learner in carrying out a task which leads to better comprehension. Deep learning is a by-product of comprehension. The paper discusses some approaches to designing software - cognitive tools for learning - which illustrate the constructivist approach.
  6. Crane, G.; Jones, A.: Text, information, knowledge and the evolving record of humanity (2006) 0.00
    0.0046599284 = product of:
      0.009319857 = sum of:
        0.009319857 = product of:
          0.018639714 = sum of:
            0.018639714 = weight(_text_:p in 1182) [ClassicSimilarity], result of:
              0.018639714 = score(doc=1182,freq=2.0), product of:
                0.18768665 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.052200247 = queryNorm
                0.099312946 = fieldWeight in 1182, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1182)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Consider a sentence such as "the current price of tea in China is 35 cents per pound." In a library with millions of books we might find many statements of the above form that we could capture today with relatively simple rules: rather than pursuing every variation of a statement, programs can wait, like predators at a water hole, for their informational prey to reappear in a standard linguistic pattern. We can make inferences from sentences such as "NAME1 born at NAME2 in DATE" that NAME1 more likely than not represents a person and NAME2 a place, and then convert the statement into a proposition about a person born at a given place and time. The changing price of tea in China, pedestrian birth and death dates, or other basic statements may not be truth and beauty in the Phaedrus, but a digital library that could plot the prices of various commodities in different markets over time, plot the various lifetimes of individuals, or extract and classify many events would be very useful. Services such as the Syllabus Finder and H-Bot (which Dan Cohen describes elsewhere in this issue of D-Lib) represent examples of information extraction already in use. H-Bot, in particular, builds on our evolving ability to extract information from very large corpora such as the billions of web pages available through the Google API. Aside from identifying higher-order statements, however, users also want to search and browse named entities: they want to read about "C. P. E. Bach" rather than his father "Johann Sebastian", or about "Cambridge, Maryland" without hearing about "Cambridge, Massachusetts", Cambridge in the UK, or any of the other Cambridges scattered around the world. Named entity identification is a well-established area with an ongoing literature.
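The "predator at a water hole" strategy the abstract describes - waiting for facts to surface in a fixed linguistic pattern - can be sketched with a simple regular expression. This is an illustrative toy, not the method of any system named above; the pattern and function names are assumptions:

```python
import re

# Hypothetical pattern for the "NAME1 born at NAME2 in DATE" template
# discussed in the abstract; capitalized-word runs stand in for real
# named-entity recognition.
PATTERN = re.compile(
    r"(?P<person>[A-Z][a-z]+(?: [A-Z][a-z]+)*) born at "
    r"(?P<place>[A-Z][a-z]+(?: [A-Z][a-z]+)*) in (?P<date>\d{3,4})"
)

def extract_birth_facts(text):
    """Convert matching sentences into (person, place, date) propositions."""
    return [(m["person"], m["place"], m["date"]) for m in PATTERN.finditer(text)]

facts = extract_birth_facts("Johann Sebastian Bach born at Eisenach in 1685.")
# facts → [("Johann Sebastian Bach", "Eisenach", "1685")]
```

Real systems replace the capitalized-word heuristic with trained entity taggers, but the overall shape - match a template, emit a proposition - is the same.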
    The Natural Language Processing Research Group at the University of Sheffield has developed its open-source General Architecture for Text Engineering (GATE) for years, while IBM's Unstructured Information Management Architecture (UIMA) is "available as open source software to provide a common foundation for industry and academia." Powerful tools are thus freely available, and more demanding users can draw on the published literature to develop their own systems. Major search engines such as Google and Yahoo also integrate increasingly sophisticated tools to categorize and identify places. The software resources are rich and expanding. The reference works on which these systems depend, however, are ill-suited for historical analysis. First, simple gazetteers and similar authority lists quickly grow too big for useful information extraction. They provide us with potential entities against which to match textual references, but existing electronic reference works assume that human readers can use their knowledge of geography and of the immediate context to pick the right Boston from the Bostons in the Getty Thesaurus of Geographic Names (TGN). With the crucial exception of geographic location, however, the TGN records do not provide any machine-readable clues: we cannot tell which Bostons are large or small. If we are analyzing a document published in 1818, we cannot filter out those places that did not yet exist or that had different names: "Jefferson Davis" is not the name of a parish in Louisiana (tgn,2000880) or a county in Mississippi (tgn,2001118) until after the Civil War.
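The date-aware filtering the passage calls for can be sketched as a gazetteer whose records carry an earliest-attestation year. Everything here is an illustrative assumption - the record layout and the specific years are invented, not TGN data:

```python
from dataclasses import dataclass

@dataclass
class Place:
    name: str
    tgn_id: str
    attested_from: int  # illustrative earliest year the name is valid

# Invented records for the two post-Civil-War "Jefferson Davis" entities
# mentioned in the abstract (years are approximate placeholders).
GAZETTEER = [
    Place("Jefferson Davis", "tgn,2000880", 1913),  # Louisiana parish
    Place("Jefferson Davis", "tgn,2001118", 1906),  # Mississippi county
]

def candidates(name, doc_year):
    """Keep only places whose name already existed when the document was written."""
    return [p for p in GAZETTEER
            if p.name == name and p.attested_from <= doc_year]

# A document published in 1818 matches neither entity; a 1920 document matches both.
assert candidates("Jefferson Davis", 1818) == []
assert len(candidates("Jefferson Davis", 1920)) == 2
```

The design point is simply that one extra machine-readable field per record turns an ambiguous string match into a filterable query.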