Search (42 results, page 1 of 3)

  • theme_ss:"Information"
  • type_ss:"el"
  1. Atran, S.; Medin, D.L.; Ross, N.: Evolution and devolution of knowledge : a tale of two biologies (2004) 0.02
    
    Date
    23.01.2022 10:22:18
    Type
    a
  2. Freyberg, L.: Die Lesbarkeit der Welt : Rezension zu 'The Concept of Information in Library and Information Science. A Field in Search of Its Boundaries: 8 Short Comments Concerning Information'. In: Cybernetics and Human Knowing. Vol. 22 (2015), 1, 57-80. Kurzartikel von Luciano Floridi, Søren Brier, Torkild Thellefsen, Martin Thellefsen, Bent Sørensen, Birger Hjørland, Brenda Dervin, Ken Herold, Per Hasle und Michael Buckland (2016) 0.01
    
    Abstract
    It is time once again to update the concept of "information", or at least to report on its status quo. Information is the central object of information science and constitutes one of the most important research subjects of library and information science. Surprisingly, however, a continuous discourse comparable to the critical engagement with, and the associated updating of, concepts in the humanities does not take place consistently, at least in the German-speaking world.1 In the spirit of basic theoretical research, and in order to develop a shared conceptual matrix, this would certainly be desirable. Just last year, the journal "Cybernetics and Human Knowing", edited by Søren Brier (see "The foundation of LIS in information science and semiotics"2 as well as "Semiotics in Information Science. An Interview with Søren Brier on the application of semiotic theories and the epistemological problem of a transdisciplinary Information Science"3), published eight readable statements on the concept of information by renowned philosophers and library and information scientists. Unfortunately, the journal "Cybernetics & Human Knowing" is difficult to access in Germany, since it is not an open-access journal and is subscribed to by only eight German libraries.4 Given this poor availability, it seems worthwhile to offer a detailed review of these eight short articles here.
    Type
    a
  3. Harnett, K.: Machine learning confronts the elephant in the room : a visual prank exposes an Achilles' heel of computer vision systems: Unlike humans, they can't do a double take (2018) 0.00
    
    Abstract
    In a new study, computer scientists found that artificial intelligence systems fail a vision test a child could accomplish with ease. "It's a clever and important study that reminds us that 'deep learning' isn't really that deep," said Gary Marcus, a neuroscientist at New York University who was not affiliated with the work. The result takes place in the field of computer vision, where artificial intelligence systems attempt to detect and categorize objects. They might try to find all the pedestrians in a street scene, or just distinguish a bird from a bicycle (which is a notoriously difficult task). The stakes are high: As computers take over critical tasks like automated surveillance and autonomous driving, we'll want their visual processing to be at least as good as the human eyes they're replacing. It won't be easy. The new work accentuates the sophistication of human vision - and the challenge of building systems that mimic it. In the study, the researchers presented a computer vision system with a living room scene. The system processed it well. It correctly identified a chair, a person, books on a shelf. Then the researchers introduced an anomalous object into the scene - an image of an elephant. The elephant's mere presence caused the system to forget itself: Suddenly it started calling a chair a couch and the elephant a chair, while turning completely blind to other objects it had previously seen. Researchers are still trying to understand exactly why computer vision systems get tripped up so easily, but they have a good guess. It has to do with an ability humans have that AI lacks: the ability to understand when a scene is confusing and thus go back for a second glance.
    Type
    a
  4. Gödert, W.; Lepsky, K.: Reception of externalized knowledge : a constructivistic model based on Popper's Three Worlds and Searle's Collective Intentionality (2019) 0.00
    
    Abstract
    We provide a model for the reception of knowledge from externalized information sources. The model is based on a cognitive understanding of information processing and draws on ideas of an exchange of information in communication processes. Karl Popper's three-world theory, with its orientation on falsifiable scientific knowledge, is extended by John Searle's concept of collective intentionality. This allows a consistent description of the externalization and reception of knowledge, including scientific knowledge as well as everyday knowledge.
    Type
    a
  5. Hesse, W.; Verrijn-Stuart, A.: Towards a theory of information systems : the FRISCO approach (1999) 0.00
    
    Abstract
    Information Systems (IS) is among the most widespread terms in the Computer Science field but a well founded, widely accepted theory of IS is still missing. With the Internet publication of the FRISCO report, the IFIP task group "FRamework of Information System COncepts" has taken a first step towards such a theory. Among the major achievements of this report are: (1) it builds on a solid basis formed by semiotics and ontology, (2) it defines a compendium of about 100 core IS concepts in a coherent and consistent way, (3) it goes beyond the common narrow view of information systems as pure technical artefacts by adopting an interdisciplinary, socio-technical view on them. In the autumn of 1999, a first review of the report and its impact was undertaken at the ISCO-4 conference in Leiden. In a workshop specifically devoted to the subject, the original aims and goals of FRISCO were confirmed to be still valid and the overall approach and achievements of the report were acknowledged. On the other hand, the workshop revealed some misconceptions, errors and weaknesses of the report in its present form, which are to be removed through a comprehensive revision now under way. This paper reports on the results of the Leiden conference and the current revision activities. It also points out some important consequences of the FRISCO approach as a whole.
  6. Kuhlen, R.: Informationelle Bildung - Informationelle Kompetenz - Informationelle Autonomie (2000) 0.00
    
    Language
    a
  7. Dervin, B.: Chaos, order, and sense-making : a proposed theory for information design (1995) 0.00
    
    Abstract
    The term information design is being offered in this volume as a designator of a new area of activity. Part of the logic inherent in the presentation is the assumption that as a species we face altered circumstances which demand this new practice.
  8. Lehmann, K.: Unser Gehirn kartiert auch Beziehungen räumlich (2015) 0.00
    
    Footnote
    Cf. the original at: http://www.sciencedirect.com/science/article/pii/S0896627315005243: "Morais Tavares, R., A. Mendelsohn, Y. Grossman, C.H. Williams, M. Shapiro, Y. Trope and D. Schiller: A Map for Social Navigation in the Human Brain". In: Neuron 87(2015) no.1, pp. 231-243. [Deciphering the neural mechanisms of social behavior has propelled the growth of social neuroscience. The exact computations of the social brain, however, remain elusive. Here we investigated how the human brain tracks ongoing changes in social relationships using functional neuroimaging. Participants were lead characters in a role-playing game in which they were to find a new home and a job through interactions with virtual cartoon characters. We found that a two-dimensional geometric model of social relationships, a "social space" framed by power and affiliation, predicted hippocampal activity. Moreover, participants who reported better social skills showed stronger covariance between hippocampal activity and "movement" through "social space." The results suggest that the hippocampus is crucial for social cognition, and imply that beyond framing physical locations, the hippocampus computes a more general, inclusive, abstract, and multidimensional cognitive map consistent with its role in episodic memory.]
    Type
    a
  9. Standage, T.: Information overload is nothing new (2018) 0.00
    
    Abstract
    Do you fret about staying on top of a deluge of information? Don't worry, says Tom Standage, Leibniz felt the same
    Content
    "Overflowing inboxes, endlessly topped up by incoming emails. Constant alerts, notifications and text messages on your smartphone and computer. Infinitely scrolling streams of social-media posts. Access to all the music ever recorded, whenever you want it. And a deluge of high-quality television, with new series released every day on Netflix, Amazon Prime and elsewhere. The bounty of the internet is a marvellous thing, but the ever-expanding array of material can leave you feeling overwhelmed, constantly interrupted, unable to concentrate or worried that you are missing out or falling behind. No wonder some people are quitting social media, observing "digital sabbaths" when they unplug from the internet for a day, or buying old-fashioned mobile phones in an effort to avoid being swamped. This phenomenon may seem quintessentially modern, but it dates back centuries, as Ann Blair of Harvard University observes in "Too Much to Know", a history of information overload. Half a millennium ago, the printing press was to blame. "Is there anywhere on Earth exempt from these swarms of new books?" moaned Erasmus in 1525. New titles were appearing in such abundance, thousands every year. How could anyone figure out which ones were worth reading? Overwhelmed scholars across Europe worried that good ideas were being lost amid the deluge. Francisco Sanchez, a Spanish philosopher, complained in 1581 that 10m years was not long enough to read all the books in existence. The German polymath Gottfried Wilhelm Leibniz grumbled in 1680 of "that horrible mass of books which keeps on growing"."
    Type
    a
  10. Mayes, T.: Hypermedia and cognitive tools (1995) 0.00
    
    Abstract
    Hypermedia and multimedia have been placed rather uncritically at the centre of current developments in learning technology. This paper seeks to ask some fundamental questions about how learning is best supported by hypermedia, and concludes that the most successful aspects are not those normally emphasized. A striking observation is that the best learning experience is enjoyed by hypermedia courseware authors rather than students. This is understandable from a constructivist view of learning, in which the key aim is to engage the learner in carrying out a task which leads to better comprehension. Deep learning is a by-product of comprehension. The paper discusses some approaches to designing software - cognitive tools for learning - which illustrate the constructivist approach
  11. Grant, S.: Developing cognitive architecture for modelling and simulation of cognition and error in complex tasks (1995) 0.00
    
    Abstract
    A cognitive architecture embodies the more general structures and mechanisms out of which a model of individual cognition in certain situations could be made. The space of models and architectures has a number of dimensions, including: dependence on domain; level of specification; and extent of coverage of different phenomena.
  12. Crane, G.; Jones, A.: Text, information, knowledge and the evolving record of humanity (2006) 0.00
    
    Abstract
    Consider a sentence such as "the current price of tea in China is 35 cents per pound." In a library with millions of books we might find many statements of the above form that we could capture today with relatively simple rules: rather than pursuing every variation of a statement, programs can wait, like predators at a water hole, for their informational prey to reappear in a standard linguistic pattern. We can make inferences from sentences such as "NAME1 born at NAME2 in DATE" that NAME1 more likely than not represents a person and NAME2 a place and then convert the statement into a proposition about a person born at a given place and time. The changing price of tea in China, pedestrian birth and death dates, or other basic statements may not be truth and beauty in the Phaedrus, but a digital library that could plot the prices of various commodities in different markets over time, plot the various lifetimes of individuals, or extract and classify many events would be very useful. Services such as the Syllabus Finder1 and H-Bot2 (which Dan Cohen describes elsewhere in this issue of D-Lib) represent examples of information extraction already in use. H-Bot, in particular, builds on our evolving ability to extract information from very large corpora such as the billions of web pages available through the Google API. Aside from identifying higher order statements, however, users also want to search and browse named entities: they want to read about "C. P. E. Bach" rather than his father "Johann Sebastian" or about "Cambridge, Maryland", without hearing about "Cambridge, Massachusetts", Cambridge in the UK or any of the other Cambridges scattered around the world. Named entity identification is a well-established area with an ongoing literature.
The Natural Language Processing Research Group at the University of Sheffield has developed its open source General Architecture for Text Engineering (GATE) for years, while IBM's Unstructured Information Management Architecture (UIMA) is "available as open source software to provide a common foundation for industry and academia." Powerful tools are thus freely available and more demanding users can draw upon published literature to develop their own systems. Major search engines such as Google and Yahoo also integrate increasingly sophisticated tools to categorize and identify places. The software resources are rich and expanding. The reference works on which these systems depend, however, are ill-suited for historical analysis. First, simple gazetteers and similar authority lists quickly grow too big for useful information extraction. They provide us with potential entities against which to match textual references, but existing electronic reference works assume that human readers can use their knowledge of geography and of the immediate context to pick the right Boston from the Bostons in the Getty Thesaurus of Geographic Names (TGN), but, with the crucial exception of geographic location, the TGN records do not provide any machine readable clues: we cannot tell which Bostons are large or small. If we are analyzing a document published in 1818, we cannot filter out those places that did not yet exist or that had different names: "Jefferson Davis" is not the name of a parish in Louisiana (tgn,2000880) or a county in Mississippi (tgn,2001118) until after the Civil War.
    Although the Alexandria Digital Library provides far richer data than the TGN (5.9 vs. 1.3 million names), its added size lowers, rather than increases, the accuracy of most geographic name identification systems for historical documents: most of the extra 4.6 million names cover low frequency entities that rarely occur in any particular corpus. The TGN is sufficiently comprehensive to provide quite enough noise: we find place names that are used over and over (there are almost one hundred Washingtons) and semantically ambiguous (e.g., is Washington a person or a place?). Comprehensive knowledge sources emphasize recall but lower precision. We need data with which to determine which "Tribune" or "John Brown" a particular passage denotes. Secondly and paradoxically, our reference works may not be comprehensive enough. Human actors come and go over time. Organizations appear and vanish. Even places can change their names or vanish. The TGN does associate the obsolete name Siam with the nation of Thailand (tgn,1000142) - but also with towns named Siam in Iowa (tgn,2035651), Tennessee (tgn,2101519), and Ohio (tgn,2662003). Prussia appears but as a general region (tgn,7016786), with no indication when or if it was a sovereign nation. And if places do point to the same object over time, that object may have very different significance over time: in the foundational works of Western historiography, Herodotus reminds us that the great cities of the past may be small today, and the small cities of today great tomorrow (Hdt. 1.5), while Thucydides stresses that we cannot estimate the past significance of a place by its appearance today (Thuc. 1.10). In other words, we need to know the population figures for the various Washingtons in 1870 if we are analyzing documents from 1870. The foundations have been laid for reference works that provide machine actionable information about entities at particular times in history. 
The Alexandria Digital Library Gazetteer Content Standard8 represents a sophisticated framework with which to create such resources: places can be associated with temporal information about their foundation (e.g., Washington, DC, founded on 16 July 1790), changes in names for the same location (e.g., Saint Petersburg to Leningrad and back again), population figures at various times and similar historically contingent data. But if we have the software and the data structures, we do not yet have substantial amounts of historical content such as plentiful digital gazetteers, encyclopedias, lexica, grammars and other reference works to illustrate many periods and, even if we do, those resources may not be in a useful form: raw OCR output of a complex lexicon or gazetteer may have so many errors and have captured so little of the underlying structure that the digital resource is useless as a knowledge base. Put another way, human beings are still much better at reading and interpreting the contents of page images than machines. While people, places, and dates are probably the most important core entities, we will find a growing set of objects that we need to identify and track across collections, and each of these categories of objects will require its own knowledge sources. The following section enumerates and briefly describes some existing categories of documents that we need to mine for knowledge. This brief survey focuses on the format of print sources (e.g., highly structured textual "database" vs. unstructured text) to illustrate some of the challenges involved in converting our published knowledge into semantically annotated, machine actionable form.
    Type
    a
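The rule-based extraction that Crane and Jones describe - waiting for facts to reappear in a standard linguistic pattern such as "NAME1 born at NAME2 in DATE" - can be sketched with a simple surface pattern. This is a minimal illustration of the technique, not the authors' actual system; the regular expression, function name, and sentences are hypothetical.

```python
import re

# Naive surface pattern for "NAME1 was born at NAME2 in DATE" statements.
# Runs of capitalized tokens stand in for names; a four-digit year for DATE.
PATTERN = re.compile(
    r"(?P<person>[A-Z][a-z]+(?: [A-Z][a-z]+)*) was born at "
    r"(?P<place>[A-Z][a-z]+(?: [A-Z][a-z]+)*) in (?P<year>\d{4})"
)

def extract_births(text):
    """Return (person, place, year) propositions found in the text."""
    return [
        (m.group("person"), m.group("place"), int(m.group("year")))
        for m in PATTERN.finditer(text)
    ]

sentences = (
    "Johann Sebastian Bach was born at Eisenach in 1685. "
    "Tea is 35 cents per pound in China."
)
print(extract_births(sentences))
# → [('Johann Sebastian Bach', 'Eisenach', 1685)]
```

As the abstract notes, such rules only fire when a fact reappears in the expected phrasing; a real system would pair many such patterns with the gazetteer and authority-list lookups discussed above to disambiguate the captured names.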
  13. Kaser, R.T.: If information wants to be free . . . then who's going to pay for it? (2000) 0.00
    
    Abstract
    I have become "brutally honest" of late, at least according to one listener who heard my remarks during a recent whistle stop speaking tour of publishing conventions. This comment caught me a little off guard. Not that I haven't always been frank, but I do try never to be brutal. The truth, I guess, can be painful, even if the intention of the teller is simply objectivity. This paper is based on a "brutally honest" talk I have been giving to publishers, first, in February, to the Association of American Publishers' Professional and Scholarly Publishing Division, at which point I was calling the piece, "The Illusion of Free Information." It was this initial rendition that led to the invitation to publish something here. Since then I've been working on the talk. I gave a second version of it in March to the assembly of the American Society of Information Dissemination Centers, where I called it, "When Sectors Clash: Public Access vs. Private Interest." And, most recently, I gave yet a third version of it to the governing board of the American Institute of Physics. This time I called it: "The Future of Society Publishing." The notion of free information, our government's proper role in distributing free information, and the future of scholarly publishing in a world of free information . . . these are the issues that are floating around in my head. My goal here is to tell you where my thinking is only at this moment, for I reserve the right to continue thinking and developing new permutations on this mentally challenging theme.
    Type
    a
  14. Bawden, D.; Robinson, L.: Information and the gaining of understanding (2015) 0.00
    
    Abstract
    It is suggested that, in addition to data, information and knowledge, the information sciences should focus on understanding, understood as a higher-order knowledge, with coherent and explanatory potential. The limited ways in which understanding has been addressed in the design of information systems, in studies of information behaviour, in formulations of information literacy and in impact studies are briefly reviewed, and future prospects considered. The paper is an extended version of a keynote presentation given at the i3 conference in June 2015.
    Type
    a
  15. Andersen, J.: Analyzing the role of knowledge organization in scholarly communication : an inquiry into the intellectual foundation of knowledge organization (2004) 0.00
    Abstract
    A publication on the foundation of knowledge organization
  16. Korthof, G.: Information Content, Compressibility and Meaning : Published: 18 June 2000. Updated 31 May 2006. Postscript 20 Oct 2009. (2000) 0.00
    Abstract
    In New Scientist, 18 Sept 1999 ("Life force", pp. 27-30), Paul Davies writes: "an apparently random sequence such as 110101001010010111... cannot be condensed into a simple set of instructions, so it has a high information content" (p. 29). This notion of 'information content' leads to a paradox. Consider random number generator software, and let it generate first 100 and then 1,000 random numbers. According to the above definition, the second sequence has an information content ten times higher than the first, because its description would be ten times longer. However, both sequences are generated by the same simple set of instructions, so they should have exactly the same 'information content'. That is the paradox. It seems clear that this measure of 'information content' misses the point: it measures the compressibility of a sequence, not its 'information content'. One needs the meaning of a sequence to capture its information content.
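    The paradox in this abstract can be made concrete with a small sketch (illustrative only; the code and names below are not from Korthof's text). A fixed-seed generator is the same short "set of instructions" whether it emits 100 or 1,000 bytes, yet a general-purpose compressor such as zlib assigns the longer output roughly ten times the compressed size:

```python
import random
import zlib

def generate(n: int, seed: int = 42) -> bytes:
    """The same short program every time; only the requested length n differs."""
    rng = random.Random(seed)
    return bytes(rng.getrandbits(8) for _ in range(n))

seq_100 = generate(100)
seq_1000 = generate(1000)

# The longer sequence merely extends the shorter one: identical instructions.
# Yet its compressed size -- Davies's 'information content' -- is ~10x larger.
print(len(zlib.compress(seq_100)), len(zlib.compress(seq_1000)))
```

    Compressed size here tracks the length of the output, not the length of the generating program, which is exactly the gap between compressibility and information content that the abstract points to.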
  17. Allo, P.; Baumgaertner, B.; D'Alfonso, S.; Fresco, N.; Gobbo, F.; Grubaugh, C.; Iliadis, A.; Illari, P.; Kerr, E.; Primiero, G.; Russo, F.; Schulz, C.; Taddeo, M.; Turilli, M.; Vakarelov, O.; Zenil, H.: ¬The philosophy of information : an introduction (2013) 0.00
    Abstract
    In April 2010, Bill Gates gave a talk at MIT in which he asked: 'are the brightest minds working on the most important problems?' Gates meant improving the lives of the poorest; improving education, health, and nutrition. We could easily add improving peaceful interactions, human rights, environmental conditions, living standards and so on. Philosophy of Information (PI) proponents think that Gates has a point - but this doesn't mean we should all give up philosophy. Philosophy can be part of this project, because philosophy understood as conceptual design forges and refines the new ideas, theories, and perspectives that we need to understand and address these important problems that press us so urgently. Of course, this naturally invites us to wonder which ideas, theories, and perspectives philosophers should be designing now. In our global information society, many crucial challenges are linked to information and communication technologies: the constant search for novel solutions and improvements demands, in turn, changing conceptual resources to understand and cope with them. Rapid technological development now pervades communication, education, work, entertainment, industrial production and business, healthcare, social relations and armed conflicts. There is a rich mine of philosophical work to do on the new concepts created right here, right now.
    Philosophy "done informationally" has been around a long time, but PI as a discipline is quite new. PI takes age-old philosophical debates and engages them with up-to-the-minute conceptual issues generated by our ever-changing, information-laden world. This alters the philosophical debates, and makes them interesting to many more people - including many philosophically-minded people who aren't subscribing philosophers. We, the authors, are young researchers who think of our work as part of PI, taking this engaged approach. We're excited by it and want to teach it. Students are excited by it and want to study it. Writing a traditional textbook takes a while, and PI is moving quickly. A traditional textbook doesn't seem like the right approach for the philosophy of the information age. So we got together to take a new approach, team-writing this electronic text to make it available more rapidly and openly.
    Content
    Vgl. auch unter: http://www.socphilinfo.org/teaching/book-pi-intro: "This book serves as the main reference for an undergraduate course on Philosophy of Information. The book is written to be accessible to the typical undergraduate student of Philosophy and does not require propaedeutic courses in Logic, Epistemology or Ethics. Each chapter includes a rich collection of references for the student interested in furthering her understanding of the topics reviewed in the book. The book covers all the main topics of the Philosophy of Information and it should be considered an overview and not a comprehensive, in-depth analysis of a philosophical area. As a consequence, 'The Philosophy of Information: a Simple Introduction' does not contain research material as it is not aimed at graduate students or researchers. The book is available for free in multiple formats and it is updated every twelve months by the team of the π Research Network: Patrick Allo, Bert Baumgaertner, Anthony Beavers, Simon D'Alfonso, Penny Driscoll, Luciano Floridi, Nir Fresco, Carson Grubaugh, Phyllis Illari, Eric Kerr, Giuseppe Primiero, Federica Russo, Christoph Schulz, Mariarosaria Taddeo, Matteo Turilli, Orlin Vakarelov. (*) The version for 2013 is now available as a pdf. The content of this version will soon be integrated in the redesign of the teaching-section. The beta-version from last year will provisionally remain accessible through the Table of Content on this page."
  18. Atran, S.: Basic conceptual domains (1989) 0.00
    Type
    a
  19. Speer, A.: Wovon lebt der Geist? (2016) 0.00
    Type
    a
  20. Maguire, P.; Maguire, R.: Consciousness is data compression (2010) 0.00
    Abstract
    In this article we advance the conjecture that conscious awareness is equivalent to data compression. Algorithmic information theory supports the assertion that all forms of understanding are contingent on compression (Chaitin, 2007). Here, we argue that the experience people refer to as consciousness is the particular form of understanding that the brain provides. We therefore propose that the degree of consciousness of a system can be measured in terms of the amount of data compression it carries out.
    Type
    a
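    Read operationally, the proposal in item 20 invites a measurable proxy: how much of its input a system manages to compress. A minimal sketch of that idea (purely illustrative, using zlib as a stand-in compressor; this is not the authors' method, and the variable names are assumptions):

```python
import random
import zlib

def compression_ratio(data: bytes) -> float:
    """Fraction of the input that compression removes -- a crude proxy for
    how much regularity ('understanding') a compressor finds in the data."""
    return 1.0 - len(zlib.compress(data)) / len(data)

rng = random.Random(0)
patterned = b"the cat sat on the mat. " * 50                 # rich in regularity
noise = bytes(rng.getrandbits(8) for _ in range(1200))       # statistically patternless

print(round(compression_ratio(patterned), 2))   # high: much is "understood"
print(round(compression_ratio(noise), 2))       # near zero, or slightly negative
```

    On Chaitin's view, as cited in the abstract, the patterned input admits far more compression than the noise, so a system processing it would score higher on this crude measure.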