Search (104 results, page 6 of 6)

  • theme_ss:"Information"
  • type_ss:"a"
  1. Weizenbaum, J.: Wir gegen die Gier (2008)
    
    Date
    16. 3.2008 12:22:08
  2. Bell, G.; Gemmell, J.: Erinnerung total (2007)
    
    Content
    A web of trails. An early dream of a machine-extended memory was voiced toward the end of the Second World War by Vannevar Bush. Bush, at the time director of the Office of Scientific Research and Development (OSRD), which coordinated the military research programs of the USA, and better known as the inventor of the analog computer, presented in his 1945 essay "As we may think" a fictional machine called Memex (Memory Extender) that was to store all of a person's books, records and communication on microfilm. The Memex was to be built into a desk and equipped with a keyboard, a microphone and several screens. Bush envisaged the user at the desk capturing photographs and documents onto microfilm with a camera, or creating new documents by writing on a touch-sensitive screen. Away from the desk, a camera fastened to the head with a headband was to take over the recording. Above all, though, the Memex was to be capable of associative thinking, much like the human brain. Bush describes this very vividly: "With one item in its grasp, it snaps instantly to the next that is suggested by the association of thoughts, in accordance with some intricate web of trails carried by the cells of the brain." Over the following half century, intrepid computing pioneers, among them Ted Nelson and Douglas Engelbart, developed some of these ideas further, and the inventors of the World Wide Web turned Bush's "web of trails" into the link structure of their interconnected pages. The Memex itself, however, remained technically out of reach. Only in recent years have rapid advances in storage, sensor and computing technology cleared the way for new recording and retrieval techniques that could, in the end, go far beyond Bush's vision.
  3. Hjoerland, B.: The controversy over the concept of information : a rejoinder to Professor Bates (2009)
    
    Date
    22. 3.2009 18:13:27
  4. Crane, G.; Jones, A.: Text, information, knowledge and the evolving record of humanity (2006)
    
    Abstract
    Consider a sentence such as "the current price of tea in China is 35 cents per pound." In a library with millions of books we might find many statements of the above form that we could capture today with relatively simple rules: rather than pursuing every variation of a statement, programs can wait, like predators at a water hole, for their informational prey to reappear in a standard linguistic pattern. We can make inferences from sentences such as "NAME1 born at NAME2 in DATE" that NAME1 more likely than not represents a person and NAME2 a place, and then convert the statement into a proposition about a person born at a given place and time. The changing price of tea in China, pedestrian birth and death dates, or other basic statements may not be truth and beauty in the Phaedrus, but a digital library that could plot the prices of various commodities in different markets over time, plot the various lifetimes of individuals, or extract and classify many events would be very useful. Services such as the Syllabus Finder and H-Bot (which Dan Cohen describes elsewhere in this issue of D-Lib) represent examples of information extraction already in use. H-Bot, in particular, builds on our evolving ability to extract information from very large corpora such as the billions of web pages available through the Google API.
    Aside from identifying higher-order statements, however, users also want to search and browse named entities: they want to read about "C. P. E. Bach" rather than his father "Johann Sebastian", or about "Cambridge, Maryland" without hearing about "Cambridge, Massachusetts", Cambridge in the UK, or any of the other Cambridges scattered around the world. Named entity identification is a well-established area with an ongoing literature. The Natural Language Processing Research Group at the University of Sheffield has developed its open source General Architecture for Text Engineering (GATE) for years, while IBM's Unstructured Information Management Architecture (UIMA) is "available as open source software to provide a common foundation for industry and academia." Powerful tools are thus freely available, and more demanding users can draw upon the published literature to develop their own systems. Major search engines such as Google and Yahoo also integrate increasingly sophisticated tools to categorize and identify places. The software resources are rich and expanding.
    The reference works on which these systems depend, however, are ill-suited for historical analysis. First, simple gazetteers and similar authority lists quickly grow too big for useful information extraction. They provide us with potential entities against which to match textual references, but existing electronic reference works assume that human readers can use their knowledge of geography and of the immediate context to pick the right Boston from the Bostons in the Getty Thesaurus of Geographic Names (TGN); with the crucial exception of geographic location, the TGN records do not provide any machine-readable clues: we cannot tell which Bostons are large or small. If we are analyzing a document published in 1818, we cannot filter out those places that did not yet exist or that had different names: "Jefferson Davis" is not the name of a parish in Louisiana (tgn,2000880) or a county in Mississippi (tgn,2001118) until after the Civil War.
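    The extraction step the abstract describes lends itself to a very small illustration. The following Python sketch is a hypothetical reading of that idea, not code from the Syllabus Finder, H-Bot, GATE or UIMA: it waits for a statement in one fixed linguistic pattern, converts a match into a proposition about a person born at a place and time, and then consults a toy gazetteer whose single machine-readable clue (a population figure) is used to pick among ambiguous place names. The regex, the gazetteer entries and the sample sentence are invented for illustration.

    import re

    # Hypothetical pattern of the form "NAME1 was born at NAME2 in DATE".
    BIRTH_PATTERN = re.compile(
        r"(?P<person>[A-Z][a-z]+(?: [A-Z][a-z]+)+) was born at "
        r"(?P<place>[A-Z][a-z]+(?: [A-Z][a-z]+)*) in (?P<year>\d{4})"
    )

    # Toy gazetteer: the population field stands in for the machine-readable
    # clues that, as the abstract notes, TGN records do not provide.
    GAZETTEER = {
        "Cambridge, Massachusetts": {"population": 118_000},
        "Cambridge, Maryland": {"population": 12_000},
    }

    def extract_birth_facts(text):
        """Yield person/place/year propositions from sentences matching the pattern."""
        for match in BIRTH_PATTERN.finditer(text):
            yield {
                "person": match.group("person"),
                "place": match.group("place"),
                "year": int(match.group("year")),
            }

    def disambiguate(place):
        """Pick the most plausible gazetteer entry for a bare place name."""
        candidates = [name for name in GAZETTEER if name.startswith(place)]
        # With no further context, fall back on the larger place.
        return max(candidates, key=lambda name: GAZETTEER[name]["population"], default=place)

    if __name__ == "__main__":
        sentence = "Jane Example was born at Cambridge in 1818."
        for fact in extract_birth_facts(sentence):
            fact["place"] = disambiguate(fact["place"])
            print(fact)  # {'person': 'Jane Example', 'place': 'Cambridge, Massachusetts', 'year': 1818}

    A real system would of course need many patterns, a full authority list and, as the abstract argues, temporal clues to filter out places that did not yet exist at the document's date.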
