Search (5 results, page 1 of 1)

  • theme_ss:"Information"
  • type_ss:"el"
  1. Standage, T.: Information overload is nothing new (2018) 0.02
    0.02289688 = product of:
      0.1144844 = sum of:
        0.1144844 = weight(_text_:books in 4473) [ClassicSimilarity], result of:
          0.1144844 = score(doc=4473,freq=6.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.4624449 = fieldWeight in 4473, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4473)
      0.2 = coord(1/5)
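
    The explain tree above is Lucene's ClassicSimilarity, i.e. classic TF-IDF: the weight of the term "books" is queryWeight x fieldWeight, where queryWeight = idf x queryNorm, fieldWeight = sqrt(freq) x idf x fieldNorm, idf = 1 + ln(maxDocs / (docFreq + 1)), and coord(1/5) scales the result because only one of the five query terms matched this record. A minimal Python sketch re-deriving the numbers by hand (queryNorm and fieldNorm are copied from the tree, since both depend on index state not shown here):

      import math

      freq = 6.0                                    # "books" occurs 6 times in doc 4473
      tf = math.sqrt(freq)                          # 2.4494898
      idf = 1 + math.log(44218 / (956 + 1))         # 4.8330836
      query_norm = 0.051222645                      # copied from the explain tree
      field_norm = 0.0390625                        # copied from the explain tree

      query_weight = idf * query_norm               # 0.24756333 = queryWeight
      field_weight = tf * idf * field_norm          # 0.4624449  = fieldWeight
      print(query_weight * field_weight * (1 / 5))  # ~0.02289688, the ranking score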
    
    Content
    "Overflowing inboxes, endlessly topped up by incoming emails. Constant alerts, notifications and text messages on your smartphone and computer. Infinitely scrolling streams of social-media posts. Access to all the music ever recorded, whenever you want it. And a deluge of high-quality television, with new series released every day on Netflix, Amazon Prime and elsewhere. The bounty of the internet is a marvellous thing, but the ever-expanding array of material can leave you feeling overwhelmed, constantly interrupted, unable to concentrate or worried that you are missing out or falling behind. No wonder some people are quitting social media, observing "digital sabbaths" when they unplug from the internet for a day, or buying old-fashioned mobile phones in an effort to avoid being swamped. This phenomenon may seem quintessentially modern, but it dates back centuries, as Ann Blair of Harvard University observes in "Too Much to Know", a history of information overload. Half a millennium ago, the printing press was to blame. "Is there anywhere on Earth exempt from these swarms of new books?" moaned Erasmus in 1525. New titles were appearing in such abundance, thousands every year. How could anyone figure out which ones were worth reading? Overwhelmed scholars across Europe worried that good ideas were being lost amid the deluge. Francisco Sanchez, a Spanish philosopher, complained in 1581 that 10m years was not long enough to read all the books in existence. The German polymath Gottfried Wilhelm Leibniz grumbled in 1680 of "that horrible mass of books which keeps on growing"."
  2. Harnett, K.: Machine learning confronts the elephant in the room : a visual prank exposes an Achilles' heel of computer vision systems: Unlike humans, they can't do a double take (2018) 0.01
    0.010575616 = product of:
      0.052878078 = sum of:
        0.052878078 = weight(_text_:books in 4449) [ClassicSimilarity], result of:
          0.052878078 = score(doc=4449,freq=2.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.21359414 = fieldWeight in 4449, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.03125 = fieldNorm(doc=4449)
      0.2 = coord(1/5)
    
    Abstract
    In a new study, computer scientists found that artificial intelligence systems fail a vision test a child could accomplish with ease. "It's a clever and important study that reminds us that 'deep learning' isn't really that deep," said Gary Marcus, a neuroscientist at New York University who was not affiliated with the work. The result comes from the field of computer vision, where artificial intelligence systems attempt to detect and categorize objects. They might try to find all the pedestrians in a street scene, or just distinguish a bird from a bicycle (which is a notoriously difficult task). The stakes are high: as computers take over critical tasks like automated surveillance and autonomous driving, we'll want their visual processing to be at least as good as the human eyes they're replacing. It won't be easy. The new work accentuates the sophistication of human vision - and the challenge of building systems that mimic it. In the study, the researchers presented a computer vision system with a living room scene. The system processed it well. It correctly identified a chair, a person, books on a shelf. Then the researchers introduced an anomalous object into the scene - an image of an elephant. The elephant's mere presence caused the system to forget itself: suddenly it started calling a chair a couch and the elephant a chair, while turning completely blind to other objects it had previously seen. Researchers are still trying to understand exactly why computer vision systems get tripped up so easily, but they have a good guess: it has to do with an ability humans have that AI lacks - the ability to understand when a scene is confusing and thus go back for a second glance.
  3. Atran, S.; Medin, D.L.; Ross, N.: Evolution and devolution of knowledge : a tale of two biologies (2004) 0.01
    0.0083279535 = product of:
      0.041639768 = sum of:
        0.041639768 = weight(_text_:22 in 479) [ClassicSimilarity], result of:
          0.041639768 = score(doc=479,freq=2.0), product of:
            0.17937298 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051222645 = queryNorm
            0.23214069 = fieldWeight in 479, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=479)
      0.2 = coord(1/5)
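
    Results 3 and 5 match the query only on the token "22" - here from a date stamp or volume number - rather than on a topical term, which is why they rank below the "books" matches: "22" occurs in 3622 of 44218 documents, so its idf is much lower. The same formula as in the sketch above confirms the value in the tree:

      import math
      print(1 + math.log(44218 / (3622 + 1)))   # ~3.5018296 = idf of "22"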
    
    Date
    23. 1.2022 10:22:18
  4. Crane, G.; Jones, A.: Text, information, knowledge and the evolving record of humanity (2006) 0.01
    0.00660976 = product of:
      0.0330488 = sum of:
        0.0330488 = weight(_text_:books in 1182) [ClassicSimilarity], result of:
          0.0330488 = score(doc=1182,freq=2.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.13349634 = fieldWeight in 1182, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1182)
      0.2 = coord(1/5)
    
    Abstract
    Consider a sentence such as "the current price of tea in China is 35 cents per pound." In a library with millions of books we might find many statements of the above form that we could capture today with relatively simple rules: rather than pursuing every variation of a statement, programs can wait, like predators at a water hole, for their informational prey to reappear in a standard linguistic pattern. We can make inferences from sentences such as "NAME1 born at NAME2 in DATE" that NAME1 more likely than not represents a person and NAME2 a place, and then convert the statement into a proposition about a person born at a given place and time. The changing price of tea in China, pedestrian birth and death dates, or other basic statements may not be truth and beauty in the Phaedrus, but a digital library that could plot the prices of various commodities in different markets over time, plot the various lifetimes of individuals, or extract and classify many events would be very useful. Services such as the Syllabus Finder and H-Bot (which Dan Cohen describes elsewhere in this issue of D-Lib) represent examples of information extraction already in use. H-Bot, in particular, builds on our evolving ability to extract information from very large corpora such as the billions of web pages available through the Google API.

    Aside from identifying higher-order statements, however, users also want to search and browse named entities: they want to read about "C. P. E. Bach" rather than his father "Johann Sebastian", or about "Cambridge, Maryland" without hearing about "Cambridge, Massachusetts", Cambridge in the UK, or any of the other Cambridges scattered around the world. Named entity identification is a well-established area with an ongoing literature. The Natural Language Processing Research Group at the University of Sheffield has developed its open source General Architecture for Text Engineering (GATE) for years, while IBM's Unstructured Information Management Architecture (UIMA) is "available as open source software to provide a common foundation for industry and academia." Powerful tools are thus freely available, and more demanding users can draw upon published literature to develop their own systems. Major search engines such as Google and Yahoo also integrate increasingly sophisticated tools to categorize and identify places. The software resources are rich and expanding.

    The reference works on which these systems depend, however, are ill-suited for historical analysis. First, simple gazetteers and similar authority lists quickly grow too big for useful information extraction. They provide us with potential entities against which to match textual references, but existing electronic reference works assume that human readers can use their knowledge of geography and of the immediate context to pick the right Boston from the Bostons in the Getty Thesaurus of Geographic Names (TGN); with the crucial exception of geographic location, the TGN records do not provide any machine-readable clues: we cannot tell which Bostons are large or small. If we are analyzing a document published in 1818, we cannot filter out those places that did not yet exist or that had different names: "Jefferson Davis" is not the name of a parish in Louisiana (tgn,2000880) or a county in Mississippi (tgn,2001118) until after the Civil War.
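
    The "predators at a water hole" strategy described above is template-based information extraction: wait for a fact to recur in a fixed linguistic pattern, then convert each match into a structured proposition. A minimal Python sketch of the idea (the regex and the example sentence are illustrative assumptions, not the authors' actual system):

      import re

      # One hand-written template for the "NAME1 born at NAME2 in DATE" pattern
      # discussed in the abstract; real systems use many such templates.
      BORN = re.compile(
          r"(?P<person>[A-Z][a-z]+(?: [A-Z][a-z]+)*) was born at "
          r"(?P<place>[A-Z][a-z]+(?: [A-Z][a-z]+)*) in (?P<date>\d{3,4})"
      )

      def birth_facts(text):
          # Convert matching sentences into (person, place, date) propositions.
          return [(m["person"], m["place"], m["date"]) for m in BORN.finditer(text)]

      # Invented example sentence:
      print(birth_facts("Carl Philipp Emanuel Bach was born at Weimar in 1714."))
      # -> [('Carl Philipp Emanuel Bach', 'Weimar', '1714')]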
  5. Freyberg, L.: Die Lesbarkeit der Welt [The readability of the world] : review of 'The Concept of Information in Library and Information Science. A Field in Search of Its Boundaries: 8 Short Comments Concerning Information'. In: Cybernetics and Human Knowing. Vol. 22 (2015), 1, 57-80. Short articles by Luciano Floridi, Søren Brier, Torkild Thellefsen, Martin Thellefsen, Bent Sørensen, Birger Hjørland, Brenda Dervin, Ken Herold, Per Hasle and Michael Buckland (2016) 0.01
    0.005551969 = product of:
      0.027759846 = sum of:
        0.027759846 = weight(_text_:22 in 3335) [ClassicSimilarity], result of:
          0.027759846 = score(doc=3335,freq=2.0), product of:
            0.17937298 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051222645 = queryNorm
            0.15476047 = fieldWeight in 3335, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03125 = fieldNorm(doc=3335)
      0.2 = coord(1/5)
    
