Search (544 results, page 28 of 28)

  • language_ss:"e"
  • type_ss:"el"
  1. Heery, R.; Carpenter, L.; Day, M.: Renardus project developments and the wider digital library context (2001) 0.00
    4.917977E-4 = product of:
      0.0014753931 = sum of:
        0.0014753931 = weight(_text_:s in 1219) [ClassicSimilarity], result of:
          0.0014753931 = score(doc=1219,freq=2.0), product of:
            0.049129035 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.045187026 = queryNorm
            0.030030979 = fieldWeight in 1219, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1219)
      0.33333334 = coord(1/3)
    
    Source
    D-Lib magazine. 7(2001) no.4, xx S
  2. Markoff, J.: Researchers announce advance in image-recognition software (2014) 0.00
    4.917977E-4 = product of:
      0.0014753931 = sum of:
        0.0014753931 = weight(_text_:s in 1875) [ClassicSimilarity], result of:
          0.0014753931 = score(doc=1875,freq=2.0), product of:
            0.049129035 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.045187026 = queryNorm
            0.030030979 = fieldWeight in 1875, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1875)
      0.33333334 = coord(1/3)
    
    Content
    Computer vision specialists said that despite the improvements, these software systems had made only limited progress toward the goal of digitally duplicating human vision and, even more elusive, understanding. "I don't know that I would say this is 'understanding' in the sense we want," said John R. Smith, a senior manager at I.B.M.'s T.J. Watson Research Center in Yorktown Heights, N.Y. "I think even the ability to generate language here is very limited." But the Google and Stanford teams said that they expected to see significant increases in accuracy as they improve their software and train these programs with larger sets of annotated images. A research group led by Tamara L. Berg, a computer scientist at the University of North Carolina at Chapel Hill, is training a neural network with one million images annotated by humans. "You're trying to tell the story behind the image," she said. "A natural scene will be very complex, and you want to pick out the most important objects in the image."
  3. Hawking, S.: This is the most dangerous time for our planet (2016) 0.00
    4.917977E-4 = product of:
      0.0014753931 = sum of:
        0.0014753931 = weight(_text_:s in 3273) [ClassicSimilarity], result of:
          0.0014753931 = score(doc=3273,freq=2.0), product of:
            0.049129035 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.045187026 = queryNorm
            0.030030979 = fieldWeight in 3273, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3273)
      0.33333334 = coord(1/3)
    
  4. Dodge, M.: A map of Yahoo! (2000) 0.00
    3.9343812E-4 = product of:
      0.0011803143 = sum of:
        0.0011803143 = weight(_text_:s in 1555) [ClassicSimilarity], result of:
          0.0011803143 = score(doc=1555,freq=2.0), product of:
            0.049129035 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.045187026 = queryNorm
            0.024024783 = fieldWeight in 1555, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.015625 = fieldNorm(doc=1555)
      0.33333334 = coord(1/3)
    
    Content
    "Introduction Yahoo! is the undisputed king of the Web directories, providing one of the key information navigation tools on the Internet. It has maintained its popularity over many Internet-years as the most visited Web site, against intense competition. This is because it does a good job of shifting, cataloguing and organising the Web [1] . But what would a map of Yahoo!'s hierarchical classification of the Web look like? Would an interactive map of Yahoo!, rather than the conventional listing of sites, be more useful as navigational tool? We can get some idea what a map of Yahoo! might be like by taking a look at ET-Map, a prototype developed by Hsinchun Chen and colleagues in the Artificial Intelligence Lab [2] at the University of Arizona. ET-Map was developed in 1995 as part of innovative research in automatic Internet homepage categorization and it charts a large chunk of Yahoo!, from the entertainment section representing some 110,000 different Web links. The map is a two-dimensional, multi-layered category map; its aim is to provide an intuitive visual information browsing tool. ET-Map can be browsed interactively, explored and queried, using the familiar point-and-click navigation style of the Web to find information of interest.

Types

  • a 296
  • i 38
  • s 17
  • m 12
  • n 11
  • x 10
  • p 8
  • r 8
  • b 6
