Search (8 results, page 1 of 1)

  • language_ss:"e"
  • theme_ss:"Automatisches Indexieren"
  • type_ss:"el"
  (active facet filters; a sketch of the corresponding filter queries follows this list)
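
These filters restrict the hit list by language ("e"), by the subject heading "Automatisches Indexieren" (automatic indexing), and by document type ("el"). The *_ss field names and the ClassicSimilarity score explanations further down suggest a Solr/Lucene index; under that assumption, the same restriction could be expressed as Solr filter queries roughly as in the following sketch. The host, core name, and row count are placeholders, not the real endpoint of this database:

    import requests

    # Sketch only: Solr is inferred, not confirmed; the endpoint and core name
    # are placeholders. Each active facet becomes one fq (filter query) parameter.
    SOLR_SELECT = "http://localhost:8983/solr/literature/select"

    params = {
        "q": "*:*",
        "fq": [                                  # mirrors the three filters above
            'language_ss:"e"',
            'theme_ss:"Automatisches Indexieren"',
            'type_ss:"el"',
        ],
        "rows": 10,
        "debugQuery": "true",                    # asks for per-document score explanations
        "wt": "json",
    }

    response = requests.get(SOLR_SELECT, params=params)
    print(response.json()["response"]["numFound"])   # the page above reports 8 hits

Filter queries restrict the result set without contributing to relevance scoring, which is consistent with the explanations below: only the query terms, not the facet restrictions, appear in the score trees.
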
  1. Donahue, J.; Hendricks, L.A.; Guadarrama, S.; Rohrbach, M.; Venugopalan, S.; Saenko, K.; Darrell, T.: Long-term recurrent convolutional networks for visual recognition and description (2014) 0.00
    0.0010635054 = product of:
      0.0021270108 = sum of:
        0.0021270108 = product of:
          0.0042540217 = sum of:
            0.0042540217 = weight(_text_:s in 1873) [ClassicSimilarity], result of:
              0.0042540217 = score(doc=1873,freq=4.0), product of:
                0.05008241 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046063907 = queryNorm
                0.08494043 = fieldWeight in 1873, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1873)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
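
Each hit is followed by Lucene's ClassicSimilarity explanation of its (rounded) score: for the single matching term, fieldWeight = tf * idf * fieldNorm and queryWeight = idf * queryNorm are multiplied, and each coord(1/2) factor halves the result because only one of two query clauses matched at that level. A minimal Python sketch, using only the numbers printed in the tree for result 1 (doc 1873), reproduces the final value; the term "_text_:s" and the nested Boolean structure are taken from that output, and small last-digit differences are possible because Lucene computes in single precision:

    import math

    # Values copied from the explain tree for result 1 (doc 1873).
    freq = 4.0                 # termFreq of "_text_:s"
    idf = 1.0872376            # idf(docFreq=40523, maxDocs=44218) = 1 + ln(maxDocs/(docFreq+1))
    query_norm = 0.046063907
    field_norm = 0.0390625     # encoded length norm of the matched field

    tf = math.sqrt(freq)                      # ClassicSimilarity tf -> 2.0
    query_weight = idf * query_norm           # -> ~0.05008241
    field_weight = tf * idf * field_norm      # -> ~0.08494043
    term_score = query_weight * field_weight  # -> ~0.0042540217

    # Two coord(1/2) factors: at each BooleanQuery level only 1 of 2 clauses matched.
    final_score = term_score * 0.5 * 0.5
    print(round(final_score, 10))             # ~0.0010635054, the score shown above

The same arithmetic with freq = 2.0 (tf ~ 1.4142135) and the document-specific fieldNorm reproduces the trees of results 2-8; the scores differ only through term frequency and field length.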
    
  2. Junger, U.: Can indexing be automated? : the example of the Deutsche Nationalbibliothek (2012) 0.00
    0.0010528166 = product of:
      0.0021056333 = sum of:
        0.0021056333 = product of:
          0.0042112665 = sum of:
            0.0042112665 = weight(_text_:s in 1717) [ClassicSimilarity], result of:
              0.0042112665 = score(doc=1717,freq=2.0), product of:
                0.05008241 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046063907 = queryNorm
                0.08408674 = fieldWeight in 1717, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1717)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Cataloging & Classification Quarterly 52(2014) no.1, pp.102-109
  3. Mongin, L.; Fu, Y.Y.; Mostafa, J.: Open Archives data service prototype and automated subject indexing using D-Lib archive content as a testbed (2003) 0.00
    9.0241426E-4 = product of:
      0.0018048285 = sum of:
        0.0018048285 = product of:
          0.003609657 = sum of:
            0.003609657 = weight(_text_:s in 1167) [ClassicSimilarity], result of:
              0.003609657 = score(doc=1167,freq=2.0), product of:
                0.05008241 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046063907 = queryNorm
                0.072074346 = fieldWeight in 1167, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1167)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    D-Lib magazine. 9(2003) no.12, x p.
  4. Vinyals, O.; Toshev, A.; Bengio, S.; Erhan, D.: Show and tell : a neural image caption generator (2014) 0.00
    7.520119E-4 = product of:
      0.0015040238 = sum of:
        0.0015040238 = product of:
          0.0030080476 = sum of:
            0.0030080476 = weight(_text_:s in 1869) [ClassicSimilarity], result of:
              0.0030080476 = score(doc=1869,freq=2.0), product of:
                0.05008241 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046063907 = queryNorm
                0.060061958 = fieldWeight in 1869, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1869)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  5. Husevåg, A.-S.R.: Named entities in indexing : a case study of TV subtitles and metadata records (2016) 0.00
    7.520119E-4 = product of:
      0.0015040238 = sum of:
        0.0015040238 = product of:
          0.0030080476 = sum of:
            0.0030080476 = weight(_text_:s in 3105) [ClassicSimilarity], result of:
              0.0030080476 = score(doc=3105,freq=2.0), product of:
                0.05008241 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046063907 = queryNorm
                0.060061958 = fieldWeight in 3105, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3105)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Pages
    pp.48-58
  6. Tavakolizadeh-Ravari, M.: Analysis of the long term dynamics in thesaurus developments and its consequences (2017) 0.00
    6.016095E-4 = product of:
      0.001203219 = sum of:
        0.001203219 = product of:
          0.002406438 = sum of:
            0.002406438 = weight(_text_:s in 3081) [ClassicSimilarity], result of:
              0.002406438 = score(doc=3081,freq=2.0), product of:
                0.05008241 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046063907 = queryNorm
                0.048049565 = fieldWeight in 3081, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3081)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Pages
    128 p.
  7. Vinyals, O.; Toshev, A.; Bengio, S.; Erhan, D.: A picture is worth a thousand (coherent) words : building a natural description of images (2014) 0.00
    5.264083E-4 = product of:
      0.0010528166 = sum of:
        0.0010528166 = product of:
          0.0021056333 = sum of:
            0.0021056333 = weight(_text_:s in 1874) [ClassicSimilarity], result of:
              0.0021056333 = score(doc=1874,freq=2.0), product of:
                0.05008241 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046063907 = queryNorm
                0.04204337 = fieldWeight in 1874, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1874)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  8. Markoff, J.: Researchers announce advance in image-recognition software (2014) 0.00
    3.7600595E-4 = product of:
      7.520119E-4 = sum of:
        7.520119E-4 = product of:
          0.0015040238 = sum of:
            0.0015040238 = weight(_text_:s in 1875) [ClassicSimilarity], result of:
              0.0015040238 = score(doc=1875,freq=2.0), product of:
                0.05008241 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046063907 = queryNorm
                0.030030979 = fieldWeight in 1875, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1875)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Computer vision specialists said that despite the improvements, these software systems had made only limited progress toward the goal of digitally duplicating human vision and, even more elusive, understanding. "I don't know that I would say this is 'understanding' in the sense we want," said John R. Smith, a senior manager at I.B.M.'s T.J. Watson Research Center in Yorktown Heights, N.Y. "I think even the ability to generate language here is very limited." But the Google and Stanford teams said that they expect to see significant increases in accuracy as they improve their software and train these programs with larger sets of annotated images. A research group led by Tamara L. Berg, a computer scientist at the University of North Carolina at Chapel Hill, is training a neural network with one million images annotated by humans. "You're trying to tell the story behind the image," she said. "A natural scene will be very complex, and you want to pick out the most important objects in the image."