Search (4 results, page 1 of 1)

  • theme_ss:"Automatisches Indexieren"
  • type_ss:"el"
  1. Kiros, R.; Salakhutdinov, R.; Zemel, R.S.: Unifying visual-semantic embeddings with multimodal neural language models (2014) 0.01
    0.009618433 = product of:
      0.019236866 = sum of:
        0.019236866 = product of:
          0.038473733 = sum of:
            0.038473733 = weight(_text_:r in 1871) [ClassicSimilarity], result of:
              0.038473733 = score(doc=1871,freq=4.0), product of:
                0.12397416 = queryWeight, product of:
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.037451506 = queryNorm
                0.3103367 = fieldWeight in 1871, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1871)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
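The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output: the leaf score is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = √termFreq × idf × fieldNorm, and each coord(1/2) halves the score because only one of the two query terms matched. A minimal sketch reproducing the numbers of result 1 (term "r", doc 1871); the function name is illustrative, not part of any Lucene API:

```python
import math

def classic_similarity_leaf(term_freq, idf, query_norm, field_norm):
    """One leaf of a Lucene ClassicSimilarity explain tree:
    score = queryWeight * fieldWeight
          = (idf * queryNorm) * (sqrt(termFreq) * idf * fieldNorm)
    """
    query_weight = idf * query_norm            # 0.12397416 in the tree above
    field_weight = math.sqrt(term_freq) * idf * field_norm
    return query_weight * field_weight

# Values copied from the explain tree of result 1:
idf = 3.3102584          # idf(docFreq=4387, maxDocs=44218)
query_norm = 0.037451506
field_norm = 0.046875

leaf = classic_similarity_leaf(4.0, idf, query_norm, field_norm)
# Two nested coord(1/2) factors turn the leaf score into the final score:
final = leaf * 0.5 * 0.5
print(leaf, final)   # ≈ 0.038473733 and ≈ 0.009618433
```

The same formula reproduces the other three trees; only termFreq, idf, and fieldNorm change per document and term.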
    
  2. Beckmann, R.; Hinrichs, I.; Janßen, M.; Milmeister, G.; Schäuble, P.: Der Digitale Assistent DA-3 : Eine Plattform für die Inhaltserschließung (2019) 0.01
    0.0068012597 = product of:
      0.013602519 = sum of:
        0.013602519 = product of:
          0.027205039 = sum of:
            0.027205039 = weight(_text_:r in 5408) [ClassicSimilarity], result of:
              0.027205039 = score(doc=5408,freq=2.0), product of:
                0.12397416 = queryWeight, product of:
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.037451506 = queryNorm
                0.2194412 = fieldWeight in 5408, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5408)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  3. Junger, U.; Schwens, U.: Die inhaltliche Erschließung des schriftlichen kulturellen Erbes auf dem Weg in die Zukunft : Automatische Vergabe von Schlagwörtern in der Deutschen Nationalbibliothek (2017) 0.01
    0.006342702 = product of:
      0.012685404 = sum of:
        0.012685404 = product of:
          0.025370808 = sum of:
            0.025370808 = weight(_text_:22 in 3780) [ClassicSimilarity], result of:
              0.025370808 = score(doc=3780,freq=2.0), product of:
                0.13114879 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037451506 = queryNorm
                0.19345059 = fieldWeight in 3780, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3780)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    19.8.2017 9:24:22
  4. Markoff, J.: Researchers announce advance in image-recognition software (2014) 0.00
    0.0028338581 = product of:
      0.0056677163 = sum of:
        0.0056677163 = product of:
          0.0113354325 = sum of:
            0.0113354325 = weight(_text_:r in 1875) [ClassicSimilarity], result of:
              0.0113354325 = score(doc=1875,freq=2.0), product of:
                0.12397416 = queryWeight, product of:
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.037451506 = queryNorm
                0.09143383 = fieldWeight in 1875, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1875)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Computer vision specialists said that despite the improvements, these software systems had made only limited progress toward the goal of digitally duplicating human vision and, even more elusive, understanding. "I don't know that I would say this is 'understanding' in the sense we want," said John R. Smith, a senior manager at I.B.M.'s T.J. Watson Research Center in Yorktown Heights, N.Y. "I think even the ability to generate language here is very limited." But the Google and Stanford teams said that they expected to see significant increases in accuracy as they improved their software and trained these programs with larger sets of annotated images. A research group led by Tamara L. Berg, a computer scientist at the University of North Carolina at Chapel Hill, is training a neural network with one million images annotated by humans. "You're trying to tell the story behind the image," she said. "A natural scene will be very complex, and you want to pick out the most important objects in the image."