Search (8 results, page 1 of 1)

  • theme_ss:"Automatisches Indexieren"
  • type_ss:"el"
  1. Wiesenmüller, H.: Maschinelle Indexierung am Beispiel der DNB : Analyse und Entwicklungsmöglichkeiten (2018) 0.00
    0.0022197026 = product of:
      0.0066591077 = sum of:
        0.0066591077 = product of:
          0.03329554 = sum of:
            0.03329554 = weight(_text_:28 in 5209) [ClassicSimilarity], result of:
              0.03329554 = score(doc=5209,freq=2.0), product of:
                0.12017813 = queryWeight, product of:
                  3.5822632 = idf(docFreq=3342, maxDocs=44218)
                  0.0335481 = queryNorm
                0.27705154 = fieldWeight in 5209, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5822632 = idf(docFreq=3342, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5209)
          0.2 = coord(1/5)
      0.33333334 = coord(1/3)
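The breakdown above follows Lucene's ClassicSimilarity TF-IDF formula: tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), fieldWeight = tf · idf · fieldNorm, queryWeight = idf · queryNorm, and the product is scaled by the two coord factors. A minimal sketch reproducing the numbers from this tree (inputs copied from the explain output; the formulas are the standard ClassicSimilarity definitions, not part of this page):

```python
import math

# Inputs taken from the explain tree for result 1 (doc 5209, term "28")
freq = 2.0           # termFreq of "28" within the field
doc_freq = 3342      # number of documents containing the term
max_docs = 44218     # documents in the index
query_norm = 0.0335481
field_norm = 0.0546875

tf = math.sqrt(freq)                              # 1.4142135
idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # 3.5822632
query_weight = idf * query_norm                   # 0.12017813
field_weight = tf * idf * field_norm              # 0.27705154
raw = query_weight * field_weight                 # 0.03329554
score = raw * (1 / 5) * (1 / 3)                   # coord(1/5) * coord(1/3)

print(score)  # ~0.0022197026, matching the top line of the tree
```

The two coord factors indicate that only one of five query terms matched in this field and one of three query clauses matched overall.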
    
    Date
    13.12.2018 13:34:28
  2. Mao, J.; Xu, W.; Yang, Y.; Wang, J.; Yuille, A.L.: Explain images with multimodal recurrent neural networks (2014) 0.00
    0.0019026021 = score for weight(_text_:28 in 1557) [ClassicSimilarity] after coord(1/5) and coord(1/3); fieldNorm=0.046875, breakdown analogous to result 1
    
    Abstract
    In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel sentence descriptions to explain the content of images. It directly models the probability distribution of generating a word given previous words and the image. Image descriptions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on three benchmark datasets (IAPR TC-12 [8], Flickr 8K [28], and Flickr 30K [13]). Our model outperforms the state-of-the-art generative method. In addition, the m-RNN model can be applied to retrieval tasks for retrieving images or sentences, and achieves significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval.
  3. Schöneberg, U.; Gödert, W.: Erschließung mathematischer Publikationen mittels linguistischer Verfahren (2012) 0.00
    0.0018346256 = score for weight(_text_:29 in 1055) [ClassicSimilarity] after coord(1/5) and coord(1/3); idf=3.5176873, fieldNorm=0.046875, breakdown analogous to result 1
    
    Date
    12. 9.2013 12:29:05
  4. Banerjee, K.; Johnson, M.: Improving access to archival collections with automated entity extraction (2015) 0.00
    0.0018346256 = score for weight(_text_:29 in 2144) [ClassicSimilarity] after coord(1/5) and coord(1/3); idf=3.5176873, fieldNorm=0.046875, breakdown analogous to result 1
    
    Source
    Code4Lib journal. Issue 29 (2015) [http://journal.code4lib.org/issues/issues/issue29]
  5. Vinyals, O.; Toshev, A.; Bengio, S.; Erhan, D.: Show and tell : a neural image caption generator (2014) 0.00
    0.001585502 = score for weight(_text_:28 in 1869) [ClassicSimilarity] after coord(1/5) and coord(1/3); fieldNorm=0.0390625, breakdown analogous to result 1
    
    Abstract
    Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.
  6. Husevag, A.-S.R.: Named entities in indexing : a case study of TV subtitles and metadata records (2016) 0.00
    0.001585502 = score for weight(_text_:28 in 3105) [ClassicSimilarity] after coord(1/5) and coord(1/3); fieldNorm=0.0390625, breakdown analogous to result 1
    
    Date
    16. 9.2016 19:00:28
  7. Junger, U.; Schwens, U.: Die inhaltliche Erschließung des schriftlichen kulturellen Erbes auf dem Weg in die Zukunft : Automatische Vergabe von Schlagwörtern in der Deutschen Nationalbibliothek (2017) 0.00
    0.0015151015 = score for weight(_text_:22 in 3780) [ClassicSimilarity] after coord(1/5) and coord(1/3); idf=3.5018296, fieldNorm=0.0390625, breakdown analogous to result 1
    
    Date
    19. 8.2017 9:24:22
  8. Tavakolizadeh-Ravari, M.: Analysis of the long term dynamics in thesaurus developments and its consequences (2017) 0.00
    0.0012684014 = score for weight(_text_:28 in 3081) [ClassicSimilarity] after coord(1/5) and coord(1/3); fieldNorm=0.03125, breakdown analogous to result 1
    
    Date
    24. 8.2016 13:45:28