Search (44 results, page 1 of 3)

  • theme_ss:"Automatisches Indexieren"
  • type_ss:"a"
  1. Probst, M.; Mittelbach, J.: Maschinelle Indexierung in der Sacherschließung wissenschaftlicher Bibliotheken (2006) 0.09
    0.08764166 = product of:
      0.26292497 = sum of:
        0.24020076 = weight(_text_:168 in 1755) [ClassicSimilarity], result of:
          0.24020076 = score(doc=1755,freq=4.0), product of:
            0.28385672 = queryWeight, product of:
              6.769634 = idf(docFreq=137, maxDocs=44218)
              0.041930884 = queryNorm
            0.8462042 = fieldWeight in 1755, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.769634 = idf(docFreq=137, maxDocs=44218)
              0.0625 = fieldNorm(doc=1755)
        0.022724222 = product of:
          0.045448445 = sum of:
            0.045448445 = weight(_text_:22 in 1755) [ClassicSimilarity], result of:
              0.045448445 = score(doc=1755,freq=2.0), product of:
                0.1468348 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041930884 = queryNorm
                0.30952093 = fieldWeight in 1755, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1755)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Content
    Cf. http://www.bibliothek-saur.de/2006_2/168-176.pdf
    Date
    22.3.2008 12:35:19
    Source
    Bibliothek: Forschung und Praxis. 30(2006) H.2, S.168-176
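The nested score trace above is Lucene's ClassicSimilarity (TF-IDF) explain output; the listed factors can be recombined to reproduce the displayed numbers. A minimal sketch in Python, using the values from result 1 (idf = log(maxDocs/(docFreq+1)) + 1 and tf = sqrt(freq) are ClassicSimilarity's defaults):

```python
import math

# ClassicSimilarity factors for the term "168" in doc 1755,
# copied from the explain tree of result 1 above.
doc_freq, max_docs = 137, 44218
freq = 4.0
query_norm = 0.041930884
field_norm = 0.0625  # length norm, quantized to one byte by Lucene

idf = math.log(max_docs / (doc_freq + 1)) + 1   # 6.769634
tf = math.sqrt(freq)                            # 2.0
query_weight = idf * query_norm                 # 0.28385672
field_weight = tf * idf * field_norm            # 0.8462042
term_score = query_weight * field_weight        # 0.24020076

# Final document score: sum of the matching clause scores, scaled by
# coord(2/6) because 2 of the 6 query clauses matched this document.
clause_sum = term_score + 0.022724222           # second clause ("22") above
final_score = clause_sum * (2 / 6)              # 0.08764166

print(round(term_score, 8), round(final_score, 8))
```

The same recipe checks any entry in the list: multiply queryWeight by fieldWeight per term, sum the matching clauses, and scale by the coord factor.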
  2. Zimmermann, H.H.: Wortrelationierung in der Sprachtechnik : Stilhilfen, Retrievalhilfen, Übersetzungshilfen (1992) 0.04
    0.042738065 = product of:
      0.2564284 = sum of:
        0.2564284 = weight(_text_:kognitive in 1372) [ClassicSimilarity], result of:
          0.2564284 = score(doc=1372,freq=2.0), product of:
            0.28477833 = queryWeight, product of:
              6.7916126 = idf(docFreq=134, maxDocs=44218)
              0.041930884 = queryNorm
            0.90044916 = fieldWeight in 1372, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7916126 = idf(docFreq=134, maxDocs=44218)
              0.09375 = fieldNorm(doc=1372)
      0.16666667 = coord(1/6)
    
    Source
    Kognitive Ansätze zum Ordnen und Darstellen von Wissen. 2. Tagung der Deutschen ISKO Sektion einschl. der Vorträge des Workshops "Thesauri als Werkzeuge der Sprachtechnologie", Weilburg, 15.-18.10.1991
  3. Fuhr, N.; Niewelt, B.: Ein Retrievaltest mit automatisch indexierten Dokumenten (1984) 0.04
    0.040393855 = product of:
      0.12118156 = sum of:
        0.08141418 = weight(_text_:b in 262) [ClassicSimilarity], result of:
          0.08141418 = score(doc=262,freq=2.0), product of:
            0.14855953 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.041930884 = queryNorm
            0.54802394 = fieldWeight in 262, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.109375 = fieldNorm(doc=262)
        0.03976739 = product of:
          0.07953478 = sum of:
            0.07953478 = weight(_text_:22 in 262) [ClassicSimilarity], result of:
              0.07953478 = score(doc=262,freq=2.0), product of:
                0.1468348 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041930884 = queryNorm
                0.5416616 = fieldWeight in 262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=262)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Date
    20.10.2000 12:22:23
  4. Kutschekmanesch, S.; Lutes, B.; Moelle, K.; Thiel, U.; Tzeras, K.: Automated multilingual indexing : a synthesis of rule-based and thesaurus-based methods (1998) 0.03
    0.028852757 = product of:
      0.08655827 = sum of:
        0.05815299 = weight(_text_:b in 4157) [ClassicSimilarity], result of:
          0.05815299 = score(doc=4157,freq=2.0), product of:
            0.14855953 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.041930884 = queryNorm
            0.3914457 = fieldWeight in 4157, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.078125 = fieldNorm(doc=4157)
        0.028405279 = product of:
          0.056810558 = sum of:
            0.056810558 = weight(_text_:22 in 4157) [ClassicSimilarity], result of:
              0.056810558 = score(doc=4157,freq=2.0), product of:
                0.1468348 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041930884 = queryNorm
                0.38690117 = fieldWeight in 4157, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4157)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Source
    Information und Märkte: 50. Deutscher Dokumentartag 1998, Kongreß der Deutschen Gesellschaft für Dokumentation e.V. (DGD), Rheinische Friedrich-Wilhelms-Universität Bonn, 22.-24. September 1998. Hrsg. von Marlies Ockenfeld u. Gerhard J. Mantwill
  5. Martins, A.L.; Souza, R.R.; Ribeiro de Mello, H.: The use of noun phrases in information retrieval : proposing a mechanism for automatic classification (2014) 0.01
    0.014752803 = product of:
      0.04425841 = sum of:
        0.0328963 = weight(_text_:b in 1441) [ClassicSimilarity], result of:
          0.0328963 = score(doc=1441,freq=4.0), product of:
            0.14855953 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.041930884 = queryNorm
            0.22143513 = fieldWeight in 1441, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.03125 = fieldNorm(doc=1441)
        0.011362111 = product of:
          0.022724222 = sum of:
            0.022724222 = weight(_text_:22 in 1441) [ClassicSimilarity], result of:
              0.022724222 = score(doc=1441,freq=2.0), product of:
                0.1468348 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041930884 = queryNorm
                0.15476047 = fieldWeight in 1441, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1441)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    This paper presents research on syntactic structures known as noun phrases (NPs) applied to increase the effectiveness and efficiency of document classification mechanisms. Our hypothesis is that NPs can be used instead of single words as semantic aggregators, reducing the number of words the classification system must handle without losing semantic coverage and thereby increasing its efficiency. The experiment divided the document classification process into three phases: a) NP preprocessing; b) system training; and c) classification experiments. In the first step, a corpus of digitized texts was submitted to a natural language processing platform for part-of-speech tagging, and then Perl scripts belonging to the PALAVRAS package were used to extract the noun phrases. The preprocessing also involved a) removing low-meaning NP pre-modifiers, such as quantifiers; b) identifying synonyms and substituting them with common hypernyms; and c) stemming the relevant words contained in the NPs, for similarity checking against other NPs. The first tests with the resulting documents demonstrated the approach's effectiveness: we compared the structural similarity of the documents before and after the preprocessing steps of phase one, and the texts maintained consistency with the originals and remained readable. The second phase involves submitting the modified documents to an SVM algorithm to identify clusters and classify the documents, with the classification rules established using a machine learning approach. Finally, tests will be conducted to check the effectiveness of the whole process.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  6. Gaus, W.; Kaluscha, R.: Maschinelle inhaltliche Erschließung von Arztbriefen und Auswertung von Reha-Entlassungsberichten (2006) 0.01
    0.014153965 = product of:
      0.08492379 = sum of:
        0.08492379 = weight(_text_:168 in 6078) [ClassicSimilarity], result of:
          0.08492379 = score(doc=6078,freq=2.0), product of:
            0.28385672 = queryWeight, product of:
              6.769634 = idf(docFreq=137, maxDocs=44218)
              0.041930884 = queryNorm
            0.29917836 = fieldWeight in 6078, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.769634 = idf(docFreq=137, maxDocs=44218)
              0.03125 = fieldNorm(doc=6078)
      0.16666667 = coord(1/6)
    
    Pages
    S.159-168
  7. Thirion, B.; Leroy, J.P.; Baudic, F.; Douyère, M.; Piot, J.; Darmoni, S.J.: SDI selecting, describing, and indexing : did you mean automatically? (2001) 0.01
    0.0116305975 = product of:
      0.06978358 = sum of:
        0.06978358 = weight(_text_:b in 6198) [ClassicSimilarity], result of:
          0.06978358 = score(doc=6198,freq=2.0), product of:
            0.14855953 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.041930884 = queryNorm
            0.46973482 = fieldWeight in 6198, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.09375 = fieldNorm(doc=6198)
      0.16666667 = coord(1/6)
    
  8. Greiner-Petter, A.; Schubotz, M.; Cohl, H.S.; Gipp, B.: Semantic preserving bijective mappings for expressions involving special functions between computer algebra systems and document preparation systems (2019) 0.01
    0.011541101 = product of:
      0.034623303 = sum of:
        0.023261193 = weight(_text_:b in 5499) [ClassicSimilarity], result of:
          0.023261193 = score(doc=5499,freq=2.0), product of:
            0.14855953 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.041930884 = queryNorm
            0.15657827 = fieldWeight in 5499, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.03125 = fieldNorm(doc=5499)
        0.011362111 = product of:
          0.022724222 = sum of:
            0.022724222 = weight(_text_:22 in 5499) [ClassicSimilarity], result of:
              0.022724222 = score(doc=5499,freq=2.0), product of:
                0.1468348 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041930884 = queryNorm
                0.15476047 = fieldWeight in 5499, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5499)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Date
    20.1.2015 18:30:22
  9. Wiesenmüller, H.: DNB-Sacherschließung : Neues für die Reihen A und B (2019) 0.01
    0.010072393 = product of:
      0.06043436 = sum of:
        0.06043436 = weight(_text_:b in 5212) [ClassicSimilarity], result of:
          0.06043436 = score(doc=5212,freq=6.0), product of:
            0.14855953 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.041930884 = queryNorm
            0.40680233 = fieldWeight in 5212, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.046875 = fieldNorm(doc=5212)
      0.16666667 = coord(1/6)
    
    Abstract
    "Every few years the library community is confronted with changes in subject indexing at the German National Library. Many will surely still remember the cuts of 2014 for series A: since then, guidebooks, language dictionaries, travel guides and cookbooks, among others, have no longer been indexed with subject headings (cf. the DNB concept of 2014). 2017 brought the introduction of machine indexing for series B and H, with the simultaneous loss of in-depth DDC classification (cf. the DNB information of 2017). The open question since then has been what would happen to series A. As of a few days ago, the answer can be read on the DNB website. (Incidentally, it is to be feared that many links in this blog post will stop working before long, since a relaunch of the DNB website has been announced. As last time, there will presumably again be no redirects from the old URLs to the new ones.)"
    Source
    https://www.basiswissen-rda.de/dnb-sacherschliessung-reihen-a-und-b/
  10. Thönssen, B.: Automatische Indexierung und Schnittstellen zu Thesauri (1988) 0.01
    0.009692165 = product of:
      0.05815299 = sum of:
        0.05815299 = weight(_text_:b in 30) [ClassicSimilarity], result of:
          0.05815299 = score(doc=30,freq=2.0), product of:
            0.14855953 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.041930884 = queryNorm
            0.3914457 = fieldWeight in 30, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.078125 = fieldNorm(doc=30)
      0.16666667 = coord(1/6)
    
  11. Biebricher, P.; Fuhr, N.; Niewelt, B.: Der AIR-Retrievaltest (1986) 0.01
    0.009692165 = product of:
      0.05815299 = sum of:
        0.05815299 = weight(_text_:b in 4040) [ClassicSimilarity], result of:
          0.05815299 = score(doc=4040,freq=2.0), product of:
            0.14855953 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.041930884 = queryNorm
            0.3914457 = fieldWeight in 4040, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.078125 = fieldNorm(doc=4040)
      0.16666667 = coord(1/6)
    
  12. Gil-Leiva, I.; Munoz, J.V.R.: Análisis de los descriptores de diferentes áreas del conocimiento indizados en bases de datos del CSIC : Aplicación a la indización automática (1997) 0.01
    0.008016243 = product of:
      0.048097454 = sum of:
        0.048097454 = product of:
          0.09619491 = sum of:
            0.09619491 = weight(_text_:psychologie in 2637) [ClassicSimilarity], result of:
              0.09619491 = score(doc=2637,freq=2.0), product of:
                0.24666919 = queryWeight, product of:
                  5.8827567 = idf(docFreq=334, maxDocs=44218)
                  0.041930884 = queryNorm
                0.38997537 = fieldWeight in 2637, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.8827567 = idf(docFreq=334, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2637)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Field
    Psychologie
  13. Voorhees, E.M.: Implementing agglomerative hierarchic clustering algorithms for use in document retrieval (1986) 0.01
    0.007574741 = product of:
      0.045448445 = sum of:
        0.045448445 = product of:
          0.09089689 = sum of:
            0.09089689 = weight(_text_:22 in 402) [ClassicSimilarity], result of:
              0.09089689 = score(doc=402,freq=2.0), product of:
                0.1468348 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041930884 = queryNorm
                0.61904186 = fieldWeight in 402, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=402)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Source
    Information processing and management. 22(1986) no.6, S.465-476
  14. Krutulis, J.D.; Jacob, E.K.: A theoretical model for the study of emergent structure in adaptive information networks (1995) 0.01
    0.006784515 = product of:
      0.04070709 = sum of:
        0.04070709 = weight(_text_:b in 3353) [ClassicSimilarity], result of:
          0.04070709 = score(doc=3353,freq=2.0), product of:
            0.14855953 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.041930884 = queryNorm
            0.27401197 = fieldWeight in 3353, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3353)
      0.16666667 = coord(1/6)
    
    Source
    Connectedness: information, systems, people, organizations. Proceedings of CAIS/ACSI 95, the proceedings of the 23rd Annual Conference of the Canadian Association for Information Science. Ed. by Hope A. Olson and Denis B. Ward
  15. Siebenkäs, A.; Markscheffel, B.: Conception of a workflow for the semi-automatic construction of a thesaurus for the German printing industry (2015) 0.01
    0.006784515 = product of:
      0.04070709 = sum of:
        0.04070709 = weight(_text_:b in 2091) [ClassicSimilarity], result of:
          0.04070709 = score(doc=2091,freq=2.0), product of:
            0.14855953 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.041930884 = queryNorm
            0.27401197 = fieldWeight in 2091, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2091)
      0.16666667 = coord(1/6)
    
  16. Wiesenmüller, H.: Maschinelle Indexierung am Beispiel der DNB : Analyse und Entwicklungsmöglichkeiten (2018) 0.01
    0.006784515 = product of:
      0.04070709 = sum of:
        0.04070709 = weight(_text_:b in 5209) [ClassicSimilarity], result of:
          0.04070709 = score(doc=5209,freq=2.0), product of:
            0.14855953 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.041930884 = queryNorm
            0.27401197 = fieldWeight in 5209, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5209)
      0.16666667 = coord(1/6)
    
    Abstract
    The article examines the results of the procedure for automatic subject-heading assignment used at the German National Library (DNB). Since 2017 this has also been applied to print editions in series B and H of the German National Bibliography. The central problem areas are presented and illustrated with examples - for instance, that not every word occurring in a table of contents actually expresses a thematic aspect, and that the software very often fails to recognize corporate bodies and other named entities. The machine-generated results are at present highly unsatisfactory. Possible improvements and sensible strategies are considered.
  17. Hlava, M.M.K.: Automatic indexing : comparing rule-based and statistics-based indexing systems (2005) 0.01
    0.006627898 = product of:
      0.03976739 = sum of:
        0.03976739 = product of:
          0.07953478 = sum of:
            0.07953478 = weight(_text_:22 in 6265) [ClassicSimilarity], result of:
              0.07953478 = score(doc=6265,freq=2.0), product of:
                0.1468348 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041930884 = queryNorm
                0.5416616 = fieldWeight in 6265, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6265)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Source
    Information outlook. 9(2005) no.8, S.22-23
  18. Cui, H.; Boufford, D.; Selden, P.: Semantic annotation of biosystematics literature without training examples (2010) 0.01
    0.0058152988 = product of:
      0.03489179 = sum of:
        0.03489179 = weight(_text_:b in 3422) [ClassicSimilarity], result of:
          0.03489179 = score(doc=3422,freq=2.0), product of:
            0.14855953 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.041930884 = queryNorm
            0.23486741 = fieldWeight in 3422, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.046875 = fieldNorm(doc=3422)
      0.16666667 = coord(1/6)
    
    Abstract
    This article presents an unsupervised algorithm for semantic annotation of morphological descriptions of whole organisms. The algorithm is able to annotate plain text descriptions with high accuracy at the clause level by exploiting the corpus itself. In other words, the algorithm does not need lexicons, syntactic parsers, training examples, or annotation templates. The evaluation on two real-life description collections in botany and paleontology shows that the algorithm has the following desirable features: (a) reduces/eliminates manual labor required to compile dictionaries and prepare source documents; (b) improves annotation coverage: the algorithm annotates what appears in documents and is not limited by predefined and often incomplete templates; (c) learns clean and reusable concepts: the algorithm learns organ names and character states that can be used to construct reusable domain lexicons, as opposed to collection-dependent patterns whose applicability is often limited to a particular collection; (d) insensitive to collection size; and (e) runs in linear time with respect to the number of clauses to be annotated.
  19. Kiros, R.; Salakhutdinov, R.; Zemel, R.S.: Unifying visual-semantic embeddings with multimodal neural language models (2014) 0.01
    0.0058152988 = product of:
      0.03489179 = sum of:
        0.03489179 = weight(_text_:b in 1871) [ClassicSimilarity], result of:
          0.03489179 = score(doc=1871,freq=2.0), product of:
            0.14855953 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.041930884 = queryNorm
            0.23486741 = fieldWeight in 1871, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.046875 = fieldNorm(doc=1871)
      0.16666667 = coord(1/6)
    
    Abstract
    Inspired by recent advances in multimodal learning and machine translation, we introduce an encoder-decoder pipeline that learns (a): a multimodal joint embedding space with images and text and (b): a novel language model for decoding distributed representations from our space. Our pipeline effectively unifies joint image-text embedding models with multimodal neural language models. We introduce the structure-content neural language model that disentangles the structure of a sentence to its content, conditioned on representations produced by the encoder. The encoder allows one to rank images and sentences while the decoder can generate novel descriptions from scratch. Using LSTM to encode sentences, we match the state-of-the-art performance on Flickr8K and Flickr30K without using object detections. We also set new best results when using the 19-layer Oxford convolutional network. Furthermore we show that with linear encoders, the learned embedding space captures multimodal regularities in terms of vector space arithmetic e.g. *image of a blue car* - "blue" + "red" is near images of red cars. Sample captions generated for 800 images are made available for comparison.
  20. Mielke, B.: Wider einige gängige Ansichten zur juristischen Informationserschließung (2002) 0.01
    0.0058152988 = product of:
      0.03489179 = sum of:
        0.03489179 = weight(_text_:b in 2145) [ClassicSimilarity], result of:
          0.03489179 = score(doc=2145,freq=2.0), product of:
            0.14855953 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.041930884 = queryNorm
            0.23486741 = fieldWeight in 2145, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.046875 = fieldNorm(doc=2145)
      0.16666667 = coord(1/6)
    

Languages

  • e 22
  • d 20
  • ru 1
  • sp 1