Search (33 results, page 1 of 2)

  • year_i:[2010 TO 2020}
  • theme_ss:"Automatisches Indexieren"
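  Each hit below is ranked with Lucene's ClassicSimilarity, and the figure after the year is the raw relevance score. As a sketch of the classic tf-idf scoring behind it (tf, idf, queryNorm, fieldNorm and coord are the engine's own quantities):

    \mathrm{score}(q,d) = \mathrm{coord}(q,d) \cdot \mathrm{queryNorm}(q) \cdot \sum_{t \in q} \mathrm{tf}(t,d) \cdot \mathrm{idf}(t)^2 \cdot \mathrm{fieldNorm}(t,d)

    \mathrm{tf}(t,d) = \sqrt{\mathrm{freq}(t,d)}, \qquad \mathrm{idf}(t) = 1 + \ln\frac{\mathrm{maxDocs}}{\mathrm{docFreq}(t) + 1}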
  1. Hauer, M.: Tiefenindexierung im Bibliothekskatalog : 17 Jahre intelligentCAPTURE (2019) 0.04
    Source
    B.I.T.online. 22(2019) H.2, S.163-166
  2. Martins, A.L.; Souza, R.R.; Ribeiro de Mello, H.: The use of noun phrases in information retrieval : proposing a mechanism for automatic classification (2014) 0.04
    Abstract
    This paper presents research on applying syntactic structures known as noun phrases (NPs) to increase the effectiveness and efficiency of document classification mechanisms. Our hypothesis is that NPs can be used instead of single words as semantic aggregators, reducing the number of terms fed to the classification system without losing semantic coverage and thereby increasing its efficiency. The experiment divides the document classification process into three phases: a) NP preprocessing; b) system training; and c) classification experiments. In the first phase, a corpus of digitized texts was submitted to a natural language processing platform for part-of-speech tagging, and Perl scripts belonging to the PALAVRAS package were then used to extract the noun phrases. Preprocessing also involved a) removing low-meaning NP pre-modifiers such as quantifiers; b) identifying synonyms and substituting common hypernyms for them; and c) stemming the relevant words in each NP for similarity checking against other NPs. First tests on the resulting documents demonstrated the approach's effectiveness: comparing the structural similarity of the documents before and after the preprocessing steps of phase one showed that the texts remained consistent with the originals and kept their readability. The second phase involves submitting the modified documents to an SVM algorithm to identify clusters and classify the documents; the classification rules are to be established using a machine learning approach. Finally, tests will be conducted to check the effectiveness of the whole process.
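    Below is a minimal sketch of such an NP-based pipeline, not the authors' implementation: it assumes spaCy's en_core_web_sm model in place of the (Portuguese) PALAVRAS parser, uses lemmatization in place of stemming, omits the synonym-to-hypernym substitution, and the toy corpus and labels are invented.

      import spacy
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.svm import LinearSVC

      nlp = spacy.load("en_core_web_sm")  # assumed stand-in for PALAVRAS

      def noun_phrase_terms(text):
          # phase a): extract NPs, drop low-meaning pre-modifiers
          # (determiners, quantifiers, pronouns), and lemmatize
          doc = nlp(text)
          terms = []
          for np in doc.noun_chunks:
              toks = [t.lemma_.lower() for t in np
                      if t.pos_ not in ("DET", "NUM", "PRON")]
              if toks:
                  terms.append(" ".join(toks))
          return terms

      # hypothetical two-class training corpus
      texts = [
          "Noun phrases improve the classification of documents.",
          "Semantic aggregation reduces the vocabulary of the classifier.",
          "The telescope observed a distant galaxy last night.",
          "Astronomers measured the spectrum of the bright star.",
      ]
      labels = ["ir", "ir", "astro", "astro"]

      # phases b) and c): NPs, not single words, are the SVM's features
      vec = TfidfVectorizer(analyzer=noun_phrase_terms)
      clf = LinearSVC().fit(vec.fit_transform(texts), labels)
      print(clf.predict(vec.transform(["A classifier trained on noun phrases."])))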
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  3. Schöning-Walter, C.: Automatische Erschließungsverfahren für Netzpublikationen : zum Stand der Arbeiten im Projekt PETRUS (2011) 0.02
    Source
    Dialog mit Bibliotheken. 23(2011) H.1, S.31-36
  4. Kasprzik, A.: Voraussetzungen und Anwendungspotentiale einer präzisen Sacherschließung aus Sicht der Wissenschaft (2018) 0.02
    Abstract
    Considerable attention is currently focused on the potential of automated methods in subject indexing and on how they can interact with intellectual methods. Against this background, the present article addresses the following questions: What does the research community require of library metadata? What is needed to serve the information needs of the scholarly communities? And what does this imply for automating the creation and maintenance of metadata? The article summarizes the position taken by the author in an opening talk and the panel discussion at the workshop of the GBV expert group (FAG) "Erschließung und Informationsvermittlung". The workshop took place as part of the 22nd GBV Verbundkonferenz.
    Source
    ABI-Technik. 38(2018) H.4, S.332-335
  5. Franke-Maier, M.: Anforderungen an die Qualität der Inhaltserschließung im Spannungsfeld von intellektuell und automatisch erzeugten Metadaten (2018) 0.02
    Abstract
    Since the Deutscher Bibliothekartag 2018 at the latest, the discussion about the Deutsche Nationalbibliothek's automatic subject indexing procedures has turned from a politically driven debate into a debate about quality. The following article deals with questions of indexing quality in a digital age in which heterogeneous products of different procedures meet, and attempts to define key quality requirements. This conference paper summarizes the ideas the author presented as impulses at the workshop of the GBV expert group (FAG) "Erschließung und Informationsvermittlung" on 29 August 2018 in Kiel. The workshop took place as part of the 22nd GBV Verbundkonferenz.
    Source
    ABI-Technik. 38(2018) H.4, S.327-331
  6. Busch, D.: Domänenspezifische hybride automatische Indexierung von bibliographischen Metadaten (2019) 0.02
    Source
    B.I.T.online. 22(2019) H.6, S.465-469
  7. Cui, H.; Boufford, D.; Selden, P.: Semantic annotation of biosystematics literature without training examples (2010) 0.02
    Abstract
    This article presents an unsupervised algorithm for the semantic annotation of morphological descriptions of whole organisms. The algorithm annotates plain-text descriptions with high accuracy at the clause level by exploiting the corpus itself; in other words, it needs no lexicons, syntactic parsers, training examples, or annotation templates. Evaluation on two real-life description collections in botany and paleontology shows that the algorithm has the following desirable features: (a) it reduces or eliminates the manual labor required to compile dictionaries and prepare source documents; (b) it improves annotation coverage, annotating what actually appears in documents rather than being limited by predefined and often incomplete templates; (c) it learns clean and reusable concepts - organ names and character states that can be used to construct reusable domain lexicons, as opposed to collection-dependent patterns whose applicability is often limited to a particular collection; (d) it is insensitive to collection size; and (e) it runs in linear time with respect to the number of clauses to be annotated.
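    The abstract leaves the mechanism unspecified, so the following is only a toy illustration of corpus-bootstrapped annotation, not the authors' algorithm: recurring clause-initial tokens across the collection are taken as learned organ names, and each clause is then tagged with the name it opens with. The sample descriptions are invented.

      import re
      from collections import Counter

      descriptions = [
          "Leaves ovate, 3-5 cm long. Petals white, apex acute.",
          "Leaves lanceolate, glabrous. Petals yellow, margin entire.",
          "Stems erect, branched. Leaves opposite, serrate.",
      ]

      def clauses(text):
          # crude clause segmentation on sentence-final punctuation
          return [c.strip() for c in re.split(r"[.;]", text) if c.strip()]

      # learn organ-name candidates from the corpus itself:
      # clause-initial words that recur across descriptions
      initials = Counter(c.split()[0].lower()
                         for d in descriptions for c in clauses(d))
      organ_names = {w for w, n in initials.items() if n >= 2}

      # annotate each clause with the organ name it opens with, if any
      for d in descriptions:
          for c in clauses(d):
              head = c.split()[0].lower()
              print(f"[{head if head in organ_names else '?'}] {c}")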
  8. Stankovic, R. et al.: Indexing of textual databases based on lexical resources : a case study for Serbian (2016) 0.01
    Date
    1.2.2016 18:25:22
  9. Siebenkäs, A.; Markscheffel, B.: Conception of a workflow for the semi-automatic construction of a thesaurus for the German printing industry (2015) 0.01
    Source
    Re:inventing information science in the networked society: Proceedings of the 14th International Symposium on Information Science, Zadar/Croatia, 19th-21st May 2015. Eds.: F. Pehar, C. Schloegl and C. Wolff
  10. Glaesener, L.: Automatisches Indexieren einer informationswissenschaftlichen Datenbank mit Mehrwortgruppen (2012) 0.01
    Date
    11.9.2012 19:43:22
  11. Beyer, C.; Trunk, D.: Automatische Verfahren für die Formalerschließung im Projekt PETRUS (2011) 0.01
  12. Böhm, A.; Seifert, C.; Schlötterer, J.; Granitzer, M.: Identifying tweets from the economic domain (2017) 0.01
  13. Schöneberg, U.; Gödert, W.: Erschließung mathematischer Publikationen mittels linguistischer Verfahren (2012) 0.01
    Source
    http://at.yorku.ca/c/b/f/j/99.htm
  14. Mödden, E.: Inhaltserschließung im Zeitalter von Suchmaschinen und Volltextsuche (2018) 0.01
    Source
    B.I.T.online. 21(2018) H.1, S.47-51
  15. Junger, U.; Schwens, U.: Die inhaltliche Erschließung des schriftlichen kulturellen Erbes auf dem Weg in die Zukunft : Automatische Vergabe von Schlagwörtern in der Deutschen Nationalbibliothek (2017) 0.01
    Date
    19.8.2017 9:24:22
  16. Wiesenmüller, H.: Maschinelle Indexierung am Beispiel der DNB : Analyse und Entwicklungsmöglichkeiten (2018) 0.01
    Abstract
    The article examines the results of the procedure for automatic subject heading assignment used at the Deutsche Nationalbibliothek (DNB). Since 2017 it has also been applied to the print editions of series B and H of the Deutsche Nationalbibliografie. The central problem areas are presented and illustrated with examples - for instance, that not every word occurring in a table of contents actually expresses a thematic aspect, and that the software very often fails to recognize corporate bodies and other named entities. The machine-generated results are currently very unsatisfactory. Possible improvements and sensible strategies are considered.
  17. Vilares, D.; Alonso, M.A.; Gómez-Rodríguez, C.: On the usefulness of lexical and syntactic processing in polarity classification of Twitter messages (2015) 0.00
  18. Li, X.; Zhang, A.; Li, C.; Ouyang, J.; Cai, Y.: Exploring coherent topics by topic modeling with term weighting (2018) 0.00
  19. Wiesenmüller, H.: DNB-Sacherschließung : Neues für die Reihen A und B (2019) 0.00
    Abstract
    "Alle paar Jahre wird die Bibliothekscommunity mit Veränderungen in der inhaltlichen Erschließung durch die Deutsche Nationalbibliothek konfrontiert. Sicher werden sich viele noch an die Einschnitte des Jahres 2014 für die Reihe A erinnern: Seither werden u.a. Ratgeber, Sprachwörterbücher, Reiseführer und Kochbücher nicht mehr mit Schlagwörtern erschlossen (vgl. das DNB-Konzept von 2014). Das Jahr 2017 brachte die Einführung der maschinellen Indexierung für die Reihen B und H bei gleichzeitigem Verlust der DDC-Tiefenerschließung (vgl. DNB-Informationen von 2017). Virulent war seither die Frage, was mit der Reihe A passieren würde. Seit wenigen Tagen kann man dies nun auf der Website der DNB nachlesen. (Nebenbei: Es ist zu befürchten, dass viele Links in diesem Blog-Beitrag in absehbarer Zeit nicht mehr funktionieren werden, da ein Relaunch der DNB-Website angekündigt ist. Wie beim letzten Mal wird es vermutlich auch diesmal keine Weiterleitungen von den alten auf die neuen URLs geben.)"
  20. Bredack, J.; Lepsky, K.: Automatische Extraktion von Fachterminologie aus Volltexten (2014) 0.00
    Source
    ABI-Technik. 34(2014) H.1, S.2-8