Search (19 results, page 1 of 1)

  • year_i:[2010 TO 2020}
  • theme_ss:"Automatisches Indexieren"
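The two active filters above correspond to Solr filter queries. Note the mixed brackets in `[2010 TO 2020}`: in Solr range syntax a square bracket is inclusive and a curly bracket is exclusive, so this matches 2010 ≤ year < 2020. A minimal sketch of the request behind this result page follows; the host, port, and core name (`literature`) are assumptions, not taken from this page:

```python
# Sketch: reconstructing the Solr request behind this result page.
# Endpoint and core name are hypothetical; the two fq values are the
# active filters shown above.
from urllib.parse import urlencode

params = [
    ("q", "*:*"),
    ("fq", "year_i:[2010 TO 2020}"),   # inclusive lower, exclusive upper bound
    ("fq", 'theme_ss:"Automatisches Indexieren"'),
    ("rows", "20"),
    ("debugQuery", "true"),            # requests per-document score explanations
]
url = "http://localhost:8983/solr/literature/select?" + urlencode(params)
print(url)
```

With `debugQuery=true`, Solr returns the per-document scoring breakdowns of the kind shown with each result below.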
  1. Martins, A.L.; Souza, R.R.; Ribeiro de Mello, H.: ¬The use of noun phrases in information retrieval : proposing a mechanism for automatic classification (2014) 0.02
    0.024057863 = product of:
      0.048115727 = sum of:
        0.048115727 = sum of:
          0.027832452 = weight(_text_:c in 1441) [ClassicSimilarity], result of:
            0.027832452 = score(doc=1441,freq=4.0), product of:
              0.1291003 = queryWeight, product of:
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.037426826 = queryNorm
              0.21558782 = fieldWeight in 1441, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.03125 = fieldNorm(doc=1441)
          0.020283274 = weight(_text_:22 in 1441) [ClassicSimilarity], result of:
            0.020283274 = score(doc=1441,freq=2.0), product of:
              0.13106237 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.037426826 = queryNorm
              0.15476047 = fieldWeight in 1441, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1441)
      0.5 = coord(1/2)
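The explain tree above is Lucene's ClassicSimilarity (TF-IDF) breakdown, and it can be recomputed from its own constants: each term's weight is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm with tf = sqrt(freq). The following sketch replays the numbers for doc 1441; all constants are copied from the tree above, and the final coord(1/2) = 0.5 factor scales the sum as shown in the output:

```python
# Recomputing the ClassicSimilarity explain tree for doc 1441 above.
# Constants are taken verbatim from the explain output.
import math

def term_weight(freq, idf, query_norm, field_norm):
    """weight = queryWeight * fieldWeight, per ClassicSimilarity."""
    query_weight = idf * query_norm                     # idf * queryNorm
    field_weight = math.sqrt(freq) * idf * field_norm   # tf * idf * fieldNorm
    return query_weight * field_weight

query_norm = 0.037426826
field_norm = 0.03125

w_c  = term_weight(4.0, 3.4494052, query_norm, field_norm)  # weight(_text_:c)
w_22 = term_weight(2.0, 3.5018296, query_norm, field_norm)  # weight(_text_:22)

score = 0.5 * (w_c + w_22)   # coord(1/2) = 0.5, as in the explain output
print(round(score, 9))
```

The same recipe reproduces every other explain tree on this page; only the freq, idf, and fieldNorm values change per document.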
    
    Abstract
    This paper presents research on the use of syntactic structures known as noun phrases (NPs) to increase the effectiveness and efficiency of document classification mechanisms. Our hypothesis is that NPs can be used instead of single words as semantic aggregators, reducing the number of terms fed to the classification system without loss of semantic coverage and thereby increasing its efficiency. The experiment divided the document classification process into three phases: a) NP preprocessing; b) system training; and c) classification experiments. In the first step, a corpus of digitized texts was submitted to a natural language processing platform for part-of-speech tagging, and then Perl scripts belonging to the PALAVRAS package were used to extract the noun phrases. The preprocessing also involved a) removing low-meaning NP pre-modifiers, such as quantifiers; b) identifying synonyms and substituting them with common hyperonyms; and c) stemming the relevant words contained in the NPs for similarity checking against other NPs. The first tests with the resulting documents demonstrated the approach's effectiveness: we compared the structural similarity of the documents before and after the full pre-processing of phase one, and the texts remained consistent with the originals and kept their readability. The second phase involves submitting the modified documents to an SVM algorithm to identify clusters and classify the documents, with classification rules established using a machine learning approach. Finally, tests will be conducted to check the effectiveness of the whole process.
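Phase one of the pipeline described in this abstract can be sketched roughly as follows. This is a toy illustration only, not the authors' PALAVRAS/Perl implementation: the NP chunker, the pre-modifier list, the synonym-to-hyperonym table, and the suffix stemmer are all simplified stand-ins.

```python
# Minimal sketch of phase-one NP preprocessing (illustrative stand-in for
# the PALAVRAS-based pipeline described in the abstract).

PRE_MODIFIERS = {"the", "a", "an", "some", "many", "several", "few", "all"}
SYNONYMS = {"photo": "image", "picture": "image"}  # toy hyperonym table

def extract_nps(tagged):
    """Chunk maximal DET/ADJ/NOUN runs from (token, POS-tag) pairs."""
    nps, current = [], []
    for tok, tag in tagged:
        if tag in ("DET", "ADJ", "NOUN"):
            current.append(tok.lower())
        elif current:
            nps.append(current)
            current = []
    if current:
        nps.append(current)
    return nps

def stem(word):
    """Crude suffix stripping, enough for similarity checks in this sketch."""
    for suf in ("ing", "ion", "es", "s"):
        if word.endswith(suf) and len(word) > len(suf) + 2:
            return word[: -len(suf)]
    return word

def normalize(np):
    """Drop low-meaning pre-modifiers, map synonyms to hyperonyms, stem."""
    content = [w for w in np if w not in PRE_MODIFIERS]
    return tuple(stem(SYNONYMS.get(w, w)) for w in content)

tagged = [("Several", "DET"), ("digital", "ADJ"), ("photo", "NOUN"),
          ("was", "VERB"), ("classified", "VERB"), ("by", "ADP"),
          ("the", "DET"), ("system", "NOUN")]
nps = [normalize(np) for np in extract_nps(tagged)]
print(nps)
```

The normalized NP tuples would then serve as the reduced feature set handed to the SVM in phase two.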
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  2. Hauer, M.: Tiefenindexierung im Bibliothekskatalog : 17 Jahre intelligentCAPTURE (2019) 0.02
    0.015212455 = product of:
      0.03042491 = sum of:
        0.03042491 = product of:
          0.06084982 = sum of:
            0.06084982 = weight(_text_:22 in 5629) [ClassicSimilarity], result of:
              0.06084982 = score(doc=5629,freq=2.0), product of:
                0.13106237 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037426826 = queryNorm
                0.46428138 = fieldWeight in 5629, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5629)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    B.I.T.online. 22(2019) H.2, S.163-166
  3. Stankovic, R. et al.: Indexing of textual databases based on lexical resources : a case study for Serbian (2016) 0.01
    0.0126770465 = product of:
      0.025354093 = sum of:
        0.025354093 = product of:
          0.050708186 = sum of:
            0.050708186 = weight(_text_:22 in 2759) [ClassicSimilarity], result of:
              0.050708186 = score(doc=2759,freq=2.0), product of:
                0.13106237 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037426826 = queryNorm
                0.38690117 = fieldWeight in 2759, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2759)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 2.2016 18:25:22
  4. Siebenkäs, A.; Markscheffel, B.: Conception of a workflow for the semi-automatic construction of a thesaurus for the German printing industry (2015) 0.01
    0.012176697 = product of:
      0.024353394 = sum of:
        0.024353394 = product of:
          0.04870679 = sum of:
            0.04870679 = weight(_text_:c in 2091) [ClassicSimilarity], result of:
              0.04870679 = score(doc=2091,freq=4.0), product of:
                0.1291003 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.037426826 = queryNorm
                0.3772787 = fieldWeight in 2091, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2091)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Re:inventing information science in the networked society: Proceedings of the 14th International Symposium on Information Science, Zadar/Croatia, 19th-21st May 2015. Eds.: F. Pehar, C. Schloegl u. C. Wolff
  5. Glaesener, L.: Automatisches Indexieren einer informationswissenschaftlichen Datenbank mit Mehrwortgruppen (2012) 0.01
    0.010141637 = product of:
      0.020283274 = sum of:
        0.020283274 = product of:
          0.04056655 = sum of:
            0.04056655 = weight(_text_:22 in 401) [ClassicSimilarity], result of:
              0.04056655 = score(doc=401,freq=2.0), product of:
                0.13106237 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037426826 = queryNorm
                0.30952093 = fieldWeight in 401, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=401)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    11. 9.2012 19:43:22
  6. Beyer, C.; Trunk, D.: Automatische Verfahren für die Formalerschließung im Projekt PETRUS (2011) 0.01
    0.009840257 = product of:
      0.019680513 = sum of:
        0.019680513 = product of:
          0.039361026 = sum of:
            0.039361026 = weight(_text_:c in 1712) [ClassicSimilarity], result of:
              0.039361026 = score(doc=1712,freq=2.0), product of:
                0.1291003 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.037426826 = queryNorm
                0.3048872 = fieldWeight in 1712, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1712)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  7. Schöning-Walter, C.: Automatische Erschließungsverfahren für Netzpublikationen : zum Stand der Arbeiten im Projekt PETRUS (2011) 0.01
    0.009840257 = product of:
      0.019680513 = sum of:
        0.019680513 = product of:
          0.039361026 = sum of:
            0.039361026 = weight(_text_:c in 1714) [ClassicSimilarity], result of:
              0.039361026 = score(doc=1714,freq=2.0), product of:
                0.1291003 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.037426826 = queryNorm
                0.3048872 = fieldWeight in 1714, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1714)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  8. Kasprzik, A.: Voraussetzungen und Anwendungspotentiale einer präzisen Sacherschließung aus Sicht der Wissenschaft (2018) 0.01
    0.008873932 = product of:
      0.017747864 = sum of:
        0.017747864 = product of:
          0.03549573 = sum of:
            0.03549573 = weight(_text_:22 in 5195) [ClassicSimilarity], result of:
              0.03549573 = score(doc=5195,freq=2.0), product of:
                0.13106237 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037426826 = queryNorm
                0.2708308 = fieldWeight in 5195, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5195)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Much attention is currently being paid to the potential of automated methods in subject indexing and to how they can interact with intellectual methods. In this context, the present contribution addresses the following questions: What are the requirements for library metadata from the perspective of research? What is needed to serve the information needs of the scholarly communities? And what does this imply for the automation of metadata creation and maintenance? This contribution summarizes the position taken by the author in an impulse talk and the panel discussion at the workshop of the FAG "Erschließung und Informationsvermittlung" of the GBV. The workshop took place within the framework of the 22nd GBV Verbundkonferenz.
  9. Franke-Maier, M.: Anforderungen an die Qualität der Inhaltserschließung im Spannungsfeld von intellektuell und automatisch erzeugten Metadaten (2018) 0.01
    0.008873932 = product of:
      0.017747864 = sum of:
        0.017747864 = product of:
          0.03549573 = sum of:
            0.03549573 = weight(_text_:22 in 5344) [ClassicSimilarity], result of:
              0.03549573 = score(doc=5344,freq=2.0), product of:
                0.13106237 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037426826 = queryNorm
                0.2708308 = fieldWeight in 5344, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5344)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Since the Deutscher Bibliothekartag 2018 at the latest, the discussion about the Deutsche Nationalbibliothek's automated subject indexing procedures has shifted from a politically driven debate to a debate about quality. The following contribution deals with questions of indexing quality in the digital age, where heterogeneous products of different procedures meet, and attempts to define key quality requirements. This conference contribution summarizes the ideas presented by the author as impulses at the workshop of the FAG "Erschließung und Informationsvermittlung" of the GBV on 29 August 2018 in Kiel. The workshop took place within the framework of the 22nd GBV Verbundkonferenz.
  10. Böhm, A.; Seifert, C.; Schlötterer, J.; Granitzer, M.: Identifying tweets from the economic domain (2017) 0.01
    0.008610224 = product of:
      0.017220449 = sum of:
        0.017220449 = product of:
          0.034440897 = sum of:
            0.034440897 = weight(_text_:c in 3495) [ClassicSimilarity], result of:
              0.034440897 = score(doc=3495,freq=2.0), product of:
                0.1291003 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.037426826 = queryNorm
                0.2667763 = fieldWeight in 3495, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3495)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  11. Busch, D.: Domänenspezifische hybride automatische Indexierung von bibliographischen Metadaten (2019) 0.01
    0.0076062274 = product of:
      0.015212455 = sum of:
        0.015212455 = product of:
          0.03042491 = sum of:
            0.03042491 = weight(_text_:22 in 5628) [ClassicSimilarity], result of:
              0.03042491 = score(doc=5628,freq=2.0), product of:
                0.13106237 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037426826 = queryNorm
                0.23214069 = fieldWeight in 5628, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5628)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    B.I.T.online. 22(2019) H.6, S.465-469
  12. Cui, H.; Boufford, D.; Selden, P.: Semantic annotation of biosystematics literature without training examples (2010) 0.01
    0.007380193 = product of:
      0.014760386 = sum of:
        0.014760386 = product of:
          0.029520772 = sum of:
            0.029520772 = weight(_text_:c in 3422) [ClassicSimilarity], result of:
              0.029520772 = score(doc=3422,freq=2.0), product of:
                0.1291003 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.037426826 = queryNorm
                0.22866541 = fieldWeight in 3422, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3422)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article presents an unsupervised algorithm for semantic annotation of morphological descriptions of whole organisms. The algorithm is able to annotate plain text descriptions with high accuracy at the clause level by exploiting the corpus itself. In other words, the algorithm does not need lexicons, syntactic parsers, training examples, or annotation templates. The evaluation on two real-life description collections in botany and paleontology shows that the algorithm has the following desirable features: (a) reduces/eliminates manual labor required to compile dictionaries and prepare source documents; (b) improves annotation coverage: the algorithm annotates what appears in documents and is not limited by predefined and often incomplete templates; (c) learns clean and reusable concepts: the algorithm learns organ names and character states that can be used to construct reusable domain lexicons, as opposed to collection-dependent patterns whose applicability is often limited to a particular collection; (d) insensitive to collection size; and (e) runs in linear time with respect to the number of clauses to be annotated.
  13. Schöneberg, U.; Gödert, W.: Erschließung mathematischer Publikationen mittels linguistischer Verfahren (2012) 0.01
    0.007380193 = product of:
      0.014760386 = sum of:
        0.014760386 = product of:
          0.029520772 = sum of:
            0.029520772 = weight(_text_:c in 1055) [ClassicSimilarity], result of:
              0.029520772 = score(doc=1055,freq=2.0), product of:
                0.1291003 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.037426826 = queryNorm
                0.22866541 = fieldWeight in 1055, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1055)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    http://at.yorku.ca/c/b/f/j/99.htm
  14. Junger, U.; Schwens, U.: ¬Die inhaltliche Erschließung des schriftlichen kulturellen Erbes auf dem Weg in die Zukunft : Automatische Vergabe von Schlagwörtern in der Deutschen Nationalbibliothek (2017) 0.01
    0.0063385232 = product of:
      0.0126770465 = sum of:
        0.0126770465 = product of:
          0.025354093 = sum of:
            0.025354093 = weight(_text_:22 in 3780) [ClassicSimilarity], result of:
              0.025354093 = score(doc=3780,freq=2.0), product of:
                0.13106237 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037426826 = queryNorm
                0.19345059 = fieldWeight in 3780, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3780)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    19. 8.2017 9:24:22
  15. Vilares, D.; Alonso, M.A.; Gómez-Rodríguez, C.: On the usefulness of lexical and syntactic processing in polarity classification of Twitter messages (2015) 0.01
    0.0061501605 = product of:
      0.012300321 = sum of:
        0.012300321 = product of:
          0.024600642 = sum of:
            0.024600642 = weight(_text_:c in 2161) [ClassicSimilarity], result of:
              0.024600642 = score(doc=2161,freq=2.0), product of:
                0.1291003 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.037426826 = queryNorm
                0.1905545 = fieldWeight in 2161, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2161)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  16. Li, X.; Zhang, A.; Li, C.; Ouyang, J.; Cai, Y.: Exploring coherent topics by topic modeling with term weighting (2018) 0.01
    0.0061501605 = product of:
      0.012300321 = sum of:
        0.012300321 = product of:
          0.024600642 = sum of:
            0.024600642 = weight(_text_:c in 5045) [ClassicSimilarity], result of:
              0.024600642 = score(doc=5045,freq=2.0), product of:
                0.1291003 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.037426826 = queryNorm
                0.1905545 = fieldWeight in 5045, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5045)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  17. Mesquita, L.A.P.; Souza, R.R.; Baracho Porto, R.M.A.: Noun phrases in automatic indexing : a structural analysis of the distribution of relevant terms in doctoral theses (2014) 0.01
    0.0050708186 = product of:
      0.010141637 = sum of:
        0.010141637 = product of:
          0.020283274 = sum of:
            0.020283274 = weight(_text_:22 in 1442) [ClassicSimilarity], result of:
              0.020283274 = score(doc=1442,freq=2.0), product of:
                0.13106237 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037426826 = queryNorm
                0.15476047 = fieldWeight in 1442, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1442)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  18. Greiner-Petter, A.; Schubotz, M.; Cohl, H.S.; Gipp, B.: Semantic preserving bijective mappings for expressions involving special functions between computer algebra systems and document preparation systems (2019) 0.01
    0.0050708186 = product of:
      0.010141637 = sum of:
        0.010141637 = product of:
          0.020283274 = sum of:
            0.020283274 = weight(_text_:22 in 5499) [ClassicSimilarity], result of:
              0.020283274 = score(doc=5499,freq=2.0), product of:
                0.13106237 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037426826 = queryNorm
                0.15476047 = fieldWeight in 5499, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5499)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    20. 1.2015 18:30:22
  19. Willis, C.; Losee, R.M.: ¬A random walk on an ontology : using thesaurus structure for automatic subject indexing (2013) 0.00
    0.0049201283 = product of:
      0.009840257 = sum of:
        0.009840257 = product of:
          0.019680513 = sum of:
            0.019680513 = weight(_text_:c in 1016) [ClassicSimilarity], result of:
              0.019680513 = score(doc=1016,freq=2.0), product of:
                0.1291003 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.037426826 = queryNorm
                0.1524436 = fieldWeight in 1016, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1016)
          0.5 = coord(1/2)
      0.5 = coord(1/2)