Search (9 results, page 1 of 1)

  • year_i:[2020 TO 2030}
  • theme_ss:"Automatisches Indexieren"
  1. Asula, M.; Makke, J.; Freienthal, L.; Kuulmets, H.-A.; Sirel, R.: Kratt: developing an automatic subject indexing tool for the National Library of Estonia : how to transfer metadata information among work cluster members (2021) 0.03
    0.025524741 = product of:
      0.051049482 = sum of:
        0.051049482 = product of:
          0.102098964 = sum of:
            0.102098964 = weight(_text_:subject in 723) [ClassicSimilarity], result of:
              0.102098964 = score(doc=723,freq=14.0), product of:
                0.16275941 = queryWeight, product of:
                  3.576596 = idf(docFreq=3361, maxDocs=44218)
                  0.04550679 = queryNorm
                0.6272999 = fieldWeight in 723, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.576596 = idf(docFreq=3361, maxDocs=44218)
                  0.046875 = fieldNorm(doc=723)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Manual subject indexing in libraries is a time-consuming and costly process, and the quality of the assigned subjects depends on the cataloger's knowledge of the specific topics covered in the book. To address these issues, we exploited the opportunities arising from artificial intelligence to develop Kratt: a prototype of an automatic subject indexing tool. Kratt is able to subject index a book, regardless of its extent and genre, with a set of keywords present in the Estonian Subject Thesaurus. It takes Kratt approximately one minute to subject index a book, roughly 10-15 times faster than a human cataloger. Although the resulting keywords were not considered satisfactory by the catalogers, the ratings of a small sample of regular library users showed more promise. We also argue that the results can be improved by using a larger corpus for training the model and applying more careful preprocessing techniques.
    Footnote
     Part of a special issue: Artificial intelligence (AI) and automated processes for subject access
  2. Golub, K.: Automated subject indexing : an overview (2021) 0.02
    0.022510704 = product of:
      0.045021407 = sum of:
        0.045021407 = product of:
          0.090042815 = sum of:
            0.090042815 = weight(_text_:subject in 718) [ClassicSimilarity], result of:
              0.090042815 = score(doc=718,freq=8.0), product of:
                0.16275941 = queryWeight, product of:
                  3.576596 = idf(docFreq=3361, maxDocs=44218)
                  0.04550679 = queryNorm
                0.5532265 = fieldWeight in 718, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.576596 = idf(docFreq=3361, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=718)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     In the face of the ever-increasing document volume, libraries around the globe are increasingly exploring (semi-)automated approaches to subject indexing. This helps sustain bibliographic objectives, enrich metadata, and establish more connections across documents from various collections, effectively leading to improved information retrieval and access. However, generally accepted automated approaches that work in operational systems are still lacking. This article aims to provide an overview of the basic principles used for automated subject indexing, the major approaches in relation to their possible application in actual library systems, existing working examples, and related challenges calling for further research.
    Footnote
     Part of a special issue: Artificial intelligence (AI) and automated processes for subject access
  3. Chou, C.; Chu, T.: An analysis of BERT (NLP) for assisted subject indexing for Project Gutenberg (2022) 0.02
    0.022510704 = product of:
      0.045021407 = sum of:
        0.045021407 = product of:
          0.090042815 = sum of:
            0.090042815 = weight(_text_:subject in 1139) [ClassicSimilarity], result of:
              0.090042815 = score(doc=1139,freq=8.0), product of:
                0.16275941 = queryWeight, product of:
                  3.576596 = idf(docFreq=3361, maxDocs=44218)
                  0.04550679 = queryNorm
                0.5532265 = fieldWeight in 1139, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.576596 = idf(docFreq=3361, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1139)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     In light of AI (artificial intelligence) and NLP (natural language processing) technologies, this article examines the feasibility of using AI/NLP models to enhance the subject indexing of digital resources. While BERT (Bidirectional Encoder Representations from Transformers) models are widely used in scholarly communities, the authors assess whether BERT models can be used for machine-assisted indexing of the Project Gutenberg collection by suggesting Library of Congress Subject Headings filtered by certain Library of Congress Classification subclass labels. The findings of this study are informative for further research on BERT models to assist with automatic subject indexing for digital library collections.
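     The suggestion step described in the abstract can be framed as multi-label classification over a set of candidate headings. The sketch below is a hypothetical illustration of that framing, not the authors' actual pipeline: the bert-base-uncased checkpoint, the three candidate headings, and the 0.5 threshold are assumptions, and the classification head would still need fine-tuning on cataloged examples before its scores were meaningful.

     import torch
     from transformers import AutoTokenizer, AutoModelForSequenceClassification

     # Hypothetical candidate headings; a real system would use a much larger LCSH subset.
     CANDIDATE_HEADINGS = ["Science fiction", "Natural history", "Political science"]

     tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
     model = AutoModelForSequenceClassification.from_pretrained(
         "bert-base-uncased",
         num_labels=len(CANDIDATE_HEADINGS),
         problem_type="multi_label_classification",  # sigmoid per label instead of softmax
     )
     model.eval()  # the head is untrained here; fine-tuning on cataloged books is assumed

     def suggest_headings(text, threshold=0.5):
         inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
         with torch.no_grad():
             probs = torch.sigmoid(model(**inputs).logits)[0]
         return [(h, float(p)) for h, p in zip(CANDIDATE_HEADINGS, probs) if p >= threshold]

     print(suggest_headings("A voyage to the moon and the strange creatures found there."))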
  4. Ahmed, M.: Automatic indexing for agriculture : designing a framework by deploying Agrovoc, Agris and Annif (2023) 0.02
    0.019692764 = product of:
      0.039385527 = sum of:
        0.039385527 = product of:
          0.078771055 = sum of:
            0.078771055 = weight(_text_:subject in 1024) [ClassicSimilarity], result of:
              0.078771055 = score(doc=1024,freq=12.0), product of:
                0.16275941 = queryWeight, product of:
                  3.576596 = idf(docFreq=3361, maxDocs=44218)
                  0.04550679 = queryNorm
                0.48397237 = fieldWeight in 1024, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.576596 = idf(docFreq=3361, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1024)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     There are several ways to employ machine learning for automating subject indexing. One popular strategy is to use a supervised learning algorithm to train a model on a set of documents that have been manually indexed with terms from a standard vocabulary. The resulting model can then predict the subjects of new, previously unseen documents by identifying patterns learned from the training data. To do this, the first step is to gather a large dataset of documents and manually assign each document a set of subject keywords/descriptors from a controlled vocabulary (e.g., from Agrovoc). Next, the dataset (obtained from Agris) can be divided into (i) a training dataset and (ii) a test dataset. The training dataset is used to train the model, while the test dataset is used to evaluate the model's performance. Machine learning can thus be a powerful tool for automating the process of subject indexing. This research is an attempt to apply Annif (http://annif.org/), an open-source AI/ML framework, to autogenerate subject keywords/descriptors for documentary resources in the domain of agriculture. The training dataset is obtained from Agris, which applies the Agrovoc thesaurus as a vocabulary tool (https://www.fao.org/agris/download).
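     As a toy illustration of the supervised workflow just described (not Annif itself, and not the Agris/Agrovoc data), the sketch below trains one binary classifier per descriptor on a handful of invented documents and evaluates it on a small held-out set; the documents, descriptors, and the TF-IDF plus logistic-regression setup are assumptions made purely for the example.

     from sklearn.feature_extraction.text import TfidfVectorizer
     from sklearn.linear_model import LogisticRegression
     from sklearn.metrics import f1_score
     from sklearn.multiclass import OneVsRestClassifier
     from sklearn.pipeline import make_pipeline
     from sklearn.preprocessing import MultiLabelBinarizer

     # Invented stand-in documents, manually "indexed" with controlled-vocabulary descriptors.
     train_docs = [
         "Irrigation methods for rice paddies in monsoon climates",
         "Integrated pest control strategies for maize",
         "Soil nutrient management and fertilizer application in wheat",
         "Drip irrigation and fertilizer timing for maize yields",
     ]
     train_descriptors = [
         ["irrigation", "rice"], ["pest control", "maize"],
         ["fertilizers", "wheat"], ["irrigation", "fertilizers", "maize"],
     ]
     test_docs = [
         "Biological pest control in greenhouse vegetables",
         "Water use efficiency of drip irrigation in arid regions",
     ]
     test_descriptors = [["pest control"], ["irrigation"]]

     mlb = MultiLabelBinarizer()
     y_train = mlb.fit_transform(train_descriptors)
     y_test = mlb.transform(test_descriptors)

     # One binary classifier per descriptor over TF-IDF features.
     model = make_pipeline(TfidfVectorizer(),
                           OneVsRestClassifier(LogisticRegression(max_iter=1000)))
     model.fit(train_docs, y_train)
     pred = model.predict(test_docs)
     print("suggested descriptors:", mlb.inverse_transform(pred))
     print("micro-F1 on the test split:", f1_score(y_test, pred, average="micro"))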
  5. Moulaison-Sandy, H.; Adkins, D.; Bossaller, J.; Cho, H.: An automated approach to describing fiction : a methodology to use book reviews to identify affect (2021) 0.02
    0.019494843 = product of:
      0.038989685 = sum of:
        0.038989685 = product of:
          0.07797937 = sum of:
            0.07797937 = weight(_text_:subject in 710) [ClassicSimilarity], result of:
              0.07797937 = score(doc=710,freq=6.0), product of:
                0.16275941 = queryWeight, product of:
                  3.576596 = idf(docFreq=3361, maxDocs=44218)
                  0.04550679 = queryNorm
                0.4791082 = fieldWeight in 710, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.576596 = idf(docFreq=3361, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=710)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Subject headings and genre terms are notoriously difficult to apply, yet they are important for fiction. The current project functions as a proof of concept, using a text-mining methodology to identify affective information (emotion and tone) about fiction titles from professional book reviews as a potential first step toward automating the subject analysis process. Findings are presented and discussed, comparing the results to the range of aboutness and isness information in library cataloging records. The methodology is likewise presented, and the article explores how future work might expand on the current project to enhance catalog records through text mining (a rough tone-scoring sketch follows this entry's footnote).
    Footnote
     Part of a special issue: Artificial intelligence (AI) and automated processes for subject access
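     A minimal sketch of one way to pull a tone signal out of review text, offered only as an illustration of the review-mining idea in the abstract above: it uses NLTK's VADER sentiment analyzer, which yields coarse polarity rather than the richer emotion and tone categories the study targets, and the review snippet is invented.

     import nltk
     from nltk.sentiment import SentimentIntensityAnalyzer

     nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
     sia = SentimentIntensityAnalyzer()

     # Invented snippet standing in for a professional book review.
     review = ("A dark, unsettling thriller whose bleak humor keeps the dread at bay; "
               "the ending is quietly devastating.")
     print(sia.polarity_scores(review))  # neg / neu / pos plus a compound score in [-1, 1]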
  6. Oliver, C.: Leveraging KOS to extend our reach with automated processes (2021) 0.02
    0.018191395 = product of:
      0.03638279 = sum of:
        0.03638279 = product of:
          0.07276558 = sum of:
            0.07276558 = weight(_text_:subject in 722) [ClassicSimilarity], result of:
              0.07276558 = score(doc=722,freq=4.0), product of:
                0.16275941 = queryWeight, product of:
                  3.576596 = idf(docFreq=3361, maxDocs=44218)
                  0.04550679 = queryNorm
                0.4470745 = fieldWeight in 722, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.576596 = idf(docFreq=3361, maxDocs=44218)
                  0.0625 = fieldNorm(doc=722)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     This article provides a conclusion to the special issue on Artificial Intelligence (AI) and Automated Processes for Subject Access. The authors who contributed to this special issue have raised interesting questions and brought attention to important issues. This concluding article looks at common themes and highlights some of the questions raised.
    Footnote
     Part of a special issue: Artificial intelligence (AI) and automated processes for subject access
  7. Suominen, O.; Koskenniemi, I.: Annif Analyzer Shootout : comparing text lemmatization methods for automated subject indexing (2022) 0.01
    0.0139248865 = product of:
      0.027849773 = sum of:
        0.027849773 = product of:
          0.055699546 = sum of:
            0.055699546 = weight(_text_:subject in 658) [ClassicSimilarity], result of:
              0.055699546 = score(doc=658,freq=6.0), product of:
                0.16275941 = queryWeight, product of:
                  3.576596 = idf(docFreq=3361, maxDocs=44218)
                  0.04550679 = queryNorm
                0.34222013 = fieldWeight in 658, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.576596 = idf(docFreq=3361, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=658)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Automated text classification is an important function for many AI systems relevant to libraries, including automated subject indexing and classification. When implemented using the traditional natural language processing (NLP) paradigm, one key part of the process is the normalization of words using stemming or lemmatization, which reduces the amount of linguistic variation and often improves the quality of classification. In this paper, we compare the output of seven different text lemmatization algorithms as well as two baseline methods. We measure how the choice of method affects the quality of text classification using example corpora in three languages. The experiments were performed using the open source Annif toolkit for automated subject indexing and classification, but should also generalize to other NLP toolkits and similar text classification tasks. The results show that lemmatization methods in most cases outperform the baseline methods in text classification, particularly for Finnish and Swedish text, but not for English, where the baseline methods are most effective. The differences between lemmatization methods are quite small. The systematic comparison will help optimize text classification pipelines and inform the further development of the Annif toolkit to incorporate a wider choice of normalization methods.
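     A much-simplified sketch of the comparison idea, not the paper's Annif shootout or its Finnish, Swedish, and English corpora: swap the token normalizer feeding a TF-IDF classifier and see how the score moves. The tiny corpus, the Snowball stemmer standing in for a lemmatizer, and the logistic-regression classifier are all assumptions for illustration.

     from nltk.stem.snowball import SnowballStemmer
     from sklearn.feature_extraction.text import TfidfVectorizer
     from sklearn.linear_model import LogisticRegression
     from sklearn.metrics import f1_score
     from sklearn.pipeline import make_pipeline

     stemmer = SnowballStemmer("english")

     def stemming_analyzer(text):
         # normalize every token before it reaches TF-IDF
         return [stemmer.stem(tok) for tok in text.lower().split()]

     train_texts = ["indexing books automatically", "classifying documents by subject",
                    "watering garden plants daily", "pruning the garden trees"]
     train_labels = ["library", "library", "garden", "garden"]
     test_texts = ["automatic classification of indexed documents", "plants in the garden"]
     test_labels = ["library", "garden"]

     for name, analyzer in [("baseline (lowercased tokens)", lambda t: t.lower().split()),
                            ("snowball stemming", stemming_analyzer)]:
         clf = make_pipeline(TfidfVectorizer(analyzer=analyzer),
                             LogisticRegression(max_iter=1000))
         clf.fit(train_texts, train_labels)
         score = f1_score(test_labels, clf.predict(test_texts), average="micro")
         print(f"{name}: micro-F1 = {score:.2f}")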
  8. Villaespesa, E.; Crider, S.: A critical comparison analysis between human and machine-generated tags for the Metropolitan Museum of Art's collection (2021) 0.01
    0.011369622 = product of:
      0.022739245 = sum of:
        0.022739245 = product of:
          0.04547849 = sum of:
            0.04547849 = weight(_text_:subject in 341) [ClassicSimilarity], result of:
              0.04547849 = score(doc=341,freq=4.0), product of:
                0.16275941 = queryWeight, product of:
                  3.576596 = idf(docFreq=3361, maxDocs=44218)
                  0.04550679 = queryNorm
                0.27942157 = fieldWeight in 341, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.576596 = idf(docFreq=3361, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=341)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Purpose: Based on the highlights of The Metropolitan Museum of Art's collection, the purpose of this paper is to examine the similarities and differences between the subject keyword tags assigned by the museum and those produced by three computer vision systems.
     Design/methodology/approach: This paper uses computer vision tools to generate the data and the Getty Research Institute's Art and Architecture Thesaurus (AAT) to compare the subject keyword tags.
     Findings: This paper finds that there are clear opportunities to use computer vision technologies to automatically generate tags that expand the terms used by the museum. This brings a new perspective to the collection that is different from the traditional art-historical one. However, the study also surfaces challenges concerning the accuracy and lack of context within the computer vision results.
     Practical implications: These findings have important implications for how machine-generated tags complement the current taxonomies and vocabularies in the collection database. In consequence, the museum needs to consider the selection process for choosing which computer vision system to apply to its collection. Furthermore, it also needs to think critically about the kinds of tags it wishes to use, such as colors, materials or objects.
     Originality/value: The study results add to the rapidly evolving field of computer vision within the art information context and provide recommendations on aspects to consider before selecting and implementing these technologies.
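     A back-of-the-envelope sketch of one way to quantify the overlap between human and machine tags; the tags are invented, and a real comparison would reconcile variant terms against the Getty AAT rather than rely on the naive lowercasing used here.

     museum_tags = {"Portrait", "Oil painting", "Woman", "Dress"}
     vision_tags = {"portrait", "person", "woman", "dress", "art"}

     a = {t.lower() for t in museum_tags}
     b = {t.lower() for t in vision_tags}
     shared = a & b
     print("shared tags:", shared)
     print("machine-only tags:", b - a)  # candidate terms that could expand the record
     print("Jaccard overlap:", round(len(shared) / len(a | b), 2))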
  9. Lowe, D.B.; Dollinger, I.; Koster, T.; Herbert, B.E.: Text mining for type of research classification (2021) 0.01
    0.009647444 = product of:
      0.019294888 = sum of:
        0.019294888 = product of:
          0.038589776 = sum of:
            0.038589776 = weight(_text_:subject in 720) [ClassicSimilarity], result of:
              0.038589776 = score(doc=720,freq=2.0), product of:
                0.16275941 = queryWeight, product of:
                  3.576596 = idf(docFreq=3361, maxDocs=44218)
                  0.04550679 = queryNorm
                0.23709705 = fieldWeight in 720, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.576596 = idf(docFreq=3361, maxDocs=44218)
                  0.046875 = fieldNorm(doc=720)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
     Part of a special issue: Artificial intelligence (AI) and automated processes for subject access