Search (52 results, page 1 of 3)

  • theme_ss:"Automatisches Indexieren"
  1. Plaunt, C.; Norgard, B.A.: An association-based method for automatic indexing with a controlled vocabulary (1998) 0.08
    0.08028603 = product of:
      0.16057207 = sum of:
        0.14512798 = weight(_text_:headings in 1794) [ClassicSimilarity], result of:
          0.14512798 = score(doc=1794,freq=12.0), product of:
            0.22113821 = queryWeight, product of:
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.045596033 = queryNorm
            0.6562773 = fieldWeight in 1794, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1794)
        0.015444082 = product of:
          0.030888164 = sum of:
            0.030888164 = weight(_text_:22 in 1794) [ClassicSimilarity], result of:
              0.030888164 = score(doc=1794,freq=2.0), product of:
                0.15966953 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045596033 = queryNorm
                0.19345059 = fieldWeight in 1794, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1794)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
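
    The score tree above is Lucene ClassicSimilarity explain output, and its arithmetic can be re-traced directly. A minimal sketch reproducing the 'headings' leaf from the values shown (the variable names are mine, not a Lucene API):

    ```python
    import math

    # Values copied from the explain tree for _text_:headings in doc 1794.
    idf = 4.849944            # ClassicSimilarity: 1 + ln(maxDocs / (docFreq + 1))
    query_norm = 0.045596033
    freq = 12.0
    field_norm = 0.0390625    # per-field length normalization

    tf = math.sqrt(freq)                    # 3.4641016
    query_weight = idf * query_norm         # 0.22113821
    field_weight = tf * idf * field_norm    # 0.6562773
    print(query_weight * field_weight)      # 0.14512798, the leaf score above

    # The two leaf scores (0.14512798 and 0.015444082) are then summed and
    # scaled by coord(2/4) = 0.5 to give the document total of 0.08028603.
    ```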
    
    Abstract
    In this article, we describe and test a two-stage algorithm based on a lexical collocation technique which maps from the lexical clues contained in a document representation into a controlled vocabulary list of subject headings. Using a collection of 4,626 INSPEC documents, we create a 'dictionary' of associations between the lexical items contained in the titles, authors, and abstracts, and the controlled vocabulary subject headings assigned to those records by human indexers, using a likelihood ratio statistic as the measure of association. In the deployment stage, we use the dictionary to predict which of the controlled vocabulary subject headings best describe new documents when they are presented to the system. Our evaluation of this algorithm, in which we compare the automatically assigned subject headings to the subject headings assigned to the test documents by human catalogers, shows that we can obtain results comparable to, and consistent with, human cataloging. In effect we have cast this as a classic partial-match information retrieval problem: we consider the problem to be one of 'retrieving' (or assigning) the most probably 'relevant' (or correct) controlled vocabulary subject headings to a document based on the clues contained in that document.
    Date
    11. 9.2000 19:53:22
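
    The association-dictionary approach described in the abstract above can be prototyped compactly. A hedged sketch, assuming Dunning's log-likelihood ratio as the association statistic (the paper specifies only 'a likelihood ratio statistic'; all names and sample data are illustrative):

    ```python
    import math
    from collections import Counter

    def llr(k11, k12, k21, k22):
        """Dunning's log-likelihood ratio for a 2x2 contingency table."""
        def s(*ks):
            return sum(k * math.log(k) for k in ks if k > 0)
        return 2 * (s(k11, k12, k21, k22)
                    - s(k11 + k12, k21 + k22)
                    - s(k11 + k21, k12 + k22)
                    + s(k11 + k12 + k21 + k22))

    def train(records):
        """records: (tokens, headings) pairs from an already-catalogued collection."""
        n = len(records)
        tok_df, head_df, pair_df = Counter(), Counter(), Counter()
        for tokens, headings in records:
            tokens, headings = set(tokens), set(headings)
            tok_df.update(tokens)
            head_df.update(headings)
            pair_df.update((t, h) for h in headings for t in tokens)
        assoc = {}
        for (t, h), k11 in pair_df.items():
            k12 = tok_df[t] - k11   # documents with the token but not the heading
            k21 = head_df[h] - k11  # documents with the heading but not the token
            assoc[(t, h)] = llr(k11, k12, k21, n - k11 - k12 - k21)
        return assoc, head_df

    def suggest(tokens, assoc, head_df, top=5):
        """Rank headings for a new document by summed association with its tokens."""
        scores = Counter({h: sum(assoc.get((t, h), 0.0) for t in set(tokens))
                          for h in head_df})
        return scores.most_common(top)

    records = [(["controlled", "vocabulary", "indexing"], ["Indexing"]),
               (["neural", "ranking", "retrieval"], ["Information retrieval"])]
    assoc, head_df = train(records)
    print(suggest(["vocabulary", "based", "indexing"], assoc, head_df))
    ```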
  2. Golub, K.; Lykke, M.; Tudhope, D.: Enhancing social tagging with automated keywords from the Dewey Decimal Classification (2014) 0.05
    0.047148023 = product of:
      0.094296046 = sum of:
        0.059248257 = weight(_text_:headings in 2918) [ClassicSimilarity], result of:
          0.059248257 = score(doc=2918,freq=2.0), product of:
            0.22113821 = queryWeight, product of:
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.045596033 = queryNorm
            0.2679241 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2918)
        0.03504779 = product of:
          0.07009558 = sum of:
            0.07009558 = weight(_text_:terminology in 2918) [ClassicSimilarity], result of:
              0.07009558 = score(doc=2918,freq=2.0), product of:
                0.24053115 = queryWeight, product of:
                  5.2752647 = idf(docFreq=614, maxDocs=44218)
                  0.045596033 = queryNorm
                0.29141995 = fieldWeight in 2918, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2752647 = idf(docFreq=614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2918)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose - The purpose of this paper is to explore the potential of applying the Dewey Decimal Classification (DDC) as an established knowledge organization system (KOS) for enhancing social tagging, with the ultimate purpose of improving subject indexing and information retrieval. Design/methodology/approach - Over 11,000 Intute metadata records in politics were used. In total, 28 politics students were each given four tasks, in which a total of 60 resources were tagged in two different configurations: one with uncontrolled social tags only, and another with uncontrolled social tags as well as suggestions from a controlled vocabulary. The controlled vocabulary was the DDC, which also comprised mappings from the Library of Congress Subject Headings. Findings - The results demonstrate the importance of controlled vocabulary suggestions for indexing and retrieval: they help produce ideas of which tags to use, make it easier to find a focus for the tagging, ensure consistency, and increase the number of access points in retrieval. The value and usefulness of the suggestions proved to be dependent on the quality of the suggestions, both as to conceptual relevance to the user and as to appropriateness of the terminology. Originality/value - No research has investigated the enhancement of social tagging with suggestions from the DDC, an established KOS, in a user trial comparing social tagging only and social tagging enhanced with the suggestions. This paper is a final reflection on all aspects of the study.
  3. Olsgaard, J.N.; Evans, E.J.: Improving keyword indexing (1981) 0.03
    0.029624129 = product of:
      0.118496515 = sum of:
        0.118496515 = weight(_text_:headings in 4996) [ClassicSimilarity], result of:
          0.118496515 = score(doc=4996,freq=2.0), product of:
            0.22113821 = queryWeight, product of:
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.045596033 = queryNorm
            0.5358482 = fieldWeight in 4996, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.078125 = fieldNorm(doc=4996)
      0.25 = coord(1/4)
    
    Abstract
    This communication examines some of the most frequently cited criticisms of keyword indexing. These criticisms include (1) the absence of general subject headings, (2) limited entry points, and (3) irrelevant indexing. Some solutions are suggested to meet these criticisms.
  4. Junger, U.: Can indexing be automated? : the example of the Deutsche Nationalbibliothek (2012) 0.03
    0.029326389 = product of:
      0.117305554 = sum of:
        0.117305554 = weight(_text_:headings in 1717) [ClassicSimilarity], result of:
          0.117305554 = score(doc=1717,freq=4.0), product of:
            0.22113821 = queryWeight, product of:
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.045596033 = queryNorm
            0.5304626 = fieldWeight in 1717, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1717)
      0.25 = coord(1/4)
    
    Abstract
    The German subject headings authority file (Schlagwortnormdatei, SWD) provides a broad controlled vocabulary for indexing documents of all subjects. While the SWD has traditionally been used for intellectual subject cataloguing, primarily of books, the Deutsche Nationalbibliothek (DNB, German National Library) has been working on developing and implementing procedures for the automated assignment of subject headings to online publications. This project, its results and problems are sketched in the paper.
  5. Short, M.: Text mining and subject analysis for fiction; or, using machine learning and information extraction to assign subject headings to dime novels (2019) 0.03
    0.029326389 = product of:
      0.117305554 = sum of:
        0.117305554 = weight(_text_:headings in 5481) [ClassicSimilarity], result of:
          0.117305554 = score(doc=5481,freq=4.0), product of:
            0.22113821 = queryWeight, product of:
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.045596033 = queryNorm
            0.5304626 = fieldWeight in 5481, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5481)
      0.25 = coord(1/4)
    
    Abstract
    This article describes multiple experiments in text mining at Northern Illinois University that were undertaken to improve the efficiency and accuracy of cataloging. It focuses narrowly on subject analysis of dime novels, a format of inexpensive fiction that was popular in the United States between 1860 and 1915. NIU holds more than 55,000 dime novels in its collections, which it is in the process of comprehensively digitizing. Classification, keyword extraction, named-entity recognition, clustering, and topic modeling are discussed as means of assigning subject headings to improve their discoverability by researchers and to increase the productivity of digitization workflows.
  6. Abdul, H.; Khoo, C.: Automatic indexing of medical literature using phrase matching : an exploratory study 0.02
    0.0236993 = product of:
      0.0947972 = sum of:
        0.0947972 = weight(_text_:headings in 3601) [ClassicSimilarity], result of:
          0.0947972 = score(doc=3601,freq=2.0), product of:
            0.22113821 = queryWeight, product of:
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.045596033 = queryNorm
            0.42867854 = fieldWeight in 3601, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.0625 = fieldNorm(doc=3601)
      0.25 = coord(1/4)
    
    Abstract
    Reports the first part of a study applying the technique of phrase matching to the automatic assignment of MeSH subject headings and subheadings to abstracts of periodical articles.
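
    Phrase matching of this kind can be approximated by a greedy longest-match scan of the abstract against the controlled vocabulary. A minimal sketch (the sample MeSH terms and text are invented):

    ```python
    def match_phrases(text, vocabulary):
        """Greedy longest-phrase matching of controlled-vocabulary terms in text."""
        tokens = text.lower().split()
        max_len = max(len(v.split()) for v in vocabulary)
        vocab = {tuple(v.lower().split()): v for v in vocabulary}
        found, i = [], 0
        while i < len(tokens):
            for n in range(min(max_len, len(tokens) - i), 0, -1):
                term = vocab.get(tuple(tokens[i:i + n]))
                if term:
                    found.append(term)
                    i += n
                    break
            else:
                i += 1
        return found

    mesh_sample = ["Myocardial Infarction", "Aspirin", "Risk Factors"]
    print(match_phrases("aspirin reduces risk factors for myocardial infarction",
                        mesh_sample))
    ```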
  7. Witschel, H.F.: Terminology extraction and automatic indexing : comparison and qualitative evaluation of methods (2005) 0.02
    0.0214623 = product of:
      0.0858492 = sum of:
        0.0858492 = product of:
          0.1716984 = sum of:
            0.1716984 = weight(_text_:terminology in 1842) [ClassicSimilarity], result of:
              0.1716984 = score(doc=1842,freq=12.0), product of:
                0.24053115 = queryWeight, product of:
                  5.2752647 = idf(docFreq=614, maxDocs=44218)
                  0.045596033 = queryNorm
                0.71383023 = fieldWeight in 1842, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  5.2752647 = idf(docFreq=614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1842)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Many terminology engineering processes involve the task of automatic terminology extraction: before the terminology of a given domain can be modelled, organised or standardised, important concepts (or terms) of this domain have to be identified and fed into terminological databases. These serve in further steps as a starting point for compiling dictionaries, thesauri or maybe even terminological ontologies for the domain. For the extraction of the initial concepts, extraction methods are needed that operate on specialised language texts. On the other hand, many machine learning or information retrieval applications require automatic indexing techniques. In machine learning applications concerned with the automatic clustering or classification of texts, feature vectors are often needed that describe the contents of a given text briefly but meaningfully. These feature vectors typically consist of a fairly small set of index terms together with weights indicating their importance. Short but meaningful descriptions of document contents as provided by good index terms are also useful to humans: some knowledge management applications (e.g. topic maps) use them as a set of basic concepts (topics). The author believes that the tasks of terminology extraction and automatic indexing have much in common and can thus benefit from the same set of basic algorithms. It is the goal of this paper to outline some methods that may be used in both contexts, but also to find the discriminating factors between the two tasks that call for the variation of parameters or the application of different techniques. The discussion of these methods is based on statistical, syntactical and especially morphological properties of (index) terms. The paper concludes with the presentation of some qualitative and quantitative results comparing statistical and morphological methods.
    Source
    TKE 2005: Proc. of Terminology and Knowledge Engineering (TKE) 2005
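
    One simple statistical termhood baseline of the family the paper compares ranks candidate terms by how much more frequent they are in the domain corpus than in general language. A sketch of such a frequency-ratio ('weirdness') score, with made-up corpora (the paper's own method set may differ):

    ```python
    from collections import Counter

    def weirdness(domain_tokens, reference_tokens):
        """Frequency-ratio termhood: terms overrepresented in the domain rank high."""
        d, r = Counter(domain_tokens), Counter(reference_tokens)
        nd, nr = sum(d.values()), sum(r.values())
        # +1 smoothing so terms unseen in the reference corpus stay finite.
        return {t: (d[t] / nd) / ((r[t] + 1) / (nr + 1)) for t in d}

    domain = "eigenvalue matrix eigenvalue decomposition matrix norm".split()
    general = "the cat sat on the matrix of the mat".split()
    for term, score in sorted(weirdness(domain, general).items(), key=lambda x: -x[1]):
        print(term, round(score, 2))
    ```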
  8. Losee, R.M.: A Gray code based ordering for documents on shelves : classification for browsing and retrieval (1992) 0.02
    0.020736888 = product of:
      0.08294755 = sum of:
        0.08294755 = weight(_text_:headings in 2335) [ClassicSimilarity], result of:
          0.08294755 = score(doc=2335,freq=2.0), product of:
            0.22113821 = queryWeight, product of:
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.045596033 = queryNorm
            0.37509373 = fieldWeight in 2335, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2335)
      0.25 = coord(1/4)
    
    Abstract
    A document classifier places documents together in a linear arrangement for browsing or high-speed access by human or computerised information retrieval systems. Requirements for document classification and browsing systems are developed from similarity measures, distance measures, and the notion of subject aboutness. A requirement that documents be arranged in decreasing order of similarity as the distance from a given document increases often cannot be met. Based on these requirements, information-theoretic considerations, and the Gray code, a classification system is proposed that can classify documents without human intervention. A measure of classifier performance is developed and used to evaluate experimental results comparing the distance between subject headings assigned to documents given classifications from the proposed system and the Library of Congress Classification (LCC) system.
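
    The property exploited above is that consecutive binary-reflected Gray codes differ in exactly one bit, so neighbouring shelf positions differ in a single subject feature. A toy sketch (the feature bitmasks are invented):

    ```python
    def gray_rank(g):
        """Position of Gray-coded feature vector g in the binary-reflected Gray sequence."""
        r = 0
        while g:
            r ^= g
            g >>= 1
        return r

    def shelf_order(docs):
        # Adjacent ranks in Gray order differ in exactly one feature bit.
        return sorted(docs, key=lambda d: gray_rank(d["features"]))

    docs = [{"title": "A", "features": 0b011},
            {"title": "B", "features": 0b110},
            {"title": "C", "features": 0b010}]
    print([d["title"] for d in shelf_order(docs)])  # A, C, B: one bit flips per step
    ```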
  9. Shafer, K.: Scorpion Project explores using Dewey to organize the Web (1996) 0.02
    0.020736888 = product of:
      0.08294755 = sum of:
        0.08294755 = weight(_text_:headings in 6750) [ClassicSimilarity], result of:
          0.08294755 = score(doc=6750,freq=2.0), product of:
            0.22113821 = queryWeight, product of:
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.045596033 = queryNorm
            0.37509373 = fieldWeight in 6750, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6750)
      0.25 = coord(1/4)
    
    Abstract
    As the amount of accessible information on the WWW increases, so will the cost of accessing it, even if search services remain free, due to the increasing amount of time users will have to spend to find needed items. Considers what the seemingly unorganized Web and the organized world of libraries can offer each other. The OCLC Scorpion Project is attempting to combine indexing and cataloguing, specifically focusing on building tools for automatic subject recognition using the techniques of library science and information retrieval. If subject headings or concept domains can be automatically assigned to electronic items, improved filtering tools for searching can be produced.
  10. Junger, U.: Can indexing be automated? : the example of the Deutsche Nationalbibliothek (2014) 0.02
    0.020736888 = product of:
      0.08294755 = sum of:
        0.08294755 = weight(_text_:headings in 1969) [ClassicSimilarity], result of:
          0.08294755 = score(doc=1969,freq=2.0), product of:
            0.22113821 = queryWeight, product of:
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.045596033 = queryNorm
            0.37509373 = fieldWeight in 1969, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1969)
      0.25 = coord(1/4)
    
    Abstract
    The German Integrated Authority File (Gemeinsame Normdatei, GND) provides a broad controlled vocabulary for indexing documents on all subjects. While the GND has traditionally been used for intellectual subject cataloging, primarily for books, the Deutsche Nationalbibliothek (DNB, German National Library) has been working on developing and implementing procedures for automated assignment of subject headings for online publications. This project, its results, and problems are outlined in this article.
  11. Moulaison-Sandy, H.; Adkins, D.; Bossaller, J.; Cho, H.: An automated approach to describing fiction : a methodology to use book reviews to identify affect (2021) 0.02
    0.020736888 = product of:
      0.08294755 = sum of:
        0.08294755 = weight(_text_:headings in 710) [ClassicSimilarity], result of:
          0.08294755 = score(doc=710,freq=2.0), product of:
            0.22113821 = queryWeight, product of:
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.045596033 = queryNorm
            0.37509373 = fieldWeight in 710, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.0546875 = fieldNorm(doc=710)
      0.25 = coord(1/4)
    
    Abstract
    Subject headings and genre terms are notoriously difficult to apply, yet are important for fiction. The current project functions as a proof of concept, using a text-mining methodology to identify affective information (emotion and tone) about fiction titles from professional book reviews as a potential first step in automating the subject analysis process. Findings are presented and discussed, comparing results to the range of aboutness and isness information in library cataloging records. The methodology is likewise presented, and how future work might expand on the current project to enhance catalog records through text-mining is explored.
  12. Chou, C.; Chu, T.: An analysis of BERT (NLP) for assisted subject indexing for Project Gutenberg (2022) 0.02
    0.020736888 = product of:
      0.08294755 = sum of:
        0.08294755 = weight(_text_:headings in 1139) [ClassicSimilarity], result of:
          0.08294755 = score(doc=1139,freq=2.0), product of:
            0.22113821 = queryWeight, product of:
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.045596033 = queryNorm
            0.37509373 = fieldWeight in 1139, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1139)
      0.25 = coord(1/4)
    
    Abstract
    In light of AI (artificial intelligence) and NLP (natural language processing) technologies, this article examines the feasibility of using AI/NLP models to enhance the subject indexing of digital resources. While BERT (Bidirectional Encoder Representations from Transformers) models are widely used in scholarly communities, the authors assess whether BERT models can be used for machine-assisted indexing in the Project Gutenberg collection, by suggesting Library of Congress Subject Headings filtered by certain Library of Congress Classification subclass labels. The findings of this study are informative for further research on BERT models to assist with automatic subject indexing for digital library collections.
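
    The article's pipeline is not reproduced here, but transformer-assisted heading suggestion can be prototyped with a zero-shot classifier over a candidate heading list. A sketch using the Hugging Face pipeline (model choice and candidate headings are illustrative, not the authors' setup):

    ```python
    from transformers import pipeline  # Hugging Face Transformers

    # Model and headings are stand-ins for illustration only.
    classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

    candidate_headings = ["Sea stories", "Whaling", "Adventure fiction"]
    text = ("The crew of a whaling vessel pursues an enormous white whale "
            "across the southern seas.")
    result = classifier(text, candidate_labels=candidate_headings)
    for label, score in zip(result["labels"], result["scores"]):
        print(f"{label}: {score:.2f}")
    ```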
  13. Gil-Leiva, I.: SISA-automatic indexing system for scientific articles : experiments with location heuristics rules versus TF-IDF rules (2017) 0.02
    0.017774476 = product of:
      0.0710979 = sum of:
        0.0710979 = weight(_text_:headings in 3622) [ClassicSimilarity], result of:
          0.0710979 = score(doc=3622,freq=2.0), product of:
            0.22113821 = queryWeight, product of:
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.045596033 = queryNorm
            0.3215089 = fieldWeight in 3622, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.046875 = fieldNorm(doc=3622)
      0.25 = coord(1/4)
    
    Abstract
    Indexing is contextualized and a brief description is provided of some of the most widely used automatic indexing systems. We describe SISA, a system which uses location heuristic rules and statistical rules such as term frequency (TF) or TF-IDF to obtain automatic or semi-automatic indexing, depending on the user's preference. The aim of this research is to ascertain which rules (location heuristic rules or TF-IDF rules) provide the best indexing terms. SISA is used to obtain the automatic indexing of 200 scientific articles on fruit growing written in Portuguese. It uses, on the one hand, location heuristic rules based on the indexing value of certain parts of the articles, such as titles, abstracts, keywords, headings, first paragraphs, conclusions and references, and, on the other, TF-IDF rules. The indexing is then evaluated to ascertain retrieval performance through recall, precision and F-measure. Automatic indexing of the articles with location heuristic rules provided the best results on the evaluation measures.
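
    For reference, the evaluation measures named above reduce to a few lines over one document's assigned versus gold index terms (the terms here are invented):

    ```python
    def prf(assigned, gold):
        """Precision, recall and F-measure for one document's index terms."""
        assigned, gold = set(assigned), set(gold)
        tp = len(assigned & gold)
        p = tp / len(assigned) if assigned else 0.0
        r = tp / len(gold) if gold else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        return p, r, f

    print(prf(["pruning", "irrigation", "fruit"], ["pruning", "fruit", "pests"]))
    ```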
  14. Vledutz-Stokolov, N.: Concept recognition in an automatic text-processing system for the life sciences (1987) 0.01
    0.014812064 = product of:
      0.059248257 = sum of:
        0.059248257 = weight(_text_:headings in 2849) [ClassicSimilarity], result of:
          0.059248257 = score(doc=2849,freq=2.0), product of:
            0.22113821 = queryWeight, product of:
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.045596033 = queryNorm
            0.2679241 = fieldWeight in 2849, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2849)
      0.25 = coord(1/4)
    
    Abstract
    This article describes a natural-language text-processing system designed as an automatic aid to subject indexing at BIOSIS. The intellectual procedure the system should model is deep indexing with a controlled vocabulary of biological concepts - Concept Headings (CHs). On average, ten CHs are assigned to each article by BIOSIS indexers. The automatic procedure consists of two stages: (1) translation of natural-language biological titles into title-semantic representations in the constructed formalized language of Concept Primitives, and (2) translation of the latter representations into the language of CHs. The first stage is performed by matching the titles against the system's Semantic Vocabulary (SV). The SV currently contains approximately 15,000 biological natural-language terms and their translations into the language of Concept Primitives. For ambiguous terms, the SV contains algorithmic rules of term disambiguation, rules based on semantic analysis of the contexts. The second stage of the automatic procedure is performed by matching the title representations against the CH definitions, formulated as Boolean search strategies in the language of Concept Primitives. Three experiments performed with the system and their results are described. The most typical problems the system encounters, the problems of lexical and situational ambiguities, are discussed. The disambiguation techniques employed are described and demonstrated with many examples.
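
    The second stage described above amounts to evaluating Boolean definitions over the set of Concept Primitives recognized in a title. A toy sketch (the primitives and definitions are invented, not BIOSIS's):

    ```python
    # Concept Heading definitions as Boolean predicates over recognized primitives.
    # All primitives and definitions are invented for illustration.
    CH_DEFINITIONS = {
        "Enzyme inhibition": lambda p: "ENZYME" in p and ("INHIBIT" in p or "BLOCK" in p),
        "Plant metabolism":  lambda p: "PLANT" in p and "METABOLISM" in p,
    }

    def assign_concept_headings(primitives):
        """Return every CH whose Boolean definition is satisfied by the primitives."""
        p = set(primitives)
        return [ch for ch, defn in CH_DEFINITIONS.items() if defn(p)]

    # Primitives as stage 1 might produce them from a title via the Semantic Vocabulary.
    print(assign_concept_headings(["ENZYME", "INHIBIT", "RAT"]))
    ```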
  15. Humphrey, S.M.; Névéol, A.; Browne, A.; Gobeil, J.; Ruch, P.; Darmoni, S.J.: Comparing a rule-based versus statistical system for automatic categorization of MEDLINE documents according to biomedical specialty (2009) 0.01
    0.014812064 = product of:
      0.059248257 = sum of:
        0.059248257 = weight(_text_:headings in 3300) [ClassicSimilarity], result of:
          0.059248257 = score(doc=3300,freq=2.0), product of:
            0.22113821 = queryWeight, product of:
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.045596033 = queryNorm
            0.2679241 = fieldWeight in 3300, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3300)
      0.25 = coord(1/4)
    
    Abstract
    Automatic document categorization is an important research problem in Information Science and Natural Language Processing. Many applications, including Word Sense Disambiguation and Information Retrieval in large collections, can benefit from such categorization. This paper focuses on automatic categorization of documents from the biomedical literature into broad discipline-based categories. Two different systems are described and contrasted: CISMeF, which uses rules based on human indexing of the documents by the Medical Subject Headings (MeSH) controlled vocabulary in order to assign metaterms (MTs), and Journal Descriptor Indexing (JDI), based on human categorization of about 4,000 journals and statistical associations between journal descriptors (JDs) and textwords in the documents. We evaluate and compare the performance of these systems against a gold standard of humanly assigned categories for 100 MEDLINE documents, using six measures selected from trec_eval. The results show that for five of the measures performance is comparable, and for one measure JDI is superior. We conclude that these results favor JDI, given the significantly greater intellectual overhead involved in human indexing and maintaining a rule base for mapping MeSH terms to MTs. We also note a JDI method that associates JDs with MeSH indexing rather than textwords, and it may be worthwhile to investigate whether this JDI method (statistical) and CISMeF (rule-based) might be combined and then evaluated to show whether they are complementary to one another.
  16. Strobel, S.; Marín-Arraiza, P.: Metadata for scientific audiovisual media : current practices and perspectives of the TIB / AV-portal (2015) 0.01
    0.014812064 = product of:
      0.059248257 = sum of:
        0.059248257 = weight(_text_:headings in 3667) [ClassicSimilarity], result of:
          0.059248257 = score(doc=3667,freq=2.0), product of:
            0.22113821 = queryWeight, product of:
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.045596033 = queryNorm
            0.2679241 = fieldWeight in 3667, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3667)
      0.25 = coord(1/4)
    
    Abstract
    Descriptive metadata play a key role in finding relevant search results in large amounts of unstructured data. However, current scientific audiovisual media are provided with little metadata, which makes them hard to find, let alone individual sequences within them. In this paper, the TIB / AV-Portal is presented as a use case where methods for the automatic generation of metadata, a semantic search and cross-lingual retrieval (German/English) have already been applied. These methods result in better discoverability of the scientific audiovisual media hosted in the portal. Text, speech, and image content of the videos are automatically indexed with specialised GND (Gemeinsame Normdatei) subject headings. A semantic search is established based on properties of the GND ontology. The cross-lingual retrieval uses English 'translations' that were derived by an ontology mapping (DBpedia, among others). Further ways of increasing the discoverability and reuse of the metadata are publishing them as Linked Open Data and interlinking them with other data sets.
  17. Golub, K.: Automatic subject indexing of text (2019) 0.01
    0.014812064 = product of:
      0.059248257 = sum of:
        0.059248257 = weight(_text_:headings in 5268) [ClassicSimilarity], result of:
          0.059248257 = score(doc=5268,freq=2.0), product of:
            0.22113821 = queryWeight, product of:
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.045596033 = queryNorm
            0.2679241 = fieldWeight in 5268, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5268)
      0.25 = coord(1/4)
    
    Abstract
    Automatic subject indexing addresses problems of scale and sustainability and can at the same time be used to enrich existing metadata records, establish more connections across and between resources from various metadata and resource collections, and enhance the consistency of the metadata. In this work, automatic subject indexing focuses on assigning index terms or classes from established knowledge organization systems (KOSs) for subject indexing, like thesauri, subject headings systems and classification systems. The following major approaches are discussed, in terms of their similarities and differences, advantages and disadvantages for automatic assigned indexing from KOSs: "text categorization," "document clustering," and "document classification." Text categorization is perhaps the most widespread machine-learning approach, with what seems generally good reported performance. Document clustering automatically both creates groups of related documents and extracts names of subjects depicting the group at hand. Document classification reuses the intellectual effort invested into creating a KOS for subject indexing, and even simple string-matching algorithms have been reported to achieve good results, because one concept can be described using a number of different terms, including equivalent, related, narrower and broader terms. Finally, the applicability of automatic subject indexing to operative information systems and the challenges of evaluation are outlined, suggesting the need for more research.
  18. Gödert, W.: Detecting multiword phrases in mathematical text corpora (2012) 0.01
    0.014019116 = product of:
      0.056076463 = sum of:
        0.056076463 = product of:
          0.11215293 = sum of:
            0.11215293 = weight(_text_:terminology in 466) [ClassicSimilarity], result of:
              0.11215293 = score(doc=466,freq=2.0), product of:
                0.24053115 = queryWeight, product of:
                  5.2752647 = idf(docFreq=614, maxDocs=44218)
                  0.045596033 = queryNorm
                0.46627194 = fieldWeight in 466, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2752647 = idf(docFreq=614, maxDocs=44218)
                  0.0625 = fieldNorm(doc=466)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    We present an approach for detecting multiword phrases in mathematical text corpora. The method used is based on characteristic features of mathematical terminology. It makes use of a software tool named Lingo which allows words to be identified by means of previously defined dictionaries for specific word classes such as adjectives, personal names or nouns. The detection of multiword groups is done algorithmically. Possible advantages of the method for indexing and information retrieval, and conclusions for applying dictionary-based methods of automatic indexing instead of stemming procedures, are discussed.
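
    A hedged sketch of the dictionary-plus-algorithm idea: tag tokens against small word-class dictionaries, then collect adjacent adjective/noun runs that end in a noun as multiword candidates (the dictionaries here are tiny stand-ins, not Lingo's):

    ```python
    # Tiny stand-in dictionaries; Lingo works with configurable word-class dictionaries.
    NOUNS = {"matrix", "decomposition", "value", "eigenvalue"}
    ADJECTIVES = {"singular", "sparse", "positive"}

    def multiword_candidates(tokens):
        """Collect maximal adjective/noun runs that end in a noun."""
        runs, run = [], []
        for t in tokens + ["<end>"]:
            if t in NOUNS or t in ADJECTIVES:
                run.append(t)
            else:
                while run and run[-1] not in NOUNS:   # a candidate must end in a noun
                    run.pop()
                if len(run) > 1:
                    runs.append(" ".join(run))
                run = []
        return runs

    print(multiword_candidates("the singular value decomposition of a sparse matrix".split()))
    ```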
  19. Voorhees, E.M.: Implementing agglomerative hierarchic clustering algorithms for use in document retrieval (1986) 0.01
    0.012355265 = product of:
      0.04942106 = sum of:
        0.04942106 = product of:
          0.09884212 = sum of:
            0.09884212 = weight(_text_:22 in 402) [ClassicSimilarity], result of:
              0.09884212 = score(doc=402,freq=2.0), product of:
                0.15966953 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045596033 = queryNorm
                0.61904186 = fieldWeight in 402, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=402)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 22(1986) no.6, S.465-476
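
    The agglomerative hierarchic methods the paper implements (single-link, complete-link, group-average and the like) are available off the shelf today. A minimal sketch with SciPy, using random stand-in document vectors:

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)
    doc_vectors = rng.random((8, 5))   # 8 toy documents, 5 term weights each

    # method="average" is group-average clustering; "single"/"complete" also apply.
    tree = linkage(doc_vectors, method="average", metric="cosine")
    clusters = fcluster(tree, t=3, criterion="maxclust")
    print(clusters)                    # cluster label per document
    ```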
  20. Willis, C.; Losee, R.M.: A random walk on an ontology : using thesaurus structure for automatic subject indexing (2013) 0.01
    0.01184965 = product of:
      0.0473986 = sum of:
        0.0473986 = weight(_text_:headings in 1016) [ClassicSimilarity], result of:
          0.0473986 = score(doc=1016,freq=2.0), product of:
            0.22113821 = queryWeight, product of:
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.045596033 = queryNorm
            0.21433927 = fieldWeight in 1016, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.849944 = idf(docFreq=940, maxDocs=44218)
              0.03125 = fieldNorm(doc=1016)
      0.25 = coord(1/4)
    
    Abstract
    Relationships between terms and features are an essential component of thesauri, ontologies, and a range of controlled vocabularies. In this article, we describe ways to identify important concepts in documents using the relationships in a thesaurus or other vocabulary structures. We introduce a methodology for the analysis and modeling of the indexing process based on a weighted random walk algorithm. The primary goal of this research is the analysis of the contribution of thesaurus structure to the indexing process. The resulting models are evaluated in the context of automatic subject indexing using four collections of documents pre-indexed with four different thesauri (AGROVOC [UN Food and Agriculture Organization], high-energy physics taxonomy [HEP], National Agricultural Library Thesaurus [NALT], and Medical Subject Headings [MeSH]). We also introduce a thesaurus-centric matching algorithm intended to improve the quality of candidate concepts. In all cases, the weighted random walk improves automatic indexing performance over matching alone, with an increase in average precision (AP) of 9% for HEP, 11% for MeSH, 35% for NALT, and 37% for AGROVOC. The results of the analysis support our hypothesis that subject indexing is in part a browsing process, and that using the vocabulary and its structure in a thesaurus contributes to the indexing process. The amount that the vocabulary structure contributes was found to differ among the four thesauri, possibly due to the vocabulary used in the corresponding thesauri and the structural relationships between the terms. Each of the thesauri and the manual indexing associated with it is characterized using the methods developed here.
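
    The weighted-random-walk idea can be prototyped as a personalized PageRank over the thesaurus graph, restarting at the concepts matched in the document. A sketch (the graph fragment and weights are invented, and the paper's own transition weighting differs in detail):

    ```python
    import networkx as nx

    # Toy thesaurus fragment: edges carry relationship weights (e.g. BT/NT/RT).
    G = nx.Graph()
    G.add_edge("fruit growing", "orchards", weight=1.0)
    G.add_edge("fruit growing", "pruning", weight=0.5)
    G.add_edge("orchards", "irrigation", weight=0.5)

    matched = {"pruning": 1.0}  # concepts found by string matching in the document
    scores = nx.pagerank(G, alpha=0.85, personalization=matched, weight="weight")
    for concept, s in sorted(scores.items(), key=lambda x: -x[1]):
        print(concept, round(s, 3))
    ```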

Languages

  • e 36
  • d 15
  • ru 1

Types

  • a 48
  • el 4
  • x 3
  • m 1