Search (86 results, page 1 of 5)

  • Active filter: theme_ss:"Automatisches Indexieren"
  1. Martins, A.L.; Souza, R.R.; Ribeiro de Mello, H.: ¬The use of noun phrases in information retrieval : proposing a mechanism for automatic classification (2014) 0.04
    0.04274738 = product of:
      0.08549476 = sum of:
        0.08549476 = sum of:
          0.057240717 = weight(_text_:classification in 1441) [ClassicSimilarity], result of:
            0.057240717 = score(doc=1441,freq=12.0), product of:
              0.16603322 = queryWeight, product of:
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.05213454 = queryNorm
              0.3447546 = fieldWeight in 1441, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.03125 = fieldNorm(doc=1441)
          0.028254041 = weight(_text_:22 in 1441) [ClassicSimilarity], result of:
            0.028254041 = score(doc=1441,freq=2.0), product of:
              0.18256627 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05213454 = queryNorm
              0.15476047 = fieldWeight in 1441, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1441)
      0.5 = coord(1/2)
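    The score breakdowns shown for each result are Lucene ClassicSimilarity explain trees. As a sanity check, the score of this first result can be recomputed from the constants displayed above; the Python sketch below uses only values copied from the tree (tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and the coord(1/2) factor) and is an editorial illustration, not part of the original record.

```python
# Sketch: recompute the ClassicSimilarity score displayed above for doc 1441.
# All constants are copied from the explain tree; nothing else is assumed.
import math

QUERY_NORM = 0.05213454   # queryNorm, shared by both query terms
COORD = 0.5               # the coord(1/2) factor at the bottom of the tree

def term_score(freq: float, idf: float, field_norm: float) -> float:
    tf = math.sqrt(freq)                  # e.g. 3.4641016 for freq=12.0
    query_weight = idf * QUERY_NORM       # e.g. 0.16603322 for "classification"
    field_weight = tf * idf * field_norm  # e.g. 0.3447546  for "classification"
    return query_weight * field_weight

score = COORD * (
    term_score(freq=12.0, idf=3.1847067, field_norm=0.03125)    # _text_:classification
    + term_score(freq=2.0, idf=3.5018296, field_norm=0.03125)   # _text_:22
)
print(score)  # ~0.04274738, matching the score shown for result 1
```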
    
    Abstract
    This paper presents research on using syntactic structures known as noun phrases (NPs) to increase the effectiveness and efficiency of document classification mechanisms. Our hypothesis is that NPs can be used instead of single words as semantic aggregators, reducing the number of words fed into the classification system without losing semantic coverage and thereby increasing its efficiency. The experiment divided the document classification process into three phases: a) NP preprocessing; b) system training; and c) classification experiments. In the first phase, a corpus of digitized texts was submitted to a natural language processing platform for part-of-speech tagging, and then Perl scripts from the PALAVRAS package were used to extract the noun phrases. The preprocessing also involved a) removing low-meaning NP pre-modifiers such as quantifiers; b) identifying synonyms and substituting them with common hypernyms; and c) stemming the relevant words contained in the NPs for similarity checking against other NPs. The first tests with the resulting documents demonstrated the approach's effectiveness: we compared the structural similarity of the documents before and after the preprocessing steps of phase one, and the texts remained consistent with the originals and kept their readability. The second phase involves submitting the modified documents to an SVM algorithm to identify clusters and classify the documents; the classification rules are to be established using a machine learning approach. Finally, tests will be conducted to check the effectiveness of the whole process.
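    The pipeline above relies on PALAVRAS (a Portuguese parser) and Perl scripts, which are not reproduced here. As a rough, hedged sketch of the same idea with different tools, one could extract noun phrases with spaCy, drop determiner/quantifier pre-modifiers, lemmatize instead of stemming, and feed the resulting phrase features to an SVM; this assumes spaCy with the en_core_web_sm model and scikit-learn, and it is not the authors' implementation.

```python
# Hedged sketch (not the authors' PALAVRAS pipeline): noun-phrase features + SVM.
# Assumes: pip install spacy scikit-learn && python -m spacy download en_core_web_sm
import spacy
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

nlp = spacy.load("en_core_web_sm")

def noun_phrase_terms(text: str) -> str:
    """Extract noun phrases, drop determiner/quantifier pre-modifiers, lemmatize."""
    phrases = []
    for np in nlp(text).noun_chunks:
        tokens = [t.lemma_.lower() for t in np if t.dep_ not in ("det", "nummod")]
        if tokens:
            phrases.append("_".join(tokens))   # keep each NP as one feature
    return " ".join(phrases)

# Toy labelled corpus; a real experiment would use the digitized text collection.
texts = ["Three large neural networks classify scientific articles automatically.",
         "The librarian assigns subject headings to the new monographs by hand."]
labels = ["automatic", "manual"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit([noun_phrase_terms(t) for t in texts], labels)
print(clf.predict([noun_phrase_terms("The networks classify new articles.")]))
```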
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  2. Ward, M.L.: ¬The future of the human indexer (1996) 0.04
    0.03871685 = product of:
      0.0774337 = sum of:
        0.0774337 = sum of:
          0.03505264 = weight(_text_:classification in 7244) [ClassicSimilarity], result of:
            0.03505264 = score(doc=7244,freq=2.0), product of:
              0.16603322 = queryWeight, product of:
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.05213454 = queryNorm
              0.21111822 = fieldWeight in 7244, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.046875 = fieldNorm(doc=7244)
          0.04238106 = weight(_text_:22 in 7244) [ClassicSimilarity], result of:
            0.04238106 = score(doc=7244,freq=2.0), product of:
              0.18256627 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05213454 = queryNorm
              0.23214069 = fieldWeight in 7244, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=7244)
      0.5 = coord(1/2)
    
    Abstract
    Considers the principles of indexing and the intellectual skills involved, in order to determine what would be required of automatic indexing systems if they are to supplant or complement the human indexer. Good indexing requires considerable prior knowledge of the literature; judgement as to what to index and to what depth; reading skills; abstracting skills; and classification skills. Illustrates these features with a detailed description of the abstracting and indexing processes involved in generating entries for the mechanical engineering database POWERLINK. Briefly assesses the possibility of replacing human indexers with specialist indexing software, with particular reference to the Object Analyzer from the InTEXT automatic indexing system, using the criteria described for human indexers. At present it is unlikely that the automatic indexer will replace the human indexer, but when more primary texts are available in electronic form it may become a useful productivity tool for dealing with large quantities of low-grade texts (should they be wanted in the database).
    Date
    9. 2.1997 18:44:22
  3. Sparck Jones, K.: Automatic keyword classification for information retrieval (1971) 0.03
    0.02921053 = product of:
      0.05842106 = sum of:
        0.05842106 = product of:
          0.11684212 = sum of:
            0.11684212 = weight(_text_:classification in 5176) [ClassicSimilarity], result of:
              0.11684212 = score(doc=5176,freq=2.0), product of:
                0.16603322 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05213454 = queryNorm
                0.70372736 = fieldWeight in 5176, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.15625 = fieldNorm(doc=5176)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  4. Voorhees, E.M.: Implementing agglomerative hierarchic clustering algorithms for use in document retrieval (1986) 0.03
    0.028254041 = product of:
      0.056508083 = sum of:
        0.056508083 = product of:
          0.113016166 = sum of:
            0.113016166 = weight(_text_:22 in 402) [ClassicSimilarity], result of:
              0.113016166 = score(doc=402,freq=2.0), product of:
                0.18256627 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05213454 = queryNorm
                0.61904186 = fieldWeight in 402, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=402)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Information processing and management. 22(1986) no.6, S.465-476
  5. Fuhr, N.; Niewelt, B.: ¬Ein Retrievaltest mit automatisch indexierten Dokumenten (1984) 0.02
    0.024722286 = product of:
      0.04944457 = sum of:
        0.04944457 = product of:
          0.09888914 = sum of:
            0.09888914 = weight(_text_:22 in 262) [ClassicSimilarity], result of:
              0.09888914 = score(doc=262,freq=2.0), product of:
                0.18256627 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05213454 = queryNorm
                0.5416616 = fieldWeight in 262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=262)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    20.10.2000 12:22:23
  6. Hlava, M.M.K.: Automatic indexing : comparing rule-based and statistics-based indexing systems (2005) 0.02
    0.024722286 = product of:
      0.04944457 = sum of:
        0.04944457 = product of:
          0.09888914 = sum of:
            0.09888914 = weight(_text_:22 in 6265) [ClassicSimilarity], result of:
              0.09888914 = score(doc=6265,freq=2.0), product of:
                0.18256627 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05213454 = queryNorm
                0.5416616 = fieldWeight in 6265, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6265)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Information outlook. 9(2005) no.8, S.22-23
  7. Griffiths, A.; Robinson, L.A.; Willett, P.: Hierarchic agglomerative clustering methods for automatic document classification (1984) 0.02
    0.023368426 = product of:
      0.04673685 = sum of:
        0.04673685 = product of:
          0.0934737 = sum of:
            0.0934737 = weight(_text_:classification in 2414) [ClassicSimilarity], result of:
              0.0934737 = score(doc=2414,freq=2.0), product of:
                0.16603322 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05213454 = queryNorm
                0.5629819 = fieldWeight in 2414, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.125 = fieldNorm(doc=2414)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  8. Borko, H.; Bernick, M.: Automatic document classification : T.2 (1964) 0.02
    0.023368426 = product of:
      0.04673685 = sum of:
        0.04673685 = product of:
          0.0934737 = sum of:
            0.0934737 = weight(_text_:classification in 4197) [ClassicSimilarity], result of:
              0.0934737 = score(doc=4197,freq=2.0), product of:
                0.16603322 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05213454 = queryNorm
                0.5629819 = fieldWeight in 4197, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.125 = fieldNorm(doc=4197)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  9. Frants, V.I.; Kamenoff, N.I.; Shapiro, J.: ¬One approach to classification of users and automatic clustering of documents (1993) 0.02
    0.023368426 = product of:
      0.04673685 = sum of:
        0.04673685 = product of:
          0.0934737 = sum of:
            0.0934737 = weight(_text_:classification in 4569) [ClassicSimilarity], result of:
              0.0934737 = score(doc=4569,freq=8.0), product of:
                0.16603322 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05213454 = queryNorm
                0.5629819 = fieldWeight in 4569, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4569)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Shows how to construct a classification of users and a clustering of documents automatically, on the basis of users' information needs, by creating clusters of documents and cross-references among clusters from users' search requests. Examines the role of feedback in the construction of this classification and clustering, so that the classification can be changed over time to reflect the users' changing needs.
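    As a rough illustration of the idea (not the authors' algorithm), documents can be grouped by the set of search requests that retrieve them, so that clusters reflect users' information needs; the toy data and grouping rule below are assumptions for illustration.

```python
# Rough illustration only: cluster documents by the set of user search
# requests that retrieved them (toy data, not the authors' algorithm).
from collections import defaultdict

retrievals = {                 # request id -> documents retrieved for it
    "r1": {"d1", "d2", "d3"},
    "r2": {"d2", "d3"},
    "r3": {"d4"},
}

# signature of a document = the set of requests that retrieve it
signatures = defaultdict(set)
for request, docs in retrievals.items():
    for doc in docs:
        signatures[doc].add(request)

# documents sharing a signature form one cluster; requests cross-reference clusters
clusters = defaultdict(set)
for doc, sig in signatures.items():
    clusters[frozenset(sig)].add(doc)

for sig, docs in clusters.items():
    print(sorted(sig), "->", sorted(docs))
```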
  10. Sparck Jones, K.; Jackson, D.M.: ¬The use of automatically obtained keyword classification for information retrieval (1970) 0.02
    0.023368426 = product of:
      0.04673685 = sum of:
        0.04673685 = product of:
          0.0934737 = sum of:
            0.0934737 = weight(_text_:classification in 5177) [ClassicSimilarity], result of:
              0.0934737 = score(doc=5177,freq=2.0), product of:
                0.16603322 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05213454 = queryNorm
                0.5629819 = fieldWeight in 5177, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.125 = fieldNorm(doc=5177)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  11. Borko, H.; Bernick, M.: Automatic document classification : T.1 (1963) 0.02
    0.023368426 = product of:
      0.04673685 = sum of:
        0.04673685 = product of:
          0.0934737 = sum of:
            0.0934737 = weight(_text_:classification in 5487) [ClassicSimilarity], result of:
              0.0934737 = score(doc=5487,freq=2.0), product of:
                0.16603322 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05213454 = queryNorm
                0.5629819 = fieldWeight in 5487, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.125 = fieldNorm(doc=5487)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  12. Goller, C.; Löning, J.; Will, T.; Wolff, W.: Automatic document classification : a thorough evaluation of various methods (2000) 0.02
    0.02146527 = product of:
      0.04293054 = sum of:
        0.04293054 = product of:
          0.08586108 = sum of:
            0.08586108 = weight(_text_:classification in 5480) [ClassicSimilarity], result of:
              0.08586108 = score(doc=5480,freq=12.0), product of:
                0.16603322 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05213454 = queryNorm
                0.5171319 = fieldWeight in 5480, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5480)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    (Automatic) document classification is generally defined as content-based assignment of one or more predefined categories to documents. Usually, machine learning, statistical pattern recognition, or neural network approaches are used to construct classifiers automatically. In this paper we thoroughly evaluate a wide variety of these methods on a document classification task for German text. We evaluate different feature construction and selection methods and various classifiers. Our main results are: (1) feature selection is necessary not only to reduce learning and classification time, but also to avoid overfitting (even for Support Vector Machines); (2) surprisingly, our morphological analysis does not improve classification quality compared to a letter 5-gram approach; (3) Support Vector Machines are significantly better than all other classification methods
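    As a hedged sketch of one configuration from this kind of evaluation (letter 5-grams as features, feature selection, and a Support Vector Machine), the scikit-learn pipeline below shows the moving parts; the toy German sentences, the k value, and the chi-square selector are assumptions, not the authors' setup.

```python
# Sketch: letter 5-gram features + feature selection + linear SVM,
# the kind of configuration evaluated above (not the authors' actual setup).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["Automatische Indexierung von Dokumenten und Texten.",
         "Der Hund bellt laut im Garten des Nachbarn.",
         "Dokumente werden automatisch klassifiziert und indexiert.",
         "Im Garten spielen die Kinder den ganzen Nachmittag."]
labels = ["indexing", "other", "indexing", "other"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(5, 5)),  # letter 5-grams
    SelectKBest(chi2, k=20),  # keep the 20 best features (toy-sized; real runs use more)
    LinearSVC(),
)
model.fit(texts, labels)
print(model.predict(["Automatische Klassifikation von Dokumenten."]))
```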
  13. Fuhr, N.: Ranking-Experimente mit gewichteter Indexierung (1986) 0.02
    0.02119053 = product of:
      0.04238106 = sum of:
        0.04238106 = product of:
          0.08476212 = sum of:
            0.08476212 = weight(_text_:22 in 58) [ClassicSimilarity], result of:
              0.08476212 = score(doc=58,freq=2.0), product of:
                0.18256627 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05213454 = queryNorm
                0.46428138 = fieldWeight in 58, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=58)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    14. 6.2015 22:12:44
  14. Hauer, M.: Automatische Indexierung (2000) 0.02
    0.02119053 = product of:
      0.04238106 = sum of:
        0.04238106 = product of:
          0.08476212 = sum of:
            0.08476212 = weight(_text_:22 in 5887) [ClassicSimilarity], result of:
              0.08476212 = score(doc=5887,freq=2.0), product of:
                0.18256627 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05213454 = queryNorm
                0.46428138 = fieldWeight in 5887, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5887)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Wissen in Aktion: Wege des Knowledge Managements. 22. Online-Tagung der DGI, Frankfurt am Main, 2.-4.5.2000. Proceedings. Hrsg.: R. Schmidt
  15. Fuhr, N.: Rankingexperimente mit gewichteter Indexierung (1986) 0.02
    0.02119053 = product of:
      0.04238106 = sum of:
        0.04238106 = product of:
          0.08476212 = sum of:
            0.08476212 = weight(_text_:22 in 2051) [ClassicSimilarity], result of:
              0.08476212 = score(doc=2051,freq=2.0), product of:
                0.18256627 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05213454 = queryNorm
                0.46428138 = fieldWeight in 2051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2051)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    14. 6.2015 22:12:56
  16. Hauer, M.: Tiefenindexierung im Bibliothekskatalog : 17 Jahre intelligentCAPTURE (2019) 0.02
    0.02119053 = product of:
      0.04238106 = sum of:
        0.04238106 = product of:
          0.08476212 = sum of:
            0.08476212 = weight(_text_:22 in 5629) [ClassicSimilarity], result of:
              0.08476212 = score(doc=5629,freq=2.0), product of:
                0.18256627 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05213454 = queryNorm
                0.46428138 = fieldWeight in 5629, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5629)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    B.I.T.online. 22(2019) H.2, S.163-166
  17. Suominen, O.; Koskenniemi, I.: Annif Analyzer Shootout : comparing text lemmatization methods for automated subject indexing (2022) 0.02
    0.020654965 = product of:
      0.04130993 = sum of:
        0.04130993 = product of:
          0.08261986 = sum of:
            0.08261986 = weight(_text_:classification in 658) [ClassicSimilarity], result of:
              0.08261986 = score(doc=658,freq=16.0), product of:
                0.16603322 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05213454 = queryNorm
                0.49761042 = fieldWeight in 658, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=658)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Automated text classification is an important function for many AI systems relevant to libraries, including automated subject indexing and classification. When implemented using the traditional natural language processing (NLP) paradigm, one key part of the process is the normalization of words using stemming or lemmatization, which reduces the amount of linguistic variation and often improves the quality of classification. In this paper, we compare the output of seven different text lemmatization algorithms as well as two baseline methods. We measure how the choice of method affects the quality of text classification using example corpora in three languages. The experiments have been performed using the open source Annif toolkit for automated subject indexing and classification, but should generalize also to other NLP toolkits and similar text classification tasks. The results show that lemmatization methods in most cases outperform baseline methods in text classification particularly for Finnish and Swedish text, but not English, where baseline methods are most effective. The differences between lemmatization methods are quite small. The systematic comparison will help optimize text classification pipelines and inform the further development of the Annif toolkit to incorporate a wider choice of normalization methods.
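    The Annif experiments themselves are not reproduced here. As a minimal sketch of the comparison pattern (two normalization methods run through an otherwise identical classification pipeline and scored the same way), the snippet below contrasts a lowercase baseline with NLTK's Snowball stemmer on a toy English corpus; the corpus, the classifier, and the use of macro-F1 are assumptions for illustration only.

```python
# Sketch: compare two token-normalization methods in an otherwise identical
# text-classification pipeline (illustration only, not the Annif setup).
from nltk.stem.snowball import SnowballStemmer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

stemmer = SnowballStemmer("english")

def baseline_norm(text: str) -> str:       # baseline: simple lowercasing
    return text.lower()

def snowball_norm(text: str) -> str:       # Snowball stemming after lowercasing
    return " ".join(stemmer.stem(tok) for tok in text.lower().split())

texts = ["Automated indexing systems assign subject headings automatically.",
         "The indexer assigned subject headings to the new books.",
         "Indexing and classification are done by trained librarians.",
         "The weather was cold and the garden froze overnight.",
         "Cats and dogs played in the sunny garden all afternoon.",
         "The garden needs watering during the dry summer weeks."]
labels = ["indexing", "indexing", "indexing", "other", "other", "other"]

for name, norm in [("baseline", baseline_norm), ("snowball", snowball_norm)]:
    pipe = make_pipeline(TfidfVectorizer(), LinearSVC())
    scores = cross_val_score(pipe, [norm(t) for t in texts], labels,
                             scoring="f1_macro", cv=3)
    print(name, round(scores.mean(), 3))
```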
  18. Losee, R.M.: ¬A Gray code based ordering for documents on shelves : classification for browsing and retrieval (1992) 0.02
    0.020447372 = product of:
      0.040894743 = sum of:
        0.040894743 = product of:
          0.081789486 = sum of:
            0.081789486 = weight(_text_:classification in 2335) [ClassicSimilarity], result of:
              0.081789486 = score(doc=2335,freq=8.0), product of:
                0.16603322 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05213454 = queryNorm
                0.49260917 = fieldWeight in 2335, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2335)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A document classifier places documents together in a linear arrangement for browsing or high-speed access by human or computerised information retrieval systems. Requirements for document classification and browsing systems are developed from similarity measures, distance measures, and the notion of subject aboutness. A requirement that documents be arranged in decreasing order of similarity as the distance from a given document increases can often not be met. Based on these requirements, information-theoretic considerations, and the Gray code, a classification system is proposed that can classify documents without human intervention. A measure of classifier performance is developed and used to evaluate experimental results comparing the distance between subject headings assigned to documents under the proposed system and under the Library of Congress Classification (LCC) system.
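    A rough sketch of the underlying Gray-code trick, under the assumption that each document has been reduced to a binary vector of subject features: read the bit vector as a binary-reflected Gray code, decode it to an integer rank, and shelve documents in rank order, so that neighbouring positions in the code differ in only one feature. This illustrates Gray-code ordering in general, not Losee's actual classifier or its performance measure.

```python
# Sketch: order documents by decoding their binary subject-feature vectors
# as binary-reflected Gray codes (illustration of the ordering idea only).

def gray_to_rank(bits: list[int]) -> int:
    """Decode a binary-reflected Gray code (most significant bit first) to its rank."""
    rank, prev = 0, 0
    for g in bits:
        prev ^= g                  # binary bit b[i] = b[i-1] XOR gray bit g[i]
        rank = (rank << 1) | prev
    return rank

# toy documents: presence/absence of three subject features,
# e.g. (indexing, classification, retrieval)
docs = {
    "doc_a": [1, 1, 0],
    "doc_b": [0, 1, 1],
    "doc_c": [1, 0, 0],
    "doc_d": [0, 0, 1],
}

shelf_order = sorted(docs, key=lambda d: gray_to_rank(docs[d]))
print(shelf_order)   # consecutive Gray ranks correspond to codes differing in one bit
```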
  19. Micco, M.; Popp, R.: Improving library subject access (ILSA) : a theory of clustering based in classification (1994) 0.02
    0.017707944 = product of:
      0.035415888 = sum of:
        0.035415888 = product of:
          0.070831776 = sum of:
            0.070831776 = weight(_text_:classification in 7715) [ClassicSimilarity], result of:
              0.070831776 = score(doc=7715,freq=6.0), product of:
                0.16603322 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05213454 = queryNorm
                0.42661208 = fieldWeight in 7715, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7715)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The ILSA prototype was developed using an object-oriented multimedia user interface on six NeXT workstations with two databases: the first with 100,000 MARC records and the second with 20,000 additional records enhanced with table-of-contents data. The items are grouped into subject clusters consisting of the classification number and the first subject heading assigned. Every other distinct keyword in the MARC record is linked to the subject cluster in an automated natural language mapping scheme, which leads the user from the term entered to the controlled vocabulary of the subject clusters in which the term appeared. The use of a hierarchical classification number (Dewey) makes it possible to broaden or narrow a search at will.
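    As a sketch of the mapping scheme described (each distinct keyword of a record is linked to that record's subject cluster, i.e. classification number plus first subject heading), one can build an inverted index from keywords to clusters; the record layout below is hypothetical, not ILSA's actual MARC processing.

```python
# Sketch of the described mapping: link every distinct keyword of a record to
# its subject cluster (class number + first subject heading). Hypothetical layout.
from collections import defaultdict

records = [
    {"class_no": "025.4", "subjects": ["Automatic indexing", "Information retrieval"],
     "keywords": {"indexing", "automatic", "retrieval"}},
    {"class_no": "006.3", "subjects": ["Machine learning"],
     "keywords": {"learning", "classification", "automatic"}},
]

keyword_to_clusters = defaultdict(set)
for rec in records:
    cluster = (rec["class_no"], rec["subjects"][0])   # class number + first heading
    for kw in rec["keywords"]:
        keyword_to_clusters[kw].add(cluster)

# A user who types "automatic" is led to the controlled clusters it occurs in:
print(sorted(keyword_to_clusters["automatic"]))
```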
  20. Gomez, I.: Coping with the problem of subject classification diversity (1996) 0.02
    0.017707944 = product of:
      0.035415888 = sum of:
        0.035415888 = product of:
          0.070831776 = sum of:
            0.070831776 = weight(_text_:classification in 5074) [ClassicSimilarity], result of:
              0.070831776 = score(doc=5074,freq=6.0), product of:
                0.16603322 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05213454 = queryNorm
                0.42661208 = fieldWeight in 5074, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5074)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The delimitation of a research field in bibliometric studies presents the problem of the diversity of subject classifications used in the sources of input and output data. Classification of documents according to thematic codes or keywords is the most accurate method, mainly used in specialized bibliographic or patent databases. Classification of journals into disciplines offers lower specificity and has some shortcomings, such as the change over time of both journals and disciplines and the increasing interdisciplinarity of research. Standardization of subject classifications emerges as an important point in bibliometric studies in order to allow international comparisons, although flexibility is needed to meet the needs of local studies.

Languages

  • e 67
  • d 18
  • ru 1

Types

  • a 78
  • el 7
  • m 2
  • s 2
  • x 2
  • p 1