Search (47 results, page 2 of 3)

  • theme_ss:"Automatisches Klassifizieren"
  • year_i:[2010 TO 2020}
  1. Kasprzik, A.: Automatisierte und semiautomatisierte Klassifizierung : eine Analyse aktueller Projekte (2014) 0.00
    0.0029552064 = product of:
      0.010835757 = sum of:
        0.0054389704 = product of:
          0.010877941 = sum of:
            0.010877941 = weight(_text_:h in 2470) [ClassicSimilarity], result of:
              0.010877941 = score(doc=2470,freq=2.0), product of:
                0.0660481 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.026584605 = queryNorm
                0.16469726 = fieldWeight in 2470, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2470)
          0.5 = coord(1/2)
        0.0033135647 = weight(_text_:a in 2470) [ClassicSimilarity], result of:
          0.0033135647 = score(doc=2470,freq=4.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.10809815 = fieldWeight in 2470, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=2470)
        0.0020832212 = weight(_text_:s in 2470) [ClassicSimilarity], result of:
          0.0020832212 = score(doc=2470,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.072074346 = fieldWeight in 2470, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=2470)
      0.27272728 = coord(3/11)
    
    Source
    Perspektive Bibliothek. 3(2014) H.1, S.85-110
    Type
    a
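
    The relevance value shown for each entry is Lucene's ClassicSimilarity explanation: every matching term contributes queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = sqrt(tf) x idf x fieldNorm, and the partial sums are scaled by coordination factors. A minimal Python sketch that reproduces the score of entry 1 from the numbers printed above (no Lucene required; all constants are copied from the explanation of doc 2470):

      from math import sqrt

      def term_score(tf, idf, query_norm, field_norm):
          """ClassicSimilarity per-term score: queryWeight * fieldWeight."""
          query_weight = idf * query_norm                 # e.g. 2.4844491 * 0.026584605
          field_weight = sqrt(tf) * idf * field_norm      # tf is the raw term frequency
          return query_weight * field_weight

      QUERY_NORM = 0.026584605  # shared queryNorm from the explanation above
      h = term_score(tf=2.0, idf=2.4844491, query_norm=QUERY_NORM, field_norm=0.046875)
      a = term_score(tf=4.0, idf=1.153047,  query_norm=QUERY_NORM, field_norm=0.046875)
      s = term_score(tf=2.0, idf=1.0872376, query_norm=QUERY_NORM, field_norm=0.046875)

      # the _text_:h clause sits inside a nested query, hence the coord(1/2) factor
      partial = 0.5 * h + a + s        # 0.010835757 = sum of: ...
      score = partial * (3 / 11)       # coord(3/11): 3 of 11 query clauses matched
      print(round(score, 10))          # ~0.0029552064, the value reported for entry 1
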
  2. Qu, B.; Cong, G.; Li, C.; Sun, A.; Chen, H.: An evaluation of classification models for question topic categorization (2012) 0.00
    0.0029003182 = product of:
      0.0106345 = sum of:
        0.004532476 = product of:
          0.009064952 = sum of:
            0.009064952 = weight(_text_:h in 237) [ClassicSimilarity], result of:
              0.009064952 = score(doc=237,freq=2.0), product of:
                0.0660481 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.026584605 = queryNorm
                0.13724773 = fieldWeight in 237, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=237)
          0.5 = coord(1/2)
        0.0043660053 = weight(_text_:a in 237) [ClassicSimilarity], result of:
          0.0043660053 = score(doc=237,freq=10.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.14243183 = fieldWeight in 237, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=237)
        0.0017360178 = weight(_text_:s in 237) [ClassicSimilarity], result of:
          0.0017360178 = score(doc=237,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.060061958 = fieldWeight in 237, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0390625 = fieldNorm(doc=237)
      0.27272728 = coord(3/11)
    
    Abstract
    We study the problem of question topic classification using a very large real-world Community Question Answering (CQA) dataset from Yahoo! Answers. The dataset comprises 3.9 million questions, organized into more than 1,000 categories in a hierarchy. To the best of our knowledge, this is the first systematic evaluation of the performance of different classification methods on question topic classification and, more generally, on short texts. Specifically, we empirically evaluate the following in classifying questions into CQA categories: (a) the usefulness of n-gram features and bag-of-words features; (b) the performance of three standard classification algorithms (naive Bayes, maximum entropy, and support vector machines); (c) the performance of state-of-the-art hierarchical classification algorithms; (d) the effect of training data size on performance; and (e) the effectiveness of the different components of CQA data, including subject, content, asker, and the best answer. The experimental results show which aspects are important for question topic classification in terms of both effectiveness and efficiency. We believe that the experimental findings from this study will be useful in real-world classification problems.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.5, S.889-903
    Type
    a
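
    Entry 2 compares, among other things, bag-of-words against n-gram features and naive Bayes, maximum entropy, and support vector machine classifiers. A minimal sketch of such a comparison with scikit-learn; the Yahoo! Answers data is not reproduced here, so the questions and categories below are hypothetical stand-ins:

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.linear_model import LogisticRegression   # maximum-entropy model
      from sklearn.svm import LinearSVC
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import cross_val_score

      # Hypothetical stand-ins for CQA question texts and their topic categories.
      questions = [
          "how do i renew my passport quickly",
          "which documents do i need for a travel visa",
          "best pasta recipe for a quick dinner",
          "how long should i bake a cheesecake",
          "why does my laptop overheat when gaming",
          "how do i recover a deleted file on windows",
      ]
      categories = ["government", "government", "food", "food", "computers", "computers"]

      feature_sets = {
          "bag-of-words": CountVectorizer(ngram_range=(1, 1)),
          "uni+bigrams": CountVectorizer(ngram_range=(1, 2)),
      }
      classifiers = {
          "naive Bayes": MultinomialNB(),
          "max entropy": LogisticRegression(max_iter=1000),
          "linear SVM": LinearSVC(),
      }

      for f_name, vectorizer in feature_sets.items():
          for c_name, clf in classifiers.items():
              pipe = make_pipeline(vectorizer, clf)
              acc = cross_val_score(pipe, questions, categories, cv=2).mean()
              print(f"{f_name:12s} + {c_name:11s} accuracy={acc:.2f}")
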
  3. Fang, H.: Classifying research articles in multidisciplinary sciences journals into subject categories (2015) 0.00
    0.0029003182 = product of:
      0.0106345 = sum of:
        0.004532476 = product of:
          0.009064952 = sum of:
            0.009064952 = weight(_text_:h in 2194) [ClassicSimilarity], result of:
              0.009064952 = score(doc=2194,freq=2.0), product of:
                0.0660481 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.026584605 = queryNorm
                0.13724773 = fieldWeight in 2194, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2194)
          0.5 = coord(1/2)
        0.0043660053 = weight(_text_:a in 2194) [ClassicSimilarity], result of:
          0.0043660053 = score(doc=2194,freq=10.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.14243183 = fieldWeight in 2194, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2194)
        0.0017360178 = weight(_text_:s in 2194) [ClassicSimilarity], result of:
          0.0017360178 = score(doc=2194,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.060061958 = fieldWeight in 2194, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2194)
      0.27272728 = coord(3/11)
    
    Abstract
    In the Thomson Reuters Web of Science database, the subject categories of a journal are applied to all articles in the journal. However, many articles in multidisciplinary sciences journals may only be represented by a small number of subject categories. To provide more accurate information on the research areas of articles in such journals, we can classify articles in these journals into subject categories as defined by Web of Science based on their references. For an article in a multidisciplinary sciences journal, the method counts the subject categories in all of the article's references indexed by Web of Science, and uses the most numerous subject categories of the references to determine the most appropriate classification of the article. We used articles in an issue of Proceedings of the National Academy of Sciences (PNAS) to validate the correctness of the method by comparing the obtained results with the categories of the articles as defined by PNAS and their content. This study shows that the method provides more precise search results for the subject category of interest in bibliometric investigations through recognition of articles in multidisciplinary sciences journals whose work relates to a particular subject category.
    Source
    Knowledge organization. 42(2015) no.3, S.139-153
    Type
    a
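
    The method of entry 3 essentially counts the Web of Science subject categories attached to an article's references and assigns the most frequent one. A minimal counting sketch, assuming each reference has already been mapped to its WoS categories (the sample references are hypothetical):

      from collections import Counter

      def classify_by_references(reference_categories):
          """Assign the subject category that occurs most often among the
          references; ties are resolved by Counter's ordering."""
          counts = Counter(cat for ref in reference_categories for cat in ref)
          return counts.most_common(1)[0][0] if counts else None

      # Hypothetical references of one PNAS article, each with its WoS categories.
      refs = [
          ["Biochemistry & Molecular Biology"],
          ["Biochemistry & Molecular Biology", "Cell Biology"],
          ["Cell Biology"],
          ["Biochemistry & Molecular Biology"],
      ]
      print(classify_by_references(refs))   # -> Biochemistry & Molecular Biology
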
  4. AlQenaei, Z.M.; Monarchi, D.E.: The use of learning techniques to analyze the results of a manual classification system (2016) 0.00
    0.0029003182 = product of:
      0.0106345 = sum of:
        0.004532476 = product of:
          0.009064952 = sum of:
            0.009064952 = weight(_text_:h in 2836) [ClassicSimilarity], result of:
              0.009064952 = score(doc=2836,freq=2.0), product of:
                0.0660481 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.026584605 = queryNorm
                0.13724773 = fieldWeight in 2836, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2836)
          0.5 = coord(1/2)
        0.0043660053 = weight(_text_:a in 2836) [ClassicSimilarity], result of:
          0.0043660053 = score(doc=2836,freq=10.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.14243183 = fieldWeight in 2836, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2836)
        0.0017360178 = weight(_text_:s in 2836) [ClassicSimilarity], result of:
          0.0017360178 = score(doc=2836,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.060061958 = fieldWeight in 2836, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2836)
      0.27272728 = coord(3/11)
    
    Abstract
    Classification is the process of assigning objects to pre-defined classes based on observations or characteristics of those objects, and there are many approaches to performing this task. The overall objective of this study is to demonstrate the use of two learning techniques to analyze the results of a manual classification system. Our sample consisted of 1,026 documents, from the ACM Computing Classification System, classified by their authors as belonging to one of the groups of the classification system: "H.3 Information Storage and Retrieval." A singular value decomposition of the documents' weighted term-frequency matrix was used to represent each document in a 50-dimensional vector space. The analysis of the representation using both supervised (decision tree) and unsupervised (clustering) techniques suggests that two pairs of the ACM classes are closely related to each other in the vector space. Class 1 (Content Analysis and Indexing) is closely related to Class 3 (Information Search and Retrieval), and Class 4 (Systems and Software) is closely related to Class 5 (Online Information Services). Further analysis was performed to test the diffusion of the words in the two classes using both cosine and Euclidean distance.
    Source
    Knowledge organization. 43(2016) no.1, S.56-63
    Type
    a
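
    Entry 4 represents each document by a low-dimensional projection of its weighted term-frequency vector obtained with a singular value decomposition and then inspects the space with both unsupervised (clustering) and supervised (decision tree) learning. A minimal sketch of that pipeline with scikit-learn on hypothetical documents; the paper projects to 50 dimensions, which is reduced here only because the toy corpus is tiny:

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.decomposition import TruncatedSVD
      from sklearn.cluster import KMeans
      from sklearn.tree import DecisionTreeClassifier

      # Hypothetical documents with manually assigned ACM-style class labels.
      docs = [
          "indexing and content analysis of text documents",
          "automatic indexing with controlled vocabularies",
          "query formulation and information search strategies",
          "retrieval models and ranking for web search engines",
      ]
      labels = ["H.3.1", "H.3.1", "H.3.3", "H.3.3"]

      X = TfidfVectorizer().fit_transform(docs)              # weighted term-frequency matrix
      X_lsa = TruncatedSVD(n_components=2).fit_transform(X)  # the paper uses 50 dimensions

      clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X_lsa)  # unsupervised view
      tree = DecisionTreeClassifier().fit(X_lsa, labels)              # supervised view
      print(clusters, tree.predict(X_lsa))
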
  5. Wang, H.; Hong, M.: Supervised Hebb rule based feature selection for text classification (2019) 0.00
    0.0029003182 = product of:
      0.0106345 = sum of:
        0.004532476 = product of:
          0.009064952 = sum of:
            0.009064952 = weight(_text_:h in 5036) [ClassicSimilarity], result of:
              0.009064952 = score(doc=5036,freq=2.0), product of:
                0.0660481 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.026584605 = queryNorm
                0.13724773 = fieldWeight in 5036, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5036)
          0.5 = coord(1/2)
        0.0043660053 = weight(_text_:a in 5036) [ClassicSimilarity], result of:
          0.0043660053 = score(doc=5036,freq=10.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.14243183 = fieldWeight in 5036, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5036)
        0.0017360178 = weight(_text_:s in 5036) [ClassicSimilarity], result of:
          0.0017360178 = score(doc=5036,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.060061958 = fieldWeight in 5036, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5036)
      0.27272728 = coord(3/11)
    
    Abstract
    Text documents usually contain high-dimensional, non-discriminative (irrelevant and noisy) terms, which lead to steep computational costs and poor learning performance in text classification. One effective solution to this problem is feature selection, which aims to identify discriminative terms in text data. This paper proposes a method termed "Hebb rule based feature selection" (HRFS). HRFS is based on a supervised Hebb rule: it treats terms and classes as neurons and selects a term as discriminative if it keeps "exciting" the corresponding classes. Following the original Hebb postulate, a term is considered highly correlated with a class if it is able to keep "exciting" that class. Six benchmark datasets are used to compare HRFS with seven other feature selection methods. Experimental results indicate that HRFS achieves better performance than the compared methods. HRFS identifies discriminative terms in terms of the synapses between neurons. Moreover, HRFS is also efficient because it can be expressed as matrix operations, which reduces the complexity of feature selection.
    Source
    Information processing and management. 56(2019) no.1, S.167-191
    Type
    a
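
    Entry 5 scores terms with a supervised Hebb rule: terms and classes are treated as neurons, and a term is kept if it repeatedly "excites" its class. The authors' exact HRFS update is not given in the abstract, so the sketch below only illustrates the general Hebbian idea (a term-class weight grows with co-activation) on hypothetical data and should not be read as the published method:

      import numpy as np

      def hebbian_term_scores(X, y, n_classes, lr=0.1):
          """Accumulate Hebbian weights w[term, cls] += lr * term_activity for the
          document's class and score each term by its strongest class association
          (illustrative only, not the published HRFS update)."""
          w = np.zeros((X.shape[1], n_classes))
          for x, c in zip(X, y):
              w[:, c] += lr * x        # pre-synaptic term activity, post-synaptic class
          return w.max(axis=1)         # a term that keeps "exciting" one class scores high

      # Hypothetical term-frequency matrix (documents x terms) and class labels.
      X = np.array([[3, 0, 1],
                    [2, 0, 0],
                    [0, 4, 1],
                    [0, 3, 2]])
      y = np.array([0, 0, 1, 1])
      scores = hebbian_term_scores(X, y, n_classes=2)
      top_terms = np.argsort(scores)[::-1][:2]   # keep the 2 most discriminative terms
      print(scores, top_terms)
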
  6. Billal, B.; Fonseca, A.; Sadat, F.; Lounis, H.: Semi-supervised learning and social media text analysis towards multi-labeling categorization (2017) 0.00
    0.0027805802 = product of:
      0.01019546 = sum of:
        0.0036259806 = product of:
          0.007251961 = sum of:
            0.007251961 = weight(_text_:h in 4095) [ClassicSimilarity], result of:
              0.007251961 = score(doc=4095,freq=2.0), product of:
                0.0660481 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.026584605 = queryNorm
                0.10979818 = fieldWeight in 4095, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4095)
          0.5 = coord(1/2)
        0.0051806658 = weight(_text_:a in 4095) [ClassicSimilarity], result of:
          0.0051806658 = score(doc=4095,freq=22.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.16900843 = fieldWeight in 4095, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=4095)
        0.0013888142 = weight(_text_:s in 4095) [ClassicSimilarity], result of:
          0.0013888142 = score(doc=4095,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.048049565 = fieldWeight in 4095, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.03125 = fieldNorm(doc=4095)
      0.27272728 = coord(3/11)
    
    Abstract
    In traditional text classification, classes are mutually exclusive, i.e. it is not possible to have one text or text fragment classified into more than one class. In multi-label classification, on the other hand, an individual text may belong to several classes simultaneously. This type of classification is required by a large number of current applications such as big data classification and image and video annotation. Supervised learning is the most used type of machine learning for the classification task. It requires large quantities of labeled data and the intervention of a human tagger in the creation of the training sets. When the data sets become very large or heavily noisy, this operation can be tedious, error-prone and time-consuming. In this case, semi-supervised learning, which requires only a few labels, is a better choice. In this paper, we study and evaluate several methods to address the problem of multi-label classification using semi-supervised learning and data from social networks. First, we propose a linguistic pre-processing involving tokenisation, recognition of named entities and hashtag segmentation in order to decrease the noise in this type of massive and unstructured real data, and then we perform word sense disambiguation using WordNet. Second, several experiments related to multi-label classification and semi-supervised learning are carried out on these data sets and compared to each other. These evaluations compare the results of the approaches considered. This paper proposes a method for combining semi-supervised methods with a graph method for the extraction of subjects in social networks using a multi-label classification approach. Experiments show that the proposed model increases the precision of the classification by 4 percentage points compared to a baseline.
    Pages
    S.1907-1916
    Type
    a
  7. HaCohen-Kerner, Y.; Beck, H.; Yehudai, E.; Rosenstein, M.; Mughaz, D.: Cuisine : classification using stylistic feature sets and/or name-based feature sets (2010) 0.00
    0.0024626723 = product of:
      0.009029798 = sum of:
        0.004532476 = product of:
          0.009064952 = sum of:
            0.009064952 = weight(_text_:h in 3706) [ClassicSimilarity], result of:
              0.009064952 = score(doc=3706,freq=2.0), product of:
                0.0660481 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.026584605 = queryNorm
                0.13724773 = fieldWeight in 3706, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3706)
          0.5 = coord(1/2)
        0.0027613041 = weight(_text_:a in 3706) [ClassicSimilarity], result of:
          0.0027613041 = score(doc=3706,freq=4.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.090081796 = fieldWeight in 3706, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3706)
        0.0017360178 = weight(_text_:s in 3706) [ClassicSimilarity], result of:
          0.0017360178 = score(doc=3706,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.060061958 = fieldWeight in 3706, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3706)
      0.27272728 = coord(3/11)
    
    Abstract
    Document classification presents challenges due to the large number of features, their dependencies, and the large number of training documents. In this research, we investigated the use of six stylistic feature sets (including 42 features) and/or six name-based feature sets (including 234 features) for various combinations of the following classification tasks: ethnic groups of the authors and/or periods of time when the documents were written and/or places where the documents were written. The investigated corpus contains Jewish Law articles written in Hebrew-Aramaic, which present interesting problems for classification. Our system CUISINE (Classification UsIng Stylistic feature sets and/or NamE-based feature sets) achieves accuracy results between 90.71% and 98.99% for the seven classification experiments (ethnicity, time, place, ethnicity&time, ethnicity&place, time&place, ethnicity&time&place). For the first six tasks, the stylistic feature sets in general and the quantitative feature set in particular are enough for excellent classification results. In contrast, the name-based feature sets are rather poor for these tasks. However, for the most complex task (ethnicity&time&place), a hill-climbing model using all feature sets succeeds in significantly improving the classification results. Most of the stylistic features (34 of 42) are language-independent and domain-independent. These features might be useful to the community at large, at least for rather simple tasks.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.8, S.1644-1657
    Type
    a
  8. Groß, T.; Faden, M.: Automatische Indexierung elektronischer Dokumente an der Deutschen Zentralbibliothek für Wirtschaftswissenschaften : Bericht über die Jahrestagung der Internationalen Buchwissenschaftlichen Gesellschaft (2010) 0.00
    0.0017936793 = product of:
      0.006576824 = sum of:
        0.0036259806 = product of:
          0.007251961 = sum of:
            0.007251961 = weight(_text_:h in 4051) [ClassicSimilarity], result of:
              0.007251961 = score(doc=4051,freq=2.0), product of:
                0.0660481 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.026584605 = queryNorm
                0.10979818 = fieldWeight in 4051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4051)
          0.5 = coord(1/2)
        0.0015620294 = weight(_text_:a in 4051) [ClassicSimilarity], result of:
          0.0015620294 = score(doc=4051,freq=2.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.050957955 = fieldWeight in 4051, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=4051)
        0.0013888142 = weight(_text_:s in 4051) [ClassicSimilarity], result of:
          0.0013888142 = score(doc=4051,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.048049565 = fieldWeight in 4051, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.03125 = fieldNorm(doc=4051)
      0.27272728 = coord(3/11)
    
    Source
    Bibliotheksdienst. 44(2010) H.12, S.1120-1135
    Type
    a
  9. Malo, P.; Sinha, A.; Wallenius, J.; Korhonen, P.: Concept-based document classification using Wikipedia and value function (2011) 0.00
    0.0016567915 = product of:
      0.009112353 = sum of:
        0.0070291325 = weight(_text_:a in 4948) [ClassicSimilarity], result of:
          0.0070291325 = score(doc=4948,freq=18.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.22931081 = fieldWeight in 4948, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=4948)
        0.0020832212 = weight(_text_:s in 4948) [ClassicSimilarity], result of:
          0.0020832212 = score(doc=4948,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.072074346 = fieldWeight in 4948, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=4948)
      0.18181819 = coord(2/11)
    
    Abstract
    In this article, we propose a new concept-based method for document classification. The conceptual knowledge associated with the words is drawn from Wikipedia. The purpose is to utilize the abundant semantic relatedness information available in Wikipedia in an efficient value function-based query learning algorithm. The procedure learns the value function by solving a simple linear programming problem formulated using the training documents. The learning involves a step-wise iterative process that helps in generating a value function with an appropriate set of concepts (dimensions) chosen from a collection of concepts. Once the value function is formulated, it is utilized to make a decision between relevance and irrelevance. The value assigned to a particular document from the value function can be further used to rank the documents according to their relevance. Reuters newswire documents have been used to evaluate the efficacy of the procedure. An extensive comparison with other frameworks has been performed. The results are promising.
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.12, S.2496-2511
    Type
    a
  10. Maghsoodi, N.; Homayounpour, M.M.: Improving Farsi multiclass text classification using a thesaurus and two-stage feature selection (2011) 0.00
    0.0014930638 = product of:
      0.00821185 = sum of:
        0.0064758323 = weight(_text_:a in 4775) [ClassicSimilarity], result of:
          0.0064758323 = score(doc=4775,freq=22.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.21126054 = fieldWeight in 4775, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4775)
        0.0017360178 = weight(_text_:s in 4775) [ClassicSimilarity], result of:
          0.0017360178 = score(doc=4775,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.060061958 = fieldWeight in 4775, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4775)
      0.18181819 = coord(2/11)
    
    Abstract
    The progressive increase of information content has recently made it necessary to create systems for the automatic classification of documents. In this article, a system is presented for the categorization of multiclass Farsi documents that requires fewer training examples and can help to compensate for the shortcomings of the standard training dataset. The new idea proposed here is to extend the feature vector by adding words extracted from a thesaurus and then to filter the extended feature vector by applying a secondary feature selection to discard inappropriate features. This secondary feature selection chooses the more appropriate features among those added from the thesaurus and thereby enhances the effect of the thesaurus on the efficiency of the classifier. To evaluate the proposed system, a corpus is gathered from the Farsi Wikipedia website and from articles in the Hamshahri newspaper, the Roshd periodical, and the Soroush magazine. In addition to studying the role of the thesaurus and of secondary feature selection, the effects of varying the number of categories, the size of the training dataset, and the average number of words in the test data are also examined. As the results indicate, classification efficiency improves with this approach, especially when the available data is not sufficient for some text categories.
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.10, S.2055-2066
    Type
    a
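
    Entry 10 first extends each document's feature vector with related terms from a thesaurus and then applies a secondary feature selection pass to drop additions that do not help. A minimal sketch, assuming a toy thesaurus dictionary and using a chi-squared filter as a stand-in for the secondary selection step (the paper does not name the filter in the abstract):

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.feature_selection import SelectKBest, chi2

      # Hypothetical thesaurus: surface term -> related terms added to the document.
      thesaurus = {"football": ["sport"], "goal": ["sport"], "parliament": ["politics"]}

      def expand(doc):
          words = doc.split()
          added = [rel for w in words for rel in thesaurus.get(w, [])]
          return " ".join(words + added)

      docs = [
          "the parliament passed a new law",
          "the striker scored a late goal in the football match",
          "elections and parliament debates dominate the news",
          "the football team missed an easy goal",
      ]
      labels = [0, 1, 0, 1]

      expanded = [expand(d) for d in docs]
      X = CountVectorizer().fit_transform(expanded)              # extended feature vectors
      X_sel = SelectKBest(chi2, k=5).fit_transform(X, labels)    # secondary selection
      print(X.shape, "->", X_sel.shape)
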
  11. Liu, X.; Yu, S.; Janssens, F.; Glänzel, W.; Moreau, Y.; Moor, B.de: Weighted hybrid clustering by combining text mining and bibliometrics on a large-scale journal database (2010) 0.00
    0.0014882411 = product of:
      0.008185326 = sum of:
        0.0052392064 = weight(_text_:a in 3464) [ClassicSimilarity], result of:
          0.0052392064 = score(doc=3464,freq=10.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.1709182 = fieldWeight in 3464, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3464)
        0.00294612 = weight(_text_:s in 3464) [ClassicSimilarity], result of:
          0.00294612 = score(doc=3464,freq=4.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.101928525 = fieldWeight in 3464, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=3464)
      0.18181819 = coord(2/11)
    
    Abstract
    We propose a new hybrid clustering framework to incorporate text mining with bibliometrics in journal set analysis. The framework integrates two different approaches: clustering ensemble and kernel-fusion clustering. To improve the flexibility and the efficiency of processing large-scale data, we propose an information-based weighting scheme to leverage the effect of multiple data sources in hybrid clustering. Three different algorithms are extended with the proposed weighting scheme and employed on a large journal set retrieved from the Web of Science (WoS) database. The clustering performance of the proposed algorithms is systematically evaluated using multiple evaluation methods and cross-compared with alternative methods. Experimental results demonstrate that the proposed weighted hybrid clustering strategy is superior to other methods in clustering performance and efficiency. The proposed approach also provides a more refined structural mapping of journal sets, which is useful for monitoring and detecting new trends in different scientific fields.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.6, S.1105-1119
    Type
    a
  12. Borodin, Y.; Polishchuk, V.; Mahmud, J.; Ramakrishnan, I.V.; Stent, A.: Live and learn from mistakes : a lightweight system for document classification (2013) 0.00
    0.0014382693 = product of:
      0.007910481 = sum of:
        0.0061744633 = weight(_text_:a in 2722) [ClassicSimilarity], result of:
          0.0061744633 = score(doc=2722,freq=20.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.20142901 = fieldWeight in 2722, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2722)
        0.0017360178 = weight(_text_:s in 2722) [ClassicSimilarity], result of:
          0.0017360178 = score(doc=2722,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.060061958 = fieldWeight in 2722, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2722)
      0.18181819 = coord(2/11)
    
    Abstract
    We present a Life-Long Learning from Mistakes (3LM) algorithm for document classification, which could be used in various scenarios such as spam filtering, blog classification, and web resource categorization. We extend the ideas of online clustering and batch-mode centroid-based classification to online learning with negative feedback. The 3LM is a competitive learning algorithm, which avoids over-smoothing, characteristic of the centroid-based classifiers, by using a different class representative, which we call clusterhead. The clusterheads competing for vector-space dominance are drawn toward misclassified documents, eventually bringing the model to a "balanced state" for a fixed distribution of documents. Subsequently, the clusterheads oscillate between the misclassified documents, heuristically minimizing the rate of misclassifications, an NP-complete problem. Further, the 3LM algorithm prevents over-fitting by "leashing" the clusterheads to their respective centroids. A clusterhead provably converges if its class can be separated by a hyper-plane from all other classes. Lifelong learning with fixed learning rate allows 3LM to adapt to possibly changing distribution of the data and continually learn and unlearn document classes. We report on our experiments, which demonstrate high accuracy of document classification on Reuters21578, OHSUMED, and TREC07p-spam datasets. The 3LM algorithm did not show over-fitting, while consistently outperforming centroid-based, Naïve Bayes, C4.5, AdaBoost, kNN, and SVM whose accuracy had been reported on the same three corpora.
    Source
    Information processing and management. 49(2013) no.1, S.83-98
    Type
    a
  13. Cortez, E.; Herrera, M.R.; Silva, A.S. da; Moura, E.S. de; Neubert, M.: Lightweight methods for large-scale product categorization (2011) 0.00
    0.0014222699 = product of:
      0.007822484 = sum of:
        0.0057392623 = weight(_text_:a in 4758) [ClassicSimilarity], result of:
          0.0057392623 = score(doc=4758,freq=12.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.18723148 = fieldWeight in 4758, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=4758)
        0.0020832212 = weight(_text_:s in 4758) [ClassicSimilarity], result of:
          0.0020832212 = score(doc=4758,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.072074346 = fieldWeight in 4758, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=4758)
      0.18181819 = coord(2/11)
    
    Abstract
    In this article, we present a study of classification methods for the large-scale categorization of product offers on e-shopping web sites. We analyze the performance of previously proposed approaches and deploy a probabilistic approach to model the classification problem. We also study an alternative way of modeling information about the description of product offers and investigate the use of the price and the store of product offers as features in the classification process. Our experiments used two collections of over a million product offers previously categorized by human editors and taxonomies of hundreds of categories from a real e-shopping web site. In these experiments, our method achieved an improvement of up to 9% in the quality of the categorization in comparison with the best baseline we have found.
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.9, S.1839-1848
    Type
    a
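
    Entry 13 categorizes product offers probabilistically from their descriptions and additionally investigates the price and the store as features. A minimal sketch that combines a bag-of-words model of the offer title with the numeric price and a one-hot encoded store in a single scikit-learn pipeline; the offers are made up and the logistic-regression classifier is a stand-in, not the authors' probabilistic model:

      import pandas as pd
      from sklearn.compose import ColumnTransformer
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.preprocessing import OneHotEncoder, StandardScaler
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      # Hypothetical product offers: title text, price, store, and target category.
      offers = pd.DataFrame({
          "title": ["usb-c charging cable 1m", "stainless steel frying pan 28cm",
                    "wireless optical mouse", "non-stick saucepan with lid"],
          "price": [9.99, 34.50, 14.99, 22.00],
          "store": ["techshop", "kitchenworld", "techshop", "kitchenworld"],
          "category": ["electronics", "kitchen", "electronics", "kitchen"],
      })

      features = ColumnTransformer([
          ("text", CountVectorizer(), "title"),                      # description words
          ("price", StandardScaler(), ["price"]),                    # numeric price
          ("store", OneHotEncoder(handle_unknown="ignore"), ["store"]),
      ])
      model = make_pipeline(features, LogisticRegression(max_iter=1000))
      model.fit(offers[["title", "price", "store"]], offers["category"])
      print(model.predict(pd.DataFrame(
          {"title": ["hdmi cable 2m"], "price": [7.50], "store": ["techshop"]})))
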
  14. Barbu, E.: What kind of knowledge is in Wikipedia? : unsupervised extraction of properties for similar concepts (2014) 0.00
    0.0014222699 = product of:
      0.007822484 = sum of:
        0.0057392623 = weight(_text_:a in 1547) [ClassicSimilarity], result of:
          0.0057392623 = score(doc=1547,freq=12.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.18723148 = fieldWeight in 1547, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=1547)
        0.0020832212 = weight(_text_:s in 1547) [ClassicSimilarity], result of:
          0.0020832212 = score(doc=1547,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.072074346 = fieldWeight in 1547, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=1547)
      0.18181819 = coord(2/11)
    
    Abstract
    This article presents a novel method for extracting knowledge from Wikipedia and a classification schema for annotating the extracted knowledge. Unlike the majority of approaches in the literature, we use the raw Wikipedia text for knowledge acquisition. The main assumption made is that the concepts classified under the same node in a taxonomy are described in a comparable way in Wikipedia. The annotation of the extracted knowledge is done at two levels: ontological and logical. The extracted properties are evaluated in the traditional way, that is, by computing the precision of the extraction procedure and in a clustering task. The second method of evaluation is seldom used in the natural language processing community, but it is regularly employed in cognitive psychology.
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.12, S.2489-2497
    Type
    a
  15. Ko, Y.: A new term-weighting scheme for text classification using the odds of positive and negative class probabilities (2015) 0.00
    0.0014222699 = product of:
      0.007822484 = sum of:
        0.0057392623 = weight(_text_:a in 2339) [ClassicSimilarity], result of:
          0.0057392623 = score(doc=2339,freq=12.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.18723148 = fieldWeight in 2339, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=2339)
        0.0020832212 = weight(_text_:s in 2339) [ClassicSimilarity], result of:
          0.0020832212 = score(doc=2339,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.072074346 = fieldWeight in 2339, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=2339)
      0.18181819 = coord(2/11)
    
    Abstract
    Text classification (TC) is a core technique for text mining and information retrieval. It has been applied in many different research and industrial areas. Term-weighting schemes assign an appropriate weight to each term to obtain high TC performance. Although term weighting is one of the important modules for TC, and TC has peculiarities that differ from those of information retrieval, many term-weighting schemes used in information retrieval, such as term frequency-inverse document frequency (tf-idf), have been used in TC in the same manner. The peculiarity of TC that differs most from information retrieval is the existence of class information. This article proposes a new term-weighting scheme that uses class information in the form of positive and negative class distributions. As a result, the proposed scheme, log tf-TRR, consistently performs better than other schemes that use class information as well as traditional schemes such as tf-idf.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.12, S.2553-2565
    Type
    a
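
    Entry 15 replaces the corpus-level idf component with class information: a term's weight is driven by the odds of the term appearing in the positive versus the negative class. The precise log tf-TRR formula is not given in the abstract, so the sketch below only illustrates the general shape, a log-scaled term frequency multiplied by a smoothed log odds ratio of class-conditional term probabilities; treat it as an assumption about the form, not the paper's definition:

      import math
      from collections import Counter

      def class_term_probs(docs, labels, positive):
          """P(term | class) with add-one smoothing, from tokenized documents."""
          pos = Counter(t for d, y in zip(docs, labels) if y == positive for t in d)
          neg = Counter(t for d, y in zip(docs, labels) if y != positive for t in d)
          vocab = set(pos) | set(neg)
          p_pos = {t: (pos[t] + 1) / (sum(pos.values()) + len(vocab)) for t in vocab}
          p_neg = {t: (neg[t] + 1) / (sum(neg.values()) + len(vocab)) for t in vocab}
          return p_pos, p_neg

      def log_tf_odds_weight(tf, term, p_pos, p_neg):
          """Assumed shape only: log-scaled tf times the log odds of the two classes."""
          return (1 + math.log(tf)) * math.log(p_pos[term] / p_neg[term])

      docs = [["cheap", "pills", "offer"], ["meeting", "agenda", "offer"],
              ["cheap", "pills", "cheap"], ["project", "meeting", "notes"]]
      labels = [1, 0, 1, 0]          # 1 = positive class (e.g. spam)
      p_pos, p_neg = class_term_probs(docs, labels, positive=1)
      print(round(log_tf_odds_weight(2, "cheap", p_pos, p_neg), 3))
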
  16. Barthel, S.; Tönnies, S.; Balke, W.-T.: Large-scale experiments for mathematical document classification (2013) 0.00
    0.0013856413 = product of:
      0.0076210266 = sum of:
        0.0051659266 = weight(_text_:a in 1056) [ClassicSimilarity], result of:
          0.0051659266 = score(doc=1056,freq=14.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.1685276 = fieldWeight in 1056, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1056)
        0.0024550997 = weight(_text_:s in 1056) [ClassicSimilarity], result of:
          0.0024550997 = score(doc=1056,freq=4.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.08494043 = fieldWeight in 1056, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1056)
      0.18181819 = coord(2/11)
    
    Abstract
    The ever-increasing amount of digitally available information is a curse and a blessing at the same time. On the one hand, users have increasingly large amounts of information at their fingertips. On the other hand, the assessment and refinement of web search results becomes more and more tiresome and difficult for non-experts in a domain. Therefore, established digital libraries offer specialized collections with a certain degree of quality. This quality can largely be attributed to the great effort invested into the semantic enrichment of the provided documents, e.g. by annotating them with respect to a domain-specific taxonomy. This process is still done manually in many domains, e.g. CAS in chemistry, MeSH in medicine, or MSC in mathematics. But due to the growing amount of data, this manual task becomes more and more time-consuming and expensive. The only solution to this problem seems to be the employment of automated classification algorithms, but evaluations in previous research make it difficult to draw conclusions about real-world scenarios. We therefore conducted a large-scale feasibility study on a real-world data set from one of the biggest mathematical digital libraries, i.e. Zentralblatt MATH, with special focus on its practical applicability.
    Type
    a
  17. Aphinyanaphongs, Y.; Fu, L.D.; Li, Z.; Peskin, E.R.; Efstathiadis, E.; Aliferis, C.F.; Statnikov, A.: A comprehensive empirical comparison of modern supervised classification and feature selection methods for text categorization (2014) 0.00
    0.0013313505 = product of:
      0.007322428 = sum of:
        0.0052392064 = weight(_text_:a in 1496) [ClassicSimilarity], result of:
          0.0052392064 = score(doc=1496,freq=10.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.1709182 = fieldWeight in 1496, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=1496)
        0.0020832212 = weight(_text_:s in 1496) [ClassicSimilarity], result of:
          0.0020832212 = score(doc=1496,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.072074346 = fieldWeight in 1496, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=1496)
      0.18181819 = coord(2/11)
    
    Abstract
    An important aspect of performing text categorization is selecting appropriate supervised classification and feature selection methods. A comprehensive benchmark is needed to inform best practices in this broad application field. Previous benchmarks have evaluated performance for a few supervised classification and feature selection methods and limited ways to optimize them. The present work updates prior benchmarks by increasing the number of classifiers and feature selection methods by an order of magnitude, including recently developed, state-of-the-art methods. Specifically, this study used 229 text categorization data sets/tasks, and evaluated 28 classification methods (both well-established and proprietary/commercial) and 19 feature selection methods according to 4 classification performance metrics. We report several key findings that will be helpful in establishing best methodological practices for text categorization.
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.10, S.1964-1987
    Type
    a
  18. Vilares, D.; Alonso, M.A.; Gómez-Rodríguez, C.: On the usefulness of lexical and syntactic processing in polarity classification of Twitter messages (2015) 0.00
    0.0013197502 = product of:
      0.007258626 = sum of:
        0.0055226083 = weight(_text_:a in 2161) [ClassicSimilarity], result of:
          0.0055226083 = score(doc=2161,freq=16.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.18016359 = fieldWeight in 2161, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2161)
        0.0017360178 = weight(_text_:s in 2161) [ClassicSimilarity], result of:
          0.0017360178 = score(doc=2161,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.060061958 = fieldWeight in 2161, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2161)
      0.18181819 = coord(2/11)
    
    Abstract
    Millions of micro texts are published every day on Twitter. Identifying the sentiment present in them can be helpful for measuring the frame of mind of the public, their satisfaction with respect to a product, or their support of a social event. In this context, polarity classification is a subfield of sentiment analysis focused on determining whether the content of a text is objective or subjective, and in the latter case, if it conveys a positive or a negative opinion. Most polarity detection techniques tend to take into account individual terms in the text and even some degree of linguistic knowledge, but they do not usually consider syntactic relations between words. This article explores how relating lexical, syntactic, and psychometric information can be helpful to perform polarity classification on Spanish tweets. We provide an evaluation for both shallow and deep linguistic perspectives. Empirical results show an improved performance of syntactic approaches over pure lexical models when using large training sets to create a classifier, but this tendency is reversed when small training collections are used.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.9, S.1799-1816
    Type
    a
  19. Mu, T.; Goulermas, J.Y.; Korkontzelos, I.; Ananiadou, S.: Descriptive document clustering via discriminant learning in a co-embedded space of multilevel similarities (2016) 0.00
    0.0013159672 = product of:
      0.007237819 = sum of:
        0.004782719 = weight(_text_:a in 2496) [ClassicSimilarity], result of:
          0.004782719 = score(doc=2496,freq=12.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.15602624 = fieldWeight in 2496, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2496)
        0.0024550997 = weight(_text_:s in 2496) [ClassicSimilarity], result of:
          0.0024550997 = score(doc=2496,freq=4.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.08494043 = fieldWeight in 2496, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2496)
      0.18181819 = coord(2/11)
    
    Abstract
    Descriptive document clustering aims at discovering clusters of semantically interrelated documents together with meaningful labels to summarize the content of each document cluster. In this work, we propose a novel descriptive clustering framework, referred to as CEDL. It relies on the formulation and generation of 2 types of heterogeneous objects, which correspond to documents and candidate phrases, using multilevel similarity information. CEDL is composed of 5 main processing stages. First, it simultaneously maps the documents and candidate phrases into a common co-embedded space that preserves higher-order, neighbor-based proximities between the combined sets of documents and phrases. Then, it discovers an approximate cluster structure of documents in the common space. The third stage extracts promising topic phrases by constructing a discriminant model where documents along with their cluster memberships are used as training instances. Subsequently, the final cluster labels are selected from the topic phrases using a ranking scheme with multiple scores based on the extracted co-embedding information and the discriminant output. The final stage polishes the initial clusters to reduce noise and accommodate the multitopic nature of documents. The effectiveness and competitiveness of CEDL is demonstrated qualitatively and quantitatively with experiments using document databases from different application fields.
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.1, S.106-133
    Type
    a
  20. Chae, G.; Park, J.; Park, J.; Yeo, W.S.; Shi, C.: Linking and clustering artworks using social tags : revitalizing crowd-sourced information on cultural collections (2016) 0.00
    0.001185225 = product of:
      0.006518737 = sum of:
        0.004782719 = weight(_text_:a in 2852) [ClassicSimilarity], result of:
          0.004782719 = score(doc=2852,freq=12.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.15602624 = fieldWeight in 2852, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2852)
        0.0017360178 = weight(_text_:s in 2852) [ClassicSimilarity], result of:
          0.0017360178 = score(doc=2852,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.060061958 = fieldWeight in 2852, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2852)
      0.18181819 = coord(2/11)
    
    Abstract
    Social tagging is one of the most popular methods for collecting crowd-sourced information in galleries, libraries, archives, and museums (GLAMs). However, when the number of social tags grows rapidly, using them becomes problematic and, as a result, they are often left as simply big data that cannot be used for practical purposes. To revitalize the use of this crowd-sourced information, we propose using social tags to link and cluster artworks based on an experimental study using an online collection at the Gyeonggi Museum of Modern Art (GMoMA). We view social tagging as a folksonomy, where artworks are classified by keywords of the crowd's various interpretations and one artwork can belong to several different categories simultaneously. To leverage this strength of social tags, we used a clustering method called "link communities" to detect overlapping communities in a network of artworks constructed by computing similarities between all artwork pairs. We used this framework to identify semantic relationships and clusters of similar artworks. By comparing the clustering results with curators' manual classification results, we demonstrated the potential of social tagging data for automatically clustering artworks in a way that reflects the dynamic perspectives of crowds.
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.4, S.885-899
    Type
    a
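
    Entry 20 links artworks whose social-tag sets are similar and then detects overlapping communities ("link communities") in the resulting network. The link-communities algorithm itself is not reimplemented here; the sketch only shows the first half, a Jaccard-similarity network over hypothetical tag sets, with plain connected components standing in for the community-detection step:

      from itertools import combinations
      import networkx as nx

      # Hypothetical artworks with their crowd-sourced tags.
      tags = {
          "Artwork A": {"portrait", "oil", "woman", "melancholy"},
          "Artwork B": {"portrait", "oil", "man"},
          "Artwork C": {"landscape", "mountain", "oil"},
          "Artwork D": {"landscape", "mountain", "watercolor"},
      }

      def jaccard(a, b):
          return len(a & b) / len(a | b)

      G = nx.Graph()
      G.add_nodes_from(tags)
      for (u, tu), (v, tv) in combinations(tags.items(), 2):
          sim = jaccard(tu, tv)
          if sim >= 0.25:              # keep only sufficiently similar artwork pairs
              G.add_edge(u, v, weight=sim)

      # Plain connected components stand in for the paper's link-communities step.
      for cluster in nx.connected_components(G):
          print(sorted(cluster))
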

Languages

  • e 43
  • d 4

Types

  • a 43
  • el 2
  • x 2
  • s 1