Search (60 results, page 1 of 3)

  • theme_ss:"Automatisches Klassifizieren"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.08
    0.08417642 = sum of:
      0.06276117 = product of:
        0.2510447 = sum of:
          0.2510447 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.2510447 = score(doc=562,freq=2.0), product of:
              0.44668442 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.052687407 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.25 = coord(1/4)
      0.021415249 = product of:
        0.042830497 = sum of:
          0.042830497 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.042830497 = score(doc=562,freq=2.0), product of:
              0.18450232 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052687407 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
     Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
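     The explain tree above is plain TF-IDF arithmetic under Lucene's ClassicSimilarity: for each matching term, tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, the clause score is queryWeight * fieldWeight, and each clause is scaled by its coord factor before summation. The short Python sketch below reproduces the 0.08 total for this record from the constants shown in its tree; it is an illustration of the displayed arithmetic only, not part of the retrieval system.
     import math

     QUERY_NORM = 0.052687407          # queryNorm shown in every clause above

     def clause_score(freq, idf, field_norm):
         # queryWeight * fieldWeight for one term clause
         query_weight = idf * QUERY_NORM
         field_weight = math.sqrt(freq) * idf * field_norm   # tf * idf * fieldNorm
         return query_weight * field_weight

     # term "3a": freq=2.0, idf=8.478011, fieldNorm=0.046875, coord(1/4)
     part_3a = clause_score(2.0, 8.478011, 0.046875) * 0.25   # 0.06276117
     # term "22": freq=2.0, idf=3.5018296, fieldNorm=0.046875, coord(1/2)
     part_22 = clause_score(2.0, 3.5018296, 0.046875) * 0.5   # 0.021415249

     print(part_3a + part_22)          # 0.08417642, the total shown for record 1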
  2. Bock, H.-H.: Datenanalyse zur Strukturierung und Ordnung von Information (1989) 0.04
    0.042769507 = product of:
      0.08553901 = sum of:
        0.08553901 = sum of:
          0.0355701 = weight(_text_:h in 141) [ClassicSimilarity], result of:
            0.0355701 = score(doc=141,freq=4.0), product of:
              0.13089918 = queryWeight, product of:
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.052687407 = queryNorm
              0.27173662 = fieldWeight in 141, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.0546875 = fieldNorm(doc=141)
          0.049968913 = weight(_text_:22 in 141) [ClassicSimilarity], result of:
            0.049968913 = score(doc=141,freq=2.0), product of:
              0.18450232 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052687407 = queryNorm
              0.2708308 = fieldWeight in 141, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=141)
      0.5 = coord(1/2)
    
    Pages
    S.1-22
  3. Yao, H.; Etzkorn, L.H.; Virani, S.: Automated classification and retrieval of reusable software components (2008) 0.03
    0.030369353 = sum of:
      0.021386545 = product of:
        0.08554618 = sum of:
          0.08554618 = weight(_text_:authors in 1382) [ClassicSimilarity], result of:
            0.08554618 = score(doc=1382,freq=4.0), product of:
              0.24019209 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052687407 = queryNorm
              0.35615736 = fieldWeight in 1382, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1382)
        0.25 = coord(1/4)
      0.008982807 = product of:
        0.017965615 = sum of:
          0.017965615 = weight(_text_:h in 1382) [ClassicSimilarity], result of:
            0.017965615 = score(doc=1382,freq=2.0), product of:
              0.13089918 = queryWeight, product of:
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.052687407 = queryNorm
              0.13724773 = fieldWeight in 1382, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1382)
        0.5 = coord(1/2)
    
    Abstract
     The authors describe their research which improves software reuse by using an automated approach to semantically search for and retrieve reusable software components in large software component repositories and on the World Wide Web (WWW). Using automation and smart (semantic) techniques, their approach speeds up the search and retrieval of reusable software components, while retaining good accuracy, and therefore improves the affordability of software reuse. Program understanding of software components and natural language understanding of user queries were employed. Then the software component descriptions were compared by matching the resulting semantic representations of the user queries to the semantic representations of the software components to search for software components that best match the user queries. A proof-of-concept system was developed to test the authors' approach. The results of this proof-of-concept system were compared with those of human experts, and statistical analysis was performed on the collected experimental data. The results from these experiments demonstrate that this automated semantic-based approach for software reusable component classification and retrieval is successful when compared to the labor-intensive results from the experts, thus showing that this approach can significantly benefit software reuse classification and retrieval.
  4. Khoo, C.S.G.; Ng, K.; Ou, S.: An exploratory study of human clustering of Web pages (2003) 0.03
    0.026374888 = sum of:
      0.012098055 = product of:
        0.04839222 = sum of:
          0.04839222 = weight(_text_:authors in 2741) [ClassicSimilarity], result of:
            0.04839222 = score(doc=2741,freq=2.0), product of:
              0.24019209 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052687407 = queryNorm
              0.20147301 = fieldWeight in 2741, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.03125 = fieldNorm(doc=2741)
        0.25 = coord(1/4)
      0.014276832 = product of:
        0.028553665 = sum of:
          0.028553665 = weight(_text_:22 in 2741) [ClassicSimilarity], result of:
            0.028553665 = score(doc=2741,freq=2.0), product of:
              0.18450232 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052687407 = queryNorm
              0.15476047 = fieldWeight in 2741, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2741)
        0.5 = coord(1/2)
    
    Abstract
     This study seeks to find out how human beings cluster Web pages naturally. Twenty Web pages retrieved by the Northern Light search engine for each of 10 queries were sorted by 3 subjects into categories that were natural or meaningful to them. It was found that different subjects clustered the same set of Web pages quite differently and created different categories. The average inter-subject similarity of the clusters created was a low 0.27. Subjects created an average of 5.4 clusters for each sorting. The categories constructed can be divided into 10 types. About 1/3 of the categories created were topical. Another 20% of the categories relate to the degree of relevance or usefulness. The rest of the categories were subject-independent categories such as format, purpose, authoritativeness and direction to other sources. The authors plan to develop automatic methods for categorizing Web pages using the common categories created by the subjects. It is hoped that the techniques developed can be used by Web search engines to automatically organize Web pages retrieved into categories that are natural to users. 1. Introduction The World Wide Web is an increasingly important source of information for people globally because of its ease of access, the ease of publishing, its ability to transcend geographic and national boundaries, its flexibility and heterogeneity and its dynamic nature. However, Web users also find it increasingly difficult to locate relevant and useful information in this vast information storehouse. Web search engines, despite their scope and power, appear to be quite ineffective. They retrieve too many pages, and though they attempt to rank retrieved pages in order of probable relevance, often the relevant documents do not appear in the top-ranked 10 or 20 documents displayed. Several studies have found that users do not know how to use the advanced features of Web search engines, and do not know how to formulate and re-formulate queries. Users also typically exert minimal effort in performing, evaluating and refining their searches, and are unwilling to scan more than 10 or 20 items retrieved (Jansen, Spink, Bateman & Saracevic, 1998). This suggests that the conventional ranked-list display of search results does not satisfy user requirements, and that better ways of presenting and summarizing search results have to be developed. One promising approach is to group retrieved pages into clusters or categories to allow users to navigate immediately to the "promising" clusters where the most useful Web pages are likely to be located. This approach has been adopted by a number of search engines (notably Northern Light) and search agents.
    Date
    12. 9.2004 9:56:22
  5. HaCohen-Kerner, Y.; Beck, H.; Yehudai, E.; Rosenstein, M.; Mughaz, D.: Cuisine : classification using stylistic feature sets and/or name-based feature sets (2010) 0.02
    0.024105377 = sum of:
      0.01512257 = product of:
        0.06049028 = sum of:
          0.06049028 = weight(_text_:authors in 3706) [ClassicSimilarity], result of:
            0.06049028 = score(doc=3706,freq=2.0), product of:
              0.24019209 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052687407 = queryNorm
              0.25184128 = fieldWeight in 3706, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3706)
        0.25 = coord(1/4)
      0.008982807 = product of:
        0.017965615 = sum of:
          0.017965615 = weight(_text_:h in 3706) [ClassicSimilarity], result of:
            0.017965615 = score(doc=3706,freq=2.0), product of:
              0.13089918 = queryWeight, product of:
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.052687407 = queryNorm
              0.13724773 = fieldWeight in 3706, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3706)
        0.5 = coord(1/2)
    
    Abstract
     Document classification presents challenges due to the large number of features, their dependencies, and the large number of training documents. In this research, we investigated the use of six stylistic feature sets (including 42 features) and/or six name-based feature sets (including 234 features) for various combinations of the following classification tasks: ethnic groups of the authors and/or periods of time when the documents were written and/or places where the documents were written. The investigated corpus contains Jewish Law articles written in Hebrew-Aramaic, which present interesting problems for classification. Our system CUISINE (Classification UsIng Stylistic feature sets and/or NamE-based feature sets) achieves accuracy results between 90.71% and 98.99% for the seven classification experiments (ethnicity, time, place, ethnicity&time, ethnicity&place, time&place, ethnicity&time&place). For the first six tasks, the stylistic feature sets in general and the quantitative feature set in particular are enough for excellent classification results. In contrast, the name-based feature sets are rather poor for these tasks. However, for the most complex task (ethnicity&time&place), a hill-climbing model using all feature sets succeeds in significantly improving the classification results. Most of the stylistic features (34 of 42) are language-independent and domain-independent. These features might be useful to the community at large, at least for rather simple tasks.
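     The "hill-climbing model using all feature sets" is only named in the abstract above; below is a minimal, self-contained sketch of one common reading of it (greedy forward selection over feature-set combinations: add whichever set improves accuracy most, stop when nothing helps). The feature-set names and the toy_accuracy function are invented placeholders, not the paper's stylistic and name-based sets or its Hebrew-Aramaic corpus.
     from typing import Callable, List

     def hill_climb(feature_sets: List[str],
                    evaluate: Callable[[List[str]], float]) -> List[str]:
         selected: List[str] = []
         best = evaluate(selected)
         while True:
             candidates = [fs for fs in feature_sets if fs not in selected]
             if not candidates:
                 break
             top_score, top_fs = max((evaluate(selected + [fs]), fs) for fs in candidates)
             if top_score <= best:
                 break                                  # no remaining set helps
             selected.append(top_fs)
             best = top_score
         return selected

     # toy evaluation: pretend two of the sets carry signal, the rest add noise
     def toy_accuracy(sets: List[str]) -> float:
         gains = {"quantitative": 0.20, "orthographic": 0.10, "names": -0.02}
         return 0.70 + sum(gains.get(s, -0.01) for s in sets)

     print(hill_climb(["quantitative", "orthographic", "names", "function-words"],
                      toy_accuracy))                    # -> ['quantitative', 'orthographic']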
  6. AlQenaei, Z.M.; Monarchi, D.E.: The use of learning techniques to analyze the results of a manual classification system (2016) 0.02
    0.024105377 = sum of:
      0.01512257 = product of:
        0.06049028 = sum of:
          0.06049028 = weight(_text_:authors in 2836) [ClassicSimilarity], result of:
            0.06049028 = score(doc=2836,freq=2.0), product of:
              0.24019209 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052687407 = queryNorm
              0.25184128 = fieldWeight in 2836, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2836)
        0.25 = coord(1/4)
      0.008982807 = product of:
        0.017965615 = sum of:
          0.017965615 = weight(_text_:h in 2836) [ClassicSimilarity], result of:
            0.017965615 = score(doc=2836,freq=2.0), product of:
              0.13089918 = queryWeight, product of:
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.052687407 = queryNorm
              0.13724773 = fieldWeight in 2836, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2836)
        0.5 = coord(1/2)
    
    Abstract
    Classification is the process of assigning objects to pre-defined classes based on observations or characteristics of those objects, and there are many approaches to performing this task. The overall objective of this study is to demonstrate the use of two learning techniques to analyze the results of a manual classification system. Our sample consisted of 1,026 documents, from the ACM Computing Classification System, classified by their authors as belonging to one of the groups of the classification system: "H.3 Information Storage and Retrieval." A singular value decomposition of the documents' weighted term-frequency matrix was used to represent each document in a 50-dimensional vector space. The analysis of the representation using both supervised (decision tree) and unsupervised (clustering) techniques suggests that two pairs of the ACM classes are closely related to each other in the vector space. Class 1 (Content Analysis and Indexing) is closely related to Class 3 (Information Search and Retrieval), and Class 4 (Systems and Software) is closely related to Class 5 (Online Information Services). Further analysis was performed to test the diffusion of the words in the two classes using both cosine and Euclidean distance.
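     A minimal sketch of the representation-and-comparison step described in the abstract above: project documents into a low-dimensional space via a truncated SVD of the weighted term-frequency matrix, then compare them with cosine and Euclidean distance. The toy matrix and k = 2 below are illustrative assumptions; the study itself used 1,026 ACM "H.3" documents and a 50-dimensional space.
     import numpy as np

     # toy weighted term-frequency matrix: rows = documents, columns = terms
     A = np.array([
         [2.0, 1.0, 0.0, 0.0],
         [1.0, 2.0, 0.0, 1.0],
         [0.0, 0.0, 3.0, 1.0],
         [0.0, 1.0, 2.0, 2.0],
     ])

     k = 2                                    # the study used k = 50
     U, S, Vt = np.linalg.svd(A, full_matrices=False)
     doc_vectors = U[:, :k] * S[:k]           # each row: one document in k dimensions

     def cosine(u, v):
         return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

     def euclidean(u, v):
         return float(np.linalg.norm(u - v))

     # compare two documents in the reduced space
     print(cosine(doc_vectors[0], doc_vectors[1]), euclidean(doc_vectors[0], doc_vectors[1]))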
  7. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.02
    0.021415249 = product of:
      0.042830497 = sum of:
        0.042830497 = product of:
          0.085660994 = sum of:
            0.085660994 = weight(_text_:22 in 1046) [ClassicSimilarity], result of:
              0.085660994 = score(doc=1046,freq=2.0), product of:
                0.18450232 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052687407 = queryNorm
                0.46428138 = fieldWeight in 1046, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1046)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    5. 5.2003 14:17:22
  8. Bock, H.-H.: Automatische Klassifikation : theoretische und praktische Methoden zur Gruppierung und Strukturierung von Daten (Cluster-Analyse) (1974) 0.02
    0.020325772 = product of:
      0.040651545 = sum of:
        0.040651545 = product of:
          0.08130309 = sum of:
            0.08130309 = weight(_text_:h in 7693) [ClassicSimilarity], result of:
              0.08130309 = score(doc=7693,freq=4.0), product of:
                0.13089918 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.052687407 = queryNorm
                0.6211123 = fieldWeight in 7693, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.125 = fieldNorm(doc=7693)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  9. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.02
    0.01784604 = product of:
      0.03569208 = sum of:
        0.03569208 = product of:
          0.07138416 = sum of:
            0.07138416 = weight(_text_:22 in 611) [ClassicSimilarity], result of:
              0.07138416 = score(doc=611,freq=2.0), product of:
                0.18450232 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052687407 = queryNorm
                0.38690117 = fieldWeight in 611, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=611)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 8.2009 12:54:24
  10. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.02
    0.01784604 = product of:
      0.03569208 = sum of:
        0.03569208 = product of:
          0.07138416 = sum of:
            0.07138416 = weight(_text_:22 in 2748) [ClassicSimilarity], result of:
              0.07138416 = score(doc=2748,freq=2.0), product of:
                0.18450232 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052687407 = queryNorm
                0.38690117 = fieldWeight in 2748, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2748)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 2.2016 18:25:22
  11. Bianchini, C.; Bargioni, S.: Automated classification using linked open data : a case study on faceted classification and Wikidata (2021) 0.01
    0.01497058 = product of:
      0.02994116 = sum of:
        0.02994116 = product of:
          0.11976464 = sum of:
            0.11976464 = weight(_text_:authors in 724) [ClassicSimilarity], result of:
              0.11976464 = score(doc=724,freq=4.0), product of:
                0.24019209 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.052687407 = queryNorm
                0.49862027 = fieldWeight in 724, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=724)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
    The Wikidata gadget, CCLitBox, for the automated classification of literary authors and works by a faceted classification and using Linked Open Data (LOD) is presented. The tool reproduces the classification algorithm of class O Literature of the Colon Classification and uses data freely available in Wikidata to create Colon Classification class numbers. CCLitBox is totally free and enables any user to classify literary authors and their works; it is easily accessible to everybody; it uses LOD from Wikidata but missing data for classification can be freely added if necessary; it is readymade for any cooperative and networked project.
  12. Kleinoeder, H.H.; Puzicha, J.: Automatische Katalogisierung am Beispiel einer Pilotanwendung (2002) 0.01
    0.012575929 = product of:
      0.025151858 = sum of:
        0.025151858 = product of:
          0.050303716 = sum of:
            0.050303716 = weight(_text_:h in 1154) [ClassicSimilarity], result of:
              0.050303716 = score(doc=1154,freq=2.0), product of:
                0.13089918 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.052687407 = queryNorm
                0.38429362 = fieldWeight in 1154, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1154)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Info 7. 17(2002) H.1, S.19-21
  13. Dubin, D.: Dimensions and discriminability (1998) 0.01
    0.012492228 = product of:
      0.024984457 = sum of:
        0.024984457 = product of:
          0.049968913 = sum of:
            0.049968913 = weight(_text_:22 in 2338) [ClassicSimilarity], result of:
              0.049968913 = score(doc=2338,freq=2.0), product of:
                0.18450232 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052687407 = queryNorm
                0.2708308 = fieldWeight in 2338, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2338)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 9.1997 19:16:05
  14. Automatic classification research at OCLC (2002) 0.01
    0.012492228 = product of:
      0.024984457 = sum of:
        0.024984457 = product of:
          0.049968913 = sum of:
            0.049968913 = weight(_text_:22 in 1563) [ClassicSimilarity], result of:
              0.049968913 = score(doc=1563,freq=2.0), product of:
                0.18450232 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052687407 = queryNorm
                0.2708308 = fieldWeight in 1563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1563)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    5. 5.2003 9:22:09
  15. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.01
    0.012492228 = product of:
      0.024984457 = sum of:
        0.024984457 = product of:
          0.049968913 = sum of:
            0.049968913 = weight(_text_:22 in 1673) [ClassicSimilarity], result of:
              0.049968913 = score(doc=1673,freq=2.0), product of:
                0.18450232 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052687407 = queryNorm
                0.2708308 = fieldWeight in 1673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1673)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 8.1996 22:08:06
  16. Yoon, Y.; Lee, C.; Lee, G.G.: ¬An effective procedure for constructing a hierarchical text classification system (2006) 0.01
    0.012492228 = product of:
      0.024984457 = sum of:
        0.024984457 = product of:
          0.049968913 = sum of:
            0.049968913 = weight(_text_:22 in 5273) [ClassicSimilarity], result of:
              0.049968913 = score(doc=5273,freq=2.0), product of:
                0.18450232 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052687407 = queryNorm
                0.2708308 = fieldWeight in 5273, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5273)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 16:24:52
  17. Yi, K.: Automatic text classification using library classification schemes : trends, issues and challenges (2007) 0.01
    0.012492228 = product of:
      0.024984457 = sum of:
        0.024984457 = product of:
          0.049968913 = sum of:
            0.049968913 = weight(_text_:22 in 2560) [ClassicSimilarity], result of:
              0.049968913 = score(doc=2560,freq=2.0), product of:
                0.18450232 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052687407 = queryNorm
                0.2708308 = fieldWeight in 2560, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2560)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 9.2008 18:31:54
  18. Lindholm, J.; Schönthal, T.; Jansson, K.: Experiences of harvesting Web resources in engineering using automatic classification (2003) 0.01
    0.012098055 = product of:
      0.02419611 = sum of:
        0.02419611 = product of:
          0.09678444 = sum of:
            0.09678444 = weight(_text_:authors in 4088) [ClassicSimilarity], result of:
              0.09678444 = score(doc=4088,freq=2.0), product of:
                0.24019209 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.052687407 = queryNorm
                0.40294603 = fieldWeight in 4088, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4088)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
     The authors describe the background and the work involved in setting up Engine-e, a Web index that uses automatic classification as a means for the selection of resources in Engineering. Considerations in offering a robot-generated Web index as a successor to a manually indexed quality-controlled subject gateway are also discussed.
  19. Liu, R.-L.: Context recognition for hierarchical text classification (2009) 0.01
    0.010707624 = product of:
      0.021415249 = sum of:
        0.021415249 = product of:
          0.042830497 = sum of:
            0.042830497 = weight(_text_:22 in 2760) [ClassicSimilarity], result of:
              0.042830497 = score(doc=2760,freq=2.0), product of:
                0.18450232 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052687407 = queryNorm
                0.23214069 = fieldWeight in 2760, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2760)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2009 19:11:54
  20. Pfeffer, M.: Automatische Vergabe von RVK-Notationen mittels fallbasiertem Schließen (2009) 0.01
    0.010707624 = product of:
      0.021415249 = sum of:
        0.021415249 = product of:
          0.042830497 = sum of:
            0.042830497 = weight(_text_:22 in 3051) [ClassicSimilarity], result of:
              0.042830497 = score(doc=3051,freq=2.0), product of:
                0.18450232 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052687407 = queryNorm
                0.23214069 = fieldWeight in 3051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3051)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 8.2009 19:51:28

Languages

  • e 38
  • d 22

Types

  • a 52
  • el 8
  • m 2
  • r 2