Search (4040 results, page 1 of 202)

  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.28
    0.28277007 = sum of:
      0.09170738 = product of:
        0.27512214 = sum of:
          0.27512214 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.27512214 = score(doc=562,freq=2.0), product of:
              0.48952547 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.057740603 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.19106269 = sum of:
        0.14412436 = weight(_text_:klassifizieren in 562) [ClassicSimilarity], result of:
          0.14412436 = score(doc=562,freq=2.0), product of:
            0.35430822 = queryWeight, product of:
              6.1362057 = idf(docFreq=259, maxDocs=44218)
              0.057740603 = queryNorm
            0.4067768 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1362057 = idf(docFreq=259, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.046938322 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
          0.046938322 = score(doc=562,freq=2.0), product of:
            0.20219775 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.057740603 = queryNorm
            0.23214069 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
    
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
    Theme
    Automatisches Klassifizieren
  2. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.19
    0.19106269 = product of:
      0.38212538 = sum of:
        0.38212538 = sum of:
          0.28824872 = weight(_text_:klassifizieren in 1046) [ClassicSimilarity], result of:
            0.28824872 = score(doc=1046,freq=2.0), product of:
              0.35430822 = queryWeight, product of:
                6.1362057 = idf(docFreq=259, maxDocs=44218)
                0.057740603 = queryNorm
              0.8135536 = fieldWeight in 1046, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.1362057 = idf(docFreq=259, maxDocs=44218)
                0.09375 = fieldNorm(doc=1046)
          0.093876645 = weight(_text_:22 in 1046) [ClassicSimilarity], result of:
            0.093876645 = score(doc=1046,freq=2.0), product of:
              0.20219775 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057740603 = queryNorm
              0.46428138 = fieldWeight in 1046, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.09375 = fieldNorm(doc=1046)
      0.5 = coord(1/2)
    
    Date
    5. 5.2003 14:17:22
    Theme
    Automatisches Klassifizieren
  3. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.16
    0.1592189 = product of:
      0.3184378 = sum of:
        0.3184378 = sum of:
          0.24020728 = weight(_text_:klassifizieren in 611) [ClassicSimilarity], result of:
            0.24020728 = score(doc=611,freq=2.0), product of:
              0.35430822 = queryWeight, product of:
                6.1362057 = idf(docFreq=259, maxDocs=44218)
                0.057740603 = queryNorm
              0.67796135 = fieldWeight in 611, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.1362057 = idf(docFreq=259, maxDocs=44218)
                0.078125 = fieldNorm(doc=611)
          0.078230545 = weight(_text_:22 in 611) [ClassicSimilarity], result of:
            0.078230545 = score(doc=611,freq=2.0), product of:
              0.20219775 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057740603 = queryNorm
              0.38690117 = fieldWeight in 611, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=611)
      0.5 = coord(1/2)
    
    Date
    22. 8.2009 12:54:24
    Theme
    Automatisches Klassifizieren
  4. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.16
    0.1592189 = product of:
      0.3184378 = sum of:
        0.3184378 = sum of:
          0.24020728 = weight(_text_:klassifizieren in 2748) [ClassicSimilarity], result of:
            0.24020728 = score(doc=2748,freq=2.0), product of:
              0.35430822 = queryWeight, product of:
                6.1362057 = idf(docFreq=259, maxDocs=44218)
                0.057740603 = queryNorm
              0.67796135 = fieldWeight in 2748, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.1362057 = idf(docFreq=259, maxDocs=44218)
                0.078125 = fieldNorm(doc=2748)
          0.078230545 = weight(_text_:22 in 2748) [ClassicSimilarity], result of:
            0.078230545 = score(doc=2748,freq=2.0), product of:
              0.20219775 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057740603 = queryNorm
              0.38690117 = fieldWeight in 2748, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=2748)
      0.5 = coord(1/2)
    
    Date
    1. 2.2016 18:25:22
    Theme
    Automatisches Klassifizieren
  5. Greiner, G.: Intellektuelles und automatisches Klassifizieren (1981) 0.14
    0.13588175 = product of:
      0.2717635 = sum of:
        0.2717635 = product of:
          0.543527 = sum of:
            0.543527 = weight(_text_:klassifizieren in 1103) [ClassicSimilarity], result of:
              0.543527 = score(doc=1103,freq=4.0), product of:
                0.35430822 = queryWeight, product of:
                  6.1362057 = idf(docFreq=259, maxDocs=44218)
                  0.057740603 = queryNorm
                1.5340514 = fieldWeight in 1103, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.1362057 = idf(docFreq=259, maxDocs=44218)
                  0.125 = fieldNorm(doc=1103)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Theme
    Automatisches Klassifizieren
  6. Miksa, S.D.: ¬The challenges of change : a review of cataloging and classification literature, 2003-2004 (2007) 0.13
    0.13073216 = sum of:
      0.09943994 = product of:
        0.29831982 = sum of:
          0.29831982 = weight(_text_:themes in 266) [ClassicSimilarity], result of:
            0.29831982 = score(doc=266,freq=4.0), product of:
              0.371216 = queryWeight, product of:
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.057740603 = queryNorm
              0.8036286 = fieldWeight in 266, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.0625 = fieldNorm(doc=266)
        0.33333334 = coord(1/3)
      0.03129222 = product of:
        0.06258444 = sum of:
          0.06258444 = weight(_text_:22 in 266) [ClassicSimilarity], result of:
            0.06258444 = score(doc=266,freq=2.0), product of:
              0.20219775 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057740603 = queryNorm
              0.30952093 = fieldWeight in 266, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=266)
        0.5 = coord(1/2)
    
    Abstract
    This paper reviews the enormous changes in cataloging and classification reflected in the literature of 2003 and 2004, and discusses major themes and issues. Traditional cataloging and classification tools have been revamped and new resources have emerged. The most notable themes are: the continuing influence of the Functional Requirements for Bibliographic Records (FRBR); the struggle to understand the ever-broadening concept of an "information entity"; steady developments in metadata-encoding standards; and the globalization of information systems, including multilinguistic challenges.
    Date
    10. 9.2000 17:38:22
  7. Gnoli, C.: Classifying phenomena : part 4: themes and rhemes (2018) 0.13
    0.12894115 = sum of:
      0.10547198 = product of:
        0.31641594 = sum of:
          0.31641594 = weight(_text_:themes in 4152) [ClassicSimilarity], result of:
            0.31641594 = score(doc=4152,freq=8.0), product of:
              0.371216 = queryWeight, product of:
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.057740603 = queryNorm
              0.8523769 = fieldWeight in 4152, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.046875 = fieldNorm(doc=4152)
        0.33333334 = coord(1/3)
      0.023469161 = product of:
        0.046938322 = sum of:
          0.046938322 = weight(_text_:22 in 4152) [ClassicSimilarity], result of:
            0.046938322 = score(doc=4152,freq=2.0), product of:
              0.20219775 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057740603 = queryNorm
              0.23214069 = fieldWeight in 4152, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4152)
        0.5 = coord(1/2)
    
    Abstract
    This is the fourth in a series of papers on classification based on phenomena instead of disciplines. Together with the types, levels and facets discussed in the previous parts, themes and rhemes are further structural components of such a classification. In a statement or in a longer document, a base theme and several particular themes can be identified. The base theme should be cited first in a classmark, followed by the particular themes, each with its own facets. In some cases, rhemes can also be expressed, that is, new information provided about a theme, converting an abstract statement ("wolves, affected by cervids") into a claim that something actually occurs ("wolves are affected by cervids"). In the Integrative Levels Classification, rhemes can be expressed by special deictic classes, including those for actual specimens, anaphoras, unknown values, conjunctions and spans, the whole universe, anthropocentric favoured classes, and favoured host classes. These features, together with rules for pronunciation, make a classification of phenomena a true language that may be suitable for many uses.
    Date
    17. 2.2018 18:22:25
  8. Grivel, L.; Mutschke, P.; Polanco, X.: Thematic mapping on bibliographic databases by cluster analysis : a description of the SDOC environment with SOLIS (1995) 0.11
    0.11439064 = sum of:
      0.08700995 = product of:
        0.26102984 = sum of:
          0.26102984 = weight(_text_:themes in 1900) [ClassicSimilarity], result of:
            0.26102984 = score(doc=1900,freq=4.0), product of:
              0.371216 = queryWeight, product of:
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.057740603 = queryNorm
              0.70317507 = fieldWeight in 1900, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1900)
        0.33333334 = coord(1/3)
      0.02738069 = product of:
        0.05476138 = sum of:
          0.05476138 = weight(_text_:22 in 1900) [ClassicSimilarity], result of:
            0.05476138 = score(doc=1900,freq=2.0), product of:
              0.20219775 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057740603 = queryNorm
              0.2708308 = fieldWeight in 1900, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1900)
        0.5 = coord(1/2)
    
    Abstract
    The paper presents a coword-analysis-based system called SDOC which is able to support the intellectual work of an end-user searching for information in a bibliographic database. This is done by presenting its thematic structure as a map of keyword clusters (themes) on a graphical user interface. These mapping facilities are demonstrated on the research field Social History, given by a set of documents from the social science literature database SOLIS. Besides the traditional way of analysing a coword map as a strategic diagram, the notion of cluster relationship analysis is introduced, which provides an adequate interpretation of the links between themes
    Source
    Knowledge organization. 22(1995) no.2, S.70-77
  9. Lin, X.; Li, J.; Zhou, X.: Theme creation for digital collections (2008) 0.11
    0.11439064 = sum of:
      0.08700995 = product of:
        0.26102984 = sum of:
          0.26102984 = weight(_text_:themes in 2635) [ClassicSimilarity], result of:
            0.26102984 = score(doc=2635,freq=4.0), product of:
              0.371216 = queryWeight, product of:
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.057740603 = queryNorm
              0.70317507 = fieldWeight in 2635, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2635)
        0.33333334 = coord(1/3)
      0.02738069 = product of:
        0.05476138 = sum of:
          0.05476138 = weight(_text_:22 in 2635) [ClassicSimilarity], result of:
            0.05476138 = score(doc=2635,freq=2.0), product of:
              0.20219775 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057740603 = queryNorm
              0.2708308 = fieldWeight in 2635, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2635)
        0.5 = coord(1/2)
    
    Abstract
    This paper presents an approach for integrating multiple sources of semantics for creating metadata. A new framework is proposed to define topics and themes with both manually and automatically generated terms. The automatically generated terms include terms from a semantic analysis of the collections and terms from previous users' queries. An interface is developed to facilitate the creation and use of such topics and themes for metadata creation. The framework and the interface promote human-computer collaboration in metadata creation. Several principles underlying this approach are also discussed.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  10. Reiner, U.: Automatische DDC-Klassifizierung bibliografischer Titeldatensätze der Deutschen Nationalbibliografie (2009) 0.11
    0.11172902 = product of:
      0.22345804 = sum of:
        0.22345804 = sum of:
          0.19216582 = weight(_text_:klassifizieren in 3284) [ClassicSimilarity], result of:
            0.19216582 = score(doc=3284,freq=8.0), product of:
              0.35430822 = queryWeight, product of:
                6.1362057 = idf(docFreq=259, maxDocs=44218)
                0.057740603 = queryNorm
              0.54236907 = fieldWeight in 3284, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                6.1362057 = idf(docFreq=259, maxDocs=44218)
                0.03125 = fieldNorm(doc=3284)
          0.03129222 = weight(_text_:22 in 3284) [ClassicSimilarity], result of:
            0.03129222 = score(doc=3284,freq=2.0), product of:
              0.20219775 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057740603 = queryNorm
              0.15476047 = fieldWeight in 3284, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=3284)
      0.5 = coord(1/2)
    
    Abstract
    Classifying objects (e.g. fauna, flora, texts) is a procedure based on human intelligence. Computer science - in particular the field of Artificial Intelligence (AI) - investigates, among other things, to what extent procedures that require human intelligence can be automated. It has turned out that solving everyday problems poses a greater challenge than solving specialized problems such as building a chess computer; "Rybka", for instance, has been the reigning computer chess world champion since June 2007. To what extent everyday problems can be solved with AI methods remains, for the general case, an open question. In solving everyday problems, the processing of natural language, e.g. understanding it, plays an essential role. Realizing "common sense" as a machine (in the Cyc knowledge base, in the form of facts and rules) has been Lenat's goal since 1984. Regarding the AI flagship project "Cyc" there are Cyc optimists and Cyc pessimists. Understanding natural language (e.g. work titles, abstracts, prefaces, tables of contents) is also necessary when intellectually classifying bibliographic title records or online publications, in order to classify these text objects correctly. Since 2007, the Deutsche Nationalbibliothek has intellectually classified nearly all publications with the Dewey Decimal Classification (DDC).
    Date
    22. 1.2010 14:41:24
    Theme
    Automatisches Klassifizieren
  11. Bock, H.-H.: Datenanalyse zur Strukturierung und Ordnung von Information (1989) 0.11
    0.111453235 = product of:
      0.22290647 = sum of:
        0.22290647 = sum of:
          0.16814509 = weight(_text_:klassifizieren in 141) [ClassicSimilarity], result of:
            0.16814509 = score(doc=141,freq=2.0), product of:
              0.35430822 = queryWeight, product of:
                6.1362057 = idf(docFreq=259, maxDocs=44218)
                0.057740603 = queryNorm
              0.47457293 = fieldWeight in 141, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.1362057 = idf(docFreq=259, maxDocs=44218)
                0.0546875 = fieldNorm(doc=141)
          0.05476138 = weight(_text_:22 in 141) [ClassicSimilarity], result of:
            0.05476138 = score(doc=141,freq=2.0), product of:
              0.20219775 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057740603 = queryNorm
              0.2708308 = fieldWeight in 141, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=141)
      0.5 = coord(1/2)
    
    Pages
    S.1-22
    Theme
    Automatisches Klassifizieren
  12. Dubin, D.: Dimensions and discriminability (1998) 0.11
    0.111453235 = product of:
      0.22290647 = sum of:
        0.22290647 = sum of:
          0.16814509 = weight(_text_:klassifizieren in 2338) [ClassicSimilarity], result of:
            0.16814509 = score(doc=2338,freq=2.0), product of:
              0.35430822 = queryWeight, product of:
                6.1362057 = idf(docFreq=259, maxDocs=44218)
                0.057740603 = queryNorm
              0.47457293 = fieldWeight in 2338, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.1362057 = idf(docFreq=259, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2338)
          0.05476138 = weight(_text_:22 in 2338) [ClassicSimilarity], result of:
            0.05476138 = score(doc=2338,freq=2.0), product of:
              0.20219775 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057740603 = queryNorm
              0.2708308 = fieldWeight in 2338, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2338)
      0.5 = coord(1/2)
    
    Date
    22. 9.1997 19:16:05
    Theme
    Automatisches Klassifizieren
  13. Automatic classification research at OCLC (2002) 0.11
    0.111453235 = product of:
      0.22290647 = sum of:
        0.22290647 = sum of:
          0.16814509 = weight(_text_:klassifizieren in 1563) [ClassicSimilarity], result of:
            0.16814509 = score(doc=1563,freq=2.0), product of:
              0.35430822 = queryWeight, product of:
                6.1362057 = idf(docFreq=259, maxDocs=44218)
                0.057740603 = queryNorm
              0.47457293 = fieldWeight in 1563, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.1362057 = idf(docFreq=259, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1563)
          0.05476138 = weight(_text_:22 in 1563) [ClassicSimilarity], result of:
            0.05476138 = score(doc=1563,freq=2.0), product of:
              0.20219775 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057740603 = queryNorm
              0.2708308 = fieldWeight in 1563, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1563)
      0.5 = coord(1/2)
    
    Date
    5. 5.2003 9:22:09
    Theme
    Automatisches Klassifizieren
  14. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.11
    0.111453235 = product of:
      0.22290647 = sum of:
        0.22290647 = sum of:
          0.16814509 = weight(_text_:klassifizieren in 1673) [ClassicSimilarity], result of:
            0.16814509 = score(doc=1673,freq=2.0), product of:
              0.35430822 = queryWeight, product of:
                6.1362057 = idf(docFreq=259, maxDocs=44218)
                0.057740603 = queryNorm
              0.47457293 = fieldWeight in 1673, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.1362057 = idf(docFreq=259, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1673)
          0.05476138 = weight(_text_:22 in 1673) [ClassicSimilarity], result of:
            0.05476138 = score(doc=1673,freq=2.0), product of:
              0.20219775 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057740603 = queryNorm
              0.2708308 = fieldWeight in 1673, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1673)
      0.5 = coord(1/2)
    
    Date
    1. 8.1996 22:08:06
    Theme
    Automatisches Klassifizieren
  15. Yoon, Y.; Lee, C.; Lee, G.G.: ¬An effective procedure for constructing a hierarchical text classification system (2006) 0.11
    0.111453235 = product of:
      0.22290647 = sum of:
        0.22290647 = sum of:
          0.16814509 = weight(_text_:klassifizieren in 5273) [ClassicSimilarity], result of:
            0.16814509 = score(doc=5273,freq=2.0), product of:
              0.35430822 = queryWeight, product of:
                6.1362057 = idf(docFreq=259, maxDocs=44218)
                0.057740603 = queryNorm
              0.47457293 = fieldWeight in 5273, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.1362057 = idf(docFreq=259, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5273)
          0.05476138 = weight(_text_:22 in 5273) [ClassicSimilarity], result of:
            0.05476138 = score(doc=5273,freq=2.0), product of:
              0.20219775 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057740603 = queryNorm
              0.2708308 = fieldWeight in 5273, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5273)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 16:24:52
    Theme
    Automatisches Klassifizieren
  16. Yi, K.: Automatic text classification using library classification schemes : trends, issues and challenges (2007) 0.11
    0.111453235 = product of:
      0.22290647 = sum of:
        0.22290647 = sum of:
          0.16814509 = weight(_text_:klassifizieren in 2560) [ClassicSimilarity], result of:
            0.16814509 = score(doc=2560,freq=2.0), product of:
              0.35430822 = queryWeight, product of:
                6.1362057 = idf(docFreq=259, maxDocs=44218)
                0.057740603 = queryNorm
              0.47457293 = fieldWeight in 2560, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.1362057 = idf(docFreq=259, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2560)
          0.05476138 = weight(_text_:22 in 2560) [ClassicSimilarity], result of:
            0.05476138 = score(doc=2560,freq=2.0), product of:
              0.20219775 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057740603 = queryNorm
              0.2708308 = fieldWeight in 2560, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2560)
      0.5 = coord(1/2)
    
    Date
    22. 9.2008 18:31:54
    Theme
    Automatisches Klassifizieren
  17. Rooney, N.; Patterson, D.; Galushka, M.; Dobrynin, V.; Smirnova, E.: ¬An investigation into the stability of contextual document clustering (2008) 0.10
    0.103998475 = sum of:
      0.043946654 = product of:
        0.13183996 = sum of:
          0.13183996 = weight(_text_:themes in 1356) [ClassicSimilarity], result of:
            0.13183996 = score(doc=1356,freq=2.0), product of:
              0.371216 = queryWeight, product of:
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.057740603 = queryNorm
              0.35515702 = fieldWeight in 1356, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1356)
        0.33333334 = coord(1/3)
      0.06005182 = product of:
        0.12010364 = sum of:
          0.12010364 = weight(_text_:klassifizieren in 1356) [ClassicSimilarity], result of:
            0.12010364 = score(doc=1356,freq=2.0), product of:
              0.35430822 = queryWeight, product of:
                6.1362057 = idf(docFreq=259, maxDocs=44218)
                0.057740603 = queryNorm
              0.33898067 = fieldWeight in 1356, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.1362057 = idf(docFreq=259, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1356)
        0.5 = coord(1/2)
    
    Abstract
    In this article, we assess the effectiveness of Contextual Document Clustering (CDC) as a means of indexing within a dynamic and rapidly changing environment. We simulate a dynamic environment by splitting two chronologically ordered datasets into time-ordered segments and assessing how the technique performs under two different scenarios. The first is when new documents are added incrementally without reclustering [incremental CDC (iCDC)], and the second is when reclustering is performed [nonincremental CDC (nCDC)]. The datasets are very large, are independent of each other, and belong to two very different domains. We show that CDC itself is effective at clustering very large document corpora and that, significantly, it lends itself to a very simple, efficient incremental document addition process that is seen to be very stable over time despite the size of the corpus growing considerably. It remains effective at incrementally clustering new documents even when the corpus grows to six times its original size. This is in contrast to what other researchers have found when applying similar simple incremental approaches to document clustering. The stability of iCDC is accounted for by the unique manner in which CDC discovers cluster themes.
    Theme
    Automatisches Klassifizieren
  18. Davies, C.: Future user issues for the networked multimedia electronic library (1998) 0.10
    0.101606876 = sum of:
      0.07031465 = product of:
        0.21094395 = sum of:
          0.21094395 = weight(_text_:themes in 1405) [ClassicSimilarity], result of:
            0.21094395 = score(doc=1405,freq=2.0), product of:
              0.371216 = queryWeight, product of:
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.057740603 = queryNorm
              0.56825125 = fieldWeight in 1405, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.0625 = fieldNorm(doc=1405)
        0.33333334 = coord(1/3)
      0.03129222 = product of:
        0.06258444 = sum of:
          0.06258444 = weight(_text_:22 in 1405) [ClassicSimilarity], result of:
            0.06258444 = score(doc=1405,freq=2.0), product of:
              0.20219775 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057740603 = queryNorm
              0.30952093 = fieldWeight in 1405, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1405)
        0.5 = coord(1/2)
    
    Abstract
    Focuses on 2 main themes: the diversification of the electronic library to encompass different material types and formats, creating issues of integration as well as cataloguing and navigation; and the phenomenal growth of the WWW since the start of the ELINOR project, compelling most new projects to include an interface to the Web to some degree
    Series
    British Library Research and Innovation Centre (BLRIC) report; 22
  19. Woodhouse, S.: 'Dewey adapts to the world, the worlds adapt Dewey' : Strategic development of the classification into the millennium (1997) 0.10
    0.101606876 = sum of:
      0.07031465 = product of:
        0.21094395 = sum of:
          0.21094395 = weight(_text_:themes in 1810) [ClassicSimilarity], result of:
            0.21094395 = score(doc=1810,freq=2.0), product of:
              0.371216 = queryWeight, product of:
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.057740603 = queryNorm
              0.56825125 = fieldWeight in 1810, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.0625 = fieldNorm(doc=1810)
        0.33333334 = coord(1/3)
      0.03129222 = product of:
        0.06258444 = sum of:
          0.06258444 = weight(_text_:22 in 1810) [ClassicSimilarity], result of:
            0.06258444 = score(doc=1810,freq=2.0), product of:
              0.20219775 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057740603 = queryNorm
              0.30952093 = fieldWeight in 1810, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1810)
        0.5 = coord(1/2)
    
    Abstract
    Reports on the Spring 1997 meeting of the Dewey Classification Editorial Policy Committee, which aimed to agree on policies for the development of the classification over the next decade and to put together a strategic plan to implement them. Details themes for the future, the concept of edition, editorial policy on the relative index and the manual, schedule development, and ways to determine areas for revision
    Date
    7. 8.1998 19:22:16
  20. Moore, N.: ¬The British national information strategy (1998) 0.10
    0.101606876 = sum of:
      0.07031465 = product of:
        0.21094395 = sum of:
          0.21094395 = weight(_text_:themes in 3036) [ClassicSimilarity], result of:
            0.21094395 = score(doc=3036,freq=2.0), product of:
              0.371216 = queryWeight, product of:
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.057740603 = queryNorm
              0.56825125 = fieldWeight in 3036, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.0625 = fieldNorm(doc=3036)
        0.33333334 = coord(1/3)
      0.03129222 = product of:
        0.06258444 = sum of:
          0.06258444 = weight(_text_:22 in 3036) [ClassicSimilarity], result of:
            0.06258444 = score(doc=3036,freq=2.0), product of:
              0.20219775 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057740603 = queryNorm
              0.30952093 = fieldWeight in 3036, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=3036)
        0.5 = coord(1/2)
    
    Abstract
    The UK has not followed other countries in developing frameworks of policies to guide its transition into an information society in a consistent and systematic way. Analyzes current UK policies using a matrix which identifies 3 levels of policy (industrial, organizational and social) and 4 cross-cutting themes (information technology, information markets, human resources, and legislation and regulation). Concludes that together these various initiatives add up to a national strategy, but one that lacks coordination and cohesion
    Date
    22. 2.1999 17:03:18
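
A note on the score breakdowns above: each indented tree is Lucene "explain" output for the ClassicSimilarity (TF-IDF) model named in every leaf. Each leaf contribution is queryWeight x fieldWeight, where queryWeight = idf x queryNorm, fieldWeight = sqrt(termFreq) x idf x fieldNorm, and idf = 1 + ln(maxDocs / (docFreq + 1)); a coord(k/n) line scales a clause group by the fraction of its sub-clauses that matched. The following Python sketch is not part of the search system; it simply recomputes the score of result 1 from the constants shown in its tree (the idf and tf formulas are Lucene's ClassicSimilarity defaults, and they reproduce the printed values).

    import math

    MAX_DOCS = 44218          # maxDocs, from the explain trees
    QUERY_NORM = 0.057740603  # queryNorm, shared by every term of this query

    def idf(doc_freq):
        # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

    def term_score(doc_freq, term_freq, field_norm):
        # leaf score = queryWeight * fieldWeight
        query_weight = idf(doc_freq) * QUERY_NORM
        field_weight = math.sqrt(term_freq) * idf(doc_freq) * field_norm
        return query_weight * field_weight

    # Result 1 (doc 562); fieldNorm is 0.046875 for all three terms:
    w_3a = term_score(doc_freq=24, term_freq=2.0, field_norm=0.046875)    # ~0.27512214 ("3a")
    w_kl = term_score(doc_freq=259, term_freq=2.0, field_norm=0.046875)   # ~0.14412436 ("klassifizieren")
    w_22 = term_score(doc_freq=3622, term_freq=2.0, field_norm=0.046875)  # ~0.046938322 ("22")

    # The "3a" clause sits in a group where 1 of 3 sub-clauses matched,
    # hence coord(1/3); the other two leaf scores are summed directly.
    total = w_3a * (1.0 / 3.0) + (w_kl + w_22)
    print(f"{total:.8f}")  # ~0.28277007, the score shown for result 1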

Types

  • a 3359
  • m 379
  • el 202
  • s 152
  • x 45
  • b 39
  • i 24
  • r 23
  • ? 8
  • d 4
  • p 4
  • n 3
  • u 2
  • z 2
  • au 1
  • h 1
