Search (14 results, page 1 of 1)

  • theme_ss:"Klassifikationstheorie: Elemente / Struktur"
  • year_i:[2000 TO 2010}
  1. Foskett, D.J.: Facet analysis (2009) 0.02
    0.015728682 = product of:
      0.11010077 = sum of:
        0.11010077 = weight(_text_:great in 3754) [ClassicSimilarity], result of:
          0.11010077 = score(doc=3754,freq=2.0), product of:
            0.22122125 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.039287858 = queryNorm
            0.49769527 = fieldWeight in 3754, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.0625 = fieldNorm(doc=3754)
      0.14285715 = coord(1/7)
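    The indented tree above is Lucene's ClassicSimilarity "explain" output: the score is queryWeight (idf × queryNorm) times fieldWeight (tf × idf × fieldNorm), scaled by a coord factor. A minimal Python sketch, using only the constants printed in result 1's tree, reproduces the arithmetic:

    ```python
    import math

    # Constants taken from the explain tree for "great" in doc 3754.
    idf = 5.6307793          # idf = ln(maxDocs/(docFreq+1)) + 1 = ln(44218/431) + 1
    query_norm = 0.039287858
    tf = math.sqrt(2.0)      # ClassicSimilarity: tf(freq) = sqrt(freq), freq = 2.0
    field_norm = 0.0625

    query_weight = idf * query_norm        # ≈ 0.22122125
    field_weight = tf * idf * field_norm   # ≈ 0.49769527
    score = query_weight * field_weight    # ≈ 0.11010077
    final = score * (1.0 / 7.0)            # coord(1/7) ≈ 0.015728682

    print(query_weight, field_weight, final)
    ```

    The same decomposition applies to every other explain tree in this result list; only the freq, fieldNorm, and coord values change per document.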
    
    Abstract
    The brothers Foskett, Anthony and Douglas, have both made major contributions to the theory and practice of subject analysis and description. Here, Douglas Foskett explains facet analysis, a vital technique in the development of both classification schemes and thesauri. Foskett himself created faceted classification schemes for specific disciplines, drawing from the philosophy of the great Indian classificationist, S.R. Ranganathan.
  2. Broughton, V.: Essential classification (2004) 0.01
    0.014865793 = product of:
      0.052030273 = sum of:
        0.0389265 = weight(_text_:great in 2824) [ClassicSimilarity], result of:
          0.0389265 = score(doc=2824,freq=4.0), product of:
            0.22122125 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.039287858 = queryNorm
            0.17596185 = fieldWeight in 2824, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.015625 = fieldNorm(doc=2824)
        0.013103772 = product of:
          0.026207544 = sum of:
            0.026207544 = weight(_text_:bibliography in 2824) [ClassicSimilarity], result of:
              0.026207544 = score(doc=2824,freq=2.0), product of:
                0.21586132 = queryWeight, product of:
                  5.494352 = idf(docFreq=493, maxDocs=44218)
                  0.039287858 = queryNorm
                0.12140917 = fieldWeight in 2824, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.494352 = idf(docFreq=493, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2824)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Footnote
     Essential Classification is also an exercise book. Indeed, it contains a number of practical exercises and activities in every chapter, along with suggested answers. Unfortunately, the answers are too often provided without the justifications and explanations that students would no doubt demand. The author has taken great care to explain all technical terms in her text, but formal definitions are also gathered in an extensive 172-term glossary; appropriately, these terms appear in bold type the first time they are used in the text. A short, very short, annotated bibliography of standard classification textbooks and of manuals for the use of major classification schemes is provided. A detailed 11-page index completes the set of learning aids, which will be useful to an audience of students in their effort to grasp the basic concepts of the theory and practice of document classification in a traditional environment. Essential Classification is a fine textbook. However, this reviewer deplores the fact that it presents only a very "traditional" view of classification, without much reference to newer environments such as the Internet, where classification also manifests itself in various forms. In Essential Classification, books are always used as examples, and we have to take the author's word that traditional classification practices and tools can also be applied to other types of documents and elsewhere than in the traditional library. Vanda Broughton writes, for example, that "Subject headings can't be used for physical arrangement" (p. 101), but this is not entirely true. Subject headings can be used for the physical arrangement of vertical files, for example, with each folder bearing a simple or complex heading which is then used for internal organization. And if it is true that subject headings cannot be reproduced on the spine of [physical] books (p. 93), the situation is certainly different on the World Wide Web, where subject headings as metadata can be most useful in ordering a collection of hot links. The emphasis is also on the traditional paper-based, rather than on the electronic, version of classification schemes, with excellent justifications of course. The reality is, however, that supporting organizations (LC, OCLC, etc.) are now providing high-quality services online, and that updates are now available only in electronic format and no longer on paper. E-based versions of classification schemes could be safely ignored in a theoretical text, but they have to be described and explained in a textbook published in 2005. One last comment: Professor Broughton tends to use the same term, "classification," to represent both the process (as in classification is grouping) and the tool (as in constructing a classification, using a classification, etc.). Even in the glossary, where classification is first well defined as a process, and classification scheme as "a set of classes ...", the definition of classification scheme continues: "the classification consists of a vocabulary (...) and syntax ..." (p. 296-297). Such an ambiguous use of the term classification seems unfortunate and unnecessarily confusing in an otherwise very good basic textbook on the categorization of concepts and subjects, document organization, and subject representation."
  3. Paling, S.: Classification, rhetoric, and the classificatory horizon (2004) 0.01
    0.0113482 = product of:
      0.0794374 = sum of:
        0.0794374 = product of:
          0.1588748 = sum of:
            0.1588748 = weight(_text_:bibliography in 836) [ClassicSimilarity], result of:
              0.1588748 = score(doc=836,freq=6.0), product of:
                0.21586132 = queryWeight, product of:
                  5.494352 = idf(docFreq=493, maxDocs=44218)
                  0.039287858 = queryNorm
                0.736004 = fieldWeight in 836, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.494352 = idf(docFreq=493, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=836)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
     Bibliography provides a compelling vantage from which to study the interconnection of classification, rhetoric, and the making of knowledge. Bibliography, and the related activities of classification and retrieval, bears a direct relationship to textual studies and rhetoric. The paper examines this relationship by briefly tracing the development of bibliography forward into issues concomitant with the emergence of classification for retrieval. A striking similarity to problems raised in rhetoric, which spring from common concerns and intellectual sources, is demonstrated around Gadamer's notion of intellectual horizon. Classification takes place within a horizon of material conditions and social constraints that are best viewed through a hermeneutic or deconstructive lens, termed the "classificatory horizon."
  4. Qin, J.: Evolving paradigms of knowledge representation and organization : a comparative study of classification, XML/DTD and ontology (2003) 0.01
    0.010529564 = product of:
      0.07370694 = sum of:
        0.07370694 = sum of:
          0.052415088 = weight(_text_:bibliography in 2763) [ClassicSimilarity], result of:
            0.052415088 = score(doc=2763,freq=2.0), product of:
              0.21586132 = queryWeight, product of:
                5.494352 = idf(docFreq=493, maxDocs=44218)
                0.039287858 = queryNorm
              0.24281834 = fieldWeight in 2763, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.494352 = idf(docFreq=493, maxDocs=44218)
                0.03125 = fieldNorm(doc=2763)
          0.021291848 = weight(_text_:22 in 2763) [ClassicSimilarity], result of:
            0.021291848 = score(doc=2763,freq=2.0), product of:
              0.13757938 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.039287858 = queryNorm
              0.15476047 = fieldWeight in 2763, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2763)
      0.14285715 = coord(1/7)
    
    Abstract
     The different points of view on knowledge representation and organization from various research communities reflect underlying philosophies and paradigms in these communities. This paper reviews differences and relations in knowledge representation and organization and generalizes four paradigms: integrative and disintegrative pragmatism, and integrative and disintegrative epistemologism. Examples such as classification, XML schemas, and ontologies are compared based on how they specify concepts, build data models, and encode knowledge organization structures. 1. Introduction Knowledge representation (KR) is a term that several research communities use to refer to somewhat different aspects of the same research area. The artificial intelligence (AI) community considers KR as simply "something to do with writing down, in some language or communications medium, descriptions or pictures that correspond in some salient way to the world or a state of the world" (Duce & Ringland, 1988, p. 3). It emphasizes the ways in which knowledge can be encoded in a computer program (Bench-Capon, 1990). For the library and information science (LIS) community, KR is literally a synonym of knowledge organization, i.e., KR is referred to as the process of organizing knowledge into classifications, thesauri, or subject heading lists. KR has another meaning in LIS: it "encompasses every type and method of indexing, abstracting, cataloguing, classification, records management, bibliography and the creation of textual or bibliographic databases for information retrieval" (Anderson, 1996, p. 336). Adding the social dimension to knowledge organization, Hjoerland (1997) states that knowledge is a part of human activities and tied to the division of labor in society, which should be the primary organization of knowledge. Knowledge organization in LIS is secondary or derived, because knowledge is organized in learned institutions and publications.
     These different points of view on KR suggest that an essential difference in the understanding of KR between AI and LIS lies in the source of representation: whether KR targets human activities or derivatives (knowledge produced) from human activities. This difference also determines their difference in purpose: in AI, KR is mainly computer-application oriented, or pragmatic, and the result of representation is used to support decisions on human activities, while in LIS, KR is conceptually oriented, or abstract, and the result of representation is used for access to derivatives from human activities.
    Date
    12. 9.2004 17:22:35
  5. Szostak, R.: Classification, interdisciplinarity, and the study of science (2008) 0.01
    0.009830426 = product of:
      0.06881298 = sum of:
        0.06881298 = weight(_text_:great in 1893) [ClassicSimilarity], result of:
          0.06881298 = score(doc=1893,freq=2.0), product of:
            0.22122125 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.039287858 = queryNorm
            0.31105953 = fieldWeight in 1893, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1893)
      0.14285715 = coord(1/7)
    
    Abstract
    Purpose - This paper aims to respond to the 2005 paper by Hjørland and Nissen Pedersen by suggesting that an exhaustive and universal classification of the phenomena that scholars study, and the methods and theories they apply, is feasible. It seeks to argue that such a classification is critical for interdisciplinary scholarship. Design/methodology/approach - The paper presents a literature-based conceptual analysis, taking Hjørland and Nissen Pedersen as its starting point. Hjørland and Nissen Pedersen had identified several difficulties that would be encountered in developing such a classification; the paper suggests how each of these can be overcome. It also urges a deductive approach as complementary to the inductive approach recommended by Hjørland and Nissen Pedersen. Findings - The paper finds that an exhaustive and universal classification of scholarly documents in terms of (at least) the phenomena that scholars study, and the theories and methods they apply, appears to be both possible and desirable. Practical implications - The paper suggests how such a project can be begun. In particular it stresses the importance of classifying documents in terms of causal links between phenomena. Originality/value - The paper links the information science, interdisciplinary, and study of science literatures, and suggests that the types of classification outlined above would be of great value to scientists/scholars, and that they are possible.
  6. McIlwaine, I.C.: Where have all the flowers gone? : An investigation into the fate of some special classification schemes (2003) 0.01
    0.007864341 = product of:
      0.055050384 = sum of:
        0.055050384 = weight(_text_:great in 2764) [ClassicSimilarity], result of:
          0.055050384 = score(doc=2764,freq=2.0), product of:
            0.22122125 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.039287858 = queryNorm
            0.24884763 = fieldWeight in 2764, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.03125 = fieldNorm(doc=2764)
      0.14285715 = coord(1/7)
    
    Abstract
     Prior to the OPAC, many institutions devised classifications to suit their special needs. Others expanded or altered general schemes to accommodate specific approaches. A driving force in the creation of these classifications was the Classification Research Group, celebrating its golden jubilee in 2002, whose work created a framework and body of principles that remain valid for the retrieval needs of today. The paper highlights some of these special schemes and the fundamental principles that remain valid. 1. Introduction The distinction between a general and a special classification scheme is made frequently in the textbooks, but it is one that is sometimes difficult to draw. The Library of Congress Classification could be described as the special classification par excellence. Normally, however, a special classification is taken to be one that is restricted to a specific subject, and quite often used in one specific context only, either a library or a bibliographic listing, or for a specific purpose such as a search engine, and it is in this sense that I propose to examine some of these schemes. Today, there is a widespread preference for searching on words as a supplement to the use of a standard system, usually the Dewey Decimal Classification (DDC). This is enhanced by the ability to search documents full-text in a computerized environment, a situation that did not exist 20 or 30 years ago. Today's situation is a great improvement in many ways, but it does depend upon the words used by the author and the searcher corresponding, and often presupposes the use of English. In libraries, the use of co-operative services and pre-catalogued records already provided with classification data has also spelt the demise of the special scheme. In many instances, the survival of a special classification depends upon its creator and, with the passage of time, this becomes inevitably more precarious.
  7. Gnoli, C.; Mei, H.: Freely faceted classification for Web-based information retrieval (2006) 0.01
    0.0056159026 = product of:
      0.039311316 = sum of:
        0.039311316 = product of:
          0.07862263 = sum of:
            0.07862263 = weight(_text_:bibliography in 534) [ClassicSimilarity], result of:
              0.07862263 = score(doc=534,freq=2.0), product of:
                0.21586132 = queryWeight, product of:
                  5.494352 = idf(docFreq=493, maxDocs=44218)
                  0.039287858 = queryNorm
                0.3642275 = fieldWeight in 534, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.494352 = idf(docFreq=493, maxDocs=44218)
                  0.046875 = fieldNorm(doc=534)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
     In free classification, each concept is expressed by a constant notation, and classmarks are formed by free combinations of them, allowing the retrieval of records from a database by searching any of the component concepts. A refinement of free classification is freely faceted classification, where notation can include facets expressing the kind of relations held between the concepts. The Integrative Level Classification project aims at testing free and freely faceted classification by applying them to small bibliographical samples in various domains. A sample, called the Dandelion Bibliography of Facet Analysis, is described here. Experience was gained using this system to classify 300 specialized papers dealing with facet analysis itself, recorded in a MySQL database, and building a Web interface exploiting freely faceted notation. The interface is written in PHP and uses string functions to process the queries and to yield relevant results selected and ordered according to the principles of integrative levels.
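     The retrieval mechanism the abstract describes can be sketched in a few lines: each record carries a classmark built from constant notations, and a query on any single notation retrieves every record containing that component. The notations and records below are invented for illustration; they are not taken from the ILC project itself.

     ```python
     # Hypothetical free-classification index: classmarks are free
     # combinations of constant concept notations (here toy strings).
     records = {
         1: ["mq", "wt"],   # a record combining two concepts
         2: ["mq", "xp"],
         3: ["wt"],
     }

     def search(notation):
         """Return ids of records whose classmark contains the notation."""
         return sorted(rid for rid, marks in records.items() if notation in marks)

     print(search("mq"))  # both records sharing the component concept "mq"
     ```

     A freely faceted variant would additionally tag each notation with a facet indicator, so that the relation between combined concepts is searchable as well.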
  8. Denton, W.: Putting facets on the Web : an annotated bibliography (2003) 0.01
    0.0052323085 = product of:
      0.036626156 = sum of:
        0.036626156 = product of:
          0.07325231 = sum of:
            0.07325231 = weight(_text_:bibliography in 2467) [ClassicSimilarity], result of:
              0.07325231 = score(doc=2467,freq=10.0), product of:
                0.21586132 = queryWeight, product of:
                  5.494352 = idf(docFreq=493, maxDocs=44218)
                  0.039287858 = queryNorm
                0.33934894 = fieldWeight in 2467, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  5.494352 = idf(docFreq=493, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=2467)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    This is a classified, annotated bibliography about how to design faceted classification systems and make them usable on the World Wide Web. It is the first of three works I will be doing. The second, based on the material here and elsewhere, will discuss how to actually make the faceted system and put it online. The third will be a report of how I did just that, what worked, what didn't, and what I learned. Almost every article or book listed here begins with an explanation of what a faceted classification system is, so I won't (but see Steckel in Background below if you don't already know). They all agree that faceted systems are very appropriate for the web. Even pre-web articles (such as Duncan's in Background, below) assert that hypertext and facets will go together well. Combined, it is possible to take a set of documents and classify them or apply subject headings to describe what they are about, then build a navigational structure so that any user, no matter how he or she approaches the material, no matter what his or her goals, can move and search in a way that makes sense to them, but still get to the same useful results as someone else following a different path to the same goal. There is no one way that everyone will always use when looking for information. The more flexible the organization of the information, the more accommodating it is. Facets are more flexible for hypertext browsing than any enumerative or hierarchical system.
     This bibliography is not meant to be exhaustive, but unfortunately it is not as complete as I wanted. Some books and articles are not included, but they may be used in my future work. (These include two books and one article by B.C. Vickery: Faceted Classification Schemes (New Brunswick, NJ: Rutgers, 1966), Classification and Indexing in Science, 3rd ed. (London: Butterworths, 1975), and "Knowledge Representation: A Brief Review" (Journal of Documentation 42 no. 3 (September 1986): 145-159); and A.C. Foskett's "The Future of Faceted Classification" in The Future of Classification, edited by Rita Marcella and Arthur Maltby (Aldershot, England: Gower, 2000): 69-80.) Nevertheless, I hope this bibliography will be useful for those both new to and familiar with faceted hypertext systems. Some very basic resources are listed, as well as some very advanced ones. Some example web sites are mentioned, but there is no detailed technical discussion of any software. The user interface to any web site is extremely important, and this is briefly mentioned in two or three places (for example the discussion of lawforwa.org (see Example Web Sites)). The larger question of how to display information graphically and with hypertext is outside the scope of this bibliography. There are five sections: Recommended, Background, Not Relevant, Example Web Sites, and Mailing Lists. Background material is either introductory, advanced, or of peripheral interest, and can be read after the Recommended resources if the reader wants to know more. The Not Relevant category contains articles that may appear in bibliographies but are not relevant for my purposes.
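     The order-independence Denton describes (any browsing path reaches the same useful results) falls out of treating facet selections as set intersection, which is commutative. A minimal sketch, with invented documents and facet values:

     ```python
     # Hypothetical faceted collection: each document is described by
     # facet/value pairs (the facets and documents are illustrative only).
     docs = {
         "a": {"topic": "facets", "form": "article"},
         "b": {"topic": "facets", "form": "book"},
         "c": {"topic": "xml",    "form": "article"},
     }

     def narrow(selection):
         """Return the ids of documents matching every selected facet/value pair."""
         return {d for d, fv in docs.items()
                 if all(fv.get(f) == v for f, v in selection)}

     path1 = narrow([("topic", "facets"), ("form", "article")])
     path2 = narrow([("form", "article"), ("topic", "facets")])
     print(path1 == path2)  # the same result set, whichever facet is chosen first
     ```

     An enumerative hierarchy, by contrast, fixes one citation order in advance, so only one of these browsing paths exists.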
  9. Lin, W.-Y.C.: ¬The concept and applications of faceted classifications (2006) 0.00
    0.0030416928 = product of:
      0.021291848 = sum of:
        0.021291848 = product of:
          0.042583697 = sum of:
            0.042583697 = weight(_text_:22 in 5083) [ClassicSimilarity], result of:
              0.042583697 = score(doc=5083,freq=2.0), product of:
                0.13757938 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.039287858 = queryNorm
                0.30952093 = fieldWeight in 5083, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5083)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    27. 5.2007 22:19:35
  10. Olson, H.A.: Sameness and difference : a cultural foundation of classification (2001) 0.00
    0.002661481 = product of:
      0.018630367 = sum of:
        0.018630367 = product of:
          0.037260734 = sum of:
            0.037260734 = weight(_text_:22 in 166) [ClassicSimilarity], result of:
              0.037260734 = score(doc=166,freq=2.0), product of:
                0.13757938 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.039287858 = queryNorm
                0.2708308 = fieldWeight in 166, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=166)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    10. 9.2000 17:38:22
  11. Facets: a fruitful notion in many domains : special issue on facet analysis (2008) 0.00
    0.0023399591 = product of:
      0.016379714 = sum of:
        0.016379714 = product of:
          0.032759428 = sum of:
            0.032759428 = weight(_text_:bibliography in 3262) [ClassicSimilarity], result of:
              0.032759428 = score(doc=3262,freq=2.0), product of:
                0.21586132 = queryWeight, product of:
                  5.494352 = idf(docFreq=493, maxDocs=44218)
                  0.039287858 = queryNorm
                0.15176146 = fieldWeight in 3262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.494352 = idf(docFreq=493, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3262)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Footnote
     Rez. in: KO 36(2009) no.1, S.62-63 (K. La Barre): "This special issue of Axiomathes presents an ambitious dual agenda. It attempts to highlight aspects of facet analysis (as used in LIS) that are shared by cognate approaches in philosophy, psychology, linguistics and computer science. Secondarily, the issue aims to attract others to the study and use of facet analysis. The authors represent a blend of lifetime involvement with facet analysis, such as Vickery, Broughton, Beghtol, and Dahlberg; those with well-developed research agendas, such as Tudhope and Priss; and relative newcomers, such as Gnoli, Cheti and Paradisi, and Slavic. Omissions are inescapable, but a more balanced issue would have resulted from the inclusion of at least one researcher from the Indian school of facet theory. Another valuable addition might have been a reaction to the issue by one of the chief critics of facet analysis. Potentially useful, but absent, is a comprehensive bibliography of resources for those wishing to engage in further study, which now lie scattered throughout the issue. Several of the papers assume relative familiarity with facet analytical concepts and definitions, some of which are contested even within LIS. Gnoli's introduction (p. 127-130) traces the trajectory, extensions and new developments of this analytico-synthetic approach to subject access, while providing a laundry list of cognate approaches that are similar to facet analysis. This brief essay and the article by Priss (p. 243-255) directly address the first part of Gnoli's agenda. Priss provides a detailed discussion of facet-like structures in computer science (p. 245-246), and outlines the similarity between Formal Concept Analysis and facets. This comparison is equally fruitful for researchers in computer science and in library and information science. By bridging into a discussion of visualization challenges for facet display, further research is also invited.
 Many of the remaining papers comprehensively detail the intellectual heritage of facet analysis (Beghtol; Broughton, p. 195-198; Dahlberg; Tudhope and Binding, p. 213-215; Vickery). Beghtol's (p. 131-144) examination of the origins of facet theory through the lens of the textbooks written by Ranganathan's mentor W.C.B. Sayers (1881-1960), Manual of Classification (1926, 1944, 1955), and a textbook written by Mills, A Modern Outline of Classification (1964), serves to reveal the deep intellectual heritage of the changes in classification theory over time, as well as Ranganathan's own influence on and debt to Sayers.
  12. Slavic, A.: On the nature and typology of documentary classifications and their use in a networked environment (2007) 0.00
    0.0022812698 = product of:
      0.015968887 = sum of:
        0.015968887 = product of:
          0.031937774 = sum of:
            0.031937774 = weight(_text_:22 in 780) [ClassicSimilarity], result of:
              0.031937774 = score(doc=780,freq=2.0), product of:
                0.13757938 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.039287858 = queryNorm
                0.23214069 = fieldWeight in 780, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=780)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22.12.2007 17:22:31
  13. Beghtol, C.: Naïve classification systems and the global information society (2004) 0.00
    0.001901058 = product of:
      0.013307406 = sum of:
        0.013307406 = product of:
          0.026614811 = sum of:
            0.026614811 = weight(_text_:22 in 3483) [ClassicSimilarity], result of:
              0.026614811 = score(doc=3483,freq=2.0), product of:
                0.13757938 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.039287858 = queryNorm
                0.19345059 = fieldWeight in 3483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3483)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Pages
    S.19-22
  14. Wang, Z.; Chaudhry, A.S.; Khoo, C.S.G.: Using classification schemes and thesauri to build an organizational taxonomy for organizing content and aiding navigation (2008) 0.00
    0.0015208464 = product of:
      0.010645924 = sum of:
        0.010645924 = product of:
          0.021291848 = sum of:
            0.021291848 = weight(_text_:22 in 2346) [ClassicSimilarity], result of:
              0.021291848 = score(doc=2346,freq=2.0), product of:
                0.13757938 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.039287858 = queryNorm
                0.15476047 = fieldWeight in 2346, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2346)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    7.11.2008 15:22:04