Search (61 results, page 1 of 4)

  • theme_ss:"Klassifikationstheorie: Elemente / Struktur"
  1. Ellis, D.; Vasconcelos, A.: Ranganathan and the Net : using facet analysis to search and organise the World Wide Web (1999) 0.10
    Abstract
    This article gives a cheerfully brief and undetailed account of how to make a faceted classification system, then describes information retrieval and searching on the web. It concludes by saying that facets would be excellent in helping users search and browse the web, but offers no real clues as to how this can be done.
  2. Beghtol, C.: General classification systems : structural principles for multidisciplinary specification (1998) 0.09
    Abstract
    In this century, knowledge creation, production, dissemination and use have changed profoundly. Intellectual and physical barriers have been substantially reduced by the rise of multidisciplinarity and by the influence of computerization, particularly by the spread of the World Wide Web (WWW). Bibliographic classification systems need to respond to this situation. Three possible strategic responses are described: 1) adopting an existing system; 2) adapting an existing system; and 3) finding new structural principles for classification systems. Examples of these three responses are given. An extended example of the third option uses the knowledge outline in the Spectrum of Britannica Online to suggest a theory of "viewpoint warrant" that could be used to incorporate differing perspectives into general classification systems
  3. Denton, W.: Putting facets on the Web : an annotated bibliography (2003) 0.06
    Abstract
    This is a classified, annotated bibliography about how to design faceted classification systems and make them usable on the World Wide Web. It is the first of three works I will be doing. The second, based on the material here and elsewhere, will discuss how to actually make the faceted system and put it online. The third will be a report of how I did just that, what worked, what didn't, and what I learned. Almost every article or book listed here begins with an explanation of what a faceted classification system is, so I won't (but see Steckel in Background below if you don't already know). They all agree that faceted systems are very appropriate for the web. Even pre-web articles (such as Duncan's in Background, below) assert that hypertext and facets will go together well. Combined, it is possible to take a set of documents and classify them or apply subject headings to describe what they are about, then build a navigational structure so that any user, no matter how he or she approaches the material, no matter what his or her goals, can move and search in a way that makes sense to them, but still get to the same useful results as someone else following a different path to the same goal. There is no one way that everyone will always use when looking for information. The more flexible the organization of the information, the more accommodating it is. Facets are more flexible for hypertext browsing than any enumerative or hierarchical system.
    Consider movie listings in newspapers. Most Canadian newspapers list movie showtimes in two large blocks, for the two major theatre chains. The listings are ordered by region (in large cities), then theatre, then movie, and finally by showtime. Anyone wondering where and when a particular movie is playing must scan the complete listings. Determining what movies are playing in the next half hour is very difficult. When movie listings went onto the web, most sites used a simple faceted organization, always with movie name and theatre, and perhaps with region or neighbourhood (thankfully, theatre chains were left out). They make it easy to pick a theatre and see what movies are playing there, or to pick a movie and see what theatres are showing it. To complete the system, the sites should allow users to browse by neighbourhood and showtime, and to order the results in any way they desire. Thus could people easily find answers to such questions as, "Where is the new James Bond movie playing?" "What's showing at the Roxy tonight?" "I'm going to be out in Little Finland this afternoon with three hours to kill starting at 2 ... is anything interesting playing?" A hypertext, faceted classification system makes more useful information more easily available to the user. Reading the books and articles below in chronological order will show a certain progression: suggestions that faceting and hypertext might work well, confidence that facets would work well if only someone would make such a system, and finally the beginning of serious work on actually designing, building, and testing faceted web sites. There is a solid basis of how to make faceted classifications (see Vickery in Recommended), but their application online is just starting. Work on XFML, the Exchangeable Faceted Metadata Language (see Van Dijck's work in Recommended), will make this easier. If it follows previous patterns, parts of the Internet community will embrace the idea and make open source software available for others to reuse. It will be particularly beneficial if professionals in both information studies and computer science can work together to build working systems, standards, and code. Each can benefit from the other's expertise in what can be a very complicated and technical area. One particularly nice thing about this area of research is that people interested in combining facets and the web often have web sites where they post their writings.
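    [Editorial note: as an illustration of the faceted browsing described in the paragraph above, here is a minimal Python sketch; the listing data, the Showing fields and the browse helper are invented for this annotation and are not taken from the cited article.]

    from dataclasses import dataclass

    @dataclass
    class Showing:
        movie: str
        theatre: str
        neighbourhood: str
        showtime: str  # "HH:MM", kept as a plain string for the sketch

    # Invented sample listings.
    LISTINGS = [
        Showing("Die Another Day", "Roxy", "Little Finland", "14:30"),
        Showing("Die Another Day", "Paradise", "West End", "19:00"),
        Showing("Spirited Away", "Roxy", "Little Finland", "16:15"),
    ]

    def browse(**facets):
        """Return the showings matching every supplied facet value, in any combination."""
        return [s for s in LISTINGS
                if all(getattr(s, name) == value for name, value in facets.items())]

    print(browse(movie="Die Another Day"))                         # where is this movie playing?
    print(browse(theatre="Roxy", neighbourhood="Little Finland"))  # what's on at this theatre?

    Because every facet is independent, the same data answers a by-movie, by-theatre or by-neighbourhood question without any fixed citation order, which is the flexibility the annotation attributes to faceted hypertext.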
    This bibliography is not meant to be exhaustive, but unfortunately it is not as complete as I wanted. Some books and articles are not included, but they may be used in my future work. (These include two books and one article by B.C. Vickery: Faceted Classification Schemes (New Brunswick, NJ: Rutgers, 1966), Classification and Indexing in Science, 3rd ed. (London: Butterworths, 1975), and "Knowledge Representation: A Brief Review" (Journal of Documentation 42 no. 3 (September 1986): 145-159); and A.C. Foskett's "The Future of Faceted Classification" in The Future of Classification, edited by Rita Marcella and Arthur Maltby (Aldershot, England: Gower, 2000): 69-80). Nevertheless, I hope this bibliography will be useful both to those new to faceted hypertext systems and to those already familiar with them. Some very basic resources are listed, as well as some very advanced ones. Some example web sites are mentioned, but there is no detailed technical discussion of any software. The user interface to any web site is extremely important, and this is briefly mentioned in two or three places (for example the discussion of lawforwa.org (see Example Web Sites)). The larger question of how to display information graphically and with hypertext is outside the scope of this bibliography. There are five sections: Recommended, Background, Not Relevant, Example Web Sites, and Mailing Lists. Background material is either introductory, advanced, or of peripheral interest, and can be read after the Recommended resources if the reader wants to know more. The Not Relevant category contains articles that may appear in bibliographies but are not relevant for my purposes.
  4. Broughton, V.: Essential classification (2004) 0.04
    Footnote
    Rez. in: KO 32(2005) no.1, S.47-49 (M. Hudon): "Vanda Broughton's Essential Classification is the most recent addition to a very small set of classification textbooks published over the past few years. The book's 21 chapters are based very closely on the cataloguing and classification module at the School of Library, Archive, and Information Studies at University College, London. The author's main objective is clear: this is "first and foremost a book about how to classify. The emphasis throughout is on the activity of classification rather than the theory, the practical problems of the organization of collections, and the needs of the users" (p. 1). This is not a theoretical work, but a basic course in classification and classification scheme application. For this reviewer, who also teaches "Classification 101," this is also a fascinating peek into how a colleague organizes content and structures her course. "Classification is everywhere" (p. 1): the first sentence of this book is also one of the first statements in my own course, and Professor Broughton's metaphors - the supermarket, canned peas, flowers, etc. - are those that are used by our colleagues around the world. The combination of tone, writing style and content display is reader-friendly; it is in fact what makes this book remarkable and what distinguishes it from more "formal" textbooks, such as The Organization of Information, the superb text written and recently updated (2004) by Professor Arlene Taylor (2nd ed. Westport, Conn.: Libraries Unlimited, 2004). Reading Essential Classification, at times, feels like being in a classroom, facing a teacher who assures you that "you don't need to worry about this at this stage" (p. 104), and reassures you that, although you now spend a long time looking for things, "you will soon speed up when you get to know the scheme better" (p. 137). This teacher uses redundancy in a productive fashion, and she is not afraid to express her own opinions ("I think that if these concepts are helpful they may be used" (p. 245); "It's annoying that LCC doesn't provide clearer instructions, but if you keep your head and take them one step at a time [i.e. the tables] they're fairly straightforward" (p. 174)). Chapters 1 to 7 present the essential theoretical concepts relating to knowledge organization and to bibliographic classification. The author is adept at making and explaining distinctions: known-item retrieval versus subject retrieval, personal versus public/shared/official classification systems, scientific versus folk classification systems, object versus aspect classification systems, semantic versus syntactic relationships, and so on. Chapters 8 and 9 discuss the practice of classification, through content analysis and subject description. A short discussion of difficult subjects, namely the treatment of unique concepts (persons, places, etc.) as subjects, seems a little advanced for a beginners' class.
    In Chapter 10, "Controlled indexing languages," Professor Broughton states that a classification scheme is truly a language "since it permits communication and the exchange of information" (p. 89), a statement with which this reviewer wholly agrees. Chapter 11, however, "Word-based approaches to retrieval," moves us to a different field altogether, offering only a narrow view of the whole world of controlled indexing languages such as thesauri, and presenting disconnected discussions of alphabetical filing, form and structure of subject headings, modern developments in alphabetical subject indexing, etc. Chapters 12 and 13 focus on the Library of Congress Subject Headings (LCSH), without even a passing reference to existing subject headings lists in other languages (French RAMEAU, German SWK, etc.). While it is not surprising to see a section on subject headings in a book on classification, the two subjects being taught together in most library schools, the location of this section in the middle of this particular book is more difficult to understand. Chapter 14 brings the reader back to classification, for a discussion of essentials of classification scheme application. The following five chapters present in turn each one of the three major and currently used bibliographic classification schemes, in order of increasing complexity and difficulty of application. The Library of Congress Classification (LCC), the easiest to use, is covered in chapters 15 and 16. The Dewey Decimal Classification (DDC) deserves only a one-chapter treatment (Chapter 17), while the functionalities of the Universal Decimal Classification (UDC), which Professor Broughton knows extremely well, are described in chapters 18 and 19. Chapter 20 is a general discussion of faceted classification, on par with the first seven chapters for its theoretical content. Chapter 21, an interesting last chapter on managing classification, addresses down-to-earth matters such as the cost of classification, the need for re-classification, advantages and disadvantages of using print versions or e-versions of classification schemes, choice of classification scheme, general versus special scheme. But although the questions are interesting, the chapter provides only a very general overview of what appropriate answers might be. To facilitate reading and learning, summaries are strategically located at various places in the text, and always before switching to a related subject. Professor Broughton's choice of examples is always interesting, and sometimes even entertaining (see for example "Inside out: A brief history of underwear" (p. 71)). With many examples, however, and particularly those that appear in the five chapters on classification scheme applications, the novice reader would have benefited from more detailed explanations. On page 221, for example, "The history and social influence of the potato" results in this analysis of concepts: Potato - Sociology, and in the UDC class number: 635.21:316. What happened to the "history" aspect? Some examples are not very convincing: in Animals RT Reproduction and Art RT Reproduction (p. 102), the associative relationship is not appropriate as it is used to distinguish homographs and would do nothing to help either the indexer or the user at the retrieval stage.
    Essential Classification is also an exercise book. Indeed, it contains a number of practical exercises and activities in every chapter, along with suggested answers. Unfortunately, the answers are too often provided without the justifications and explanations that students would no doubt demand. The author has taken great care to explain all technical terms in her text, but formal definitions are also gathered in an extensive 172-term Glossary; appropriately, these terms appear in bold type the first time they are used in the text. A short, very short, annotated bibliography of standard classification textbooks and of manuals for the use of major classification schemes is provided. A detailed 11-page index completes the set of learning aids which will be useful to an audience of students in their effort to grasp the basic concepts of the theory and the practice of document classification in a traditional environment. Essential Classification is a fine textbook. However, this reviewer deplores the fact that it presents only a very "traditional" view of classification, without much reference to newer environments such as the Internet, where classification also manifests itself in various forms. In Essential Classification, books are always used as examples, and we have to take the author's word that traditional classification practices and tools can also be applied to other types of documents and elsewhere than in the traditional library. Vanda Broughton writes, for example, that "Subject headings can't be used for physical arrangement" (p. 101), but this is not entirely true. Subject headings can be used for physical arrangement of vertical files, for example, with each folder bearing a simple or complex heading which is then used for internal organization. And if it is true that subject headings cannot be reproduced on the spine of [physical] books (p. 93), the situation is certainly different on the World Wide Web, where subject headings as metadata can be most useful in ordering a collection of hot links. The emphasis is also on the traditional paper-based, rather than on the electronic, version of classification schemes, with excellent justifications of course. The reality is, however, that supporting organizations (LC, OCLC, etc.) are now providing great quality services online, and that updates are now available only in an electronic format and no longer on paper. E-based versions of classification schemes could be safely ignored in a theoretical text, but they have to be described and explained in a textbook published in 2005. One last comment: Professor Broughton tends to use the same term, "classification," to represent the process (as in classification is grouping) and the tool (as in constructing a classification, using a classification, etc.). Even in the Glossary, where classification is first well defined as a process and classification scheme as "a set of classes ...", the definition of classification scheme continues: "the classification consists of a vocabulary (...) and syntax..." (p. 296-297). Such an ambiguous use of the term classification seems unfortunate and unnecessarily confusing in an otherwise very good basic textbook on categorization of concepts and subjects, document organization and subject representation."
  5. Gnoli, C.: Metadata about what? : distinguishing between ontic, epistemic, and documental dimensions in knowledge organization (2012) 0.03
    Abstract
    The spread of many new media and formats is changing the scenario faced by knowledge organizers: as printed monographs are no longer the only standard form of knowledge carrier, the traditional kind of knowledge organization (KO) systems based on academic disciplines is put into question. A sounder foundation can be provided by an analysis of the different dimensions concurring to form the content of any knowledge item - what Brian Vickery described as the steps "from the world to the classifier." The ultimate referents of documents are the phenomena of the real world, which can be ordered by ontology, the study of what exists. Phenomena coexist in subjects with the perspectives by which they are considered, pertaining to epistemology, and with the formal features of knowledge carriers, adding a further, pragmatic layer. All these dimensions can be accounted for in metadata, but this is often done in mixed ways, making indexes less rigorous and interoperable. For example, while facet analysis was originally developed for subject indexing, many "faceted" interfaces today mix subject facets with form facets, and schemes presented as "ontologies" for the "semantic Web" also code for non-semantic information. In bibliographic classifications, phenomena are often confused with the disciplines dealing with them, the latter being assumed to be the most useful starting point because users will have one perspective or another. A general citation order of dimensions - phenomena, perspective, carrier - is recommended, helping to concentrate the most relevant information at the beginning of headings.
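    [Editorial note: a minimal sketch of the recommended citation order (phenomena first, then perspective, then carrier); the class, field names and example values are illustrative assumptions, not Gnoli's own notation.]

    from dataclasses import dataclass

    @dataclass
    class SubjectHeading:
        phenomena: list       # ontic dimension: the real-world things the document is about
        perspective: str      # epistemic dimension: the viewpoint or discipline applied
        carrier: str          # documental dimension: the form of the knowledge item

        def heading(self) -> str:
            # Phenomena are cited first so the most relevant information leads the heading.
            return "; ".join(self.phenomena) + " - " + self.perspective + " - " + self.carrier

    print(SubjectHeading(["rivers", "pollution"], "chemistry", "conference paper").heading())
    # rivers; pollution - chemistry - conference paper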
  6. Keshet, Y.: Classification systems in the light of sociology of knowledge (2011) 0.03
    Abstract
    Purpose - Classification is an important process in making sense of the world, and has a pronounced social dimension. This paper aims to compare folksonomy, a new social classification system currently being developed on the web, with conventional taxonomy in the light of theoretical sociological and anthropological approaches. The co-existence of these two types of classification system raises the questions: Will and should taxonomies be hybridized with folksonomies? What can each of these systems contribute to information-searching processes, and how can the sociology of knowledge provide an answer to these questions? The paper also aims to address these issues. Design/methodology/approach - This paper is situated at the meeting point of the sociology of knowledge, epistemology and information science, and aims at examining systems of classification in the light of both classical theory and current late-modern sociological and anthropological approaches. Findings - Using theoretical approaches current in the sociology of science and knowledge, the paper envisages two divergent possible outcomes. Originality/value - While concentrating on classification systems, this paper addresses the more general social issue of what we know and how it is known. The concept of hybrid knowledge is suggested in order to illuminate the epistemological basis of late-modern knowledge, which is constructed by hybridizing contradictory modern knowledge categories such as the subjective with the objective and the social with the natural. Integrating tree-like taxonomies with folksonomies - or, in other words, generating a naturalized structural order of objective relations with social, subjective classification systems - can create a vast range of hybrid knowledge.
  7. Belayche, C.: ¬A propos de la classification de Dewey (1997) 0.02
    Abstract
    All classifications are based on ideologies, and Dewey is marked by its author's origins in 19th-century North America. Subsequent revisions indicate changed ways of understanding the world. Section 157 (psychopathology) is now included with 616.89 (mental disorders), reflecting the move to a genetic-based approach. Table 5 (racial, ethnic and national groups) is, however, unchanged, despite changing views on such categorisation.
    Source
    Bulletin d'informations de l'Association des Bibliothecaires Francais. 1997, no.175, S.22-23
  8. Fairthorne, R.A.: Temporal structure in bibliographic classification (1985) 0.02
    Abstract
    This paper, presented at the Ottawa Conference on the Conceptual Basis of the Classification of Knowledge in 1971, is one of Fairthorne's more perceptive works and deserves a wide audience, especially as it breaks new ground in classification theory. In discussing the notion of discourse, he makes a "distinction between what discourse mentions and what discourse is about" [emphasis added], considered as a "fundamental factor to the relativistic nature of bibliographic classification" (p. 360). A table of mathematical functions, for example, describes exactly something represented by a collection of digits, but, without a preface, this table does not fit into a broader context. Some indication of the author's intent is needed to fit the table into a broader context. This intent may appear in a title, chapter heading, class number or some other aid. Discourse on and discourse about something "cannot be determined solely from what it mentions" (p. 361). Some kind of background is needed. Fairthorne further develops the theme that knowledge about a subject comes from previous knowledge, thus adding a temporal factor to classification. "Some extra textual criteria are needed" in order to classify (p. 362). For example, "documents that mention the same things, but are on different topics, will have different ancestors, in the sense of preceding documents to which they are linked by various bibliographic characteristics ... [and] ... they will have different descendants" (p. 363). The classifier has to distinguish between documents that "mention exactly the same thing" but are not about the same thing. The classifier does this by classifying "sets of documents that form their histories, their bibliographic world lines" (p. 363). The practice of citation is one method of performing the linking and presents a "fan" of documents connected by a chain of citations to past work. The fan is seen as the effect of generations of documents - each generation connected to the previous one, and all ancestral to the present document. Thus, there are levels in temporal structure - that is, antecedent and successor documents - and these require that documents be identified in relation to other documents. This gives a set of documents an "irrevocable order," a loose order which Fairthorne calls "bibliographic time," and which is "generated by the fact of continual growth" (p. 364). He does not consider "bibliographic time" to be equivalent to physical time because bibliographic events, as part of communication, require delay. Sets of documents, as indicated above, rather than single works, are used in classification. While an event, a person, or a unique feature of the environment may create a class of one - such as the French Revolution, Napoleon, Niagara Falls - revolutions, emperors, and waterfalls are sets which, as sets, will subsume individuals and make normal classes.
  9. Mai, J.E.: Classification of the Web : challenges and inquiries (2004) 0.02
    Abstract
    This paper discusses the challenges faced by investigations into the classification of the Web and outlines inquiries that are needed to use principles for bibliographic classification to construct classifications of the Web. This paper suggests that the classification of the Web meets challenges that call for inquiries into the theoretical foundation of bibliographic classification theory.
  10. Lin, W.-Y.C.: ¬The concept and applications of faceted classifications (2006) 0.02
    Abstract
    The concept of faceted classification has a long history and importance in human civilization. Recently, more and more consumer Web sites have adopted the idea of facet analysis to organize and display their products or services. The aim of this article is to review the origin and development of faceted classification, as well as its concepts, essence, advantages and limitations. Further, the applications of faceted classification in various domains are explored.
    Date
    27. 5.2007 22:19:35
  11. Fripp, D.: Using linked data to classify web documents (2010) 0.02
    Abstract
    Purpose - The purpose of this paper is to find a relationship between traditional faceted classification schemes and semantic web document annotators, particularly in the linked data environment. Design/methodology/approach - A consideration of the conceptual ideas behind faceted classification and linked data architecture is made. Analysis of selected web documents is performed using Calais' Semantic Proxy to support the considerations. Findings - Technical language aside, the principles of both approaches are very similar. Modern classification techniques have the potential to automatically generate metadata to drive more precise information recall by including a semantic layer. Originality/value - Linked data have not been explicitly considered in this context before in the published literature.
    Theme
    Semantic Web
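    [Editorial note: a minimal sketch of the kind of linked-data annotation the abstract above describes, assuming the rdflib Python library; the URIs and the facet vocabulary are hypothetical and do not reproduce Calais' actual output.]

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DCTERMS

    FACET = Namespace("http://example.org/facet#")  # hypothetical facet vocabulary
    g = Graph()
    doc = URIRef("http://example.org/docs/example-web-document")

    # Each triple attaches one facet of the document's subject as machine-readable, linkable metadata.
    g.add((doc, DCTERMS.title, Literal("An example web document")))
    g.add((doc, FACET.topic, URIRef("http://example.org/concepts/faceted-classification")))
    g.add((doc, FACET.method, URIRef("http://example.org/concepts/automatic-annotation")))
    g.add((doc, FACET.carrier, URIRef("http://example.org/concepts/web-page")))

    print(g.serialize(format="turtle"))

    Because the facet values are URIs rather than free text, they can link out to shared concept schemes, which is the semantic layer the abstract says could drive more precise recall.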
  12. Pocock, H.: Classification schemes : development and survival (1997) 0.02
    Abstract
    Discusses the development of classification schemes and their ability to adapt to and accommodate changes in the information world in order to survive. Examines the revision plans for the major classification schemes and the future use of classification search facilities for OPACs.
  13. Garcia Marco, F.J.: Contexto y determinantes funcionales de la clasificacion documental (1996) 0.02
    Abstract
    Considers classification in the context of the information retrieval chain, a communication process. Defines classification as a heuristic methodology, which is being improved through scientific methodology. It is also an indexing process, placing each document in a systematic order, in a predictable place, so that it can be efficiently retrieved. Classification appears to be determined by four factors: the structure of the world of documents, a function of the world of knowledge; the classification tools that allow us to codify them; the way in which people create and use classifications; and the features of the information unit.
  14. Hjoerland, B.: Theories of knowledge organization - theories of knowledge (2017) 0.02
    Pages
    S.22-36
    Source
    Theorie, Semantik und Organisation von Wissen: Proceedings der 13. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und dem 13. Internationalen Symposium der Informationswissenschaft der Higher Education Association for Information Science (HI) Potsdam (19.-20.03.2013): 'Theory, Information and Organization of Knowledge' / Proceedings der 14. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und Natural Language & Information Systems (NLDB) Passau (16.06.2015): 'Lexical Resources for Knowledge Organization' / Proceedings des Workshops der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) auf der SEMANTICS Leipzig (1.09.2014): 'Knowledge Organization and Semantic Web' / Proceedings des Workshops der Polnischen und Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) Cottbus (29.-30.09.2011): 'Economics of Knowledge Production and Organization'. Hrsg. von W. Babik, H.P. Ohly u. K. Weber
  15. Qin, J.: Evolving paradigms of knowledge representation and organization : a comparative study of classification, XML/DTD and ontology (2003) 0.02
    0.015525395 = product of:
      0.072451845 = sum of:
        0.031590596 = weight(_text_:world in 2763) [ClassicSimilarity], result of:
          0.031590596 = score(doc=2763,freq=4.0), product of:
            0.13150178 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03421255 = queryNorm
            0.24022943 = fieldWeight in 2763, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03125 = fieldNorm(doc=2763)
        0.031590596 = weight(_text_:world in 2763) [ClassicSimilarity], result of:
          0.031590596 = score(doc=2763,freq=4.0), product of:
            0.13150178 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03421255 = queryNorm
            0.24022943 = fieldWeight in 2763, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03125 = fieldNorm(doc=2763)
        0.009270656 = product of:
          0.018541312 = sum of:
            0.018541312 = weight(_text_:22 in 2763) [ClassicSimilarity], result of:
              0.018541312 = score(doc=2763,freq=2.0), product of:
                0.11980651 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03421255 = queryNorm
                0.15476047 = fieldWeight in 2763, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2763)
          0.5 = coord(1/2)
      0.21428572 = coord(3/14)
    
    Abstract
    The different points of view on knowledge representation and organization from various research communities reflect underlying philosophies and paradigms in these communities. This paper reviews differences and relations in knowledge representation and organization and generalizes four paradigms: integrative and disintegrative pragmatism, and integrative and disintegrative epistemologism. Examples such as classification, XML schemas, and ontologies are compared based on how they specify concepts, build data models, and encode knowledge organization structures. 1. Introduction Knowledge representation (KR) is a term that several research communities use to refer to somewhat different aspects of the same research area. The artificial intelligence (AI) community considers KR as simply "something to do with writing down, in some language or communications medium, descriptions or pictures that correspond in some salient way to the world or a state of the world" (Duce & Ringland, 1988, p. 3). It emphasizes the ways in which knowledge can be encoded in a computer program (Bench-Capon, 1990). For the library and information science (LIS) community, KR is literally the synonym of knowledge organization, i.e., KR is referred to as the process of organizing knowledge into classifications, thesauri, or subject heading lists. KR has another meaning in LIS: it "encompasses every type and method of indexing, abstracting, cataloguing, classification, records management, bibliography and the creation of textual or bibliographic databases for information retrieval" (Anderson, 1996, p. 336). Adding the social dimension to knowledge organization, Hjoerland (1997) states that knowledge is a part of human activities and tied to the division of labor in society, which should be the primary organization of knowledge. Knowledge organization in LIS is secondary or derived, because knowledge is organized in learned institutions and publications. These different points of view on KR suggest that an essential difference in the understanding of KR between AI and LIS lies in the source of representation: whether KR targets human activities or derivatives (knowledge produced) from human activities. This difference also determines their difference in purpose: in AI, KR is mainly computer-application oriented or pragmatic, and the result of representation is used to support decisions on human activities, while in LIS, KR is conceptually oriented or abstract, and the result of representation is used for access to derivatives from human activities.
    Date
    12. 9.2004 17:22:35
  16. Zhonghong, W.; Chaudhry, A.S.; Khoo, C.: Potential and prospects of taxonomies for content organization (2006) 0.01
    0.01484146 = product of:
      0.10389021 = sum of:
        0.051945105 = weight(_text_:wide in 169) [ClassicSimilarity], result of:
          0.051945105 = score(doc=169,freq=2.0), product of:
            0.15158753 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03421255 = queryNorm
            0.342674 = fieldWeight in 169, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=169)
        0.051945105 = weight(_text_:wide in 169) [ClassicSimilarity], result of:
          0.051945105 = score(doc=169,freq=2.0), product of:
            0.15158753 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03421255 = queryNorm
            0.342674 = fieldWeight in 169, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=169)
      0.14285715 = coord(2/14)
    
    Abstract
    While taxonomies are being increasingly discussed in the published and grey literature, the term taxonomy still seems to be used quite loosely and obscurely. This paper aims to explain and clarify the concept of taxonomy in the context of information organization. To this end, the salient features of taxonomies are identified and their scope, nature, and role are further elaborated based on an extensive literature review. At the same time, the connections and distinctions between taxonomies, classification schemes, and thesauri are identified, and the rationale for choosing taxonomies as a viable knowledge organization system for organization-wide websites, supporting browsing and aiding navigation, is clarified.
  17. Molholt, P.: Qualities of classification schemes for the Information Superhighway (1995) 0.01
    0.014449958 = product of:
      0.06743313 = sum of:
        0.027922407 = weight(_text_:world in 5562) [ClassicSimilarity], result of:
          0.027922407 = score(doc=5562,freq=2.0), product of:
            0.13150178 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03421255 = queryNorm
            0.21233483 = fieldWeight in 5562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5562)
        0.027922407 = weight(_text_:world in 5562) [ClassicSimilarity], result of:
          0.027922407 = score(doc=5562,freq=2.0), product of:
            0.13150178 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03421255 = queryNorm
            0.21233483 = fieldWeight in 5562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5562)
        0.01158832 = product of:
          0.02317664 = sum of:
            0.02317664 = weight(_text_:22 in 5562) [ClassicSimilarity], result of:
              0.02317664 = score(doc=5562,freq=2.0), product of:
                0.11980651 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03421255 = queryNorm
                0.19345059 = fieldWeight in 5562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5562)
          0.5 = coord(1/2)
      0.21428572 = coord(3/14)
    
    Abstract
    For my segment of this program I'd like to focus on some basic qualities of classification schemes. These qualities are critical to our ability to truly organize knowledge for access. As I see it, there are at least five qualities of note. The first of these properties that I want to talk about is "authoritative." By this I mean standardized, but more than standardized: standardized with a built-in consensus-building process. A classification scheme constructed by a collaborative, consensus-building process carries the approval, and the authority, of the discipline groups that contribute to it and that it affects... The next property of classification systems is "expandable": living, responsive, with a clear locus of responsibility for its continuous upkeep. The worst thing you can do with a thesaurus, or a classification scheme, is to finish it. You can't ever finish it because it reflects ongoing intellectual activity... The third property is "intuitive." That is, the system has to be approachable; it has to be transparent, or at least capable of being transparent. It has to have an underlying logic that supports the classification scheme but doesn't dominate it... The fourth property is "organized and logical." I advocate very strongly, and agree with Lois Chan, that classification must be based on a rule-based structure, on somebody's world-view of the syndetic structure... The fifth property is "universal," by which I mean the classification scheme needs to be usable by any specific system or application, and be available as a language for multiple purposes.
    Source
    Cataloging and classification quarterly. 21(1995) no.2, S.19-22
  18. Dousa, T.M.: Categories and the architectonics of system in Julius Otto Kaiser's method of systematic indexing (2014) 0.01
    0.014449958 = product of:
      0.06743313 = sum of:
        0.027922407 = weight(_text_:world in 1418) [ClassicSimilarity], result of:
          0.027922407 = score(doc=1418,freq=2.0), product of:
            0.13150178 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03421255 = queryNorm
            0.21233483 = fieldWeight in 1418, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1418)
        0.027922407 = weight(_text_:world in 1418) [ClassicSimilarity], result of:
          0.027922407 = score(doc=1418,freq=2.0), product of:
            0.13150178 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03421255 = queryNorm
            0.21233483 = fieldWeight in 1418, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1418)
        0.01158832 = product of:
          0.02317664 = sum of:
            0.02317664 = weight(_text_:22 in 1418) [ClassicSimilarity], result of:
              0.02317664 = score(doc=1418,freq=2.0), product of:
                0.11980651 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03421255 = queryNorm
                0.19345059 = fieldWeight in 1418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1418)
          0.5 = coord(1/2)
      0.21428572 = coord(3/14)
    
    Abstract
    Categories, or concepts of high generality representing the most basic kinds of entities in the world, have long been understood to be a fundamental element in the construction of knowledge organization systems (KOSs), particularly faceted ones. Commentators on facet analysis have tended to foreground the role of categories in the structuring of controlled vocabularies and the construction of compound index terms, and the implications of this for subject representation and information retrieval. Less attention has been paid to the variety of ways in which categories can shape the overall architectonic framework of a KOS. This case study explores the range of functions that categories took in structuring various aspects of an early analytico-synthetic KOS, Julius Otto Kaiser's method of Systematic Indexing (SI). Within SI, categories not only functioned as mechanisms to partition an index vocabulary into smaller groupings of terms and as elements in the construction of compound index terms, but also served as means of defining the units of indexing, or index items, incorporated into an index; determining the organization of card index files and the articulation of the guide card system serving as a navigational aid thereto; and setting structural constraints on the establishment of cross-references between terms. In all these ways, Kaiser's system of categories contributed to the general systematicity of SI.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  19. Bosch, M.: Ontologies, different reasoning strategies, different logics, different kinds of knowledge representation : working together (2006) 0.01
    0.013946047 = product of:
      0.09762233 = sum of:
        0.048811164 = weight(_text_:web in 166) [ClassicSimilarity], result of:
          0.048811164 = score(doc=166,freq=6.0), product of:
            0.11165301 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03421255 = queryNorm
            0.43716836 = fieldWeight in 166, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=166)
        0.048811164 = weight(_text_:web in 166) [ClassicSimilarity], result of:
          0.048811164 = score(doc=166,freq=6.0), product of:
            0.11165301 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03421255 = queryNorm
            0.43716836 = fieldWeight in 166, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=166)
      0.14285715 = coord(2/14)
    
    Abstract
    Recent experience in the building, maintenance and reuse of ontologies has shown that the most efficient approach is a collaborative one. However, communication between collaborators such as IT professionals, librarians, web designers and subject matter experts is difficult and time-consuming. This is because different reasoning strategies, different logics and different kinds of knowledge representation are involved in Semantic Web applications. This article is intended as a reference scheme. It uses concise and simple explanations that can be shared by specialists from different backgrounds working together on a Semantic Web application.
  20. Jacob, E.K.: ¬The everyday world of work : two approaches to the investigation of classification in context (2001) 0.01
    0.013818008 = product of:
      0.09672605 = sum of:
        0.048363026 = weight(_text_:world in 4494) [ClassicSimilarity], result of:
          0.048363026 = score(doc=4494,freq=6.0), product of:
            0.13150178 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03421255 = queryNorm
            0.3677747 = fieldWeight in 4494, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4494)
        0.048363026 = weight(_text_:world in 4494) [ClassicSimilarity], result of:
          0.048363026 = score(doc=4494,freq=6.0), product of:
            0.13150178 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03421255 = queryNorm
            0.3677747 = fieldWeight in 4494, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4494)
      0.14285715 = coord(2/14)
    
    Abstract
    One major aspect of T.D. Wilson's research has been his insistence on situating the investigation of information behaviour within the context of its occurrence - within the everyday world of work. The significance of this approach is reviewed in light of the notion of embodied cognition that characterises the evolving theoretical episteme in cognitive science research. Embodied cognition employs complex external props such as stigmergic structures and cognitive scaffoldings to reduce the cognitive burden on the individual and to augment human problem-solving activities. The cognitive function of the classification scheme is described as exemplifying both stigmergic structures and cognitive scaffoldings. Two different but complementary approaches to the investigation of situated cognition are presented: cognition-as-scaffolding and cognition-as-infrastructure. Classification-as-scaffolding views the classification scheme as a knowledge storage device supporting and promoting cognitive economy. Classification-as-infrastructure views the classification system as a social convention that, when integrated with technological structures and organisational practices, supports knowledge management work. Both approaches are shown to build upon and extend Wilson's contention that research is most productive when it attends to the social and organisational contexts of cognitive activity by focusing on the everyday world of work.

Languages

  • e 54
  • f 3
  • chi 1
  • d 1
  • i 1
  • sp 1

Types

  • a 55
  • el 3
  • m 3
  • s 2