Search (73 results, page 2 of 4)

  • language_ss:"e"
  • theme_ss:"Grundlagen u. Einführungen: Allgemeine Literatur"
  • type_ss:"m"
  1. Broughton, V.: Essential classification (2004) 0.00
    0.0018783208 = product of:
      0.0037566416 = sum of:
        0.0037566416 = product of:
          0.0075132833 = sum of:
            0.0075132833 = weight(_text_:a in 2824) [ClassicSimilarity], result of:
              0.0075132833 = score(doc=2824,freq=92.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.17280684 = fieldWeight in 2824, product of:
                  9.591663 = tf(freq=92.0), with freq of:
                    92.0 = termFreq=92.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2824)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
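The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) debug output. As a rough sketch (assuming Lucene's classic formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq+1)), with queryNorm simply copied from the output above), the top-ranked score can be reproduced in a few lines of Python:

```python
import math

# Values taken from the explain output for doc 2824 (Broughton, Essential classification)
freq, doc_freq, max_docs = 92.0, 37942, 44218
field_norm = 0.015625     # 1/64; encodes field length
query_norm = 0.037706986  # copied from the output; depends on the whole query

tf = math.sqrt(freq)                           # 9.591663
idf = 1 + math.log(max_docs / (doc_freq + 1))  # 1.153047
query_weight = idf * query_norm                # 0.043477926
field_weight = tf * idf * field_norm           # 0.17280684
score = query_weight * field_weight            # 0.0075132833
final = score * 0.5 * 0.5                      # the two coord(1/2) factors
print(round(final, 10))                        # ~0.0018783208
```

The tiny magnitudes come from the very high document frequency of "a" (37942 of 44218 documents), which drives idf close to its minimum of 1.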
    
    Abstract
    Classification is a crucial skill for all information workers involved in organizing collections, but it is a difficult concept to grasp - and is even more difficult to put into practice. Essential Classification offers full guidance on how to go about classifying a document from scratch. This much-needed text leads the novice classifier step by step through the basics of subject cataloguing, with an emphasis on practical document analysis and classification. It deals with fundamental questions of the purpose of classification in different situations, and the needs and expectations of end users. The novice is introduced to the ways in which document content can be assessed, and how this can best be expressed for translation into the language of specific indexing and classification systems. The characteristics of the major general schemes of classification are discussed, together with their suitability for different classification needs.
    Footnote
    Rez. in: KO 32(2005) no.1, S.47-49 (M. Hudon): "Vanda Broughton's Essential Classification is the most recent addition to a very small set of classification textbooks published over the past few years. The book's 21 chapters are based very closely on the cataloguing and classification module at the School of Library, Archive, and Information Studies at University College, London. The author's main objective is clear: this is "first and foremost a book about how to classify. The emphasis throughout is on the activity of classification rather than the theory, the practical problems of the organization of collections, and the needs of the users" (p. 1). This is not a theoretical work, but a basic course in classification and classification scheme application. For this reviewer, who also teaches "Classification 101," this is also a fascinating peek into how a colleague organizes content and structures her course. "Classification is everywhere" (p. 1): the first sentence of this book is also one of the first statements in my own course, and Professor Broughton's metaphors - the supermarket, canned peas, flowers, etc. - are those that are used by our colleagues around the world. The combination of tone, writing style and content display is reader-friendly; it is in fact what makes this book remarkable and what distinguishes it from more "formal" textbooks, such as The Organization of Information, the superb text written and recently updated by Professor Arlene Taylor (2nd ed. Westport, Conn.: Libraries Unlimited, 2004). Reading Essential Classification, at times, feels like being in a classroom, facing a teacher who assures you that "you don't need to worry about this at this stage" (p. 104), and reassures you that, although you now spend a long time looking for things, "you will soon speed up when you get to know the scheme better" (p. 137).
This teacher uses redundancy in a productive fashion, and she is not afraid to express her own opinions ("I think that if these concepts are helpful they may be used" (p. 245); "It's annoying that LCC doesn't provide clearer instructions, but if you keep your head and take them one step at a time [i.e. the tables] they're fairly straightforward" (p. 174)). Chapters 1 to 7 present the essential theoretical concepts relating to knowledge organization and to bibliographic classification. The author is adept at making and explaining distinctions: known-item retrieval versus subject retrieval, personal versus public/shared/official classification systems, scientific versus folk classification systems, object versus aspect classification systems, semantic versus syntactic relationships, and so on. Chapters 8 and 9 discuss the practice of classification, through content analysis and subject description. A short discussion of difficult subjects, namely the treatment of unique concepts (persons, places, etc.) as subjects, seems a little advanced for a beginners' class.
    In Chapter 10, "Controlled indexing languages," Professor Broughton states that a classification scheme is truly a language "since it permits communication and the exchange of information" (p. 89), a statement with which this reviewer wholly agrees. Chapter 11, however, "Word-based approaches to retrieval," moves us to a different field altogether, offering only a narrow view of the whole world of controlled indexing languages such as thesauri, and presenting disconnected discussions of alphabetical filing, form and structure of subject headings, modern developments in alphabetical subject indexing, etc. Chapters 12 and 13 focus on the Library of Congress Subject Headings (LCSH), without even a passing reference to existing subject headings lists in other languages (French RAMEAU, German SWD, etc.). While it is not surprising to see a section on subject headings in a book on classification, the two subjects being taught together in most library schools, the location of this section in the middle of this particular book is more difficult to understand. Chapter 14 brings the reader back to classification, for a discussion of the essentials of classification scheme application. The following five chapters present in turn each of the three major bibliographic classification schemes currently in use, in order of increasing complexity and difficulty of application. The Library of Congress Classification (LCC), the easiest to use, is covered in chapters 15 and 16. The Dewey Decimal Classification (DDC) receives only a one-chapter treatment (Chapter 17), while the functionalities of the Universal Decimal Classification (UDC), which Professor Broughton knows extremely well, are described in chapters 18 and 19. Chapter 20 is a general discussion of faceted classification, on par with the first seven chapters for its theoretical content.
Chapter 21, an interesting last chapter on managing classification, addresses down-to-earth matters such as the cost of classification, the need for re-classification, advantages and disadvantages of using print versions or e-versions of classification schemes, choice of classification scheme, and general versus special schemes. But although the questions are interesting, the chapter provides only a very general overview of what appropriate answers might be. To facilitate reading and learning, summaries are strategically located at various places in the text, and always before switching to a related subject. Professor Broughton's choice of examples is always interesting, and sometimes even entertaining (see for example "Inside out: A brief history of underwear" (p. 71)). With many examples, however, and particularly those that appear in the five chapters on classification scheme applications, the novice reader would have benefited from more detailed explanations. On page 221, for example, "The history and social influence of the potato" results in this analysis of concepts: Potato - Sociology, and in the UDC class number: 635.21:316. What happened to the "history" aspect? Some examples are not very convincing: in Animals RT Reproduction and Art RT Reproduction (p. 102), the associative relationship is not appropriate, as it is used to distinguish homographs and would do nothing to help either the indexer or the user at the retrieval stage.
    Essential Classification is also an exercise book. Indeed, it contains a number of practical exercises and activities in every chapter, along with suggested answers. Unfortunately, the answers are too often provided without the justifications and explanations that students would no doubt demand. The author has taken great care to explain all technical terms in her text, but formal definitions are also gathered in an extensive 172-term Glossary; appropriately, these terms appear in bold type the first time they are used in the text. A short, very short, annotated bibliography of standard classification textbooks and of manuals for the use of major classification schemes is provided. A detailed 11-page index completes the set of learning aids, which will be useful to an audience of students in their effort to grasp the basic concepts of the theory and the practice of document classification in a traditional environment. Essential Classification is a fine textbook. However, this reviewer deplores the fact that it presents only a very "traditional" view of classification, without much reference to newer environments such as the Internet, where classification also manifests itself in various forms. In Essential Classification, books are always used as examples, and we have to take the author's word that traditional classification practices and tools can also be applied to other types of documents and elsewhere than in the traditional library. Vanda Broughton writes, for example, that "Subject headings can't be used for physical arrangement" (p. 101), but this is not entirely true. Subject headings can be used for physical arrangement of vertical files, for example, with each folder bearing a simple or complex heading which is then used for internal organization. And if it is true that subject headings cannot be reproduced on the spine of [physical] books (p. 93), the situation is certainly different on the World Wide Web, where subject headings as metadata can be most useful in ordering a collection of hot links.
The emphasis is also on the traditional paper-based, rather than on the electronic, version of classification schemes, with excellent justifications of course. The reality is, however, that supporting organizations (LC, OCLC, etc.) are now providing high-quality services online, and that updates are now available only in electronic format and no longer on paper. E-based versions of classification schemes could be safely ignored in a theoretical text, but they have to be described and explained in a textbook published in 2005. One last comment: Professor Broughton tends to use the same term, "classification," to represent both the process (as in classification is grouping) and the tool (as in constructing a classification, using a classification, etc.). Even in the Glossary, where classification is first well-defined as a process, and classification scheme as "a set of classes ...", the definition of classification scheme continues: "the classification consists of a vocabulary (...) and syntax..." (p. 296-297). Such an ambiguous use of the term classification seems unfortunate and unnecessarily confusing in an otherwise very good basic textbook on the categorization of concepts and subjects, document organization and subject representation."
  2. Hart, A.: RDA made simple : a practical guide to the new cataloging rules (2014) 0.00
    0.0018577921 = product of:
      0.0037155843 = sum of:
        0.0037155843 = product of:
          0.0074311686 = sum of:
            0.0074311686 = weight(_text_:a in 2807) [ClassicSimilarity], result of:
              0.0074311686 = score(doc=2807,freq=10.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.1709182 = fieldWeight in 2807, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2807)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Looking for a comprehensive, all-in-one guide to RDA that keeps it simple and provides exactly what you need to know? This book covers planning and training considerations, presents relevant FRBR and FRAD background, and offers practical, step-by-step cataloging advice for a variety of material formats. - Supplies an accessible, up-to-date guide to RDA in a single resource - Covers history and development of the new cataloging code, including the results of the U.S. RDA Test Coordinating Committee Report - Presents the latest information on RDA cataloging for multiple material formats, including print, audiovisual, and digital resources - Explains how RDA's concepts, structure, and vocabulary are based on FRBR (Functional Requirements for Bibliographic Records) and FRAD (Functional Requirements for Authority Data), both of which are reviewed in the book
  3. Belew, R.K.: Finding out about : a cognitive perspective on search engine technology and the WWW (2001) 0.00
    0.001837034 = product of:
      0.003674068 = sum of:
        0.003674068 = product of:
          0.007348136 = sum of:
            0.007348136 = weight(_text_:a in 3346) [ClassicSimilarity], result of:
              0.007348136 = score(doc=3346,freq=22.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.16900843 = fieldWeight in 3346, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3346)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The World Wide Web is rapidly filling with more text than anyone could have imagined even a short time ago, but the task of isolating relevant parts of this vast information has become just that much more daunting. Richard Belew brings a cognitive perspective to the study of information retrieval as a discipline within computer science. He introduces the idea of Finding Out About (FOA) as the process of actively seeking out information relevant to a topic of interest and describes its many facets - ranging from creating a good characterization of what the user seeks, to what documents actually mean, to methods of inferring semantic clues about each document, to the problem of evaluating whether our search engines are performing as we have intended. Finding Out About explains how to build the tools that are useful for searching collections of text and other media. In the process it takes a close look at the properties of textual documents that do not become clear until very large collections of them are brought together and shows that the construction of effective search engines requires knowledge of the statistical and mathematical properties of linguistic phenomena, as well as an appreciation for the cognitive foundation we bring to the task as language users. The unique approach of this book is its even-handed handling of the phenomena of both numbers and words, making it accessible to a wide audience. The textbook is usable in both undergraduate and graduate classes on information retrieval, library science, and computational linguistics. The text is accompanied by a CD-ROM that contains a hypertext version of the book, including additional topics and notes not present in the printed edition. In addition, the CD contains the full text of C.J. "Keith" van Rijsbergen's famous textbook, Information Retrieval (now out of print). Many active links from Belew's to van Rijsbergen's hypertexts help to unite the material.
Several test corpora and indexing tools are provided, to support the design of your own search engine. Additional exercises using these corpora and code are available to instructors. Also supporting this book is a Web site that will include recent additions to the book, as well as links to sites of new topics and methods.
  4. McIlwaine, I.C.: The Universal Decimal Classification : a guide to its use (2000) 0.00
    0.0018318077 = product of:
      0.0036636153 = sum of:
        0.0036636153 = product of:
          0.0073272306 = sum of:
            0.0073272306 = weight(_text_:a in 161) [ClassicSimilarity], result of:
              0.0073272306 = score(doc=161,freq=14.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.1685276 = fieldWeight in 161, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=161)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This book is an extension and total revision of the author's earlier Guide to the use of UDC. The original was written in 1993 and in the intervening years much has happened with the classification. In particular, a much more rigorous approach has been undertaken in revision to ensure that the scheme is able to handle the requirements of a networked world. The book outlines the history and development of the Universal Decimal Classification, provides practical hints on its application and works through all the auxiliary and main tables, highlighting aspects that need to be noted in applying the scheme. It also provides guidance on the use of the Master Reference File and discusses the ways in which the classification is used in the 21st century and its suitability as an aid to subject description in tagging metadata and consequently for application on the Internet. It is intended as a source for information about the scheme, for practical usage by classifiers in their daily work and as a guide to the student learning how to apply the classification. It is amply provided with examples to illustrate the many ways in which the scheme can be applied and will be a useful source for a wide range of information workers.
  5. Antoniou, G.; Harmelen, F. van: A semantic Web primer (2004) 0.00
    0.0017651741 = product of:
      0.0035303482 = sum of:
        0.0035303482 = product of:
          0.0070606964 = sum of:
            0.0070606964 = weight(_text_:a in 468) [ClassicSimilarity], result of:
              0.0070606964 = score(doc=468,freq=52.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.16239727 = fieldWeight in 468, product of:
                  7.2111025 = tf(freq=52.0), with freq of:
                    52.0 = termFreq=52.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=468)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The development of the Semantic Web, with machine-readable content, has the potential to revolutionise the World Wide Web and its use. A Semantic Web Primer provides an introduction and guide to this emerging field, describing its key ideas, languages and technologies. Suitable for use as a textbook or for self-study by professionals, it concentrates on undergraduate-level fundamental concepts and techniques that will enable readers to proceed with building applications on their own. It includes exercises, project descriptions and annotated references to relevant online materials. A Semantic Web Primer is the only available book on the Semantic Web to include a systematic treatment of the different languages (XML, RDF, OWL and rules) and technologies (explicit metadata, ontologies, logic and inference) that are central to Semantic Web development. The book also examines such crucial related topics as ontology engineering and application scenarios. After an introductory chapter, topics covered in succeeding chapters include XML and related technologies that support semantic interoperability; RDF and RDF Schema, the standard data model for machine-processable semantics; and OWL, the W3C-approved standard for a Web ontology language more extensive than RDF Schema; rules, both monotonic and nonmonotonic, in the framework of the Semantic Web; selected application domains and how the Semantic Web would benefit them; the development of ontology-based systems; and current debates on key issues and predictions for the future.
    Footnote
    Rez. in: JASIST 57(2006) no.8, S.1132-1133 (H. Che): "The World Wide Web has been the main source of an important shift in the way people communicate with each other, get information, and conduct business. However, most of the current Web content is only suitable for human consumption. The main obstacle to providing better quality of service is that the meaning of Web content is not machine-accessible. The "Semantic Web" is envisioned by Tim Berners-Lee as a logical extension to the current Web that enables explicit representations of term meaning. It aims to bring the Web to its full potential via the exploration of these machine-processable metadata. To fulfill this, it provides some metalanguages like RDF, OWL, DAML+OIL, and SHOE for expressing knowledge that has clear, unambiguous meanings. The first steps in weaving the Semantic Web into the current Web are successfully underway. In the forthcoming years, these efforts still remain highly focused in the research and development community. In the next phase, the Semantic Web will respond more intelligently to user queries. The first chapter gets started with an excellent introduction to the Semantic Web vision. At first, today's Web is introduced, and problems with some current applications like search engines are also covered. Subsequently, knowledge management, business-to-consumer electronic commerce, business-to-business electronic commerce, and personal agents are used as examples to show the potential requirements for the Semantic Web. Next comes the brief description of the underpinning technologies, including metadata, ontology, logic, and agents. The differences between the Semantic Web and Artificial Intelligence are also discussed in a later subsection. In section 1.4, the famous "layer-cake" diagram is given to show a layered view of the Semantic Web. From chapter 2, the book starts addressing some of the most important technologies for constructing the Semantic Web.
In chapter 2, the authors discuss XML and its related technologies such as namespaces, XPath, and XSLT. XML is a simple, very flexible text format which is often used for the exchange of a wide variety of data on the Web and elsewhere. The W3C has defined various languages on top of XML, such as RDF. Although this chapter is very well planned and written, many details are not included because of the extensiveness of the XML technologies. Many other books on XML provide more comprehensive coverage.
    The next chapter introduces the resource description framework (RDF) and RDF schema (RDFS). Unlike XML, RDF provides a foundation for expressing the semantics of data: it is a standard data model for machine-processable semantics. RDF Schema offers a number of modeling primitives for organizing RDF vocabularies in typed hierarchies. In addition to RDF and RDFS, a query language for RDF, RQL, is introduced. This chapter and the next chapter are two of the most important chapters in the book. Chapter 4 presents another language called the Web Ontology Language (OWL). Because RDFS is quite primitive as a modeling language for the Web, more powerful languages are needed. A richer language, DAML+OIL, was thus proposed as a joint endeavor of the United States and Europe. OWL takes DAML+OIL as the starting point, and aims to be the standardized and broadly accepted ontology language. At the beginning of the chapter, the nontrivial relation with RDF/RDFS is discussed. Then the authors describe the various language elements of OWL in some detail. Moreover, Appendix A contains an abstract OWL syntax, which compresses OWL and makes it much easier to read. Chapter 5 covers both monotonic and nonmonotonic rules. Whereas the previous chapters mainly concentrate on specializations of knowledge representation, this chapter depicts the foundation of knowledge representation and inference. Two examples are also given to explain monotonic and nonmonotonic rules, respectively. To get the most out of the chapter, readers had better gain a thorough understanding of predicate logic first. Chapter 6 presents several realistic application scenarios to which Semantic Web technology can be applied, including horizontal information products at Elsevier, data integration at Audi, skill finding at Swiss Life, a think-tank portal at EnerSearch, e-learning, Web services, multimedia collection indexing, online procurement, and device interoperability.
These case studies give us a real feel for the Semantic Web.
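The triple-based RDF data model that these chapters describe is easy to illustrate. As a minimal sketch, with plain Python tuples standing in for an RDF graph (the `ex:` URIs and the tiny pattern matcher are invented for illustration; this is not RQL, SPARQL, or any real RDF API):

```python
# An RDF graph is a set of (subject, predicate, object) triples.
# All ex: identifiers below are made-up example URIs.
graph = {
    ("ex:SemanticWebPrimer", "rdf:type",  "ex:Book"),
    ("ex:SemanticWebPrimer", "ex:author", "ex:Antoniou"),
    ("ex:SemanticWebPrimer", "ex:author", "ex:vanHarmelen"),
    ("ex:Antoniou",          "rdf:type",  "ex:Person"),
}

def match(graph, s=None, p=None, o=None):
    """Return all triples matching a pattern; None acts as a wildcard,
    much as a variable does in an RDF query language."""
    return sorted(t for t in graph
                  if (s is None or t[0] == s)
                  and (p is None or t[1] == p)
                  and (o is None or t[2] == o))

# Who wrote the primer?
authors = [o for _, _, o in match(graph, s="ex:SemanticWebPrimer", p="ex:author")]
print(authors)  # ['ex:Antoniou', 'ex:vanHarmelen']
```

RDFS then adds vocabulary for typing such graphs (class and property hierarchies), and OWL adds richer constraints on top of that.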
    The chapter on ontology engineering describes the development of ontology-based systems for the Web using manual and semiautomatic methods. Ontology is a concept similar to taxonomy. As stated in the introduction, ontology engineering deals with some of the methodological issues that arise when building ontologies, in particular, constructing ontologies manually, reusing existing ontologies, and using semiautomatic methods. A medium-scale project is included at the end of the chapter. Overall the book is a nice introduction to the key components of the Semantic Web. The reading is quite pleasant, in part due to the concise layout that allows just enough content per page to facilitate readers' comprehension. Furthermore, the book provides a large number of examples, code snippets, exercises, and annotated online materials. Thus, it is very suitable for use as a textbook for undergraduates and beginning graduate students, as the authors say in the preface. However, I believe that not only students but also professionals in both academia and industry will benefit from the book. The authors also built an accompanying Web site for the book at http://www.semanticwebprimer.org. On the main page, there are eight tabs, one for each of the eight chapters. For each tab, the following sections are included: overview, example, presentations, problems and quizzes, errata, and links. This material will greatly help readers; for example, they can follow the listed links for further reading. The fact that the errata sections are empty also attests to the quality of the book."
  6. Chowdhury, G.G.; Chowdhury, S.: Introduction to digital libraries (2003) 0.00
    0.0017474331 = product of:
      0.0034948662 = sum of:
        0.0034948662 = product of:
          0.0069897324 = sum of:
            0.0069897324 = weight(_text_:a in 6119) [ClassicSimilarity], result of:
              0.0069897324 = score(doc=6119,freq=104.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.16076508 = fieldWeight in 6119, product of:
                  10.198039 = tf(freq=104.0), with freq of:
                    104.0 = termFreq=104.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.013671875 = fieldNorm(doc=6119)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Rez. in: JASIST 55(2004) no.2, S.178-179 (M.-Y. Kan): "In their latest book, Chowdhury and Chowdhury have written an introductory text on digital libraries, primarily targeting "students researching digital libraries as part of information and library science, as well as computer science, courses" (p. xiv). It is an ambitious work that surveys many of the broad topics in digital libraries (DL) while highlighting completed and ongoing DL research in many parts of the world. With the revamping of Library and Information Science (LIS) curricula to focus on information technology, many LIS schools are now teaching DL topics either as an independent course or as part of an existing one. Instructors of these courses have in many cases used supplementary texts and compiled readers from journals and conference materials, possibly because they feel that a suitable textbook has yet to be written. A solid, principal textbook for digital libraries is sorely needed to provide a critical, evaluative synthesis of DL literature. It is with this in mind that I believe Introduction to Digital Libraries was written. An introductory text on any cross-disciplinary topic is bound to have conflicting limitations and expectations from its adherents who come from different backgrounds. This is the case in the development of DL curricula, in which both LIS and computer science schools are actively involved. Compiling a useful secondary source in such cross-disciplinary areas is challenging; it requires that jargon from each contributing field be carefully explained and respected, while providing thought-provoking material to broaden student perspectives. In my view, the book's breadth certainly encompasses the whole of what an introduction to DL needs, but it is hampered by a lack of focus from catering to such disparate needs.
For example, LIS students will need to know which key aspects differentiate digital library metadata from traditional metadata, while computer science students will need to learn the basics of vector space and probabilistic information retrieval. However, the text does not give enough detail on either subject, and thus even introductory students will need to go beyond the book and consult primary sources. In this respect, the book's 307 pages of content are too short to do justice to such a broad field of study.
    This book covers all of the primary areas in the DL Curriculum as suggested by T. Saracevic and M. Dalbello's (2001) and A. Spink and C. Cool's (1999) D-Lib articles an DL education. In fact, the book's coverage is quite broad; it includes a Superset of recommended topics, offering a chapter an professional issues (recommended in Spink and Cool) as well as three chapters devoted to DL research. The book comes with a comprehensive list of references and an index, allowing readers to easily locate a specific topic or research project of interest. Each chapter also begins with a short outline of the chapter. As an additional plus, the book is quite heavily Cross-referenced, allowing easy navigation across topics. The only drawback with regard to supplementary materials is that it Lacks a glossary that world be a helpful reference to students needing a reference guide to DL terminology. The book's organization is well thought out and each chapter stands independently of the others, facilitating instruction by parts. While not officially delineated into three parts, the book's fifteen chapters are logically organized as such. Chapters 2 and 3 form the first part, which surveys various DLs and DL research initiatives. The second and core part of the book examines the workings of a DL along various dimensions, from its design to its eventual implementation and deployment. The third part brings together extended topics that relate to a deployed DL: its preservation, evaluation, and relationship to the larger social content. Chapter 1 defines digital libraries and discusses the scope of the materials covered in the book. The authors posit that the meaning of digital library is best explained by its sample characteristics rather than by definition, noting that it has largely been shaped by the melding of the research and information professions. 
This reveals two primary facets of the DL: an "emphasis on digital content" coming from an engineering and computer science perspective as well as an "emphasis on services" coming from library and information professionals (pp. 4-5). The book's organization mirrors this dichotomy, focusing on the core aspects of content in the earlier chapters and returning to the service perspective in later chapters.
    Chapter 2 examines the variety and breadth of DL implementations and collections through a well-balanced selection of 20 DLs. The authors make a useful classification of the various types of DLs into seven categories and give a brief synopsis of two or three examples from each category. These categories include historical, national, and university DLs, as well as DLs for special materials and research. Chapter 3 examines research efforts in digital libraries, concentrating on the three eLib initiatives in the UK and the two Digital Libraries Initiatives in the United States. The chapter also offers some details on joint research between the UK and the United States (the NSF/JISC jointly funded programs), Europe, Canada, Australia, and New Zealand. While both of these chapters do an admirable job of surveying the DL landscape, the breadth and variety of materials need to be encapsulated in a coherent summary that illustrates the commonality of their approaches and the key differences driven by aspects of their collections and audience. Unfortunately, this summary aspect is lacking here and elsewhere in the book. Chapter 2 does an admirable job of DL selection that showcases the variety of existing DLs, but I feel that Chapter 3's selection of research projects could be improved. The chapter's emphasis is clearly on UK-based research, devoting nine pages to it compared to six for EU-funded projects. While this emphasis could be favorable for UK courses, it hampers the chances of the text's adoption in other courses internationally. Chapter 4 begins the core part of the book by examining the DL from a design perspective. As a well-designed DL encompasses various practical and theoretical considerations, the chapter introduces many of the concepts that are elaborated on in later chapters. The Kahn/Wilensky and Lagoze/Fielding architectures are summarized in bullet points, and specific aspects of these frameworks are elaborated on.
These include the choice between a federated or centralized search architecture (referencing Virginia Tech's NDLTD and Waikato's Greenstone) and level of interoperability (discussing UNIMARC and metadata harvesting). Special attention is paid to hybrid library design, with references to UK projects. A useful summary of recommended standards for DL design concludes the chapter.
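The federated alternative mentioned above can be sketched in miniature. This is a hedged illustration only; the collection names and scores are invented stand-ins, not systems described in the book:

```python
# A toy federated-search broker: the query is fanned out to each
# independent collection and the ranked results are merged. Real systems
# must also normalize scores and schemas across heterogeneous sources;
# this sketch merges by raw score only.
def federated_search(query, collections):
    merged = []
    for name, search in collections.items():
        for doc_id, score in search(query):
            merged.append((score, name, doc_id))
    merged.sort(reverse=True)  # naive score-order merge
    return [(name, doc_id) for score, name, doc_id in merged]

# Two stand-in collections returning canned, invented results.
collections = {
    "theses": lambda q: [("etd-101", 0.9), ("etd-204", 0.4)],
    "reports": lambda q: [("rep-7", 0.7)],
}
```

A centralized architecture would instead harvest all metadata into one index ahead of time, trading freshness for a single, uniformly scored search.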
    Chapters 5 through 9 discuss the basic facets of DL implementation and use. Chapter 5, entitled "Collection management," distinguishes collection management from collection development. The authors give source selection criteria, distilled from Clayton and Gorman. The text then discusses the characteristics of several digital sources, including CD-ROMs, electronic books, electronic journals, and databases, and elaborates on the distribution and pricing issues involved in each. However, the following chapter on digitization is quite disappointing; I feel that its discussion is shallow and short, and offers only a glimpse of the difficulties of this task. The chapter contains a listing of multimedia file formats, which is explained clearly, omitting technical jargon. However, it could be improved by including more details about each format's optimal use. Chapter 7, "Information organization," surveys several DLs and highlights their adaptation of traditional classification and cataloging techniques. The chapter continues with a brief introduction to metadata, first defining it and then discussing major standards: the Dublin Core, the Warwick Framework, and EAD. A discussion of markup languages such as SGML, HTML, and XML rounds off the chapter. A more engaging chapter follows. Dealing with information access and user interfaces, it begins by examining information needs and the seeking process, with particular attention to the difficulties of translating search needs into an actual search query. Guidelines for user interface design are presented, distilled from recommendations by Shneiderman, Byrd, and Croft. Some research user interfaces are highlighted to hint at the future of information finding, and major features of browsing and searching interfaces are shown through case studies of a number of DLs.
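The pairing of metadata and markup discussed above is easy to illustrate with a minimal Dublin Core record serialized as XML. A sketch, assuming the standard 15-element `dc:` namespace; the wrapping `record` element and the field values are illustrative choices, not taken from the book:

```python
import xml.etree.ElementTree as ET

# Build a tiny record using the Dublin Core element-set namespace and
# serialize it as XML. The <record> wrapper is an assumption; real DLs
# embed DC fields in container formats such as OAI-PMH responses.
DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC_NS)

record = ET.Element("record")
for element, value in [
    ("title", "Essential classification"),
    ("creator", "Broughton, Vanda"),
    ("date", "2004"),
    ("type", "Text"),
]:
    field = ET.SubElement(record, f"{{{DC_NS}}}{element}")
    field.text = value

xml = ET.tostring(record, encoding="unicode")
```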
Chapter 9 gives a layman's introduction to the classic models of information retrieval, and is written to emphasize each model's usability and features; the mathematical foundations have been dispensed with entirely. Multimedia retrieval, Z39.50, and issues with OPAC integration are briefly sketched, but details on the approaches to these problems are omitted. A dissatisfying chapter on preservation begins the third part on deployed DLs; it itemizes several preservation projects but does not identify the key points of each. This weakness is offset by two solid chapters on DL services and on social, economic, and legal issues. Here, the writing style of the text is more effective in surveying the pertinent issues. Chowdhury and Chowdhury write, "The importance of [reference] services has grown over time with the introduction of new technologies and services in libraries" (p. 228), emphasizing the central role that reference services have in DLs, and go on to discuss both free and fee-based services, those housed as part of libraries as well as commercial services. The chapter on social issues examines the digital divide and also gives examples of institutions working to undo the divide: "Blackwells is making all 600 of its journals freely available to institutions within the Russian Federation" (p. 252). Key points in cost models of electronic publishing and intellectual property rights are also discussed. Chowdhury and Chowdhury note that "there is no legal deposit law to force the creators of digital information to submit a copy of every work to one or more designated institutions" for preservation (p. 265).
    Chapter 13 on DL evaluation merges criteria from traditional library evaluation with criteria from user interface design and information retrieval. Quantitative, macro-evaluation techniques are emphasized, and again, some DL evaluation projects and reports are illustrated. A very brief chapter on the role of librarians in the DL follows, emphasizing that traditional reference skills are paramount to the success of the digital librarian, but that he or she should also be savvy in Web page and user interface design. A final chapter on research trends in digital libraries seems a bit incoherent. It mentions many of the previous chapters' topics, and would possibly be better organized if written as summary sections and distributed among the other chapters. The book's breadth is quite expansive, touching on both fundamental and advanced topics necessary to a well-rounded DL education. As the book is thoroughly referenced to DL and DL-related research projects, it serves as a useful starting point for those interested in more in-depth learning. However, this breadth is also a weakness. In my opinion, the sheer number of research projects and papers surveyed leaves the authors little space to critique and summarize key issues. Many of the case studies are presented as itemized lists and not used to exemplify specific points. I feel that an introductory text should exercise some editorial and evaluative rights to create structure and organization for the uninitiated. Case studies should be carefully chosen to exemplify the specific issues, differences, and strengths highlighted. It is lamentable that in many of the descriptions of research projects, the authors tend to give more historical and funding background than is necessary and miss out on giving a synthesis of the pertinent details.
    Another weakness of the book is its favoritism towards the authors' own works. To a large extent, this bias is natural, as the authors know their own works best. However, in an introductory text, it is critical to reference the most appropriate source and give a balanced view of the field. In this respect, I feel the book could be more objective in its selection of references and research projects. Introduction to Digital Libraries is definitely a book written for a purpose. LIS undergraduates and "practicing professionals who need to know about recent developments in the field of digital libraries" (p. xiv) will find this book a fine introduction, as it is clearly written and accessible to laymen, giving explanations without delving into terminology and math. As it surveys a large number of projects, it is also an ideal starting point for students to pick and investigate particular DL research projects. However, graduate LIS students who already have a solid understanding of library fundamentals, as well as computer science students, may find this volume lacking in details. Alternative texts such as Lesk (1999) and Arms (2000) are possibly more suitable for those who need to investigate topics in depth. For the experienced practitioner or researcher delving into the DL field for the first time, the recent 2002 ARIST chapter by Fox and Urs may also be a suitable alternative. In their introduction, the authors ask, "What are digital libraries? How do they differ from online databases and search services? Will they replace print libraries? What impact will they have on people and society?" (p. 3). To answer these questions, Chowdhury and Chowdhury offer a multitude of case studies to let the audience draw their own conclusions. To this end, it is my opinion that Introduction to Digital Libraries serves a useful purpose as a supplemental text in the digital library curriculum but misses the mark of being an authoritative textbook."
  7. Theory of subject analysis : A sourcebook (1985) 0.00
    Abstract
    Definition and Scope For the purpose of this reader, subject analysis is understood to encompass vocabulary structuring and subject indexing. Vocabulary structuring refers to the constructing of tools, such as classifications, subject heading lists, and thesauri, designed to facilitate the organization and retrieval of information. These tools, though called by different names, are similar in that they structure or control the basic vocabulary of a subject index language by 1) stipulating terms that may be used in the classing or indexing of documents and 2) displaying semantic relationships, such as hierarchy and synonymy, that obtain between these terms. They differ in the kinds of terms and relationships they recognize and in the manner in which these are displayed. Subject indexing refers to the application of a vocabulary, which may be more or less well structured, to indicate the content or aboutness of documents. Traditionally subject indexing limits its domain to only certain types of documents, such as passages within books (back-of-book indexing) or periodical articles, and the expression it uses to only certain types of strings, for example, descriptors or index terms as opposed to subject headings or class numbers. However, in a generalized and more modern sense, subject indexing refers to the indication of the theme or topic of any document, indeed any retrieval artifact, by any meaningful string of alphanumeric characters. The value of construing the meaning of subject analysis broadly is threefold: it permits comparing a variety of approaches to subject analysis; it permits generalizing about these approaches at a relatively high descriptive level, so that principles and objectives are shown in relief; and, most importantly perhaps, it permits a unified view of the traditional and information scientific approaches to subject analysis.
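The vocabulary-structuring relationships described above (hierarchy and synonymy between stipulated terms) can be sketched directly. A toy model of the broader/narrower and related-term conventions; the class design and the example terms are invented for illustration:

```python
# A controlled vocabulary recording broader-term (BT) and related-term (RT)
# links. Narrower terms (NT) are derived as the inverse of BT, and RT is
# kept symmetric, mirroring standard thesaurus conventions.
class Thesaurus:
    def __init__(self):
        self.broader = {}   # term -> set of broader terms
        self.related = {}   # term -> set of associatively related terms

    def add_bt(self, term, broader_term):
        self.broader.setdefault(term, set()).add(broader_term)

    def narrower(self, term):
        # NT is simply the inverse of the BT relation.
        return {t for t, bts in self.broader.items() if term in bts}

    def add_rt(self, a, b):
        # Associative relations are symmetric.
        self.related.setdefault(a, set()).add(b)
        self.related.setdefault(b, set()).add(a)

th = Thesaurus()
th.add_bt("subject heading lists", "controlled vocabularies")
th.add_bt("thesauri", "controlled vocabularies")
th.add_rt("thesauri", "subject indexing")
```

The tools named in the passage differ in which of these relations they display and how, but all control the same kind of term-and-relation structure.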
    Criteria for Selection In selecting the writings to be included in this reader, we have followed the criteria listed below: 1. Theoretical emphasis. Our focus is on theoretical and philosophical aspects rather than practical or technical considerations. In a number of cases, where several authors have written on the same subject or idea or expressed similar thoughts, the originator of the idea, if this could be determined, was selected. 2. Significance and impact. Our most important criterion is the significance of a particular piece or the contribution that it has made in the field of subject analysis. The impact of the ideas or concepts on subsequent practice in subject analysis has also been considered. 3. Perspicuity. Where multiple choices were available on a particular topic or area, our tendency was to exclude the writings that are obscure or highly technical and would require a high degree of technical sophistication on the part of the reader. Comprehensibility and clarity of style were often our guide. Based on the criteria stated above, the following types of writings have generally been excluded: review articles, the how-to-do-it type of writings, and textbook materials. In a way, it would probably be easier to defend the writings that have been included than to justify the exclusions. In a small volume containing writings chosen from a vast amount of available material, it is virtually impossible to arrive at a collection that will satisfy every reader. Each person has his or her own preferences or criteria. Inevitably, personal bias comes into play in assembling such a reader. At least, we hope that in this case the collective bias of three individuals rather than one has helped to provide a certain degree of balance. A number of writings originally selected for inclusion were omitted because of space limitation or failure to secure permission to reprint.
    Content
    An excellent compilation (annotated by the editors) and reprinting of the following original contributions: CUTTER, C.A.: Subjects; DEWEY, M.: Decimal classification and relativ index: introduction; HOPWOOD, H.V.: Dewey expanded; HULME, E.W.: Principles of book classification; KAISER, J.O.: Systematic indexing; MARTEL, C.: Classification: a brief conspectus of present day library practice; BLISS, H.E.: A bibliographic classification: principles and definitions; RANGANATHAN, S.R.: Facet analysis: fundamental categories; PETTEE, J.: The subject approach to books and the development of the dictionary catalog; PETTEE, J.: Fundamental principles of the dictionary catalog; PETTEE, J.: Public libraries and libraries as purveyors of information; HAYKIN, D.J.: Subject headings: fundamental concepts; TAUBE, M.: Functional approach to bibliographic organization: a critique and a proposal; VICKERY, B.C.: Systematic subject indexing; FEIBLEMAN, J.K.: Theory of integrative levels; GARFIELD, E.: Citation indexes for science; CRG: The need for a faceted classification as the basis of all methods of information retrieval; LUHN, H.P.: Keyword-in-context index for technical literature; COATES, E.J.: Significance and term relationship in compound headings; FARRADANE, J.E.L.: Fundamental fallacies and new needs in classification; FOSKETT, D.J.: Classification and integrative levels; CLEVERDON, C.W. and J. MILLS: The testing of index language devices; MOOERS, C.N.: The indexing language of an information retrieval system; NEEDHAM, R.M. and K. SPARCK JONES: Keywords and clumps; ROLLING, L.: The role of graphic display of concept relationships in indexing and retrieval vocabularies; BORKO, H.: Research in computer based classification systems; WILSON, P.: Subjects and the sense of position; LANCASTER, F.W.: Evaluating the performance of a large computerized information system; SALTON, G.: Automatic processing of foreign language documents; FAIRTHORNE, R.A.: Temporal structure in bibliographic classification; AUSTIN, D. and J.A. DIGGER: PRECIS: The Preserved Context Index System; FUGMANN, R.: The complementarity of natural and indexing languages
  8. Broughton, V.: Essential Library of Congress Subject Headings (2009) 0.00
    Abstract
    LCSH are increasingly seen as 'the' English language controlled vocabulary, despite their lack of a theoretical foundation, and their evident US bias. In mapping exercises between national subject heading lists, and in exercises in digital resource organization and management, LCSH are often chosen because of the lack of any other widely accepted English language standard for subject cataloguing. It is therefore important that the basic nature of LCSH, their advantages, and their limitations, are well understood both by LIS practitioners and those in the wider information community. Information professionals who attended library school before 1995 - and many more recent library school graduates - are unlikely to have had a formal introduction to LCSH. Paraprofessionals who undertake cataloguing are similarly unlikely to have enjoyed an induction to the broad principles of LCSH. There is currently no compact guide to LCSH written from a UK viewpoint, and this eminently practical text fills that gap. It features topics including: background and history of LCSH; subject heading lists; structure and display in LCSH; form of entry; application of LCSH; document analysis; main headings; topical, geographical and free-floating sub-divisions; building compound headings; name headings; headings for literature, art, music, history and law; and, LCSH in the online environment. There is a strong emphasis throughout on worked examples and practical exercises in the application of the scheme, and a full glossary of terms is supplied. No prior knowledge or experience of subject cataloguing is assumed. This is an indispensable guide to LCSH for practitioners and students alike from a well-known and popular author.
  9. Kowalski, G.: Information retrieval systems : theory and implementation (1997) 0.00
    Abstract
    Information Retrieval Systems: Theory and Implementation provides a theoretical and practical explanation of the latest advancements in information retrieval and their application to existing systems. It takes a system approach, discussing all aspects of an information retrieval system. The importance of the Internet and its associated hypertext-linked structure is put into perspective as a new type of information retrieval data structure.
  10. Scott, M.L.: Dewey Decimal Classification, 21st edition : a study manual and number building guide (1998) 0.00
    Content
    This work is a comprehensive guide to Edition 21 of the Dewey Decimal Classification (DDC 21). The previous edition was edited by John Phillip Comaromi, who also was the editor of DDC 20 and thus was able to impart in its pages information about the inner workings of the Decimal Classification Editorial Policy Committee, which guides the Classification's development. The manual begins with a brief history of the development of Dewey Decimal Classification (DDC) up to this edition and its impact internationally. It continues on to a review of the general structure of DDC and the 21st edition in particular, with emphasis on the framework ("Hierarchical Order," "Centered Entries") that aids the classifier in its use. An extensive part of this manual is an in-depth review of how DDC is updated with each edition, such as reductions and expansions, and detailed lists of such changes in each table and class. Each citation of a change indicates the previous location of the topic, usually in parentheses but also in textual explanations ("moved from 248.463"). A brief discussion of the topic moved or added provides substance to what otherwise would be lists of numbers. Where the changes are so dramatic that a new class or division structure has been developed, Comparative and Equivalence Tables are provided in volume 1 of DDC 21 (such as Life sciences in 560-590); any such list in this manual would only be redundant. In these cases, the only references to changes in this work are those topics that were moved from other classes. Besides these citations of changes, each class is introduced with a brief background discussion about its development or structure or both to familiarize the user with it. A new aspect in this edition of the DDC study manual is that it is combined with Marty Bloomberg and Hans Weber's An Introduction to Classification and Number Building in Dewey (Libraries Unlimited, 1976) to provide a complete reference for the application of DDC. 
Detailed examples of number building for each class will guide the classifier through the process that results in classifications for particular works within that class. In addition, at the end of each chapter, lists of book summaries are given as exercises in number analysis, with Library of Congress-assigned classifications to provide benchmarks. The last chapter covers book, or author, numbers, which, combined with the classification and often the date, provide unique call numbers for circulation and shelf arrangement. Guidelines in the application of Cutter tables and Library of Congress author numbers complete this comprehensive reference to the use of DDC 21. As with all such works, this was a tremendous undertaking, which coincided with the author completing a new edition of Conversion Tables: LC-Dewey, Dewey-LC (Libraries Unlimited, forthcoming). Helping hands are always welcome in our human existence, and this book is no exception. Grateful thanks are extended to Jane Riddle, at the NASA Goddard Space Flight Center Library, and to Darryl Hines, at SANAD Support Technologies, Inc., for their kind assistance in the completion of this study manual.
  11. Bawden, D.; Robinson, L.: ¬An introduction to information science (2012) 0.00
    Abstract
    Landmark textbook taking a whole-subject approach to information science as a discipline. The authors' expert narrative guides you through each of the essential components of information science, offering a concise introduction and expertly chosen readings and resources. This is the definitive textbook for students of this subject, and of information and knowledge management, librarianship, archives and records management worldwide.
  12. Brown, A.G.; Langridge, D.W.; Mills, J.: Introduction to subject indexing : a programmed text (1976) 0.00
  13. Pollitt, A.S.: Information storage and retrieval systems : origin, development and applications (1979) 0.00
    Content
    Contains the following chapters: (1) Recording knowledge; (2) Classifying and indexing; (3) Searching; (4) Building and searching a database; (5) Front-end systems; (6) From viewdata to hypermedia; (7) Evaluation
  14. Walker, G.; Janes, J.: Online retrieval : a dialogue of theory and practice (1999) 0.00
  15. Chan, L.M.: Dewey Decimal Classification : a practical guide (1996) 0.00
  16. Kumar, K.: Theory of classification (1985) 0.00
    Abstract
    This book provides a coherent account of the theory of classification. It discusses the contributions made by theoreticians like E.C. Richardson, J.D. Brown, E.W. Hulme, W.C. Berwick Sayers, H.E. Bliss, and S.R. Ranganathan. However, the theory put forward by S.R. Ranganathan dominates the whole book because his contribution is far greater than anybody else's. Five major schemes - DDC, UDC, LCC, CC, and BC - have also been discussed. Library classification is a specialized area of study, and in recent years it has become a vast and complicated field using highly technical terminology. A special attempt has been made to keep the descriptions as simple and direct as possible. To illustrate the theory of classification, a large number of examples have been given from all major schemes so that an average student could also grasp the concepts easily. This book has been especially written to meet the requirements of students preparing for their library science, documentation, and information science diplomas and degrees.
  17. Aitchison, J.; Gilchrist, A.; Bawden, D.: Thesaurus construction and use : a practical manual (1997) 0.00
  18. Aitchison, J.; Gilchrist, A.; Bawden, D.: Thesaurus construction and use : a practical manual (2000) 0.00
  19. Buchanan, B.: Theory of library classification (1979) 0.00
    Content
    Contents: Classification: definition and uses - The relationships between classes - Enumerative and faceted schemes - Decisions - The construction of a faceted scheme: I - The construction of a faceted scheme: II - Notation: I - Notation: II - Notation: III - The alphabetical subject index - General classification schemes - Objections to systematic order - Automatic classification
  20. Smiraglia, R.P.: ¬The elements of knowledge organization (2014) 0.00
    Abstract
    The Elements of Knowledge Organization is a unique and original work introducing the fundamental concepts of the field of Knowledge Organization (KO); no other book like it is currently available. The author begins with a comprehensive discussion of "knowledge" and its associated theories, then presents a thorough discussion of the philosophical underpinnings of knowledge organization. He walks the reader through the KO domain, expanding on the core topics of ontologies, taxonomies, classification, metadata, thesauri and domain analysis, and presents the compelling challenges associated with the organization of knowledge. This is the first book focused on the concepts and theories associated with the KO domain. Before this book, individuals wishing to study Knowledge Organization in its broadest sense generally had to collocate their own resources, navigating the various methods and models and perhaps inadvertently excluding relevant materials. This text cohesively links key and related KO material and provides a deeper understanding of the domain in its broadest sense, with enough detail to truly investigate its many facets. The book will be useful to both graduate and undergraduate students in computer science and information science, both as a text and as a reference, and valuable to researchers and practitioners working on website development, database administration, data mining, data warehousing and data for search engines. It will also benefit anyone interested in the concepts and theories associated with the organization of knowledge. Dr. Richard P. Smiraglia is a world-renowned author who is well published in the Knowledge Organization domain. He is editor-in-chief of the journal Knowledge Organization, published by Ergon-Verlag of Würzburg, and a professor and member of the Information Organization Research Group at the School of Information Studies at the University of Wisconsin-Milwaukee.
