Search (52 results, page 3 of 3)

  • language_ss:"e"
  • theme_ss:"Grundlagen u. Einführungen: Allgemeine Literatur"
  1. Subject and information analysis (1985) 0.01
    0.007826171 = product of:
      0.023478512 = sum of:
        0.023478512 = weight(_text_:science in 793) [ClassicSimilarity], result of:
          0.023478512 = score(doc=793,freq=2.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.17461908 = fieldWeight in 793, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=793)
      0.33333334 = coord(1/3)
    
    Series
    Books in library and information science; 47
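The score breakdown above is a Lucene ClassicSimilarity (TF-IDF) explain tree: the leaf score is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = sqrt(freq) × idf × fieldNorm, and each enclosing coord(n/m) factor scales the result. As a minimal sketch (standard Lucene ClassicSimilarity arithmetic is assumed here, not taken from this page), the displayed numbers can be reproduced:

```python
import math

def classic_similarity(freq, idf, query_norm, field_norm, coords=()):
    """Reproduce one leaf of a Lucene ClassicSimilarity 'explain' tree."""
    tf = math.sqrt(freq)                  # tf(freq) = sqrt(freq)
    query_weight = idf * query_norm       # queryWeight
    field_weight = tf * idf * field_norm  # fieldWeight
    score = query_weight * field_weight
    for c in coords:                      # enclosing coord(n/m) factors
        score *= c
    return score

# Values shown for the term "science" in result 1 (doc 793):
score = classic_similarity(freq=2.0, idf=2.6341193,
                           query_norm=0.05104385, field_norm=0.046875,
                           coords=(1 / 3,))
```

With these inputs the function returns approximately 0.0078262, matching the displayed final score of 0.007826171 to float precision.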
  2. Frants, V.I.; Voiskunskii, V.G.; Shapiro, J.: Automated information retrieval : theory and methods (1997) 0.01
    0.007826171 = product of:
      0.023478512 = sum of:
        0.023478512 = weight(_text_:science in 1790) [ClassicSimilarity], result of:
          0.023478512 = score(doc=1790,freq=2.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.17461908 = fieldWeight in 1790, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=1790)
      0.33333334 = coord(1/3)
    
    Abstract
    The emergence of information retrieval systems as a means of satisfying information needs has resulted in a large number of theoretical and practical ideas being introduced. These advancements provide the foundation for the theory of IR systems detailed in this book. Attention is also focused on the other areas of information science and how these differing theories interact and rely on each other. The book details the algorithms used in each process in the system, including those that are radically new in the retrieval process and those that adapt to the individual user. New approaches to evaluating information retrieval systems by studying their performance are also included.
  3. Belew, R.K.: Finding out about : a cognitive perspective on search engine technology and the WWW (2001) 0.01
    0.0073785847 = product of:
      0.022135753 = sum of:
        0.022135753 = weight(_text_:science in 3346) [ClassicSimilarity], result of:
          0.022135753 = score(doc=3346,freq=4.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.16463245 = fieldWeight in 3346, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.03125 = fieldNorm(doc=3346)
      0.33333334 = coord(1/3)
    
    Abstract
    The World Wide Web is rapidly filling with more text than anyone could have imagined even a short time ago, but the task of isolating relevant parts of this vast information has become just that much more daunting. Richard Belew brings a cognitive perspective to the study of information retrieval as a discipline within computer science. He introduces the idea of Finding Out About (FOA) as the process of actively seeking out information relevant to a topic of interest and describes its many facets - ranging from creating a good characterization of what the user seeks, to what documents actually mean, to methods of inferring semantic clues about each document, to the problem of evaluating whether our search engines are performing as we have intended. Finding Out About explains how to build the tools that are useful for searching collections of text and other media. In the process it takes a close look at the properties of textual documents that do not become clear until very large collections of them are brought together and shows that the construction of effective search engines requires knowledge of the statistical and mathematical properties of linguistic phenomena, as well as an appreciation for the cognitive foundation we bring to the task as language users. The unique approach of this book is its even-handed treatment of the phenomena of both numbers and words, making it accessible to a wide audience. The textbook is usable in both undergraduate and graduate classes on information retrieval, library science, and computational linguistics. The text is accompanied by a CD-ROM that contains a hypertext version of the book, including additional topics and notes not present in the printed edition. In addition, the CD contains the full text of C.J. "Keith" van Rijsbergen's famous textbook, Information Retrieval (now out of print). Many active links from Belew's to van Rijsbergen's hypertexts help to unite the material. 
Several test corpora and indexing tools are provided, to support the design of your own search engine. Additional exercises using these corpora and code are available to instructors. Also supporting this book is a Web site that will include recent additions to the book, as well as links to sites of new topics and methods.
  4. Smiraglia, R.P.: ¬The elements of knowledge organization (2014) 0.01
    0.0073785847 = product of:
      0.022135753 = sum of:
        0.022135753 = weight(_text_:science in 1513) [ClassicSimilarity], result of:
          0.022135753 = score(doc=1513,freq=4.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.16463245 = fieldWeight in 1513, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.03125 = fieldNorm(doc=1513)
      0.33333334 = coord(1/3)
    
    Abstract
    The Elements of Knowledge Organization is a unique and original work introducing the fundamental concepts related to the field of Knowledge Organization (KO). There is no other book like it currently available. The author begins the book with a comprehensive discussion of "knowledge" and its associated theories. He then presents a thorough discussion of the philosophical underpinnings of knowledge organization. The author walks the reader through the Knowledge Organization domain, expanding on the core topics of ontologies, taxonomies, classification, metadata, thesauri and domain analysis. The author also presents the compelling challenges associated with the organization of knowledge. This is the first book focused on the concepts and theories associated with the KO domain. Prior to this book, individuals wishing to study Knowledge Organization in its broadest sense would generally collocate their own resources, navigating the various methods and models and perhaps inadvertently excluding relevant materials. This text cohesively links key and related KO material and provides a deeper understanding of the domain in its broadest sense and with enough detail to truly investigate its many facets. This book will be useful to both graduate and undergraduate students in the computer science and information science domains both as a text and as a reference book. It will also be valuable to researchers and practitioners in the industry who are working on website development, database administration, data mining, data warehousing and data for search engines. The book is also beneficial to anyone interested in the concepts and theories associated with the organization of knowledge. Dr. Richard P. Smiraglia is a world-renowned author who is well published in the Knowledge Organization domain. Dr. Smiraglia is editor-in-chief of the journal Knowledge Organization, published by Ergon-Verlag of Würzburg. 
He is a professor and member of the Information Organization Research Group at the School of Information Studies at University of Wisconsin Milwaukee.
  5. Chowdhury, G.G.: Introduction to modern information retrieval (1999) 0.01
    0.006915737 = product of:
      0.02074721 = sum of:
        0.02074721 = product of:
          0.04149442 = sum of:
            0.04149442 = weight(_text_:22 in 4902) [ClassicSimilarity], result of:
              0.04149442 = score(doc=4902,freq=2.0), product of:
                0.17874686 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05104385 = queryNorm
                0.23214069 = fieldWeight in 4902, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4902)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
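This entry's explain tree has one more nesting level than the previous ones: the leaf score for the term "22" is scaled first by coord(1/2) and then by coord(1/3). As a sketch with the displayed values (assuming standard Lucene ClassicSimilarity arithmetic, which this page's tree format suggests but does not state):

```python
import math

# Leaf score for "22" in doc 4902: queryWeight * fieldWeight, with
# queryWeight = idf * queryNorm and fieldWeight = sqrt(freq) * idf * fieldNorm.
idf, query_norm, field_norm = 3.5018296, 0.05104385, 0.046875
tf = math.sqrt(2.0)                        # tf(freq=2.0)
leaf = (idf * query_norm) * (tf * idf * field_norm)

# Two nested coordination factors scale the leaf: coord(1/2), then coord(1/3).
final = leaf * (1 / 2) * (1 / 3)
```

Here `leaf` comes out near the displayed 0.04149442 and `final` near the displayed 0.006915737.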
    
    Content
    Contains the chapters: 1. Basic concepts of information retrieval systems, 2. Database technology, 3. Bibliographic formats, 4. Subject analysis and representation, 5. Automatic indexing and file organization, 6. Vocabulary control, 7. Abstracts and abstracting, 8. Searching and retrieval, 9. Users of information retrieval, 10. Evaluation of information retrieval systems, 11. Evaluation experiments, 12. Online information retrieval, 13. CD-ROM information retrieval, 14. Trends in CD-ROM and online information retrieval, 15. Multimedia information retrieval, 16. Hypertext and hypermedia systems, 17. Intelligent information retrieval, 18. Natural language processing and information retrieval, 19. Natural language interfaces, 20. Natural language text processing and retrieval systems, 21. Problems and prospects of natural language processing systems, 22. The Internet and information retrieval, 23. Trends in information retrieval.
  6. Schwartz, C.: Sorting out the Web : approaches to subject access (2001) 0.01
    0.0063455338 = product of:
      0.0190366 = sum of:
        0.0190366 = product of:
          0.0380732 = sum of:
            0.0380732 = weight(_text_:index in 2050) [ClassicSimilarity], result of:
              0.0380732 = score(doc=2050,freq=4.0), product of:
                0.22304957 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.05104385 = queryNorm
                0.1706939 = fieldWeight in 2050, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=2050)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Footnote
    The chapter concludes with an appendix of search tips that even seasoned searchers will appreciate; these tips cover the complete search process, from preparation to the examination of results. Chapter six is appropriately entitled "Around the Corner," as it provides the reader with a glimpse of the future of subject access for the Web. Text mining, visualization, machine-aided indexing, and other topics are raised here to whet the reader's appetite for what is yet to come. As the author herself notes in these final pages, librarians will likely increase the depth of their collaboration with software engineers, knowledge managers and others outside of the traditional library community, and thereby push the boundaries of subject access for the digital world. This final chapter leaves this reviewer wanting a second volume of the book, one that might explore these additional topics, as they evolve over the coming years. One characteristic of any book that addresses trends related to the Internet is how quickly the text becomes dated. However, as the author herself asserts, there are core principles related to subject analysis that stand the test of time, leaving the reader with a text that may be generalized well beyond the publication date. In this, Schwartz's text is similar to other recent publications (e.g., Jakob Nielsen's Web Usability, also published in 2001) that acknowledge the mutability of the Web, and therefore discuss core principles and issues that may be applied as the medium itself evolves. This approach to the writing makes this a useful book for those teaching in the areas of subject analysis, information retrieval and Web development for possible consideration as a course text. Although the websites used here may need to be supplemented with more current examples in the classroom, the core content of the book will be relevant for many years to come. 
 Although one might expect that any book taking subject access as its focus would, itself, be easy to navigate, this is not always the case. In this text, however, readers will be pleased to find that no small detail in content access has been spared. The subject index is thorough and well-crafted, and the inclusion of an exhaustive author index is particularly useful for quick reference. In addition, the table of contents includes sub-themes for each chapter, and a complete table of figures is provided. While the use of colour figures would greatly enhance the text, all black-and-white images are clear and sharp, a notable fact given that most of the figures are screen captures of websites or database entries. In addition, the inclusion of comprehensive reference lists at the close of each chapter makes this a highly readable text for students and instructors alike; each section of the book can stand as its own "expert review" of the topic at hand. In both content and structure this text is highly recommended. It certainly meets its intended goal of providing a timely introduction to the methods and problems of subject access in the Web environment, and does so in a way that is readable, interesting and engaging."
  7. Booth, P.F.: Indexing : the manual of good practice (2001) 0.01
    0.0062173284 = product of:
      0.018651985 = sum of:
        0.018651985 = product of:
          0.03730397 = sum of:
            0.03730397 = weight(_text_:index in 1968) [ClassicSimilarity], result of:
              0.03730397 = score(doc=1968,freq=6.0), product of:
                0.22304957 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.05104385 = queryNorm
                0.1672452 = fieldWeight in 1968, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1968)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Footnote
    Review in: nfd - Information Wissenschaft und Praxis 54(2003) H.7, S.440-442 (R. Fugmann): "The book opens with the chapter "Myths about Indexing", naming widespread misconceptions about indexing, above all about the making of back-of-the-book indexes. A single sentence aptly sketches the problem to which the book is devoted: "With the development of electronic documents, it has become possible to store very large amounts of information; but storage is not of much use without the capability to retrieve, to convert, transfer and reuse the information". The author criticizes the widely held view that indexing is merely a matter of "picking out words from the text or naming objects in images and using those words as index headings". Such a procedure produces not indexes but concordances (i.e. alphabetical lists of the locations of text words), and "... is entirely dependent on the words themselves and is not concerned with the ideas behind them". Collecting information is easy; making it findable again has to be learned, if anything more is to be achieved than refinding texts one still remembers in every detail (known-item searches, questions of recall), down to the wording used for the concepts sought. Drawing on her extensive practical experience, the author describes the steps this requires at the conceptual and at the technical level. Among the former she counts the exclusion of details that should not appear in the index ("unsought terms"), because they will certainly never be a search target and, as "false friends", would swamp the searcher with trivia; a decision of this kind can only be made with sound subject knowledge. Everything, on the other hand, that could be a sensible search target now or in the future (!) and is "sufficiently informative" deserves a heading in the index. Instructive examples also show how a text word becomes useless as a (poor) index heading once it is torn out of the interpretive context in which it was embedded in the text. Likewise, the ambiguity that clings to almost every natural-language word must be resolved; otherwise the searcher will all too often be led astray when looking things up, and the more often, the larger such an uncleaned store has already grown.
    Access to the information store must also be provided from related concepts, for searchers readily let their question lead them to broader and, above all, to more specific concepts; "see also" references serve this purpose. Access must equally be provided from different but synonymous expressions by means of "see" references, for an inquirer may well have begun the search with one of these synonyms and would otherwise come away empty-handed. Moreover, much for which a searcher has a heading ready to hand occurs in a text only in wordy circumlocution and paraphrase ("Terms that may not appear in the text but are likely to be sought by index users"), i.e. practically unfindable amid such varied modes of expression. All of this should be expressed lexically, in familiar terminology, for that is also how the question will be posed. Here runs the boundary between "concept indexing" and mere "word indexing", the latter contenting itself with presenting uninterpreted text words. Not only is this boundary widely unknown; its existence is sometimes even denied, although a word usually expresses many concepts and a concept is usually expressed by many different words and sentences. An author can, and often must, content himself with hints in his texts, because a reader or listener can gather what is meant from the context and does not want to be bothered with excessive explicitness (spoon feeding), which would be felt as an imputation of ignorance. For retrieval, however, what is meant must be expressed explicitly.
    The book makes clear how much extratextual and background knowledge must be brought to bear for a good indexing result, on the basis of expert and careful interpretation ("The indexer must understand the meaning of a text"). All of this makes good indexing appear not only as a professional service but also as an art. As a foundation for these steps a thesaurus is recommended, with a well-structured network of relationships, adapted to the particular book text. Only rarely, however, will thesauri already available elsewhere be usable; a pointer to the relevant literature on thesaurus construction would have been useful here.
  8. Hedden, H.: ¬The accidental taxonomist (2012) 0.01
    0.0052174474 = product of:
      0.015652342 = sum of:
        0.015652342 = weight(_text_:science in 2915) [ClassicSimilarity], result of:
          0.015652342 = score(doc=2915,freq=2.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.11641272 = fieldWeight in 2915, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.03125 = fieldNorm(doc=2915)
      0.33333334 = coord(1/3)
    
    Abstract
    "Clearly details the conceptual and practical notions of controlled vocabularies. . provides a crash course for newcomers and offers experienced practitioners a common frame of reference. A valuable book." - Christine Connors, TriviumRLG LLC The Accidental Taxonomist is the most comprehensive guide available to the art and science of building information taxonomies. Heather Hedden-one of today's leading writers, instructors, and consultants on indexing and taxonomy topics-walks readers through the process, displaying her trademark ability to present highly technical information in straightforward, comprehensible English. Drawing on numerous real-world examples, Hedden explains how to create terms and relationships, select taxonomy management software, design taxonomies for human versus automated indexing, manage enterprise taxonomy projects, and adapt taxonomies to various user interfaces. The result is a practical and essential guide for information professionals who need to effectively create or manage taxonomies, controlled vocabularies, and thesauri. "A wealth of descriptive reference content is balanced with expert guidance. . Open The Accidental Taxonomist to begin the learning process or to refresh your understanding of the depth and breadth of this demanding discipline." - Lynda Moulton, Principal Consultant, LWM Technology Services "From the novice taxonomist to the experienced professional, all will find helpful, practical advice in The Accidental Taxonomist." - Trish Yancey, TCOO, Synaptica, LLC "This book squarely addresses the growing demand for and interest in taxonomy. ...Hedden brings a variety of background experience, including not only taxonomy construction but also abstracting and content categorization and creating back-of-the-book indexes. These experiences serve her well by building a broad perspective on the similarities as well as real differences between often overlapping types of work." - Marjorie M. K. 
Hlava, President and Chairman, Access Innovations, Inc., and Chair, SLA Taxonomy Division
  9. Chu, H.: Information representation and retrieval in the digital age (2010) 0.00
    0.0036892924 = product of:
      0.011067877 = sum of:
        0.011067877 = weight(_text_:science in 92) [ClassicSimilarity], result of:
          0.011067877 = score(doc=92,freq=4.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.08231623 = fieldWeight in 92, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.015625 = fieldNorm(doc=92)
      0.33333334 = coord(1/3)
    
    Footnote
    Rez. in: JASIST 56(2005) no.2, S.215-216 (A. Heath): "What is small, thoroughly organized, and easy to understand? Well, it's Heting Chu's latest book on information retrieval. A very welcome release, this small literary addition to the field (only 248 pages) contains a concise and well-organized discussion of every major topic in information retrieval. The often-complex field of information retrieval is presented from its origin in the early 1950s to the present day. The organization of this text is top-notch, thus making this an easy read for even the novice. Unlike other titles in this area, Chu's user-friendly style of writing is done on purpose to properly introduce newcomers to the field in a less intimidating way. As stated by the author in the Preface, the purpose of the book is to "present a systematic, thorough yet nontechnical view of the field by using plain language to explain complex subjects." Chu has definitely struck up the right combination of ingredients. In a field so broad and complex, a well-organized presentation of topics that don't trip on themselves is essential. The use of plain language where possible is also a good choice for this topic because it allows one to absorb topics that are, by nature, not as easy to grasp. For instance, Chapters 6 and 7, which cover retrieval approaches and techniques, often a painstaking topic for many students and teachers, are deftly handled with the use of tables that can be used to compare and contrast the various models discussed. I particularly loved Chu's use of Koll's 2000 article from the Bulletin of the American Society for Information Science to explain subject searching at the beginning of Chapter 6, which discusses the differences between browsing and searching. The Koll article uses the task of finding a needle in a haystack as an analogy.
    Further review in: nfd 55(2004) H.4, S.252 (D. Lewandowski): "There is no shortage of books on information retrieval, and several titles are available in German as well. Nevertheless, a new (English-language) book on the subject shall be reviewed here. It stands out for its brevity (only about 230 pages of text) and its clarity, and is thus aimed primarily at students in their first semesters. Heting Chu has taught since 1994 at the Palmer School of Library and Information Science of Long Island University, New York. That the author has been able to gather ample experience in conveying this material in her information retrieval courses shows clearly in the book. It is written in clear and comprehensible language and introduces the fundamentals of knowledge representation and of information retrieval. The textbook treats these topics as one whole and thereby goes beyond the scope of similar books, which are usually confined to retrieval. The book is divided into twelve chapters. The first chapter gives an overview of the topics to be covered and introduces the reader in a simple way to the basic concepts and the history of IRR. Besides a brief chronological account of the development of IRR systems, four pioneers of the field are honoured: Mortimer Taube, Hans Peter Luhn, Calvin N. Mooers and Gerard Salton. This lends a human dimension to material that students sometimes find dry. The second and third chapters are devoted to knowledge representation: first the basic approaches such as indexing, classification and abstracting, then representation by means of metadata, with particular attention to newer approaches such as Dublin Core and RDF. Further subsections deal with the representation of full texts and of multimedia information.
    The role of language in IRR is treated in a chapter of its own, briefly explaining the various forms of controlled vocabulary and the essential features that distinguish it from natural language. The suitability of the two forms of representation for different IRR purposes is discussed under several aspects.
  10. Batley, S.: Classification in theory and practice (2005) 0.00
    0.0036892924 = product of:
      0.011067877 = sum of:
        0.011067877 = weight(_text_:science in 1170) [ClassicSimilarity], result of:
          0.011067877 = score(doc=1170,freq=4.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.08231623 = fieldWeight in 1170, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.015625 = fieldNorm(doc=1170)
      0.33333334 = coord(1/3)
    
    Abstract
    This book examines a core topic in traditional librarianship: classification. Classification has often been treated as a sub-set of cataloguing and indexing with relatively few basic textbooks concentrating solely on the theory and practice of classifying resources. This book attempts to redress the balance somewhat. The aim is to demystify a complex subject, by providing a sound theoretical underpinning, together with practical advice and promotion of practical skills. The text is arranged into five chapters: Chapter 1: Classification in theory and practice. This chapter explores theories of classification in broad terms and then focuses on the basic principles of library classification, introducing readers to technical terminology and different types of classification scheme. The next two chapters examine individual classification schemes in depth. Each scheme is explained using frequent examples to illustrate basic features. Working through the exercises provided should be enjoyable and will enable readers to gain practical skills in using the three most widely used general library classification schemes: Dewey Decimal Classification, Library of Congress Classification and Universal Decimal Classification. Chapter 2: Classification schemes for general collections. Dewey Decimal and Library of Congress classifications are the most useful and popular schemes for use in general libraries. The background, coverage and structure of each scheme are examined in detail in this chapter. Features of the schemes and their application are illustrated with examples. Chapter 3: Classification schemes for specialist collections. Dewey Decimal and Library of Congress may not provide sufficient depth of classification for specialist collections. In this chapter, classification schemes that cater to specialist needs are examined. 
 Universal Decimal Classification is superficially very much like Dewey Decimal, but possesses features that make it a good choice for specialist libraries or special collections within general libraries. It is recognised that general schemes, no matter how deep their coverage, may not meet the classification needs of some collections. An answer may be to create a special classification scheme and this process is examined in detail here. Chapter 4: Classifying electronic resources. Classification has been reborn in recent years with an increasing need to organise digital information resources. A lot of work in this area has been conducted within the computer science discipline, but uses basic principles of classification and thesaurus construction. This chapter takes a broad view of theoretical and practical issues involved in creating classifications for digital resources by examining subject trees, taxonomies and ontologies. Chapter 5: Summary. This chapter provides a brief overview of concepts explored in depth in previous chapters. Development of practical skills is emphasised throughout the text. It is only through using classification schemes that a deep understanding of their structure and unique features can be gained. Although all the major schemes covered in the text are available on the Web, it is recommended that hard-copy versions are used by those wishing to become acquainted with their overall structure. Recommended readings are supplied at the end of each chapter and provide useful sources of additional information and detail. Classification demands precision and the application of analytical skills; working carefully through the examples and the practical exercises should help readers to improve these faculties. Anyone who enjoys cryptic crosswords should recognise a parallel: classification often involves taking the meaning of something apart and then reassembling it in a different way.
    Footnote
    The heart of the book lies in its exceptionally clear and well illustrated explanation of each of the classification schemes. These are presented comprehensively, but also in gratifying detail, down to the meaning of the various enigmatic notes and notations, such as "class here" or "class elsewhere" notes, each simply explained, as if a teacher were standing over your shoulder leading you through it. Such attention at such a fine level may seem superfluous or obvious to a seasoned practitioner, but it is in dealing with such enigmatic details that we find students getting discouraged and confused. That is why I think this would be an excellent text, especially as a book to hold in one hand with the schedules themselves in the other. While the examples throughout and the practical exercises at the end of each chapter are slanted towards British topics, they are aptly chosen and should present no problem of understanding to a student anywhere. As mentioned, this is an unabashedly practical book, focusing on classification as it has been and is presently applied in libraries for maintaining a "useful book order." It aims to develop those skills that would allow a student to learn how it is done from a procedural rather than a critical perspective. At times, though, one wishes for a bit more of a critical approach, one that would help a student puzzle through some of the ambiguities and issues that the practice of classification in an increasingly global rather than local environment entails. While there is something to be said for a strong foundation in existing practice (to understand from whence it all came), the author essentially accepts the status quo, and ventures almost timidly into any critique of the content and practice of existing classification schemes. This lack of a critical analysis manifests itself in several ways:
    - The content of the classification schemes as described in this book is treated as fundamentally "correct" or at least "given." This is not to say the author doesn't recognize anomalies and shortcomings, but that her approach is to work with what is there. Where there are logical flaws in the knowledge representation structures, the author takes the approach that there are always tradeoffs, and one must simply do the best one can. This is certainly true for most people working in libraries, where the choice of scheme is not controlled by the classifier, and it is a wonderful skill indeed to be able to organize creatively and carefully despite imperfect systems. The approach is less convincing, however, when it is also applied to emerging or newly developed schemes, such as those proposed for organizing electronic resources. Here, the author could have been a bit braver in at least encouraging less normative approaches.
    - There is also a lingering notion that classification is a precise science. For example, the author states (p. 13): Hospitality is the ability to accommodate new topics and concepts in their correct place in the schedules ... Perfect hospitality would mean that every new subject could be accommodated in the most appropriate place in the schedules. In practice, schemes do manage to fit new subjects in, but not necessarily in their most appropriate place. It would have been helpful to acknowledge that for many complex subjects there is no one appropriate place. The author touches on this dilemma, but in passing, and not usually when she is providing practical pointers.
  11. Broughton, V.: Essential classification (2004) 0.00
    Footnote
    Essential Classification is also an exercise book. Indeed, it contains a number of practical exercises and activities in every chapter, along with suggested answers. Unfortunately, the answers are too often provided without the justifications and explanations that students would no doubt demand. The author has taken great care to explain all technical terms in her text, but formal definitions are also gathered in an extensive 172-term Glossary; appropriately, these terms appear in bold type the first time they are used in the text. A short, very short, annotated bibliography of standard classification textbooks and of manuals for the use of major classification schemes is provided. A detailed 11-page index completes the set of learning aids, which will be useful to an audience of students in their effort to grasp the basic concepts of the theory and the practice of document classification in a traditional environment. Essential Classification is a fine textbook. However, this reviewer deplores the fact that it presents only a very "traditional" view of classification, without much reference to newer environments such as the Internet, where classification also manifests itself in various forms. In Essential Classification, books are always used as examples, and we have to take the author's word that traditional classification practices and tools can also be applied to other types of documents and elsewhere than in the traditional library. Vanda Broughton writes, for example, that "Subject headings can't be used for physical arrangement" (p. 101), but this is not entirely true. Subject headings can be used for physical arrangement of vertical files, for example, with each folder bearing a simple or complex heading which is then used for internal organization. And if it is true that subject headings cannot be reproduced on the spine of [physical] books (p. 93), the situation is certainly different on the World Wide Web, where subject headings as metadata can be most useful in ordering a collection of hot links. The emphasis is also on the traditional paper-based, rather than on the electronic, version of classification schemes, with excellent justifications of course. The reality is, however, that supporting organizations (LC, OCLC, etc.) are now providing great quality services online, and that updates are now available only in electronic format and no longer on paper. E-based versions of classification schemes could be safely ignored in a theoretical text, but they have to be described and explained in a textbook published in 2005. One last comment: Professor Broughton tends to use the same term, "classification", to represent both the process (as in classification is grouping) and the tool (as in constructing a classification, using a classification, etc.). Even in the Glossary, where classification is first well defined as a process, and classification scheme as "a set of classes ...", the definition of classification scheme continues: "the classification consists of a vocabulary (...) and syntax ..." (p. 296-297). Such an ambiguous use of the term classification seems unfortunate and unnecessarily confusing in an otherwise very good basic textbook on categorization of concepts and subjects, document organization and subject representation.
  12. Broughton, V.: Essential thesaurus construction (2006) 0.00
    Abstract
    Many information professionals working in small units today fail to find published tools for subject-based organization that are appropriate to their local needs, whether they are archivists, special librarians, information officers, or knowledge or content managers. Large established standards for document description and organization are too unwieldy, unnecessarily detailed, or too expensive to install and maintain. In other cases the available systems are insufficient for a specialist environment, or don't bring things together in a helpful way. A purpose-built, in-house system would seem to be the answer, but too often the skills necessary to create one are lacking. This practical text examines the criteria relevant to the selection of a subject-management system, describes the characteristics of some common types of subject tool, and takes the novice step by step through the process of creating a system for a specialist environment. The methodology employed is a standard technique for the building of a thesaurus that incidentally creates a compatible classification or taxonomy, both of which may be used in a variety of ways for document or information management. Key areas covered are: what a thesaurus is; tools for subject access and retrieval; what a thesaurus is used for; why use a thesaurus; examples of thesauri; the structure of a thesaurus; thesaural relationships; practical thesaurus construction; the vocabulary of the thesaurus; building the systematic structure; conversion to alphabetic format; forms of entry in the thesaurus; maintaining the thesaurus; thesaurus software; and the wider environment. Essential for the practising information professional, this guide is also valuable for students of library and information science.
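    The core structures the abstract lists, thesaural relationships (broader, narrower and related terms) and conversion from systematic to alphabetic format, can be sketched in a few lines of code. This is a minimal illustration of the general technique, not the book's own method; the terms and the in-memory model are invented for the example.

```python
# A tiny in-memory thesaurus: terms linked by broader-term (BT), narrower-term
# (NT) and related-term (RT) relationships, with a conversion from the
# systematic structure to an alphabetic display as in a printed thesaurus.
class Thesaurus:
    def __init__(self):
        self.broader = {}   # term -> set of broader terms (BT)
        self.related = {}   # term -> set of related terms (RT)

    def add_bt(self, term, broader_term):
        """Record that broader_term is a BT of term (and so term is its NT)."""
        self.broader.setdefault(term, set()).add(broader_term)
        self.broader.setdefault(broader_term, set())

    def add_rt(self, a, b):
        """Record a symmetric related-term (RT) link."""
        self.related.setdefault(a, set()).add(b)
        self.related.setdefault(b, set()).add(a)

    def narrower(self, term):
        """Derive NT references by inverting the BT relationships."""
        return sorted(t for t, bts in self.broader.items() if term in bts)

    def alphabetic_display(self):
        """Convert the systematic structure to alphabetic entries:
        each term listed A-Z with its BT, NT and RT references."""
        entries = []
        for term in sorted(self.broader):
            entry = [term]
            for label, refs in (("BT", sorted(self.broader[term])),
                                ("NT", self.narrower(term)),
                                ("RT", sorted(self.related.get(term, ())))):
                entry += [f"  {label} {r}" for r in refs]
            entries.append("\n".join(entry))
        return "\n\n".join(entries)

th = Thesaurus()
th.add_bt("dogs", "mammals")
th.add_bt("cats", "mammals")
th.add_rt("dogs", "dog training")
print(th.alphabetic_display())
```

    Storing only the BT links and deriving NT by inversion keeps the two hierarchical relationships consistent automatically, which is the usual reason thesaurus software records one direction and generates the reciprocal.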
