Search (166 results, page 1 of 9)

  • × theme_ss:"Grundlagen u. Einführungen: Allgemeine Literatur"
  1. Kaushik, S.K.: DDC 22 : a practical approach (2004) 0.04
    0.03596279 = product of:
      0.05394418 = sum of:
        0.017254427 = weight(_text_:in in 1842) [ClassicSimilarity], result of:
          0.017254427 = score(doc=1842,freq=34.0), product of:
            0.069613084 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.051176514 = queryNorm
            0.24786183 = fieldWeight in 1842, product of:
              5.8309517 = tf(freq=34.0), with freq of:
                34.0 = termFreq=34.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=1842)
        0.036689755 = product of:
          0.07337951 = sum of:
            0.07337951 = weight(_text_:22 in 1842) [ClassicSimilarity], result of:
              0.07337951 = score(doc=1842,freq=14.0), product of:
                0.17921144 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051176514 = queryNorm
                0.4094577 = fieldWeight in 1842, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1842)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
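    The tree above is Lucene's ClassicSimilarity explain() output. As a sketch, and assuming the engine follows the standard ClassicSimilarity definitions (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), score = queryWeight * fieldWeight, scaled by the coord factors), the top-level score 0.03596279 can be re-derived from the constants shown:

```python
import math

# Re-derivation of result 1's score from the explain() tree above.
# The constants (queryNorm, fieldNorms, term/doc frequencies) are read
# directly from the explanation; the formulas are the standard
# ClassicSimilarity definitions, assumed to match this engine's version.
MAX_DOCS = 44218
QUERY_NORM = 0.051176514

def idf(doc_freq: int) -> float:
    # idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

def term_score(freq: float, doc_freq: int, field_norm: float) -> float:
    query_weight = idf(doc_freq) * QUERY_NORM                    # idf * queryNorm
    field_weight = math.sqrt(freq) * idf(doc_freq) * field_norm  # tf * idf * fieldNorm
    return query_weight * field_weight

s_in = term_score(freq=34.0, doc_freq=30841, field_norm=0.03125)  # weight(_text_:in)
s_22 = term_score(freq=14.0, doc_freq=3622, field_norm=0.03125)   # weight(_text_:22)

# coord(1/2) applies to the inner "22" clause, coord(2/3) to the outer sum
total = (s_in + s_22 * 0.5) * (2.0 / 3.0)
```

Each intermediate value (queryWeight 0.069613084, fieldWeight 0.24786183, and so on) falls out of the same two helper functions, matching the tree line by line.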
    
    Abstract
    A system of library classification that flashed across the inquiring mind of young Melvil Louis Kossuth Dewey (known as Melvil Dewey) in 1873 is still the most popular classification scheme. Modern library classification begins with the Dewey Decimal Classification (DDC), which Melvil Dewey devised in 1876. DDC has to its credit 128 years of boundless success: it is taught as a practical subject throughout the world and is used in the majority of libraries in about 150 countries. It is as a result of continuous revision that the 22nd edition of DDC was published in July 2003; no other classification scheme has published so many editions. Some welcome changes have been made in DDC 22. To reduce the Christian bias in 200 Religion, the numbers 201 to 209 have been devoted to specific aspects of religion; in previous editions these numbers were devoted to Christianity. To enhance the classifier's efficiency, Table 7 has been removed from DDC 22, and provision for adding groups of persons is made by direct use of notation already available in the schedules and of notation -08 from Table 1 Standard Subdivisions. The present book is an attempt to explain, with suitable examples, the salient provisions of DDC 22. The book is written in simple language so that students may not face any difficulty in understanding what is being explained, and the examples are worked through in a step-by-step procedure. It is hoped that this book will prove of great help and use to library professionals in general and to library and information science students in particular.
    Content
    1. Introduction to DDC 22 2. Major changes in DDC 22 3. Introduction to the schedules 4. Use of Table 1 : Standard Subdivisions 5. Use of Table 2 : Areas 6. Use of Table 3 : Subdivisions for the arts, for individual literatures, for specific literary forms 7. Use of Table 4 : Subdivisions of individual languages and language families 8. Use of Table 5 : Ethnic and national groups 9. Use of Table 6 : Languages 10. Treatment of groups of persons
    Object
    DDC-22
  2. Langridge, D.W.: Classification: its kinds, systems, elements and application (1992) 0.03
    0.034039624 = product of:
      0.051059436 = sum of:
        0.011836439 = weight(_text_:in in 770) [ClassicSimilarity], result of:
          0.011836439 = score(doc=770,freq=4.0), product of:
            0.069613084 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.051176514 = queryNorm
            0.17003182 = fieldWeight in 770, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=770)
        0.039222997 = product of:
          0.07844599 = sum of:
            0.07844599 = weight(_text_:22 in 770) [ClassicSimilarity], result of:
              0.07844599 = score(doc=770,freq=4.0), product of:
                0.17921144 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051176514 = queryNorm
                0.4377287 = fieldWeight in 770, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=770)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    26. 7.2002 14:01:22
    Footnote
    Rez. in: Journal of documentation 49(1993) no.1, S.68-70. (A. Maltby); Journal of librarianship and information science 1993, S.108-109 (A.G. Curwen); Herald of library science 33(1994) nos.1/2, S.85 (P.N. Kaula); Knowledge organization 22(1995) no.1, S.45 (M.P. Satija)
    Series
    Topics in library and information studies
  3. Dahlberg, I.: Grundlagen universaler Wissensordnung : Probleme und Möglichkeiten eines universalen Klassifikationssystems des Wissens (1974) 0.03
    0.03008706 = product of:
      0.045130588 = sum of:
        0.010462033 = weight(_text_:in in 127) [ClassicSimilarity], result of:
          0.010462033 = score(doc=127,freq=2.0), product of:
            0.069613084 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.051176514 = queryNorm
            0.15028831 = fieldWeight in 127, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=127)
        0.034668557 = product of:
          0.069337115 = sum of:
            0.069337115 = weight(_text_:22 in 127) [ClassicSimilarity], result of:
              0.069337115 = score(doc=127,freq=2.0), product of:
                0.17921144 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051176514 = queryNorm
                0.38690117 = fieldWeight in 127, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=127)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Footnote
    Also a doctoral dissertation, Univ. Düsseldorf. - Rez. in: ZfBB. 22(1975) S.53-57 (H.-A. Koch)
  4. Marcella, R.; Newton, R.: ¬A new manual of classification (1994) 0.03
    0.03008706 = product of:
      0.045130588 = sum of:
        0.010462033 = weight(_text_:in in 885) [ClassicSimilarity], result of:
          0.010462033 = score(doc=885,freq=2.0), product of:
            0.069613084 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.051176514 = queryNorm
            0.15028831 = fieldWeight in 885, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=885)
        0.034668557 = product of:
          0.069337115 = sum of:
            0.069337115 = weight(_text_:22 in 885) [ClassicSimilarity], result of:
              0.069337115 = score(doc=885,freq=2.0), product of:
                0.17921144 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051176514 = queryNorm
                0.38690117 = fieldWeight in 885, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=885)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Footnote
    Rez. in: Knowledge organization 22(1995) no.3/4, S.178-179 (M.P. Satija); Journal of documentation 51(1995) no.4, S.437-439 (R. Brunt)
  5. Scott, M.L.: Dewey Decimal Classification, 22nd edition : a study manual and number building guide (2005) 0.03
    0.03008706 = product of:
      0.045130588 = sum of:
        0.010462033 = weight(_text_:in in 4594) [ClassicSimilarity], result of:
          0.010462033 = score(doc=4594,freq=2.0), product of:
            0.069613084 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.051176514 = queryNorm
            0.15028831 = fieldWeight in 4594, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=4594)
        0.034668557 = product of:
          0.069337115 = sum of:
            0.069337115 = weight(_text_:22 in 4594) [ClassicSimilarity], result of:
              0.069337115 = score(doc=4594,freq=2.0), product of:
                0.17921144 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051176514 = queryNorm
                0.38690117 = fieldWeight in 4594, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4594)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This work has been fully updated for the 22nd edition of DDC and can be used as a reference for the application of Dewey coding or as a course text on the Dewey system.
    Object
    DDC-22
  6. Read, J.: Cataloguing without tears : managing knowledge in the information society (2003) 0.02
    0.024114786 = product of:
      0.036172178 = sum of:
        0.011071975 = weight(_text_:in in 4509) [ClassicSimilarity], result of:
          0.011071975 = score(doc=4509,freq=14.0), product of:
            0.069613084 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.051176514 = queryNorm
            0.15905021 = fieldWeight in 4509, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=4509)
        0.025100201 = product of:
          0.050200403 = sum of:
            0.050200403 = weight(_text_:education in 4509) [ClassicSimilarity], result of:
              0.050200403 = score(doc=4509,freq=2.0), product of:
                0.24110512 = queryWeight, product of:
                  4.7112455 = idf(docFreq=1080, maxDocs=44218)
                  0.051176514 = queryNorm
                0.2082096 = fieldWeight in 4509, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.7112455 = idf(docFreq=1080, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4509)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This is a practical and authoritative guide to cataloguing for librarians, information scientists and information managers. It is intended to be used in conjunction with an internationally recognised standard to show, firstly, how cataloguing underpins all the other activities of an information service and, secondly, how to apply best practice in a variety of different situations.
    Content
    Key Features - Relates theory to practice and is written in an easy-to-read style - Includes guidance on subject cataloguing as well as descriptive cataloguing - Covers the use of ISBD and Dublin Core in descriptive cataloguing, rather than being tied exclusively to using AACR - Covers the principles of subject cataloguing, a topic which most non-librarians believe to be an integral part of cataloguing - Not only does the book describe the hows of cataloguing but goes a stage further by explaining why one might want to catalogue a particular item in a certain way The Author Jane Read has over 13 years' experience in academic libraries. She works as a cataloguing officer for The Higher Education Academy. Readership Librarians and information professionals responsible for cataloguing materials (of any format). Knowledge managers will also find the book of interest. Contents Why bother to catalogue - what is a catalogue for, anticipating user needs, convincing your boss it is important What to catalogue - writing a cataloguing policy, what a catalogue record contains, the politics of cataloguing Who should catalogue - how long does it take to catalogue a book, skill sets needed, appropriate levels of staffing, organising time How to catalogue and not reinvent the wheel - choosing a records management system, international standards (AACR/MARC, ISBD, Dublin Core), subject cataloguing, and authority control Is it a book, is it a journal? - distinguishing between formats, the 'awkward squad', loose-leaf files, websites and skeletons What's a strange attractor?
Cataloguing subjects you know nothing about - finding the right subject headings, verifying your information Il ne lit pas le français - unknown languages and how to deal with them, what language is it, transcribing non-Roman alphabets, understanding the subject Special cases - rare books and archival collections, children's books, electronic media Resources for cataloguers - reference books, online discussion lists, conferences, bibliography
  7. Haller, K.; Popst, H.: Katalogisierung nach den RAK-WB : eine Einführung in die Regeln für die alphabetische Katalogisierung in wissenschaftlichen Bibliotheken (2003) 0.02
    0.021419886 = product of:
      0.032129828 = sum of:
        0.014795548 = weight(_text_:in in 1811) [ClassicSimilarity], result of:
          0.014795548 = score(doc=1811,freq=16.0), product of:
            0.069613084 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.051176514 = queryNorm
            0.21253976 = fieldWeight in 1811, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1811)
        0.017334279 = product of:
          0.034668557 = sum of:
            0.034668557 = weight(_text_:22 in 1811) [ClassicSimilarity], result of:
              0.034668557 = score(doc=1811,freq=2.0), product of:
                0.17921144 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051176514 = queryNorm
                0.19345059 = fieldWeight in 1811, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1811)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This introduction conveys all the essential knowledge of the Regeln für die alphabetische Katalogisierung in wissenschaftlichen Bibliotheken (RAK-WB), the authoritative German code for alphabetical cataloguing. The new, sixth edition incorporates the current state of the rules, above all concerning the establishment of personal name headings. After an introductory chapter on the function, the outer forms, and the fundamental concepts of the alphabetical catalogue, the various types of entries and their formal presentation are described. The emphasis lies on the provisions for main and added entries under personal names, titles, and corporate names, and on the form of their headings. Besides the basic rules for cataloguing single works in one or more volumes, collections, and collective works, the special rules for conference proceedings, illustrated books, picture books, art books, exhibition catalogues, academic theses, laws, commentaries, loose-leaf editions, school books, reports, and standards are also treated. The provisions for items with uniform titles and with added or parallel titles are presented in as much detail as the difficult cases of continuing collective works with subseries. All rules are illustrated with examples: the items are usually reproduced with their title page and set against the complete solutions and explanatory texts, and the relevant paragraphs of the code are given in parentheses in the text. The requirements of machine-readable cataloguing in online databases are addressed, and the field coding of catalogue data is presented in a chapter of its own. Sample records in the Maschinelles Austauschformat für Bibliotheken (MAB2) are intended to help readers understand the basic concepts of fielded data entry.
The textbook Katalogisierung nach den RAK-WB is an indispensable foundation for students at library schools and prospective librarians in practical training, as well as for self-study and for the continuing education of librarians already in the profession.
    Date
    17. 6.2015 15:22:06
  8. Chowdhury, G.G.: Introduction to modern information retrieval (1999) 0.02
    0.021115731 = product of:
      0.031673595 = sum of:
        0.010872464 = weight(_text_:in in 4902) [ClassicSimilarity], result of:
          0.010872464 = score(doc=4902,freq=6.0), product of:
            0.069613084 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.051176514 = queryNorm
            0.1561842 = fieldWeight in 4902, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=4902)
        0.020801133 = product of:
          0.041602265 = sum of:
            0.041602265 = weight(_text_:22 in 4902) [ClassicSimilarity], result of:
              0.041602265 = score(doc=4902,freq=2.0), product of:
                0.17921144 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051176514 = queryNorm
                0.23214069 = fieldWeight in 4902, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4902)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Content
    Contains the chapters: 1. Basic concepts of information retrieval systems, 2. Database technology, 3. Bibliographic formats, 4. Subject analysis and representation, 5. Automatic indexing and file organization, 6. Vocabulary control, 7. Abstracts and abstracting, 8. Searching and retrieval, 9. Users of information retrieval, 10. Evaluation of information retrieval systems, 11. Evaluation experiments, 12. Online information retrieval, 13. CD-ROM information retrieval, 14. Trends in CD-ROM and online information retrieval, 15. Multimedia information retrieval, 16. Hypertext and hypermedia systems, 17. Intelligent information retrieval, 18. Natural language processing and information retrieval, 19. Natural language interfaces, 20. Natural language text processing and retrieval systems, 21. Problems and prospects of natural language processing systems, 22. The Internet and information retrieval, 23. Trends in information retrieval.
    Footnote
    Rez. in: Program 34(2000) no.2, S.231-232 (B.C. Vickery)
  9. Chowdhury, G.G.; Chowdhury, S.: Introduction to digital libraries (2003) 0.02
    0.01854114 = product of:
      0.02781171 = sum of:
        0.012281752 = weight(_text_:in in 6119) [ClassicSimilarity], result of:
          0.012281752 = score(doc=6119,freq=90.0), product of:
            0.069613084 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.051176514 = queryNorm
            0.1764288 = fieldWeight in 6119, product of:
              9.486833 = tf(freq=90.0), with freq of:
                90.0 = termFreq=90.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.013671875 = fieldNorm(doc=6119)
        0.015529957 = product of:
          0.031059913 = sum of:
            0.031059913 = weight(_text_:education in 6119) [ClassicSimilarity], result of:
              0.031059913 = score(doc=6119,freq=4.0), product of:
                0.24110512 = queryWeight, product of:
                  4.7112455 = idf(docFreq=1080, maxDocs=44218)
                  0.051176514 = queryNorm
                0.12882312 = fieldWeight in 6119, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.7112455 = idf(docFreq=1080, maxDocs=44218)
                  0.013671875 = fieldNorm(doc=6119)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Footnote
    Rez. in: JASIST 55(2004) no.2, S.178-179 (M.-Y. Kan): "In their latest book, Chowdhury and Chowdhury have written an introductory text on digital libraries, primarily targeting "students researching digital libraries as part of information and library science, as well as computer science, courses" (p. xiv). It is an ambitious work that surveys many of the broad topics in digital libraries (DL) while highlighting completed and ongoing DL research in many parts of the world. With the revamping of Library and Information Science (LIS) curricula to focus on information technology, many LIS schools are now teaching DL topics either as an independent course or as part of an existing one. Instructors of these courses have in many cases used supplementary texts and compiled readers from journals and conference materials, possibly because they feel that a suitable textbook has yet to be written. A solid, principal textbook for digital libraries is sorely needed to provide a critical, evaluative synthesis of DL literature. It is with this in mind that I believe Introduction to Digital Libraries was written. An introductory text on any cross-disciplinary topic is bound to have conflicting limitations and expectations from its adherents who come from different backgrounds. This is the case in the development of the DL curriculum, in which both LIS and computer science schools are actively involved. Compiling a useful secondary source in such cross-disciplinary areas is challenging; it requires that jargon from each contributing field be carefully explained and respected, while providing thought-provoking material to broaden student perspectives. In my view, the book's breadth certainly encompasses the whole of what an introduction to DL needs, but it is hampered by a lack of focus from catering to such disparate needs.
For example, LIS students will need to know which key aspects differentiate digital library metadata from traditional metadata, while computer science students will need to learn the basics of vector space and probabilistic information retrieval. However, the text does not give enough detail on either subject, and thus even introductory students will need to go beyond the book and consult primary sources. In this respect, the book's 307 pages of content are too short to do justice to such a broad field of study.
    This book covers all of the primary areas in the DL curriculum as suggested by T. Saracevic and M. Dalbello's (2001) and A. Spink and C. Cool's (1999) D-Lib articles on DL education. In fact, the book's coverage is quite broad; it includes a superset of recommended topics, offering a chapter on professional issues (recommended in Spink and Cool) as well as three chapters devoted to DL research. The book comes with a comprehensive list of references and an index, allowing readers to easily locate a specific topic or research project of interest. Each chapter also begins with a short outline of the chapter. As an additional plus, the book is quite heavily cross-referenced, allowing easy navigation across topics. The only drawback with regard to supplementary materials is that it lacks a glossary that would be a helpful reference for students needing a guide to DL terminology. The book's organization is well thought out and each chapter stands independently of the others, facilitating instruction by parts. While not officially delineated into three parts, the book's fifteen chapters are logically organized as such. Chapters 2 and 3 form the first part, which surveys various DLs and DL research initiatives. The second and core part of the book examines the workings of a DL along various dimensions, from its design to its eventual implementation and deployment. The third part brings together extended topics that relate to a deployed DL: its preservation, evaluation, and relationship to the larger social context. Chapter 1 defines digital libraries and discusses the scope of the materials covered in the book. The authors posit that the meaning of digital library is best explained by its sample characteristics rather than by definition, noting that it has largely been shaped by the melding of the research and information professions.
This reveals two primary facets of the DL: an "emphasis on digital content" coming from an engineering and computer science perspective as well as an "emphasis on services" coming from library and information professionals (pp. 4-5). The book's organization mirrors this dichotomy, focusing on the core aspects of content in the earlier chapters and returning to the service perspective in later chapters.
    Chapter 2 examines the variety and breadth of DL implementations and collections through a well-balanced selection of 20 DLs. The authors make a useful classification of the various types of DLs into seven categories and give a brief synopsis of two or three examples from each category. These categories include historical, national, and university DLs, as well as DLs for special materials and research. Chapter 3 examines research efforts in digital libraries, concentrating on the three eLib initiatives in the UK and the two Digital Libraries Initiatives in the United States. The chapter also offers some details on joint research between the UK and the United States (the NSF/JISC jointly funded programs), Europe, Canada, Australia, and New Zealand. While both of these chapters do an admirable job of surveying the DL landscape, the breadth and variety of materials need to be encapsulated in a coherent summary that illustrates the commonality of their approaches and the key differences that have been driven by aspects of their collections and audience. Unfortunately, this summary aspect is lacking here and elsewhere in the book. Chapter 2 does an admirable job of DL selection that showcases the variety of existing DLs, but I feel that Chapter 3's selection of research projects could be improved. The chapter's emphasis is clearly on UK-based research, devoting nine pages to it compared to six for EU-funded projects. While this emphasis could be favorable for UK courses, it hampers the chances of the text's adoption in other courses internationally. Chapter 4 begins the core part of the book by examining the DL from a design perspective. As a well-designed DL encompasses various practical and theoretical considerations, the chapter introduces many of the concepts that are elaborated on in later chapters. The Kahn/Wilensky and Lagoze/Fielding architectures are summarized in bullet points, and specific aspects of these frameworks are elaborated on.
These include the choice between a federated or centralized search architecture (referencing Virginia Tech's NDLTD and Waikato's Greenstone) and level of interoperability (discussing UNIMARC and metadata harvesting). Special attention is paid to hybrid library design, with references to UK projects. A useful summary of recommended standards for DL design concludes the chapter.
    Chapters 5 through 9 discuss the basic facets of DL implementation and use. Chapter 5, entitled "Collection management," distinguishes collection management from collection development. The authors give source selection criteria distilled from Clayton and Gorman. The text then discusses the characteristics of several digital sources, including CD-ROMs, electronic books, electronic journals, and databases, and elaborates on the distribution and pricing issues involved in each. However, the following chapter, on digitization, is quite disappointing; I feel that its discussion is shallow and short, and offers only a glimpse of the difficulties of this task. The chapter contains a listing of multimedia file formats, which is explained clearly, omitting technical jargon. However, it could be improved by including more details about each format's optimal use. Chapter 7, "Information organization," surveys several DLs and highlights their adaptation of traditional classification and cataloging techniques. The chapter continues with a brief introduction to metadata, first defining it and then discussing major standards: the Dublin Core, the Warwick Framework and EAD. A discussion of markup languages such as SGML, HTML, and XML rounds off the chapter. A more engaging chapter follows. Dealing with information access and user interfaces, it begins by examining information needs and the seeking process, with particular attention to the difficulties of translating search needs into an actual search query. Guidelines for user interface design are presented, distilled from recommendations by Shneiderman, Byrd, and Croft. Some research user interfaces are highlighted to hint at the future of information finding, and major features of browsing and searching interfaces are shown through case studies of a number of DLs.
Chapter 9 gives a layman's introduction to the classic models of information retrieval, and is written to emphasize each model's usability and features; the mathematical foundations have been dispensed with entirely. Multimedia retrieval, Z39.50, and issues with OPAC integration are briefly sketched, but details on the approaches to these problems are omitted. A dissatisfying chapter on preservation begins the third part, on deployed DLs; it itemizes several preservation projects but does not identify the key points of each project. This weakness is offset by two solid chapters on DL services and on social, economic, and legal issues. Here the writing style of the text is more effective in surveying the pertinent issues. Chowdhury and Chowdhury write, "The importance of [reference] services has grown over time with the introduction of new technologies and services in libraries" (p. 228), emphasizing the central role that reference services have in DLs, and go on to discuss both free and fee-based services, and those housed as part of libraries as well as commercial services. The chapter on social issues examines the digital divide and also gives examples of institutions working to undo the divide: "Blackwells is making all 600 of its journals freely available to institutions within the Russian Federation" (p. 252). Key points in cost models of electronic publishing and in intellectual property rights are also discussed. Chowdhury and Chowdhury mention that "there is no legal deposit law to force the creators of digital information to submit a copy of every work to one or more designated institutions" for preservation (p. 265).
    Chapter 13, on DL evaluation, merges criteria from traditional library evaluation with criteria from user interface design and information retrieval. Quantitative, macro-evaluation techniques are emphasized, and again some DL evaluation projects and reports are illustrated. A very brief chapter on the role of librarians in the DL follows, emphasizing that traditional reference skills are paramount to the success of the digital librarian, but that he should also be savvy in Web page and user interface design. A final chapter on research trends in digital libraries seems a bit incoherent. It mentions many of the previous chapters' topics, and would possibly be better organized if written as summary sections and distributed among the other chapters. The book's breadth is quite expansive, touching on both fundamental and advanced topics necessary to a well-rounded DL education. As the book is thoroughly referenced to DL and DL-related research projects, it serves as a useful starting point for those interested in more in-depth learning. However, this breadth is also a weakness. In my opinion, the sheer number of research projects and papers surveyed leaves the authors little space to critique and summarize key issues. Many of the case studies are presented as itemized lists and not used to exemplify specific points. I feel that an introductory text should exercise some editorial and evaluative rights to create structure and organization for the uninitiated. Case studies should be carefully chosen to exemplify the specific issues, differences, and strengths highlighted. It is lamentable that in many of the descriptions of research projects the authors tend to give more historical and funding background than is necessary and miss out on giving a synthesis of the pertinent details.
    Another weakness of the book is its favoritism towards the authors' own works. To a large extent, this bias is natural, as the authors know their own works best. However, in an introductory text, it is critical to reference the most appropriate source and give a balanced view of the field. In this respect, I feel the book could be more objective in its selection of references and research projects. Introduction to Digital Libraries is definitely a book written for a purpose. LIS undergraduates and "practicing professionals who need to know about recent developments in the field of digital libraries" (p. xiv) will find this book a fine introduction, as it is clearly written and accessible to laymen, giving explanations without delving into terminology and math. As it surveys a large number of projects, it is also an ideal starting point for students to pick and investigate particular DL research projects. However, graduate LIS students who already have a solid understanding of library fundamentals, as well as computer science students, may find this volume lacking in details. Alternative texts such as Lesk (1999) and Arms (2000) are possibly more suitable for those who need to investigate topics in depth. For the experienced practitioner or researcher delving into the DL field for the first time, the recent 2002 ARIST chapter by Fox and Urs may also be a suitable alternative. In their introduction, the authors ask, "What are digital libraries? How do they differ from online databases and search services? Will they replace print libraries? What impact will they have on people and the society?" (p. 3). To answer these questions, Chowdhury and Chowdhury offer a multitude of case studies to let the audience draw their own conclusions. To this end, it is my opinion that Introduction to Digital Libraries serves a useful purpose as a supplemental text in the digital library curriculum but misses the mark of being an authoritative textbook."
  10. Foskett, A.C.: ¬The subject approach to information (1996) 0.02
    Date
    25. 7.2002 21:22:31
    Footnote
    Rez. in: Managing information. 3(1996) no.10, S.47 (B. Bater); Journal of documentation. 53(1997) no.2, S.203-205 (R. Brunt); Library review. 46(1997) nos.3/4.282-283 (D. Anderson); Journal of academic librarianship. 23(1997) no.1, S.59 (C.M. Jagodzinski); Knowledge organization 24(1997) no.4, S.259-260 (M.P. Satija)
  11. Nohr, H.: Grundlagen der automatischen Indexierung : ein Lehrbuch (2003) 0.02
    Date
    22. 6.2009 12:46:51
    Footnote
    Rez. in: nfd 54(2003) H.5, S.314 (W. Ratzek): "To extract decision-relevant data from the constantly growing flood of more or less relevant documents, companies, public administrations, and specialized information institutions must develop, deploy, and maintain effective and efficient filter systems. Holger Nohr's textbook offers the first fundamental introduction to the topic of "automatic indexing". For, as the opening puts it: "How you gather, manage, and use information will determine whether you win or lose" (Bill Gates). The first chapter, "Introduction", centres on the fundamentals. It describes the connections between document management systems, information retrieval, and indexing for planning, decision-making, and innovation processes, in both profit and non-profit organizations. At the end of the introductory chapter, Nohr takes up the debate between intellectual and automatic indexing, leading over to the second chapter, "Automatic Indexing". Here the author gives an overview of, among other things: problems of automatic language processing and indexing; and various methods of automatic indexing, e.g. simple keyword extraction / full-text inversion, statistical methods, and pattern-matching methods. Nohr then treats the "methods of automatic indexing" in depth, with many examples, in the extensive third chapter. The fourth chapter, "Keyphrase Extraction", occupies an intermediate position: "Approaches that extract key phrases from documents (keyphrase extraction) represent an intermediate stage on the way from automatic indexing to the automatic generation of textual summaries (automatic text summarization). The boundaries between automatic indexing methods and those of text summarization are fluid." (p. 91)
Using NCR's Extractor / Copernic Summarizer as an example, Nohr describes how this works.
    In the fifth chapter, "Information Extraction", Nohr addresses a problem that deserves even greater emphasis in the field: "The steadily growing number of electronic documents makes it desirable not only to index these documents automatically but also to extract the relevant information from them automatically, for example in order to transfer it into business information systems for further processing or analysis." (p. 103) "Indexing and retrieval methods", as mutually dependent procedures, are treated in the sixth chapter. Here the focus is on relevance ranking and relevance feedback, as well as the application of computational-linguistic methods in searching. The "evaluation of automatic indexing" forms the thematic conclusion; it deals above all with the quality of an indexing run and with the standard retrieval measures used in retrieval tests and their application. It should also be noted that each chapter opens with stated learning objectives, and that control questions for each chapter are posed in the back of the book. The very numerous examples from practice, a list of abbreviations, and a subject index increase the book's practical value. Reading it furthered this reviewer's understanding of the interplay between the LIS toolkit, business informatics (especially data warehousing), and artificial intelligence. The "Grundlagen der automatischen Indexierung" should become required reading in library science programmes as well. Holger Nohr's textbook is also suitable for the LIS professional who wants to refresh his or her more or less well-founded knowledge of automatic indexing quickly, accessibly, and informatively."
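    The review above repeatedly refers to statistical methods of automatic indexing and to term weighting in retrieval tests. As a rough illustration only (this sketch is ours, not taken from Nohr's book), a minimal tf-idf keyword extractor of the kind such chapters describe might look like this in Python:

```python
import math
from collections import Counter

def tfidf_keywords(docs, doc_index, top_n=3):
    """Rank the terms of one document by tf-idf against a small corpus."""
    tokenized = [d.lower().split() for d in docs]
    n_docs = len(tokenized)
    # Document frequency: in how many documents does each term occur at all?
    df = Counter(term for doc in tokenized for term in set(doc))
    tf = Counter(tokenized[doc_index])
    doc_len = len(tokenized[doc_index])
    # Weight = (relative term frequency) * (inverse document frequency).
    scores = {term: (count / doc_len) * math.log(n_docs / df[term])
              for term, count in tf.items()}
    return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]]

docs = [
    "automatic indexing extracts descriptors from documents",
    "manual indexing relies on trained indexers",
    "retrieval systems rank documents by query relevance",
]
print(tfidf_keywords(docs, 0))
```

    Terms that occur across the whole corpus ("indexing", "documents") get a low idf and drop out of the keyword list; real systems add stemming, stopword lists, and more refined weighting on top of this core idea.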
  12. Stock, W.G.: Qualitätskriterien von Suchmaschinen : Checkliste für Retrievalsysteme (2000) 0.02
    Abstract
    Search engines on the World Wide Web are said to employ suboptimal methods and tools, especially in comparison with the retrieval software of commercial online archives. Elaborate command-oriented retrieval systems cannot be operated by laypersons at all, and by professionals only if they work with them constantly. The search systems of some "independents", i.e. isolated information producers on the Internet, are marked by a minimalism reminiscent of the command repertoires of the early 1970s. Retrieval software in intranets, where it is used at all, relies almost without exception on automatic methods of indexing and retrieval and almost completely ignores documentary know-how. Search engines and retrieval systems - we will use the two terms synonymously - thus cause difficulties wherever they occur, and their quality is in doubt. But what does "quality of search engines" actually mean? What distinguishes a good retrieval system, and what is a bad one lacking? We aim to develop a list of criteria that are essential for good searching (and finding!). The concern is thus exclusively with the quantity and quality of the search options, not with further performance indicators such as speed or ergonomic user interfaces. Tacitly presupposed, however, is a departure from purely command-oriented systems, i.e. we assume screen designs that present the commands in an intuitively plausible way. Our checklist contains only options that are either already in use in some system (and to that extent partly repeats the familiar) or whose technical feasibility has already been demonstrated in experimental settings. In this respect the list is a minimum requirement for retrieval systems, and one that is certainly open to extension. The catalogue of criteria is organized into (1) the basic functions for searching individual records, (2) the informetric functions for characterizing certain result sets, and (3) the criteria for the power of automatic indexing and natural-language searching.
    Source
    Password. 2000, H.5, S.22-31
  13. Bowman, J.H.: Essential Dewey (2005) 0.02
    Abstract
    In this book, John Bowman provides an introduction to the Dewey Decimal Classification suitable either for beginners or for librarians who are out of practice using Dewey. He outlines the content and structure of the scheme and then, through worked examples using real titles, shows readers how to use it. Most chapters include practice exercises, to which answers are given at the end of the book. A particular feature of the book is the chapter dealing with problems of specific parts of the scheme. Later chapters offer advice on how to cope with compound subjects, and a brief introduction to the Web version of Dewey.
    Content
    "The contents of the book cover: This book is intended as an introduction to the Dewey Decimal Classification, edition 22. It is not a substitute for it, and I assume that you have it, all four volumes of it, by you while reading the book. I have deliberately included only a short section an WebDewey. This is partly because WebDewey is likely to change more frequently than the printed version, but also because this book is intended to help you use the scheme regardless of the manifestation in which it appears. If you have a subscription to WebDewey and not the printed volumes you may be able to manage with that, but you may then find my references to volumes and page numbers baffling. All the examples and exercises are real; what is not real is the idea that you can classify something without seeing more than the title. However, there is nothing that I can do about this, and I have therefore tried to choose examples whose titles adequately express their subject-matter. Sometimes when you look at the 'answers' you may feel that you have been cheated, but I hope that this will be seldom. Two people deserve special thanks. My colleague Vanda Broughton has read drafts of the book and made many suggestions. Ross Trotter, chair of the CILIP Dewey Decimal Classification Committee, who knows more about Dewey than anyone in Britain today, has commented extensively an it and as far as possible has saved me from error, as well as suggesting many improvements. What errors remain are due to me alone. Thanks are also owed to OCLC Online Computer Library Center, for permission to reproduce some specimen pages of DDC 22. Excerpts from the Dewey Decimal Classification are taken from the Dewey Decimal Classification and Relative Index, Edition 22 which is Copyright 2003 OCLC Online Computer Library Center, Inc. DDC, Dewey, Dewey Decimal Classification and WebDewey are registered trademarks of OCLC Online Computer Library Center, Inc."
    Footnote
    Rez. in: KO 31(2005) no.4, S.259-260 (J.E. Leide):
    "The title says it all. The book contains the essentials for a fundamental understanding of the complex world of the Dewey Decimal Classification. It is clearly written and captures the essence in a concise and readable style. Is it a coincidence that the mysteries of the Dewey Decimal System are revealed in ten easy chapters? The typography and layout are clear and easy to read and the perfect binding withstood heavy use. The exercises and answers are invaluable in illustrating the points of the several chapters. The book is well structured. Chapter 1 provides an "Introduction and background" to classification in general and Dewey in particular. Chapter 2 describes the "Outline of the scheme" and the conventions in the schedules and tables. Chapter 3 covers "Simple subjects" and introduces the first of the exercises. Chapters 4 and 5 describe "Number-building" with "standard subdivisions" in the former and "other methods" in the latter. Chapter 6 provides an excellent description of "Preference order" and Chapter 7 deals with "Exceptions and options." Chapter 8 "Special subjects," while no means exhaustive, gives a thorough analysis of problems with particular parts of the schedules from "100 Philosophy" to "910 Geography" with a particular discussion of "'Persons treatment"' and "Optional treatment of biography." Chapter 9 treats "Compound subjects." Chapter 10 briefly introduces WebDewey and provides the URL for the Web Dewey User Guide http://www.oclc.org/support/documentation/dewey/ webdewey_userguide/; the section for exercises says: "You are welcome to try using WebDewey an the exercises in any of the preceding chapters." Chapters 6 and 7 are invaluable at clarifying the options and bases for choice when a work is multifaceted or is susceptible of classification under different Dewey Codes. The recommendation "... not to adopt options, but use the scheme as instructed" (p. 71) is clearly sound. 
As is, "What is vital, of course, is that you keep a record of the decisions you make and to stick to them. Any option Chosen must be used consistently, and not the whim of the individual classifier" (p. 71). The book was first published in the UK and the British overtones, which may seem quite charming to a Canadian, may be more difficult for readers from the United States. The correction of Dewey's spelling of Labor to Labo [u] r (p. 54) elicited a smile for the championing of lost causes and some relief that we do not have to cope with 'simplified speling.' The down-to-earth opinions of the author, which usually agree with those of the reviewer, add savour to the text and enliven what might otherwise have been a tedious text indeed. However, in the case of (p. 82):
    Dewey requires that you classify bilingual dictionaries that go only one way with the language in which the entries are written, which means that an English-French dictionary has to go with English, not French. This is very unhelpful and probably not widely observed in English-speaking libraries ... one may wonder (the Norman conquest notwithstanding) why Bowman feels that it is more useful to class the book in the language of the definition rather than that of the entry words - Dewey's requirement to class a dictionary of French words with English definitions with French language dictionaries seems quite reasonable. In the example of Anglo-French relations before the Second World War (p. 42) the principle of adding two notations from Table 2 is succinctly illustrated, but there is no discussion of why the notation is -41044 rather than -44041. Is it because the title is 'Anglo'-'French', or because -41 precedes -44, or because it is assumed that the book is being catalogued for an English library that wished to keep all Anglo relations together? The bibliography lists five classic works and the School Library Association (UK) website. The index provides additional assistance in locating topics; however, it is not clear whether it is intended to be a relative index with terms in direct order or nouns with subdivisions. There are a few cross-references and some double posting. The instruction ')(' meaning 'compared with' (p. 147) seems particularly twee, since the three occasions in the index could easily have included the text "compared with"; the saving of space is not worth the potential confusion. There is no entry for "displaced standard subdivisions"; one must look under "standard subdivisions" with the subdivision "displaced." There is no entry for "approximating the whole," although "standing room," "class here notes" and "including notes" are listed. Both "rule of zero" and "zero" with the subdivision "rule of" are included.
The "rule of zero" is really all you need to know about Dewey (p. 122): Something which can be useful if you are really stuck is to consider the possibilities one digit at a time, and never put 0 if you can put something more specific. Be as specific as possible, but if you can't say something good, say nothing. This slim volume clearly follows this advice."
    Weitere Rez. in: Mitt. VÖB 59(2006) H.1, S.70-72 (M. Sandner): "So would all of this, taken together, be worth emulating? Yes! To produce a similar textbook in German, equipped with examples from the German-speaking region, would be a worthwhile goal."
    Object
    DDC-22
  14. Chan, L.M.; Mitchell, J.S.: Dewey Decimal Classification : principles and applications (2003) 0.02
    Object
    DDC-22
  15. Vonhoegen, H.: Einstieg in XML (2002) 0.02
    Abstract
    This book is aimed at everyone who needs a competent introduction to XML - practical and comprehensibly presented. The reference-style treatment of the eXtensible Markup Language XML, its dialects, and its technologies is deepened by many examples. "Einstieg in XML" is not a theoretical book on the various standards of the XML language family. Here you get, in concentrated form, exactly what you need to develop your own XML solutions. The CD included with the book contains all the tools needed to start right away.
    Footnote
    Rez. in: XML Magazin und Web Services 2003, H.1, S.14 (S. Meyen): "The Resource Description Framework (RDF) has been available as a W3C recommendation since February 22, 1999. But what lies behind this standard, which is meant to usher in the age of the Semantic Web? What RDF means, what it is used for, what advantages it has over XML, and how RDF is applied will be explained in this article. Opening the book and browsing the introductory chapter, one immediately notices that the reader is not lectured in the style of 'in XML, the angle brackets are very important', even though this is a book for beginners. On the contrary: it gets straight down to business, and a healthy amount of prior knowledge is assumed. Anyone interested in XML today has, with 99-percent probability, already gained experience with HTML and the Web and is no newbie in the realm of angle brackets and (more or less) well-formed documents. And here lies a clear strength of Helmut Vonhoegen's work: he judges his beginner readers quite well and therefore introduces them to the subject in a practical, comprehensible way. The third chapter deals with the Document Type Definition (DTD), describing its purposes and uses. Yet the author constantly stresses the limitations of this approach, making clear the call for a new concept: XML Schema, which he presents in the following chapter. A fairly detailed chapter is then devoted to the relatively recent XML Schema concept, explaining its advantages over the DTD (modelling of complex data structures, support for numerous data types, character constraints, and much more). Like the old DTD, XML Schema defines, the reader learns, the vocabulary and permissible grammar of an XML document, but is itself an XML document and can (or should) be checked for well-formedness like any other XML. Further chapters cover the navigation standards XPath, XLink, and XPointer, transformations with XSLT and XSL, and of course the XML programming interfaces DOM and SAX. Various implementations are used, and, pleasingly, Microsoft approaches on the one hand and Java/Apache projects on the other are presented in roughly equal measure. In the final chapter, Vonhoegen treats the obligatory web services as an application of XML and demonstrates a small C#- and ASP-based example (the Java equivalent with Apache Axis is unfortunately missing). "Einstieg in XML" presents its material in a clearly understandable form and knows how to meet its readers at a good level. It offers a good overview of the fundamentals of XML and can - at least for now - claim to be quite up to date."
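    The review notes that an XML Schema, like any other XML document, can be checked for well-formedness, and that the book covers XPath navigation. As a small illustration of both ideas (the tooling choice here is ours, not the book's), Python's standard-library ElementTree parser rejects ill-formed input and supports a basic XPath subset:

```python
import xml.etree.ElementTree as ET

catalog = """<catalog>
  <book lang="de"><title>Einstieg in XML</title></book>
  <book lang="en"><title>Essential Dewey</title></book>
</catalog>"""

# fromstring() raises ParseError if the document is not well-formed,
# e.g. if a closing tag is missing or elements overlap.
root = ET.fromstring(catalog)

# ElementTree supports a small XPath subset, including attribute predicates.
german_titles = [t.text for t in root.findall("./book[@lang='de']/title")]
print(german_titles)  # ['Einstieg in XML']
```

    Note that well-formedness is a weaker check than validation: a full validator would additionally test the document against its DTD or XML Schema, which requires tooling beyond the standard library.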
  16. Anderson, R.; Birbeck, M.; Kay, M.; Livingstone, S.; Loesgen, B.; Martin, D.; Mohr, S.; Ozu, N.; Peat, B.; Pinnock, J.; Stark, P.; Williams, K.: XML professionell : behandelt W3C DOM, SAX, CSS, XSLT, DTDs, XML Schemas, XLink, XPointer, XPath, E-Commerce, BizTalk, B2B, SOAP, WAP, WML (2000) 0.02
    Abstract
    In diesem Buch sollen die grundlegenden Techniken zur Erstellung, Anwendung und nicht zuletzt Darstellung von XML-Dokumenten erklärt und demonstriert werden. Die wichtigste und vornehmste Aufgabe dieses Buches ist es jedoch, die Grundlagen von XML, wie sie vom World Wide Web Consortium (W3C) festgelegt sind, darzustellen. Das W3C hat nicht nur die Entwicklung von XML initiiert und ist die zuständige Organisation für alle XML-Standards, es werden auch weiterhin XML-Spezifikationen vom W3C entwickelt. Auch wenn immer mehr Vorschläge für neue XML-basierte Techniken aus dem weiteren Umfeld der an XML Interessierten kommen, so spielt doch weiterhin das W3C die zentrale und wichtigste Rolle für die Entwicklung von XML. Der Schwerpunkt dieses Buches liegt darin, zu lernen, wie man XML als tragende Technologie in echten Alltags-Anwendungen verwendet. Wir wollen Ihnen gute Design-Techniken vorstellen und demonstrieren, wie man XML-fähige Anwendungen mit Applikationen für das WWW oder mit Datenbanksystemen verknüpft. Wir wollen die Grenzen und Möglichkeiten von XML ausloten und eine Vorausschau auf einige "nascent"-Technologien werfen. Egal ob Ihre Anforderungen sich mehr an dem Austausch von Daten orientieren oder bei der visuellen Gestaltung liegen, dieses Buch behandelt alle relevanten Techniken. jedes Kapitel enthält ein Anwendungsbeispiel. Da XML eine Plattform-neutrale Technologie ist, werden in den Beispielen eine breite Palette von Sprachen, Parsern und Servern behandelt. Jede der vorgestellten Techniken und Methoden ist auf allen Plattformen und Betriebssystemen relevant. Auf diese Weise erhalten Sie wichtige Einsichten durch diese Beispiele, auch wenn die konkrete Implementierung nicht auf dem von Ihnen bevorzugten System durchgeführt wurde.
    Dieses Buch wendet sich an alle, die Anwendungen auf der Basis von XML entwickeln wollen. Designer von Websites können neue Techniken erlernen, wie sie ihre Sites auf ein neues technisches Niveau heben können. Entwickler komplexerer Software-Systeme und Programmierer können lernen, wie XML in ihr System passt und wie es helfen kann, Anwendungen zu integrieren. XML-Anwendungen sind von ihrer Natur her verteilt und im Allgemeinen Web-orientiert. Dieses Buch behandelt nicht verteilte Systeme oder die Entwicklung von Web-Anwendungen, sie brauchen also keine tieferen Kenntnisse auf diesen Gebieten. Ein allgemeines Verständnis für verteilte Architekturen und Funktionsweisen des Web wird vollauf genügen. Die Beispiele in diesem Buch verwenden eine Reihe von Programmiersprachen und Technologien. Ein wichtiger Bestandteil der Attraktivität von XML ist seine Plattformunabhängigkeit und Neutralität gegenüber Programmiersprachen. Sollten Sie schon Web-Anwendungen entwickelt haben, stehen die Chancen gut, dass Sie einige Beispiele in Ihrer bevorzugten Sprache finden werden. Lassen Sie sich nicht entmutigen, wenn Sie kein Beispiel speziell für Ihr System finden sollten. Tools für die Arbeit mit XML gibt es für Perl, C++, Java, JavaScript und jede COM-fähige Sprache. Der Internet Explorer (ab Version 5.0) hat bereits einige Möglichkeiten zur Verarbeitung von XML-Dokumenten eingebaut. Auch der Mozilla-Browser (der Open-Source-Nachfolger des Netscape Navigators) bekommt ähnliche Fähigkeiten. XML-Tools tauchen auch zunehmend in großen relationalen Datenbanksystemen auf, genau wie auf Web- und Applikations-Servern. Sollte Ihr System nicht in diesem Buch behandelt werden, lernen Sie die Grundlagen und machen Sie sich mit den vorgestellten Techniken aus den Beispielen vertraut.
    The knowledge you acquire should then transfer to any other operating system. Each chapter deals with one particular XML topic. Chapter 1 offers an introduction to the concepts of XML. Chapters 2 and 3 are closely linked, as they cover the fundamentals: chapter 2 starts with the syntax and basic rules of XML, and chapter 3 goes further and presents tools for creating your own, problem-specific XML DTDs. The remaining chapters, however, are largely self-contained with regard to the techniques and technologies they present. The most important chapters are held together by a connecting example, which assumes that a publisher wants to present its book catalogue using XML. We begin by defining rules for describing books in a catalogue. Building on these rules, we then show how each individual technique helps us build XML applications. You will see how this catalogue can be turned into a document, how such documents can be manipulated, and how they can be accessed from programs. We will also show how to prepare the documents' contents for the reader. Since in practice such applications do not exist in a vacuum, you will also see how XML applications interact with databases. Several thematic threads run through the book; we introduce them in the following section, so that you can pick out the topics that matter to you and skip other sections.
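The running example described above, a publisher presenting its book catalogue in XML, can be sketched as follows. The element names (`catalog`, `book`, `title`, `author`) and the ISBN attribute are illustrative assumptions, not the book's actual schema, and the programmatic access is shown with Python's standard-library `xml.etree.ElementTree` rather than one of the parsers covered in the text:

```python
# Hypothetical book-catalogue document; element names are illustrative,
# not the schema developed in the book's chapters.
import xml.etree.ElementTree as ET

CATALOG = """<?xml version="1.0" encoding="UTF-8"?>
<catalog>
  <book isbn="978-0-00-000000-1">
    <title>Learning XML</title>
    <author>A. Example</author>
  </book>
  <book isbn="978-0-00-000000-2">
    <title>XML and Databases</title>
    <author>B. Example</author>
  </book>
</catalog>"""

def list_titles(xml_text):
    """Parse the catalogue and return all book titles in document order."""
    root = ET.fromstring(xml_text)
    return [book.findtext("title") for book in root.findall("book")]

print(list_titles(CATALOG))
```

Once rules like these are fixed (in the book, via a DTD), every later technique, transformation, presentation, database storage, operates on documents of this shape.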
    Date
    22. 6.2005 15:12:11
  17. Brühl, B.: Thesauri und Klassifikationen : Naturwissenschaften - Technik - Wirtschaft (2005) 0.01
    Footnote
    Rez. in: Information: Wissenschaft & Praxis 56(2005) H.5/6, S.337 (W. Ratzek): "With 'Thesauri und Klassifikationen' Bettina Brühl presents a painstaking piece of work. With its selection of more than 150 classifications and thesauri from science, technology, economics and the patent field, and with a comprehensive index by subject area, by database, and by classification and thesaurus, the book makes a usable reference work. A 13-page introduction (chapters 1 and 2) is followed in chapter 3 by the 'presentation of classifications and thesauri', compiled essentially from the producers' own descriptions. Here the documentation languages of the subject fields are presented: science (3.1) and its specialisations such as 'life sciences and biotechnology', 'chemistry' or 'environment and economy', but also 'mathematics and computer science' (?), on 189 pages; technology, with for example the 'Fachordnung Technik' and the 'Subject Categories (INIS/ETDE)', treated comparatively briefly on 17 pages; economics, with 'industry codes', 'product codes', 'country codes', 'subject classifications' and 'thesauri', presented in detail on 57 pages; patents and standards, with for example the 'European Patent Classification' and the 'International Patent Classification', outlined on 33 pages. Each subfield is introduced with a short description. The individual descriptions then follow, with the fields 'producer's address', 'subject area(s)', 'language', 'availability', 'application' and 'source(s)'. 'The book is aimed at all information professionals who build and use documentation languages', says the publisher's blurb.
While it is not necessary to discuss the information-science aspects of the classifications and thesauri, a note on the significance of information and documentation and/or of information science would have been appropriate, to demonstrate to the world of the information and knowledge economy what contribution our profession makes. Otherwise the field of vision remains narrow and the connection to more recent developments is left out. Such a link could have been provided, for instance, by an excursus on topic maps and the Semantic Web. With the publication of this compendium the publisher delivers a useful first building block towards a comprehensive directory of thesauri and classifications."
    Series
    Materialien zur Information und Dokumentation; Bd.22
  18. Kaiser, U.: Handbuch Internet und Online Dienste : der kompetente Reiseführer für das digitale Netz (1996) 0.01
    Series
    Heyne Business; 22/1019
  19. Kumar, K.: Theory of classification (1989) 0.01
    Date
    25. 3.2019 18:15:22
  20. Gralla, P.: So funktioniert das Internet : ein visueller Streifzug durch das Internet (1998) 0.01
    Date
    15. 7.2002 20:48:22

Languages

  • e 104
  • d 61
  • f 1

Types

  • m 146
  • s 11
  • a 9
  • el 5
  • ? 1
  • h 1
  • x 1
