Search (42 results, page 2 of 3)

  • × theme_ss:"Grundlagen u. Einführungen: Allgemeine Literatur"
  • × year_i:[2000 TO 2010}
  1. Broughton, V.: Essential Library of Congress Subject Headings (2009) 0.00
    0.0020369943 = product of:
      0.018332949 = sum of:
        0.018332949 = weight(_text_:of in 395) [ClassicSimilarity], result of:
          0.018332949 = score(doc=395,freq=24.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.2992506 = fieldWeight in 395, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=395)
      0.11111111 = coord(1/9)
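The indented tree above is Lucene "explain" output for ClassicSimilarity (TF-IDF) scoring. A minimal sketch of how the reported value can be recomputed from the figures in the tree (function and variable names here are my own, not part of any search-engine API):

```python
import math

def classic_similarity_score(freq, doc_freq, max_docs,
                             field_norm, query_norm, coord):
    """Recompute a single-term Lucene ClassicSimilarity score.

    tf          = sqrt(term frequency in the field)
    idf         = 1 + ln(max_docs / (doc_freq + 1))
    fieldWeight = tf * idf * fieldNorm
    queryWeight = idf * queryNorm
    score       = fieldWeight * queryWeight * coord
    """
    tf = math.sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))
    field_weight = tf * idf * field_norm
    query_weight = idf * query_norm
    return field_weight * query_weight * coord

# Values taken from the explain tree for result 1 (term "of", doc 395):
score = classic_similarity_score(freq=24.0, doc_freq=25162, max_docs=44218,
                                 field_norm=0.0390625, query_norm=0.03917671,
                                 coord=1.0 / 9.0)
```

Plugging in the tree's values (tf(24) = 4.8989797, idf = 1.5637573, fieldNorm = 0.0390625, queryNorm = 0.03917671, coord = 1/9) reproduces the reported 0.0020369943 to within rounding.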
    
    Abstract
    LCSH are increasingly seen as 'the' English language controlled vocabulary, despite their lack of a theoretical foundation, and their evident US bias. In mapping exercises between national subject heading lists, and in exercises in digital resource organization and management, LCSH are often chosen because of the lack of any other widely accepted English language standard for subject cataloguing. It is therefore important that the basic nature of LCSH, their advantages, and their limitations, are well understood both by LIS practitioners and those in the wider information community. Information professionals who attended library school before 1995 - and many more recent library school graduates - are unlikely to have had a formal introduction to LCSH. Paraprofessionals who undertake cataloguing are similarly unlikely to have enjoyed an induction to the broad principles of LCSH. There is currently no compact guide to LCSH written from a UK viewpoint, and this eminently practical text fills that gap. It features topics including: background and history of LCSH; subject heading lists; structure and display in LCSH; form of entry; application of LCSH; document analysis; main headings; topical, geographical and free-floating sub-divisions; building compound headings; name headings; headings for literature, art, music, history and law; and, LCSH in the online environment. There is a strong emphasis throughout on worked examples and practical exercises in the application of the scheme, and a full glossary of terms is supplied. No prior knowledge or experience of subject cataloguing is assumed. This is an indispensable guide to LCSH for practitioners and students alike from a well-known and popular author.
  2. Dittmann, H.; Hardy, J.: Learn Library of Congress Classification (2000) 0.00
    0.0020165213 = product of:
      0.018148692 = sum of:
        0.018148692 = weight(_text_:of in 6826) [ClassicSimilarity], result of:
          0.018148692 = score(doc=6826,freq=12.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.29624295 = fieldWeight in 6826, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6826)
      0.11111111 = coord(1/9)
    
    Abstract
    This book covers the skills necessary for a classifier using the LCC scheme, whether at a professional or paraprofessional level. It is equally suitable for use by library students in universities or colleges, and others who are studying classification by themselves, either with a specific goal or as part of their continuing professional development.
    Content
    Contains the chapters: Introduction to classification - Introduction to LCC - Structure of LCC - Building a call number - Tables - Shelving - Classification Plus - More practice - Exercises - Answers
    Footnote
    Rez. in: Journal of documentation 57(2001) no.3, S.453-454 (E. Patterson)
    LCSH
    Classification, Library of Congress
    Subject
    Classification, Library of Congress
  3. Ladyman, J.: Understanding philosophy of science (2002) 0.00
    0.0019502735 = product of:
      0.017552461 = sum of:
        0.017552461 = weight(_text_:of in 1835) [ClassicSimilarity], result of:
          0.017552461 = score(doc=1835,freq=22.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.28651062 = fieldWeight in 1835, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1835)
      0.11111111 = coord(1/9)
    
    Abstract
    Few can imagine a world without telephones or televisions; many depend on computers and the Internet as part of daily life. Without scientific theory, these developments would not have been possible. In this exceptionally clear and engaging introduction to the philosophy of science, James Ladyman explores the philosophical questions that arise when we reflect on the nature of the scientific method and the knowledge it produces. He discusses whether fundamental philosophical questions about knowledge and reality might be answered by science, and considers in detail the debate between realists and antirealists about the extent of scientific knowledge. Along the way, central topics in the philosophy of science, such as the demarcation of science from non-science, induction, confirmation and falsification, the relationship between theory and observation, and relativism, are all addressed. Important and complex current debates over underdetermination, inference to the best explanation and the implications of radical theory change are clarified and clearly explained for those new to the subject. The style is refreshing and unassuming, bringing to life the essential questions in the philosophy of science. Ideal for any student of philosophy or science, this book requires no previous knowledge of either discipline. It contains the following textbook features: - suggestions for further reading - cross-referencing with an extensive bibliography.
  4. Chowdhury, G.G.; Chowdhury, S.: Introduction to digital libraries (2003) 0.00
    0.0018862829 = product of:
      0.016976546 = sum of:
        0.016976546 = weight(_text_:of in 6119) [ClassicSimilarity], result of:
          0.016976546 = score(doc=6119,freq=168.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.2771099 = fieldWeight in 6119, product of:
              12.961481 = tf(freq=168.0), with freq of:
                168.0 = termFreq=168.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.013671875 = fieldNorm(doc=6119)
      0.11111111 = coord(1/9)
    
    Footnote
    Rez. in: JASIST 55(2004) no.2, S.178-179 (M.-Y. Kan): "In their latest book, Chowdhury and Chowdhury have written an introductory text on digital libraries, primarily targeting "students researching digital libraries as part of information and library science, as well as computer science, courses" (p. xiv). It is an ambitious work that surveys many of the broad topics in digital libraries (DL) while highlighting completed and ongoing DL research in many parts of the world. With the revamping of Library and Information Science (LIS) curricula to focus on information technology, many LIS schools are now teaching DL topics either as an independent course or as part of an existing one. Instructors of these courses have in many cases used supplementary texts and compiled readers from journals and conference materials, possibly because they feel that a suitable textbook has yet to be written. A solid, principal textbook for digital libraries is sorely needed to provide a critical, evaluative synthesis of DL literature. It is with this in mind that I believe Introduction to Digital Libraries was written. An introductory text on any cross-disciplinary topic is bound to have conflicting limitations and expectations from its adherents who come from different backgrounds. This is the case in the development of the DL curriculum, in which both LIS and computer science schools are actively involved. Compiling a useful secondary source in such cross-disciplinary areas is challenging; it requires that jargon from each contributing field be carefully explained and respected, while providing thought-provoking material to broaden student perspectives. In my view, the book's breadth certainly encompasses the whole of what an introduction to DL needs, but it is hampered by a lack of focus from catering to such disparate needs.
For example, LIS students will need to know which key aspects differentiate digital library metadata from traditional metadata, while computer science students will need to learn the basics of vector space and probabilistic information retrieval. However, the text does not give enough detail on either subject, and thus even introductory students will need to go beyond the book and consult primary sources. In this respect, the book's 307 pages of content are too short to do justice to such a broad field of study.
    This book covers all of the primary areas in the DL curriculum as suggested by T. Saracevic and M. Dalbello's (2001) and A. Spink and C. Cool's (1999) D-Lib articles on DL education. In fact, the book's coverage is quite broad; it includes a superset of recommended topics, offering a chapter on professional issues (recommended in Spink and Cool) as well as three chapters devoted to DL research. The book comes with a comprehensive list of references and an index, allowing readers to easily locate a specific topic or research project of interest. Each chapter also begins with a short outline of its contents. As an additional plus, the book is quite heavily cross-referenced, allowing easy navigation across topics. The only drawback with regard to supplementary materials is that it lacks a glossary that would be a helpful reference for students needing a guide to DL terminology. The book's organization is well thought out and each chapter stands independently of the others, facilitating instruction by parts. While not officially delineated into three parts, the book's fifteen chapters are logically organized as such. Chapters 2 and 3 form the first part, which surveys various DLs and DL research initiatives. The second and core part of the book examines the workings of a DL along various dimensions, from its design to its eventual implementation and deployment. The third part brings together extended topics that relate to a deployed DL: its preservation, evaluation, and relationship to the larger social context. Chapter 1 defines digital libraries and discusses the scope of the materials covered in the book. The authors posit that the meaning of digital library is best explained by its sample characteristics rather than by definition, noting that it has largely been shaped by the melding of the research and information professions.
This reveals two primary facets of the DL: an "emphasis on digital content" coming from an engineering and computer science perspective as well as an "emphasis on services" coming from library and information professionals (pp. 4-5). The book's organization mirrors this dichotomy, focusing on the core aspects of content in the earlier chapters and returning to the service perspective in later chapters.
    Chapter 2 examines the variety and breadth of DL implementations and collections through a well-balanced selection of 20 DLs. The authors make a useful classification of the various types of DLs into seven categories and give a brief synopsis of two or three examples from each category. These categories include historical, national, and university DLs, as well as DLs for special materials and research. Chapter 3 examines research efforts in digital libraries, concentrating on the three eLib initiatives in the UK and the two Digital Libraries Initiatives in the United States. The chapter also offers some details on joint research between the UK and the United States (the NSF/JISC jointly funded programs), Europe, Canada, Australia, and New Zealand. While both of these chapters do an admirable job of surveying the DL landscape, the breadth and variety of materials need to be encapsulated in a coherent summary that illustrates the commonality of their approaches and their key differences that have been driven by aspects of their collections and audience. Unfortunately, this summary aspect is lacking here and elsewhere in the book. Chapter 2 does an admirable job of DL selection that showcases the variety of existing DLs, but I feel that Chapter 3's selection of research projects could be improved. The chapter's emphasis is clearly on UK-based research, devoting nine pages to it compared to six for EU-funded projects. While this emphasis could be favorable for UK courses, it hampers the chances of the text's adoption in other courses internationally. Chapter 4 begins the core part of the book by examining the DL from a design perspective. As a well-designed DL encompasses various practical and theoretical considerations, the chapter introduces many of the concepts that are elaborated on in later chapters. The Kahn/Wilensky and Lagoze/Fielding architectures are summarized in bullet points, and specific aspects of these frameworks are elaborated on.
These include the choice between a federated or centralized search architecture (referencing Virginia Tech's NDLTD and Waikato's Greenstone) and level of interoperability (discussing UNIMARC and metadata harvesting). Special attention is paid to hybrid library design, with references to UK projects. A useful summary of recommended standards for DL design concludes the chapter.
    Chapters 5 through 9 discuss the basic facets of DL implementation and use. Chapter 5, entitled "Collection management," distinguishes collection management from collection development. The authors give source selection criteria, distilled from Clayton and Gorman. The text then discusses the characteristics of several digital sources, including CD-ROMs, electronic books, electronic journals, and databases, and elaborates on the distribution and pricing issues involved in each. However, the following chapter on digitization is quite disappointing; I feel that its discussion is shallow and short, and offers only a glimpse of the difficulties of this task. The chapter contains a listing of multimedia file formats, which is explained clearly, omitting technical jargon. However, it could be improved by including more details about each format's optimal use. Chapter 7, "Information organization," surveys several DLs and highlights their adaptation of traditional classification and cataloging techniques. The chapter continues with a brief introduction to metadata, by first defining it and then discussing major standards: the Dublin Core, the Warwick Framework and EAD. A discussion of markup languages such as SGML, HTML, and XML rounds off the chapter. A more engaging chapter follows. Dealing with information access and user interfaces, it begins by examining information needs and the seeking process, with particular attention to the difficulties of translating search needs into an actual search query. Guidelines for user interface design are presented, distilled from recommendations from Shneiderman, Byrd, and Croft. Some research user interfaces are highlighted to hint at the future of information finding, and major features of browsing and searching interfaces are shown through case studies of a number of DLs.
Chapter 9 gives a layman's introduction to the classic models of information retrieval, and is written to emphasize each model's usability and features; the mathematical foundations have been dispensed with entirely. Multimedia retrieval, Z39.50, and issues with OPAC integration are briefly sketched, but details on the approaches to these problems are omitted. A dissatisfying chapter on preservation begins the third part on deployed DLs, which itemizes several preservation projects but does not identify the key points of each project. This weakness is offset by two solid chapters on DL services and social, economic, and legal issues. Here, the writing style of the text is more effective in surveying the pertinent issues. Chowdhury and Chowdhury write, "The importance of [reference] services has grown over time with the introduction of new technologies and services in libraries" (p. 228), emphasizing the central role that reference services have in DLs, and go on to discuss both free and fee-based services, and those housed as part of libraries as well as commercial services. The chapter on social issues examines the digital divide and also gives examples of institutions working to undo the divide: "Blackwells is making all 600 of its journals freely available to institutions within the Russian Federation" (p. 252). Key points in cost-models of electronic publishing and intellectual property rights are also discussed. Chowdhury and Chowdhury mention that "there is no legal deposit law to force the creators of digital information to submit a copy of every work to one or more designated institutions" for preservation (p. 265).
    Chapter 13 on DL evaluation merges criteria from traditional library evaluation with criteria from user interface design and information retrieval. Quantitative, macro-evaluation techniques are emphasized, and again, some DL evaluation projects and reports are illustrated. A very brief chapter on the role of librarians in the DL follows, emphasizing that traditional reference skills are paramount to the success of the digital librarian, but that he should also be savvy in Web page and user interface design. A final chapter on research trends in digital libraries seems a bit incoherent. It mentions many of the previous chapters' topics, and would possibly be better organized if written as summary sections and distributed among the other chapters. The book's breadth is quite expansive, touching on both fundamental and advanced topics necessary to a well-rounded DL education. As the book is thoroughly referenced to DL and DL-related research projects, it serves as a useful starting point for those interested in more in-depth learning. However, this breadth is also a weakness. In my opinion, the sheer number of research projects and papers surveyed leaves the authors little space to critique and summarize key issues. Many of the case studies are presented as itemized lists and not used to exemplify specific points. I feel that an introductory text should exercise some editorial and evaluative rights to create structure and organization for the uninitiated. Case studies should be carefully chosen to exemplify the specific issues, differences, and strengths highlighted. It is lamentable that in many of the descriptions of research projects, the authors tend to give more historical and funding background than is necessary and miss out on giving a synthesis of the pertinent details.
    Another weakness of the book is its favoritism towards the authors' own works. To a large extent, this bias is natural, as the authors know their own works best. However, in an introductory text, it is critical to reference the most appropriate source and give a balanced view of the field. In this respect, I feel the book could be more objective in its selection of references and research projects. Introduction to Digital Libraries is definitely a book written for a purpose. LIS undergraduates and "practicing professionals who need to know about recent developments in the field of digital libraries" (p. xiv) will find this book a fine introduction, as it is clearly written and accessible to laymen, giving explanations without delving into terminology and math. As it surveys a large number of projects, it is also an ideal starting point for students to pick and investigate particular DL research projects. However, graduate LIS students who already have a solid understanding of library fundamentals, as well as computer science students, may find this volume lacking in details. Alternative texts such as Lesk (1999) and Arms (2000) are possibly more suitable for those who need to investigate topics in depth. For the experienced practitioner or researcher delving into the DL field for the first time, the recent 2002 ARIST chapter by Fox and Urs may also be a suitable alternative. In their introduction, the authors ask, "What are digital libraries? How do they differ from online databases and search services? Will they replace print libraries? What impact will they have on people and society?" (p. 3). To answer these questions, Chowdhury and Chowdhury offer a multitude of case studies to let the audience draw their own conclusions. To this end, it is my opinion that Introduction to Digital Libraries serves a useful purpose as a supplemental text in the digital library curriculum but misses the mark of being an authoritative textbook."
  5. Kao, M.L.: Cataloging and classification for library technicians (2001) 0.00
    0.0016631988 = product of:
      0.014968789 = sum of:
        0.014968789 = weight(_text_:of in 6295) [ClassicSimilarity], result of:
          0.014968789 = score(doc=6295,freq=4.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.24433708 = fieldWeight in 6295, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=6295)
      0.11111111 = coord(1/9)
    
    Abstract
    First book on the subject written for library technicians. Describes all aspects of cataloging and classification of library materials (book and nonbook), emphasizing copy cataloging but also discussing original cataloging
  6. Taylor, A.G.: ¬The organization of information (2003) 0.00
    0.0016631988 = product of:
      0.014968789 = sum of:
        0.014968789 = weight(_text_:of in 4596) [ClassicSimilarity], result of:
          0.014968789 = score(doc=4596,freq=4.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.24433708 = fieldWeight in 4596, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=4596)
      0.11111111 = coord(1/9)
    
    Abstract
    Provides a detailed and insightful discussion of such basic retrieval tools as bibliographies, catalogues, indexes, finding aids, registers, databases, major bibliographic utilities, and other organizing entities.
  7. Taylor, A.G.: Wynar's introduction to cataloging and classification (2004) 0.00
    0.0016631988 = product of:
      0.014968789 = sum of:
        0.014968789 = weight(_text_:of in 4601) [ClassicSimilarity], result of:
          0.014968789 = score(doc=4601,freq=4.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.24433708 = fieldWeight in 4601, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=4601)
      0.11111111 = coord(1/9)
    
    Footnote
    Offers practitioners and students of library and information science a complete, up-to-date, and practical guide to the world of cataloguing and classification.
  8. Ganendran, J.: Learn Library of Congress subject access (2000) 0.00
    0.0016464829 = product of:
      0.014818345 = sum of:
        0.014818345 = weight(_text_:of in 1368) [ClassicSimilarity], result of:
          0.014818345 = score(doc=1368,freq=8.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.24188137 = fieldWeight in 1368, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1368)
      0.11111111 = coord(1/9)
    
    Abstract
    This book provides the necessary skills for a cataloger in a library or other information agency, whether at a professional or paraprofessional level. It is also suitable for university students studying librarianship and those independently learning subject cataloging. Reviews the various parts of the LCSH cataloging system and contains practice exercises and tests. A glossary, bibliography and index complete this fourth study guide in the library basics series
    LCSH
    Subject headings, Library of Congress
    Subject
    Subject headings, Library of Congress
  9. Read, J.: Cataloguing without tears : managing knowledge in the information society (2003) 0.00
    0.0014876103 = product of:
      0.013388492 = sum of:
        0.013388492 = weight(_text_:of in 4509) [ClassicSimilarity], result of:
          0.013388492 = score(doc=4509,freq=20.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.21854173 = fieldWeight in 4509, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=4509)
      0.11111111 = coord(1/9)
    
    Abstract
    It is a practical and authoritative guide to cataloguing for librarians, information scientists and information managers. It is intended to be used in conjunction with an internationally recognised standard to show how, firstly, cataloguing underpins all the other activities of an information service and, secondly, how to apply best practice in a variety of different situations.
    Content
    Key features: relates theory to practice and is written in an easy-to-read style; includes guidance on subject cataloguing as well as descriptive cataloguing; covers the use of ISBD and Dublin Core in descriptive cataloguing, rather than being tied exclusively to using AACR; covers the principles of subject cataloguing, a topic which most non-librarians believe to be an integral part of cataloguing; not only describes the hows of cataloguing but goes a stage further by explaining why one might want to catalogue a particular item in a certain way.
    The author: Jane Read has over 13 years' experience in academic libraries. She works as a cataloguing officer for The Higher Education Academy.
    Readership: librarians and information professionals responsible for cataloguing materials (of any format). Knowledge managers will also find the book of interest.
    Contents: Why bother to catalogue - what is a catalogue for, anticipating user needs, convincing your boss it is important; What to catalogue - writing a cataloguing policy, what a catalogue record contains, the politics of cataloguing; Who should catalogue - how long does it take to catalogue a book, skill sets needed, appropriate levels of staffing, organising time; How to catalogue and not reinvent the wheel - choosing a records management system, international standards (AACR/MARC, ISBD, Dublin Core), subject cataloguing, and authority control; Is it a book, is it a journal - distinguishing between formats, the 'awkward squad', loose-leaf files, websites and skeletons; What's a strange attractor? - cataloguing subjects you know nothing about, finding the right subject headings, verifying your information; Qui ne lit pas le français - unknown languages and how to deal with them, what language is it, transcribing non-Roman alphabets, understanding the subject; Special cases - rare books and archival collections, children's books, electronic media; Resources for cataloguers - reference books, online discussion lists, conferences, bibliography
  10. Stock, W.G.: Qualitätskriterien von Suchmaschinen : Checkliste für Retrievalsysteme (2000) 0.00
    0.0014744176 = product of:
      0.013269759 = sum of:
        0.013269759 = product of:
          0.026539518 = sum of:
            0.026539518 = weight(_text_:22 in 5773) [ClassicSimilarity], result of:
              0.026539518 = score(doc=5773,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.19345059 = fieldWeight in 5773, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5773)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Source
    Password. 2000, H.5, S.22-31
  11. Haller, K.; Popst, H.: Katalogisierung nach den RAK-WB : eine Einführung in die Regeln für die alphabetische Katalogisierung in wissenschaftlichen Bibliotheken (2003) 0.00
    0.0014744176 = product of:
      0.013269759 = sum of:
        0.013269759 = product of:
          0.026539518 = sum of:
            0.026539518 = weight(_text_:22 in 1811) [ClassicSimilarity], result of:
              0.026539518 = score(doc=1811,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.19345059 = fieldWeight in 1811, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1811)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Date
    17. 6.2015 15:22:06
  12. McIlwaine, I.C.: ¬The Universal Decimal Classification : a guide to its use (2000) 0.00
    0.0014403724 = product of:
      0.012963352 = sum of:
        0.012963352 = weight(_text_:of in 161) [ClassicSimilarity], result of:
          0.012963352 = score(doc=161,freq=12.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.21160212 = fieldWeight in 161, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=161)
      0.11111111 = coord(1/9)
    
    Abstract
    This book is an extension and total revision of the author's earlier Guide to the use of UDC. The original was written in 1993 and in the intervening years much has happened with the classification. In particular, a much more rigorous approach has been undertaken in revision to ensure that the scheme is able to handle the requirements of a networked world. The book outlines the history and development of the Universal Decimal Classification, provides practical hints on its application and works through all the auxiliary and main tables highlighting aspects that need to be noted in applying the scheme. It also provides guidance on the use of the Master Reference File and discusses the ways in which the classification is used in the 21st century and its suitability as an aid to subject description in tagging metadata and consequently for application on the Internet. It is intended as a source for information about the scheme, for practical usage by classifiers in their daily work and as a guide to the student learning how to apply the classification. It is amply provided with examples to illustrate the many ways in which the scheme can be applied and will be a useful source for a wide range of information workers
  13. Nohr, H.: Grundlagen der automatischen Indexierung : ein Lehrbuch (2003) 0.00
    0.0011795341 = product of:
      0.010615807 = sum of:
        0.010615807 = product of:
          0.021231614 = sum of:
            0.021231614 = weight(_text_:22 in 1767) [ClassicSimilarity], result of:
              0.021231614 = score(doc=1767,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.15476047 = fieldWeight in 1767, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1767)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Date
    22. 6.2009 12:46:51
  14. Brühl, B.: Thesauri und Klassifikationen : Naturwissenschaften - Technik - Wirtschaft (2005) 0.00
    0.0011795341 = product of:
      0.010615807 = sum of:
        0.010615807 = product of:
          0.021231614 = sum of:
            0.021231614 = weight(_text_:22 in 3487) [ClassicSimilarity], result of:
              0.021231614 = score(doc=3487,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.15476047 = fieldWeight in 3487, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3487)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Series
    Materialien zur Information und Dokumentation; Bd.22
  15. Mortimer, M.: Learn Dewey Decimal classification (Edition 21) (2000) 0.00
    0.0011760591 = product of:
      0.010584532 = sum of:
        0.010584532 = weight(_text_:of in 3144) [ClassicSimilarity], result of:
          0.010584532 = score(doc=3144,freq=2.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.17277241 = fieldWeight in 3144, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=3144)
      0.11111111 = coord(1/9)
    
    Footnote
    Rez. in: Journal of documentation 50(1994) no.1, S.60-52 (M. Kinnell)
  16. Wynar, B.S.; Taylor, A.G.; Miller, D.P.: Introduction to cataloging and classification (2006) 0.00
    0.0011760591 = product of:
      0.010584532 = sum of:
        0.010584532 = weight(_text_:of in 2053) [ClassicSimilarity], result of:
          0.010584532 = score(doc=2053,freq=8.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.17277241 = fieldWeight in 2053, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2053)
      0.11111111 = coord(1/9)
    
    Abstract
    This revised edition of Wynar's Introduction to Cataloging and Classification (9th ed., 2000) completely incorporates revisions of AACR2, enhancements to MARC 21, and developments in areas such as resource description and access. Aside from the many revisions and updates and improved organization, the basic content remains the same. Beginning with an introduction to cataloging, cataloging rules, and MARC format, the book then turns to its largest section, "Description and Access." Authority control is explained, and the various methods of subject access are described in detail. Finally, administrative issues, including catalog management, are discussed. The glossary, source notes, suggested reading, and selected bibliography have been updated and expanded, as has the index. The examples throughout help to illustrate rules and concepts, and most MARC record examples are now shown in OCLC's Connexion format. This is an invaluable resource for cataloging students and beginning catalogers as well as a handy reference tool for more experienced catalogers.
    Content
    Rev. ed. of: Wynar's introduction to cataloging and classification. Rev. 9th ed. 2004.
  17. Hunter, E.J.: Classification - made simple (2002) 0.00
    0.0011642392 = product of:
      0.010478153 = sum of:
        0.010478153 = weight(_text_:of in 3390) [ClassicSimilarity], result of:
          0.010478153 = score(doc=3390,freq=4.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.17103596 = fieldWeight in 3390, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3390)
      0.11111111 = coord(1/9)
    
    Abstract
    This is an attempt to simplify the initial study of classification as used for information retrieval. The text adopts a gradual progression from very basic principles, one which should enable the reader to gain a firm grasp of one idea before proceeding to the next.
  18. Hunter, E.J.: Classification - made simple : an introduction to knowledge organisation and information retrieval (2009) 0.00
    0.0010518994 = product of:
      0.009467094 = sum of:
        0.009467094 = weight(_text_:of in 3394) [ClassicSimilarity], result of:
          0.009467094 = score(doc=3394,freq=10.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.15453234 = fieldWeight in 3394, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=3394)
      0.11111111 = coord(1/9)
    
    Abstract
    This established textbook introduces the essentials of classification as used for information processing. The third edition takes account of developments that have taken place since the second edition was published in 2002. "Classification Made Simple" provides a useful gateway to more advanced works and the study of specific schemes. As an introductory text, it will be invaluable to students of information work and to anyone inside or outside the information profession who needs to understand the manner in which classification can be utilized to facilitate and enhance organisation and retrieval.
    Footnote
    Rez. in: Mitt. VÖB 63(2010) H.1, S.143-147 (O. Oberhauser): " ... Let us turn to the criticism, which was already hinted at in the last few paragraphs. The book's stated aim, according to the first sentence of the introduction, is "to simplify the initial study of classification as used for knowledge organisation and information retrieval" (p. xi). In the opening chapters the author largely succeeds in this. The introduction to the two basic types - faceted systems on the one hand, hierarchical systems on the other - is comprehensible and without doubt well suited for beginners. In the chapters that follow, however, one begins to wonder who the book's target audience is actually meant to be. For beginners much will be too difficult, since precisely on the more demanding points the text is too superficial, no didactically satisfying treatment is offered, and occasionally specialist knowledge is even presupposed. For practitioners in libraries the connection to everyday reality is often missing, since problems of shelf arrangement, for example, are touched on only in passing. University teachers who have to prepare a course on classification topics will find some of the book useful, but will often turn to other materials because of its lack of detail. That leaves the reader "interested in questions of classification" - an undefined and not exactly common creature, but one that probably does exist and will find here a wealth of points raised that invite further research in other sources. The numerous examples are well done, even if the notation systems chosen for them are not always fortunate. Linguistically, too - at least in the opening chapters - there is nothing to complain about. That the final two chapters are rather less successful has already been indicated above.
 In the remaining sections, too, one repeatedly notices that the book in its basic design dates from the paper rather than the online era. Nevertheless I do not wish to complain unduly, if only because there are not that many usable textbooks on classification topics - and Hunter's text does belong in that category."
  19. Vonhoegen, H.: Einstieg in XML (2002) 0.00
    0.0010320923 = product of:
      0.009288831 = sum of:
        0.009288831 = product of:
          0.018577661 = sum of:
            0.018577661 = weight(_text_:22 in 4002) [ClassicSimilarity], result of:
              0.018577661 = score(doc=4002,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.1354154 = fieldWeight in 4002, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4002)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Footnote
    Rez. in: XML Magazin und Web Services 2003, H.1, S.14 (S. Meyen): "The Resource Description Framework (RDF) has been available as a W3C recommendation since 22 February 1999. But what lies behind this standard, which is meant to usher in the age of the Semantic Web? What RDF means, what it is used for, what advantages it has over XML and how RDF is applied will be explained in this article. Opening the book and browsing the introductory chapter, one notices at once that the reader is not lectured in the style of "in XML the angle brackets are very important", even though this is a book for beginners. On the contrary: it gets straight down to business, and a healthy amount of prior knowledge is assumed. Anyone interested in XML today has, with 99 per cent probability, already gathered the relevant experience with HTML and the Web and is no newbie in the realm of angle brackets and (reasonably) well-formed documents. And here lies a clear strength of Helmut Vonhoegen's work: he gauges his beginner readers quite well and therefore leads them to the topic in a practical and comprehensible way. The third chapter deals with the Document Type Definition (DTD), describing its purposes and uses. Yet here the author constantly stresses the limitations of that approach, making the call for a new concept clear: XML Schema, which he presents in the following chapter. A fairly detailed chapter is then devoted to the relatively recent XML Schema concept, explaining its advantages over the DTD (modelling of complex data structures, support for numerous data types, character restrictions and much more). XML Schema, the reader learns, defines, like the old DTD, the vocabulary and the permissible grammar of an XML document, but is itself also an XML document and can (or rather should) be checked for well-formedness like any other XML.
 Further chapters cover the navigation standards XPath, XLink and XPointer, transformations with XSLT and XSL, and of course the XML programming interfaces DOM and SAX. Various implementations are used, and, gratifyingly, Microsoft approaches on the one hand and Java/Apache projects on the other are presented in roughly comparable depth. In the final chapter Vonhoegen covers the obligatory web services as an application of XML and demonstrates a small C#- and ASP-based example (the Java equivalent with Apache Axis is unfortunately missing). "Einstieg in XML" presents its material in a clearly comprehensible form and knows how to meet its readers at a good level. It offers a good overview of the fundamentals of XML and can - at least for now - boast a fairly high degree of topicality."
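The review's point that an XML Schema is itself an XML document, and can therefore be checked for well-formedness like any other XML, can be seen in a few lines of Python (the schema below is a hypothetical minimal example, not taken from the book):

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical XML Schema declaring a single string element.
# Because a schema is itself XML, any XML parser can check it for
# well-formedness, exactly as the review notes.
schema_doc = """<?xml version="1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="title" type="xs:string"/>
</xs:schema>"""

root = ET.fromstring(schema_doc)  # raises ParseError if not well-formed
```

Well-formedness is of course weaker than schema validity; validating an instance document against this schema would require a schema-aware library.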
  20. Booth, P.F.: Indexing : the manual of good practice (2001) 0.00
    7.8010943E-4 = product of:
      0.007020985 = sum of:
        0.007020985 = weight(_text_:of in 1968) [ClassicSimilarity], result of:
          0.007020985 = score(doc=1968,freq=22.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.11460425 = fieldWeight in 1968, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.015625 = fieldNorm(doc=1968)
      0.11111111 = coord(1/9)
    
    Abstract
    Indexing is an activity which often goes unnoticed and can be taken for granted by readers. Indexing: The Manual of Good Practice covers all aspects of whole-document indexing of books, serial publications, images and sound materials. The book sets out the purpose and principles of indexing and covers areas such as managing the work, technology and other subject specialisms. The Manual takes the reader from the basic principles of indexing through to expert approaches, and therefore has a broad appeal for both indexers and prospective indexers, whether they work freelance or in-house.
    Footnote
    Rez. in: nfd - Information Wissenschaft und Praxis 54(2003) H.7, S.440-442 (R. Fugmann): "The book opens with the chapter "Myths about Indexing", naming widespread misconceptions about indexing, above all about the making of back-of-the-book indexes. A single sentence aptly sketches the problem to which the book is devoted: "With the development of electronic documents, it has become possible to store very large amounts of information; but storage is not of much use without the capability to retrieve, to convert, transfer and reuse the information". The author criticizes the widely held view that indexing is merely a matter of "picking out words from the text or naming objects in images and using those words as index headings". Such a way of working, however, produces not indexes but concordances (i.e. alphabetical lists of the locations of text words) and "... is entirely dependent on the words themselves and is not concerned with the ideas behind them". Collecting information is easy. But making it retrievable has to be learned, if more is to be achieved than merely finding again texts one still remembers in every detail (known-item searches, questions of recall), down to the details of the wording used for the concepts sought. Drawing on her extensive practical experience, the author describes the steps that must be taken on the conceptual and the technical level. The former include setting aside details that should not appear in the index ("unsought terms"), because they will certainly never be a search target and, as "false friends", would swamp the searcher with trivia - a decision that can only be made with sound subject knowledge. Everything, on the other hand, that could be a meaningful search target now or in the future (!) and is "sufficiently informative" deserves a heading in the index. Instructive examples also show how a text word becomes useless for the index when it appears there as a (bad) heading, torn out of the interpreting context in which it was embedded in the text. The ambiguity that clings to almost every natural-language word must likewise be resolved; otherwise the searcher will all too often be led astray when looking things up - the more often, the larger such an uncleaned store has already grown.
 Access to the information store must also be provided from related concepts, for searchers like to be led from their query to broader and, above all, to narrower concepts; "see also" references serve this purpose. Access must equally be provided from different but synonymous expressions by means of "see" references, for an enquirer may have begun the search with one of these synonyms and would otherwise find nothing. Moreover, much for which a searcher has a heading ready at hand occurs in a text only in wordy circumlocution and paraphrase ("terms that may not appear in the text but are likely to be sought by index users"), i.e. practically unfindable amid such varied modes of expression. All of this should be expressed lexically, in familiar terminology, for that is also the form in which queries are posed. Here the line is drawn between "concept indexing" and mere "word indexing", the latter contenting itself with presenting uninterpreted text words. Not only is this distinction widely unknown; its existence is sometimes even denied, although a word usually expresses many concepts and a concept is usually expressed by many different words and sentences. An author can, and often must, make do with hints in his text, because a reader or listener can recognize what is meant from the context and does not want to be pestered with excessive explicitness (spoon feeding), which would be felt as an imputation of ignorance. For retrieval, however, what is meant must be expressed explicitly.
 This book makes clear how much extra-textual and background knowledge must be brought to bear for a good indexing result, on the basis of expert and careful interpretation ("The indexer must understand the meaning of a text"). All of this makes good indexing appear not only a professional service but also an art. As the foundation for all these steps a thesaurus is recommended, with a well-structured network of conceptual relationships, adapted to the text of the book in question. Only rarely, however, will one be able to fall back on a thesaurus already available elsewhere. A pointer to the relevant literature on thesaurus construction would have been useful here."
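The "see" (synonym) and "see also" (related concept) access routes the review describes can be sketched as a toy data structure (all entries below are hypothetical illustrations; a real index is built from careful interpretation of the text, as the review stresses):

```python
# Toy back-of-the-book index with "see" and "see also" references.
# All headings and page numbers are invented for illustration.
see = {"automobiles": "cars"}                 # synonym -> preferred heading
see_also = {"cars": ["vehicles", "traffic"]}  # heading -> related headings
locators = {"cars": [12, 87]}                 # heading -> page numbers

def look_up(term):
    heading = see.get(term, term)             # follow a "see" reference first
    return locators.get(heading, []), see_also.get(heading, [])

pages, related = look_up("automobiles")       # synonym resolves to "cars"
```

The lookup first follows any "see" reference to the preferred heading, then returns both the locators and the "see also" suggestions, mirroring how a reader navigates a printed index.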

Languages

  • e 31
  • d 11

Types

  • m 39
  • a 1
  • el 1
  • s 1
  • x 1

Subjects