Search (213 results, page 11 of 11)

  • Filter: theme_ss:"Grundlagen u. Einführungen: Allgemeine Literatur"
  1. Henzler, R.G.: Information und Dokumentation : Sammeln, Speichern und Wiedergewinnen von Fachinformation in Datenbanken (1992) 0.00
    Score 4.925171E-4 = coord(1/2) · coord(1/2) · weight(_text_:s in 4839) [ClassicSimilarity], where
      weight = queryWeight · fieldWeight = 0.04100075 · 0.048049565 = 0.0019700683
      queryWeight = idf · queryNorm = 1.0872376 · 0.03771094
      fieldWeight = tf · idf · fieldNorm = 1.4142135 · 1.0872376 · 0.03125
      tf = √freq = √2.0; idf(docFreq=40523, maxDocs=44218) = 1 + ln(44218/(40523+1)); fieldNorm(doc=4839) = 0.03125
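    All thirteen hits on this page are scored by the same ClassicSimilarity pattern; only freq, fieldNorm, and the document id vary. As a cross-check, the short Python sketch below recomputes the value for this hit. It is an illustration only: the constants are read off the breakdown above, and the standard ClassicSimilarity formulas tf = √freq and idf = 1 + ln(maxDocs/(docFreq+1)) are assumed rather than taken from this page.

      import math

      def idf(doc_freq, max_docs):
          # ClassicSimilarity idf (assumed): 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def tf(freq):
          # ClassicSimilarity tf (assumed): square root of the term frequency
          return math.sqrt(freq)

      query_norm = 0.03771094                  # from the breakdown above
      field_norm = 0.03125                     # from the breakdown above
      i = idf(40523, 44218)                    # ~1.0872376

      query_weight = i * query_norm            # ~0.04100075
      field_weight = tf(2.0) * i * field_norm  # ~0.048049565
      score = query_weight * field_weight * 0.5 * 0.5   # two coord(1/2) factors

      print("%e" % score)                      # ~4.925171e-04

    Substituting the freq and fieldNorm reported for the other hits reproduces their scores; hits whose tf · fieldNorm products coincide (e.g. freq=2.0 with fieldNorm=0.03125 and freq=8.0 with fieldNorm=0.015625) necessarily tie.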
    
    Pages
    X, 322 S
  2. Bertram, J.: Einführung in die inhaltliche Erschließung : Grundlagen - Methoden - Instrumente (2005) 0.00
    Score 4.925171E-4: weight(_text_:s in 210) [ClassicSimilarity], freq=8.0, fieldNorm=0.015625, coord(1/2) · coord(1/2); computed as for hit 1.
    
    Footnote
    Rez. in: Information - Wissenschaft und Praxis 56(2005) H.7, S.395-396 (M. Ockenfeld): "... The book is the second volume of a series published by the International Network for Terminology (TermNet), a group of fourteen terminology specialists from eight countries. One concern the author voices in her preface is accordingly to "bring light into the terminological darkness" into which one all too easily strays when studying the literature intensively, because actual usage frequently deviates from the standardized one and because, moreover, librarians, documentalists, and information scientists use terms differently. ... The didactically well-prepared material is presented very clearly, precisely, and with palpable enthusiasm. The book is also a pleasure to read thanks to its careful typographic design, above all for those accustomed to traditional German orthography. It can be emphatically recommended to its intended audience, students and teachers of higher-education programmes in librarianship, information, and documentation, as a compact textbook and workbook on the fundamentals of subject indexing."
    Weitere Rez. in: Mitt. VÖB 59(2006) H.1, S.63-66 (O. Oberhauser); BuB 58(2006) H.4, S.344-345 (H. Wiesenmüller): "... Around subject indexing, by contrast, things have become strangely quiet. In many places it is now perceived primarily as a cost factor, or so at least it seems to this reviewer. New tasks felt to be more important (notably the teaching of information literacy) are pushing it further and further into the background. Even the decision of Die Deutsche Bibliothek to rely more heavily on classificatory indexing with the Dewey Decimal Classification (DDC) has not triggered any broad discussion of principles; so far the change has been viewed almost exclusively in terms of the loss of third-party data for the subject-heading chains. Among librarians, then, interest in subject indexing is currently rather low ... It would be good if, in future, more thought and talk were again devoted to subject indexing. The basic knowledge needed for a qualified discussion is something the volume presented here can provide. It is therefore warmly recommended both to those who are interested in subject indexing and to those who do not (yet) take such an interest."
    Pages
    315 S
  3. Gaus, W.: Dokumentations- und Ordnungslehre : Theorie und Praxis des Information Retrieval (2005) 0.00
    Score 4.925171E-4: weight(_text_:s in 679) [ClassicSimilarity], freq=2.0, fieldNorm=0.03125, coord(1/2) · coord(1/2); computed as for hit 1.
    
    Pages
    XVI, 480 S
  4. Belew, R.K.: Finding out about : a cognitive perspective on search engine technology and the WWW (2001) 0.00
    Score 4.925171E-4: weight(_text_:s in 3346) [ClassicSimilarity], freq=2.0, fieldNorm=0.03125, coord(1/2) · coord(1/2); computed as for hit 1.
    
    Pages
    XXVII, 356 S. + 1 CD-ROM
  5. Batley, S.: Classification in theory and practice (2005) 0.00
    Score 4.925171E-4: weight(_text_:s in 1170) [ClassicSimilarity], freq=8.0, fieldNorm=0.015625, coord(1/2) · coord(1/2); computed as for hit 1.
    
    Footnote
    Rez. in: KO 31(2005) no.4, S.257-258 (B.H. Kwasnik): "According to the author, there have been many books that address the general topic of cataloging and indexing, but relatively few that focus solely on classification. This compact and clearly written book promises to "redress the balance," and it does. From the outset the author identifies this as a textbook - one that provides theoretical underpinnings, but has as its main goal the provision of "practical advice and the promotion of practical skills" (p. vii). This is a book for the student, or for the practitioner who would like to learn about other applied bibliographic classification systems, and it considers classification as a pragmatic solution to a pragmatic problem: that of organizing materials in a collection. It is not aimed at classification researchers who study the nature of classification per se, nor at those whose primary interest is in classification as a manifestation of human cultural, social, and political values. Having said that, the author's systematic descriptions provide an exceptionally lucid and conceptually grounded account of the prevalent bibliographic classification schemes as they exist, and thus the book could serve as a baseline for further comparative analyses or discussions by anyone pursuing such investigations. What makes this book so appealing, even to someone who has immersed herself in this area for many years, as a practicing librarian, a teacher, and a researcher? I especially liked the conceptual framework that supported the detailed descriptions. The author defines and provides examples of the fundamental concepts of notation and the types of classifications, and then develops the notions of conveying order, brevity and simplicity, being memorable, expressiveness, flexibility and hospitality. These basic terms are then used throughout to analyze and comment on the classifications described in the various chapters: DDC, LCC, UDC, and some well-chosen examples of faceted schemes (Colon, Bliss, London Classification of Business Studies, and a hypothetical library of photographs).
    Weitere Rez. in: Mitt. VÖB 59(2006) H.1, S.58-60 (O. Oberhauser).
    Pages
    XI, 181 S
  6. Computerlinguistik und Sprachtechnologie : Eine Einführung (2010) 0.00
    Score 4.925171E-4: weight(_text_:s in 1735) [ClassicSimilarity], freq=2.0, fieldNorm=0.03125, coord(1/2) · coord(1/2); computed as for hit 1.
    
    Pages
    XVI, 736 S
  7. Schwartz, C.: Sorting out the Web : approaches to subject access (2001) 0.00
    Score 4.3532767E-4: weight(_text_:s in 2050) [ClassicSimilarity], freq=4.0, fieldNorm=0.01953125, coord(1/2) · coord(1/2); computed as for hit 1.
    
    Footnote
    Rez. in: KO 50(2003) no.1, S.45-46 (L.M. Given): "In her own preface to this work, the author notes her lifelong fascination with classification and order, as well as her more recent captivation with the Internet - a place of "chaos in need of organization" (xi). Sorting out the Web examines current efforts to organize the Web and is well-informed by the author's academic and professional expertise in information organization, information retrieval, and Web development. Although the book's level and tone are particularly relevant to a student audience (or others interested in Web-based subject access at an introductory level), it will also appeal to information professionals developing subject access systems across a range of information contexts. There are six chapters in the book, each describing and analyzing one core concept related to the organization of Web content. All topics are presented in a manner ideal for newcomers to the area, with clear definitions, examples, and visuals that illustrate the principles under discussion. The first chapter provides a brief introduction to developments in information technology, including an historical overview of information services, users' needs, and libraries' responses to the Internet. Chapter two introduces metadata, including core concepts and metadata formats. Throughout this chapter the author presents a number of figures that aptly illustrate the application of metadata in HTML, SGML, and MARC record environments, and the use of metadata tools (e.g., XML, RDF). Chapter three begins with an overview of classification theory and specific schemes, but the author devotes most of the discussion to the application of classification systems in the Web environment (e.g., Dewey, LCC, UDC). Web screen captures illustrate the use of these schemes for information sources posted to sites around the world. The chapter closes with a discussion of the future of classification; this is a particularly useful section, as the author presents a listing of core journal and conference venues where new approaches to Web classification are explored. In chapter four, the author extends the discussion of classification to the use of controlled vocabularies. As in the first few chapters, the author first presents core background material, including reasons to use controlled vocabularies and the differences between pre- and post-coordinate indexing, and then discusses the application of specific vocabularies in the Web environment (e.g., Infomine's use of LCSH). The final section of the chapter explores failure in subject searching and the limitations of controlled vocabularies for the Web. Chapter five discusses one of the most common and fast-growing topics related to subject access on the Web: search engines. The author presents a clear definition of the term that encompasses classified search lists (e.g., Yahoo) and query-based engines (e.g., Alta Vista). In addition to historical background on the development of search engines, Schwartz also examines search service types, features, results, and system performance.
    Pages
    169 S
  8. Theory of subject analysis : A sourcebook (1985) 0.00
    Score 4.3532767E-4: weight(_text_:s in 3622) [ClassicSimilarity], freq=4.0, fieldNorm=0.01953125, coord(1/2) · coord(1/2); computed as for hit 1.
    
    Pages
    XV,415 S
    Type
    s
  9. Antoniou, G.; Harmelen, F. van: ¬A semantic Web primer (2004) 0.00
    Score 4.3532767E-4: weight(_text_:s in 468) [ClassicSimilarity], freq=4.0, fieldNorm=0.01953125, coord(1/2) · coord(1/2); computed as for hit 1.
    
    Footnote
    Rez. in: JASIST 57(2006) no.8, S.1132-1133 (H. Che): "The World Wide Web has been the main source of an important shift in the way people communicate with each other, get information, and conduct business. However, most of the current Web content is only suitable for human consumption. The main obstacle to providing better quality of service is that the meaning of Web content is not machine-accessible. The "Semantic Web" is envisioned by Tim Berners-Lee as a logical extension to the current Web that enables explicit representations of term meaning. It aims to bring the Web to its full potential via the exploration of these machine-processable metadata. To fulfill this, it provides some metalanguages like RDF, OWL, DAML+OIL, and SHOE for expressing knowledge that has clear, unambiguous meanings. The first steps in weaving the Semantic Web into the current Web are successfully underway. In the forthcoming years, these efforts still remain highly focused in the research and development community. In the next phase, the Semantic Web will respond more intelligently to user queries. The first chapter gets started with an excellent introduction to the Semantic Web vision. At first, today's Web is introduced, and problems with some current applications like search engines are also covered. Subsequently, knowledge management, business-to-consumer electronic commerce, business-to-business electronic commerce, and personal agents are used as examples to show the potential requirements for the Semantic Web. Next comes a brief description of the underpinning technologies, including metadata, ontology, logic, and agents. The differences between the Semantic Web and Artificial Intelligence are also discussed in a later subsection. In section 1.4, the famous "layer-cake" diagram is given to show a layered view of the Semantic Web. From chapter 2, the book starts addressing some of the most important technologies for constructing the Semantic Web. In chapter 2, the authors discuss XML and its related technologies such as namespaces, XPath, and XSLT. XML is a simple, very flexible text format which is often used for the exchange of a wide variety of data on the Web and elsewhere. The W3C has defined various languages on top of XML, such as RDF. Although this chapter is very well planned and written, many details are not included because of the extensiveness of the XML technologies. Many other books on XML provide more comprehensive coverage.
    Pages
    236 S
  10. Chowdhury, G.G.; Chowdhury, S.: Introduction to digital libraries (2003) 0.00
    Score 4.309524E-4: weight(_text_:s in 6119) [ClassicSimilarity], freq=8.0, fieldNorm=0.013671875, coord(1/2) · coord(1/2); computed as for hit 1.
    
    Footnote
    Rez. in: JASIST 55(2004) no.2, S.178-179 (M.-Y. Kan): "In their latest book, Chowdhury and Chowdhury have written an introductory text on digital libraries, primarily targeting "students researching digital libraries as part of information and library science, as well as computer science, courses" (p. xiv). It is an ambitious work that surveys many of the broad topics in digital libraries (DL) while highlighting completed and ongoing DL research in many parts of the world. With the revamping of Library and Information Science (LIS) curricula to focus on information technology, many LIS schools are now teaching DL topics either as an independent course or as part of an existing one. Instructors of these courses have in many cases used supplementary texts and compiled readers from journals and conference materials, possibly because they feel that a suitable textbook has yet to be written. A solid, principal textbook for digital libraries is sorely needed to provide a critical, evaluative synthesis of DL literature. It is with this in mind that I believe Introduction to Digital Libraries was written. An introductory text on any cross-disciplinary topic is bound to have conflicting limitations and expectations from its adherents, who come from different backgrounds. This is the case in the development of the DL curriculum, in which both LIS and computer science schools are actively involved. Compiling a useful secondary source in such cross-disciplinary areas is challenging; it requires that jargon from each contributing field be carefully explained and respected, while providing thought-provoking material to broaden student perspectives. In my view, the book's breadth certainly encompasses the whole of what an introduction to DL needs, but it is hampered by a lack of focus from catering to such disparate needs. For example, LIS students will need to know which key aspects differentiate digital library metadata from traditional metadata, while computer science students will need to learn the basics of vector space and probabilistic information retrieval. However, the text does not give enough detail on either subject, and thus even introductory students will need to go beyond the book and consult primary sources. In this respect, the book's 307 pages of content are too short to do justice to such a broad field of study.
    Chapter 2 examines the variety and breadth of DL implementations and collections through a well-balanced selection of 20 DLs. The authors make a useful classification of the various types of DLs into seven categories and give a brief synopsis of two or three examples from each category. These categories include historical, national, and university DLs, as well as DLs for special materials and research. Chapter 3 examines research efforts in digital libraries, concentrating on the three eLib initiatives in the UK and the two Digital Libraries Initiatives in the United States. The chapter also offers some details on joint research between the UK and the United States (the NSF/JISC jointly funded programs), Europe, Canada, Australia, and New Zealand. While both of these chapters do an admirable job of surveying the DL landscape, the breadth and variety of materials need to be encapsulated in a coherent summary that illustrates the commonality of their approaches and the key differences that have been driven by aspects of their collections and audience. Unfortunately, this summary aspect is lacking here and elsewhere in the book. Chapter 2 does an admirable job of DL selection that showcases the variety of existing DLs, but I feel that Chapter 3's selection of research projects could be improved. The chapter's emphasis is clearly on UK-based research, devoting nine pages to it compared to six for EU-funded projects. While this emphasis could be favorable for UK courses, it hampers the chances of the text's adoption in other courses internationally. Chapter 4 begins the core part of the book by examining the DL from a design perspective. As a well-designed DL encompasses various practical and theoretical considerations, the chapter introduces many of the concepts that are elaborated on in later chapters. The Kahn/Wilensky and Lagoze/Fielding architectures are summarized in bullet points, and specific aspects of these frameworks are elaborated on. These include the choice between a federated or centralized search architecture (referencing Virginia Tech's NDLTD and Waikato's Greenstone) and the level of interoperability (discussing UNIMARC and metadata harvesting). Special attention is paid to hybrid library design, with references to UK projects. A useful summary of recommended standards for DL design concludes the chapter.
    Pages
    359 S
  11. Chu, H.: Information representation and retrieval in the digital age (2010) 0.00
    Score 4.2653232E-4: weight(_text_:s in 92) [ClassicSimilarity], freq=6.0, fieldNorm=0.015625, coord(1/2) · coord(1/2); computed as for hit 1.
    
    Footnote
    Rez. in: JASIST 56(2005) no.2, S.215-216 (A. Heath): "What is small, thoroughly organized, and easy to understand? Well, it's Heting Chu's latest book on information retrieval. A very welcome release, this small literary addition to the field (only 248 pages) contains a concise and well-organized discussion of every major topic in information retrieval. The often-complex field of information retrieval is presented from its origin in the early 1950s to the present day. The organization of this text is top-notch, thus making this an easy read for even the novice. Unlike other titles in this area, Chu's user-friendly style of writing is done on purpose, to properly introduce newcomers to the field in a less intimidating way. As stated by the author in the Preface, the purpose of the book is to "present a systematic, thorough yet nontechnical view of the field by using plain language to explain complex subjects." Chu has definitely struck up the right combination of ingredients. In a field so broad and complex, a well-organized presentation of topics that don't trip over one another is essential. The use of plain language where possible is also a good choice for this topic because it allows one to absorb topics that are, by nature, not as easy to grasp. For instance, Chapters 6 and 7, which cover retrieval approaches and techniques, an often painstaking topic for many students and teachers, are deftly handled with the use of tables that can be used to compare and contrast the various models discussed. I particularly loved Chu's use of Koll's 2000 article from the Bulletin of the American Society for Information Science to explain subject searching at the beginning of Chapter 6, which discusses the differences between browsing and searching. The Koll article uses the task of finding a needle in a haystack as an analogy.
    Weitere Rez. in: nfd 55(2004) H.4, S.252 (D. Lewandowski): "The number of books on the subject of information retrieval is not small, and several titles are available in German as well. Nevertheless, a new (English-language) book on the topic is reviewed here. It is distinguished by its brevity (only about 230 pages of text) and its clarity, and is thus aimed primarily at students in their first semesters. Heting Chu has taught at the Palmer School of Library and Information Science of Long Island University, New York, since 1994. That the author has gathered a great deal of experience in conveying the material in her information retrieval courses shows clearly in the book. It is written in clear and comprehensible language and introduces the fundamentals of knowledge representation and information retrieval. The textbook treats these topics as one integrated whole and thereby goes beyond the scope of similar books, which as a rule confine themselves to retrieval. The book is divided into twelve chapters; the first chapter gives an overview of the topics to be covered and introduces the reader in a simple way to the basic concepts and the history of IRR. Besides a short chronological account of the development of IRR systems, four pioneers of the field are honoured: Mortimer Taube, Hans Peter Luhn, Calvin N. Mooers, and Gerard Salton. This lends a human dimension to material that students sometimes find dry. The second and third chapters are devoted to knowledge representation, first discussing basic approaches such as indexing, classification, and abstracting, followed by knowledge representation by means of metadata, with an emphasis on newer approaches such as Dublin Core and RDF. Further subsections deal with the representation of full texts and of multimedia information. The place of language in IRR is treated in a chapter of its own, which concisely explains the various forms of controlled vocabulary and the essential features that distinguish it from natural language. The suitability of the two forms of representation for different IRR purposes is discussed from various angles.
    Pages
    XIV, 248 S
  12. Broughton, V.: Essential classification (2004) 0.00
    Score 4.2653232E-4: weight(_text_:s in 2824) [ClassicSimilarity], freq=6.0, fieldNorm=0.015625, coord(1/2) · coord(1/2); computed as for hit 1.
    
    Footnote
    Rez. in: KO 32(2005) no.1, S.47-49 (M. Hudon): "Vanda Broughton's Essential Classification is the most recent addition to a very small set of classification textbooks published over the past few years. The book's 21 chapters are based very closely on the cataloguing and classification module at the School of Library, Archive, and Information Studies at University College London. The author's main objective is clear: this is "first and foremost a book about how to classify. The emphasis throughout is on the activity of classification rather than the theory, the practical problems of the organization of collections, and the needs of the users" (p. 1). This is not a theoretical work, but a basic course in classification and classification scheme application. For this reviewer, who also teaches "Classification 101," this is also a fascinating peek into how a colleague organizes content and structures her course. "Classification is everywhere" (p. 1): the first sentence of this book is also one of the first statements in my own course, and Professor Broughton's metaphors - the supermarket, canned peas, flowers, etc. - are those that are used by our colleagues around the world. The combination of tone, writing style, and content display is reader-friendly; it is in fact what makes this book remarkable and what distinguishes it from more "formal" textbooks, such as The Organization of Information, the superb text written and recently updated (2004) by Professor Arlene Taylor (2nd ed. Westport, Conn.: Libraries Unlimited, 2004). Reading Essential Classification, at times, feels like being in a classroom, facing a teacher who assures you that "you don't need to worry about this at this stage" (p. 104), and reassures you that, although you now spend a long time looking for things, "you will soon speed up when you get to know the scheme better" (p. 137). This teacher uses redundancy in a productive fashion, and she is not afraid to express her own opinions ("I think that if these concepts are helpful they may be used" (p. 245); "It's annoying that LCC doesn't provide clearer instructions, but if you keep your head and take them one step at a time [i.e. the tables] they're fairly straightforward" (p. 174)). Chapters 1 to 7 present the essential theoretical concepts relating to knowledge organization and to bibliographic classification. The author is adept at making and explaining distinctions: known-item retrieval versus subject retrieval, personal versus public/shared/official classification systems, scientific versus folk classification systems, object versus aspect classification systems, semantic versus syntactic relationships, and so on. Chapters 8 and 9 discuss the practice of classification, through content analysis and subject description. A short discussion of difficult subjects, namely the treatment of unique concepts (persons, places, etc.) as subjects, seems a little advanced for a beginners' class.
    Weitere Rez. in: ZfBB 53(2006) H.2, S.111-113 (W. Gödert)
    Pages
    324 S
  13. Booth, P.F.: Indexing : the manual of good practice (2001) 0.00
    Score 3.4826217E-4: weight(_text_:s in 1968) [ClassicSimilarity], freq=4.0, fieldNorm=0.015625, coord(1/2) · coord(1/2); computed as for hit 1.
    
    Footnote
    Rez. in: nfd - Information Wissenschaft und Praxis 54(2003) H.7, S.440-442 (R. Fugmann): "The book opens with the chapter "Myths about Indexing" and with a list of widespread misconceptions about indexing, above all about the making of back-of-the-book indexes. A single sentence aptly sketches the problem to which the book is devoted: "With the development of electronic documents, it has become possible to store very large amounts of information; but storage is not of much use without the capability to retrieve, to convert, transfer and reuse the information". The author criticizes the widely held view that indexing is merely a matter of "picking out words from the text or naming objects in images and using those words as index headings". Such an approach, however, produces not indexes but concordances (i.e., alphabetical lists of the locations of text words) and "... is entirely dependent on the words themselves and is not concerned with the ideas behind them". Collecting information is easy. But making it (re)findable has to be learned, if more is to be achieved than merely refinding texts that one still remembers precisely in every detail (known-item searches, questions of recall), including the details of the wording used for the concepts sought. Drawing on her great practical experience, the author describes the steps that must be taken to this end at the conceptual and at the technical level. Among the former she counts the setting aside of details that should not be represented in the index ("unsought terms"), because they will certainly never be a search target and, as "false friends", would flood the searcher with trivia - a decision that can be made only with sound subject knowledge. Everything, on the other hand, that could constitute a meaningful search target now or in the future (!) and is "sufficiently informative" deserves a heading in the index. Instructive examples also show how a text word becomes useless for the index when it appears there as a (poor) heading, detached from the interpretive context in which it was embedded in the text. The ambiguity that clings to almost every natural-language word must also be resolved; otherwise the searcher will all too often be led astray when looking things up, and the more often, the larger such an uncleaned store has already become.
    Pages
    XIV,489 S

Languages

  • e 124
  • d 88
  • f 1

Types

  • m 180
  • a 21
  • s 14
  • el 4
  • ? 1
  • h 1
  • x 1
