Search (54 results, page 1 of 3)

  • Active filter: theme_ss:"Grundlagen u. Einführungen: Allgemeine Literatur"
  1. Lancaster, F.W.: Indexing and abstracting in theory and practice (2003) 0.04
    0.03846892 = product of:
      0.07693784 = sum of:
        0.059274152 = weight(_text_:services in 4913) [ClassicSimilarity], result of:
          0.059274152 = score(doc=4913,freq=4.0), product of:
            0.17221296 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046906993 = queryNorm
            0.344191 = fieldWeight in 4913, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046875 = fieldNorm(doc=4913)
        0.017663691 = product of:
          0.035327382 = sum of:
            0.035327382 = weight(_text_:management in 4913) [ClassicSimilarity], result of:
              0.035327382 = score(doc=4913,freq=2.0), product of:
                0.15810528 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.046906993 = queryNorm
                0.22344214 = fieldWeight in 4913, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4913)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
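     How this score arises: the indented breakdown above is the search engine's (Lucene ClassicSimilarity) "explain" output. Each matching term contributes queryWeight (idf x queryNorm) times fieldWeight (sqrt(termFreq) x idf x fieldNorm), the clause scores are summed, and a coord factor scales each sum by the fraction of query clauses that matched. A minimal sketch of that arithmetic in Python, reusing the numbers reported for doc 4913 above (function and variable names are illustrative, not Lucene API):

       from math import sqrt

       def clause_score(freq, idf, query_norm, field_norm):
           # ClassicSimilarity-style clause score:
           # queryWeight = idf * queryNorm; fieldWeight = sqrt(freq) * idf * fieldNorm
           return (idf * query_norm) * (sqrt(freq) * idf * field_norm)

       QUERY_NORM = 0.046906993
       FIELD_NORM = 0.046875  # fieldNorm(doc=4913)

       services = clause_score(freq=4.0, idf=3.6713707, query_norm=QUERY_NORM, field_norm=FIELD_NORM)
       management = clause_score(freq=2.0, idf=3.3706124, query_norm=QUERY_NORM, field_norm=FIELD_NORM)

       # "management" sits in a sub-query where 1 of 2 clauses matched -> coord(1/2)
       inner_sum = services + management * 0.5
       # at the top level, 2 of 4 query clauses matched -> coord(2/4)
       print(round(inner_sum * 0.5, 6))  # ~0.038469; the hit is displayed with score 0.04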
    
    Content
    Covers: indexing principles and practice; precoordinate indexes; consistency and quality of indexing; types and functions of abstracts; writing an abstract; evaluation theory and practice; approaches used in indexing and abstracting services; indexing enhancement; natural language in information retrieval; indexing and abstracting of imaginative works; databases of images and sound; automatic indexing and abstracting; the future of indexing and abstracting services
    Footnote
    Rez. in: JASIST 57(2006) no.1, S.144-145 (H. Saggion): "... This volume is a very valuable source of information for not only students and professionals in library and information science but also for individuals and institutions involved in knowledge management and organization activities. Because of its broad coverage of the information science topic, teachers will find the contents of this book useful for courses in the areas of information technology, digital as well as traditional libraries, and information science in general."
  2. Rowley, J.E.; Farrow, J.: Organizing knowledge : an introduction to managing access to information (2000) 0.03
    0.030211486 = product of:
      0.06042297 = sum of:
        0.03492763 = weight(_text_:services in 2463) [ClassicSimilarity], result of:
          0.03492763 = score(doc=2463,freq=2.0), product of:
            0.17221296 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046906993 = queryNorm
            0.2028165 = fieldWeight in 2463, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2463)
        0.025495343 = product of:
          0.050990686 = sum of:
            0.050990686 = weight(_text_:management in 2463) [ClassicSimilarity], result of:
              0.050990686 = score(doc=2463,freq=6.0), product of:
                0.15810528 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.046906993 = queryNorm
                0.32251096 = fieldWeight in 2463, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2463)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    For its third edition this standard text on knowledge organization and retrieval has been extensively revised and restructured to accommodate the increased significance of electronic information resources. With the help of many new sections on topics such as information retrieval via the Web, metadata and managing information retrieval systems, the book explains principles relating to hybrid print-based and electronic, networked environments experienced by today's users. Part I, Information Basics, explores the nature of information and knowledge and their incorporation into documents. Part II, Records, focuses specifically on electronic databases for accessing print or electronic media. Part III, Access, explores the range of tools for accessing information resources and covers interfaces, indexing and searching languages, classification, thesauri and catalogue and bibliographic access points. Finally, Part IV, Systems, describes the contexts through which knowledge can be organized and retrieved, including OPACs, the Internet, CD-ROMs, online search services and printed indexes and documents. This book is a comprehensive and accessible introduction to knowledge organization for both undergraduate and postgraduate students of information management and information systems
    LCSH
    Information storage and retrieval systems / Management
    Subject
    Information storage and retrieval systems / Management
  3. Lancaster, F.W.: Vocabulary control for information retrieval (1986) 0.03
    0.029363988 = product of:
      0.11745595 = sum of:
        0.11745595 = sum of:
          0.06661395 = weight(_text_:management in 217) [ClassicSimilarity], result of:
            0.06661395 = score(doc=217,freq=4.0), product of:
              0.15810528 = queryWeight, product of:
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.046906993 = queryNorm
              0.42132655 = fieldWeight in 217, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.0625 = fieldNorm(doc=217)
          0.050842002 = weight(_text_:22 in 217) [ClassicSimilarity], result of:
            0.050842002 = score(doc=217,freq=2.0), product of:
              0.1642603 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046906993 = queryNorm
              0.30952093 = fieldWeight in 217, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=217)
      0.25 = coord(1/4)
    
    Classification
    ST 271 Informatik / Monographien / Software und -entwicklung / Datenbanken, Datenbanksysteme, Data base management, Informationssysteme / Einzelne Datenbanksprachen und Datenbanksysteme
    Date
    22. 4.2007 10:07:51
    RVK
    ST 271 Informatik / Monographien / Software und -entwicklung / Datenbanken, Datenbanksysteme, Data base management, Informationssysteme / Einzelne Datenbanksprachen und Datenbanksysteme
  4. Wynar, B.S.; Taylor, A.G.; Miller, D.P.: Introduction to cataloging and classification (2006) 0.02
    0.024823686 = product of:
      0.049647372 = sum of:
        0.03492763 = weight(_text_:services in 2053) [ClassicSimilarity], result of:
          0.03492763 = score(doc=2053,freq=2.0), product of:
            0.17221296 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046906993 = queryNorm
            0.2028165 = fieldWeight in 2053, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2053)
        0.014719742 = product of:
          0.029439485 = sum of:
            0.029439485 = weight(_text_:management in 2053) [ClassicSimilarity], result of:
              0.029439485 = score(doc=2053,freq=2.0), product of:
                0.15810528 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.046906993 = queryNorm
                0.18620178 = fieldWeight in 2053, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2053)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This revised edition of Wynar's Introduction to Cataloging and Classification (9th ed., 2000) completely incorporates revisions of AACR2, enhancements to MARC 21, and developments in areas such as resource description and access. Aside from the many revisions and updates and improved organization, the basic content remains the same. Beginning with an introduction to cataloging, cataloging rules, and MARC format, the book then turns to its largest section, "Description and Access." Authority control is explained, and the various methods of subject access are described in detail. Finally, administrative issues, including catalog management, are discussed. The glossary, source notes, suggested reading, and selected bibliography have been updated and expanded, as has the index. The examples throughout help to illustrate rules and concepts, and most MARC record examples are now shown in OCLC's Connexion format. This is an invaluable resource for cataloging students and beginning catalogers as well as a handy reference tool for more experienced catalogers.
    Footnote
    Rez. in: Reference and user services quarterly 46(2007) no.3, S.104-105 (C.N. Conway); Technicalities 27(2007) no.2, S.19-20 (S.S. Intner)
  5. Taylor, A.G.: Wynar's introduction to cataloging and classification (1992) 0.02
    0.024449343 = product of:
      0.09779737 = sum of:
        0.09779737 = weight(_text_:services in 6014) [ClassicSimilarity], result of:
          0.09779737 = score(doc=6014,freq=2.0), product of:
            0.17221296 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046906993 = queryNorm
            0.56788623 = fieldWeight in 6014, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.109375 = fieldNorm(doc=6014)
      0.25 = coord(1/4)
    
    Footnote
    Rez. in: Library resources and technical services 37(1993) no.1, S.107-108 (A.R. Thomas)
  6. Saye, J.D.: Mannheimer's cataloging and classification : a workbook (1991) 0.02
    0.024449343 = product of:
      0.09779737 = sum of:
        0.09779737 = weight(_text_:services in 3839) [ClassicSimilarity], result of:
          0.09779737 = score(doc=3839,freq=2.0), product of:
            0.17221296 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046906993 = queryNorm
            0.56788623 = fieldWeight in 3839, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.109375 = fieldNorm(doc=3839)
      0.25 = coord(1/4)
    
    Footnote
    Rez. in: Library resources and technical services 36(1992) no.2, S.250 (L. Osmus)
  7. Vonhoegen, H.: Einstieg in XML (2002) 0.02
    0.022849137 = product of:
      0.045698274 = sum of:
        0.034576587 = weight(_text_:services in 4002) [ClassicSimilarity], result of:
          0.034576587 = score(doc=4002,freq=4.0), product of:
            0.17221296 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046906993 = queryNorm
            0.20077808 = fieldWeight in 4002, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4002)
        0.0111216875 = product of:
          0.022243375 = sum of:
            0.022243375 = weight(_text_:22 in 4002) [ClassicSimilarity], result of:
              0.022243375 = score(doc=4002,freq=2.0), product of:
                0.1642603 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046906993 = queryNorm
                0.1354154 = fieldWeight in 4002, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4002)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Footnote
     Rez. in: XML Magazin und Web Services 2003, H.1, S.14 (S. Meyen): "The Resource Description Framework (RDF) has been available as a W3C recommendation since February 22, 1999. But what lies behind this standard, which is supposed to usher in the age of the Semantic Web? What RDF means, what it is used for, what advantages it has over XML, and how RDF is applied will be explained in this article. If you open the book and start browsing the introductory chapter, it is immediately apparent that the reader is not lectured with lessons in the style of "in XML, the angle brackets are very important", even though this is a book for beginners. On the contrary: it gets straight down to business, and a healthy mix of prior knowledge is assumed. Anyone interested in XML today has, with 99 percent probability, already gained the relevant experience with HTML and the Web and is no newbie in the realm of angle brackets and (reasonably) well-formatted documents. And here lies a clear strength of Helmut Vonhoegen's work: he knows how to gauge his beginner readers quite well and therefore introduces them to the topic in a practical and understandable way. The third chapter deals with the Document Type Definition (DTD) and describes its purposes and uses. However, the author constantly stresses the limitations of this approach, which makes clear the call for a new concept: XML Schema, which he presents in the following chapter. A fairly detailed chapter is then devoted to the relatively recent XML Schema concept and explains its advantages over the DTD (modelling of complex data structures, support for numerous data types, character constraints, and much more). XML Schema, the reader learns, defines, like the old DTD, the vocabulary and the permissible grammar of an XML document, but is itself an XML document and can (or rather should) be checked for well-formedness like any other XML. Further chapters cover the navigation standards XPath, XLink, and XPointer, transformations with XSLT and XSL, and of course the XML programming interfaces DOM and SAX. Various implementations are used, and, pleasingly, Microsoft approaches on the one hand and Java/Apache projects on the other are presented in roughly comparable depth. In the final chapter, Vonhoegen covers the obligatory Web Services ("Webdienste") as a use case for XML and demonstrates a small C#- and ASP-based example (the Java equivalent with Apache Axis is unfortunately missing). "Einstieg in XML" presents its material in a clearly understandable form and knows how to meet its readers at the right level. It offers a good overview of the fundamentals of XML and can, at least for now, claim a high degree of currency."
  8. Chowdhury, G.G.; Chowdhury, S.: Introduction to digital libraries (2003) 0.02
    0.020931244 = product of:
      0.041862488 = sum of:
        0.034576587 = weight(_text_:services in 6119) [ClassicSimilarity], result of:
          0.034576587 = score(doc=6119,freq=16.0), product of:
            0.17221296 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046906993 = queryNorm
            0.20077808 = fieldWeight in 6119, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.013671875 = fieldNorm(doc=6119)
        0.007285901 = product of:
          0.014571802 = sum of:
            0.014571802 = weight(_text_:management in 6119) [ClassicSimilarity], result of:
              0.014571802 = score(doc=6119,freq=4.0), product of:
                0.15810528 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.046906993 = queryNorm
                0.09216518 = fieldWeight in 6119, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.013671875 = fieldNorm(doc=6119)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Footnote
     This book covers all of the primary areas in the DL curriculum as suggested by T. Saracevic and M. Dalbello's (2001) and A. Spink and C. Cool's (1999) D-Lib articles on DL education. In fact, the book's coverage is quite broad; it includes a superset of recommended topics, offering a chapter on professional issues (recommended in Spink and Cool) as well as three chapters devoted to DL research. The book comes with a comprehensive list of references and an index, allowing readers to easily locate a specific topic or research project of interest. Each chapter also begins with a short outline of the chapter. As an additional plus, the book is quite heavily cross-referenced, allowing easy navigation across topics. The only drawback with regard to supplementary materials is that it lacks a glossary that would be a helpful reference to students needing a reference guide to DL terminology. The book's organization is well thought out and each chapter stands independently of the others, facilitating instruction by parts. While not officially delineated into three parts, the book's fifteen chapters are logically organized as such. Chapters 2 and 3 form the first part, which surveys various DLs and DL research initiatives. The second and core part of the book examines the workings of a DL along various dimensions, from its design to its eventual implementation and deployment. The third part brings together extended topics that relate to a deployed DL: its preservation, evaluation, and relationship to the larger social context. Chapter 1 defines digital libraries and discusses the scope of the materials covered in the book. The authors posit that the meaning of digital library is best explained by its sample characteristics rather than by definition, noting that it has largely been shaped by the melding of the research and information professions. This reveals two primary facets of the DL: an "emphasis on digital content" coming from an engineering and computer science perspective as well as an "emphasis on services" coming from library and information professionals (pp. 4-5). The book's organization mirrors this dichotomy, focusing on the core aspects of content in the earlier chapters and returning to the service perspective in later chapters.
     Chapters 5 through 9 discuss the basic facets of DL implementation and use. Chapter 5, entitled "Collection management," distinguishes collection management from collection development. The authors give source selection criteria, distilled from Clayton and Gorman. The text then discusses the characteristics of several digital sources, including CD-ROMs, electronic books, electronic journals, and databases, and elaborates on the distribution and pricing issues involved in each. However, the following chapter on digitization is quite disappointing; I feel that its discussion is shallow and short, and offers only a glimpse of the difficulties of this task. The chapter contains a listing of multimedia file formats, which is explained clearly, omitting technical jargon. However, it could be improved by including more details about each format's optimal use. Chapter 7, "Information organization," surveys several DLs and highlights their adaptation of traditional classification and cataloging techniques. The chapter continues with a brief introduction to metadata, by first defining it and then discussing major standards: the Dublin Core, the Warwick Framework and EAD. A discussion of markup languages such as SGML, HTML, and XML rounds off the chapter. A more engaging chapter follows. Dealing with information access and user interfaces, it begins by examining information needs and the seeking process, with particular attention to the difficulties of translating search needs into an actual search query. Guidelines for user interface design are presented, distilled from recommendations from Shneiderman, Byrd, and Croft. Some research user interfaces are highlighted to hint at the future of information finding, and major features of browsing and searching interfaces are shown through case studies of a number of DLs. Chapter 9 gives a layman's introduction to the classic models of information retrieval, and is written to emphasize each model's usability and features; the mathematical foundations have entirely been dispensed with. Multimedia retrieval, Z39.50, and issues with OPAC integration are briefly sketched, but details on the approaches to these problems are omitted. A dissatisfying chapter on preservation begins the third part on deployed DLs, which itemizes several preservation projects but does not identify the key points of each project. This weakness is offset by two solid chapters on DL services and social, economic, and legal issues. Here, the writing style of the text is more effective in surveying the pertinent issues. Chowdhury and Chowdhury write, "The importance of [reference] services has grown over time with the introduction of new technologies and services in libraries" (p. 228), emphasizing the central role that reference services have in DLs, and go on to discuss both free and fee-based services, and those housed as part of libraries as well as commercial services. The chapter on social issues examines the digital divide and also gives examples of institutions working to undo the divide: "Blackwells is making all 600 of its journals freely available to institutions within the Russian Federation" (p. 252). Key points in cost-models of electronic publishing and intellectual property rights are also discussed. Chowdhury and Chowdhury mention that "there is no legal deposit law to force the creators of digital information to submit a copy of every work to one or more designated institutions" for preservation (p. 265).
     Another weakness of the book is its favoritism towards the authors' own works. To a large extent, this bias is natural as the authors know their own works best. However, in an introductory text, it is critical to reference the most appropriate source and give a balanced view of the field. In this respect, I feel the book could be more objective in its selection of references and research projects. Introduction to Digital Libraries is definitely a book written for a purpose. LIS undergraduates and "practicing professionals who need to know about recent developments in the field of digital libraries" (p. xiv) will find this book a fine introduction, as it is clearly written and accessible to laymen, giving explanations without delving into terminology and math. As it surveys a large number of projects, it is also an ideal starting point for students to pick and investigate particular DL research projects. However, graduate LIS students who already have a solid understanding of library fundamentals as well as computer science students may find this volume lacking in details. Alternative texts such as Lesk (1999) and Arms (2000) are possibly more suitable for those who need to investigate topics in depth. For the experienced practitioner or researcher delving into the DL field for the first time, the recent 2002 ARIST chapter by Fox and Urs may also be a suitable alternative. In their introduction, the authors ask, "What are digital libraries? How do they differ from online databases and search services? Will they replace print libraries? What impact will they have on people and the society?" (p. 3). To answer these questions, Chowdhury and Chowdhury offer a multitude of case studies to let the audience draw their own conclusions. To this end, it is my opinion that Introduction to Digital Libraries serves a useful purpose as a supplemental text in the digital library curriculum but misses the mark of being an authoritative textbook."
  9. Hedden, H.: ¬The accidental taxonomist (2012) 0.02
    0.019858949 = product of:
      0.039717898 = sum of:
        0.027942104 = weight(_text_:services in 2915) [ClassicSimilarity], result of:
          0.027942104 = score(doc=2915,freq=2.0), product of:
            0.17221296 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046906993 = queryNorm
            0.1622532 = fieldWeight in 2915, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.03125 = fieldNorm(doc=2915)
        0.011775794 = product of:
          0.023551589 = sum of:
            0.023551589 = weight(_text_:management in 2915) [ClassicSimilarity], result of:
              0.023551589 = score(doc=2915,freq=2.0), product of:
                0.15810528 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.046906993 = queryNorm
                0.14896142 = fieldWeight in 2915, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2915)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    "Clearly details the conceptual and practical notions of controlled vocabularies. . provides a crash course for newcomers and offers experienced practitioners a common frame of reference. A valuable book." - Christine Connors, TriviumRLG LLC The Accidental Taxonomist is the most comprehensive guide available to the art and science of building information taxonomies. Heather Hedden-one of today's leading writers, instructors, and consultants on indexing and taxonomy topics-walks readers through the process, displaying her trademark ability to present highly technical information in straightforward, comprehensible English. Drawing on numerous real-world examples, Hedden explains how to create terms and relationships, select taxonomy management software, design taxonomies for human versus automated indexing, manage enterprise taxonomy projects, and adapt taxonomies to various user interfaces. The result is a practical and essential guide for information professionals who need to effectively create or manage taxonomies, controlled vocabularies, and thesauri. "A wealth of descriptive reference content is balanced with expert guidance. . Open The Accidental Taxonomist to begin the learning process or to refresh your understanding of the depth and breadth of this demanding discipline." - Lynda Moulton, Principal Consultant, LWM Technology Services "From the novice taxonomist to the experienced professional, all will find helpful, practical advice in The Accidental Taxonomist." - Trish Yancey, TCOO, Synaptica, LLC "This book squarely addresses the growing demand for and interest in taxonomy. ...Hedden brings a variety of background experience, including not only taxonomy construction but also abstracting and content categorization and creating back-of-the-book indexes. These experiences serve her well by building a broad perspective on the similarities as well as real differences between often overlapping types of work." - Marjorie M. K. Hlava, President and Chairman, Access Innovations, Inc., and Chair, SLA Taxonomy Division
  10. Downing, M.H.; Downing, D.H.: Introduction to cataloging and classification : with 45 exhibits and 15 figures (1992) 0.02
    0.017463814 = product of:
      0.06985526 = sum of:
        0.06985526 = weight(_text_:services in 6169) [ClassicSimilarity], result of:
          0.06985526 = score(doc=6169,freq=2.0), product of:
            0.17221296 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046906993 = queryNorm
            0.405633 = fieldWeight in 6169, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.078125 = fieldNorm(doc=6169)
      0.25 = coord(1/4)
    
    Footnote
     The 5th edition of this title appeared in 1981. - Rez. in: Journal of academic librarianship 18(1993) no.6, S.371 (J.R. Luttrell); Library resources and technical services 37(1993) S.448-449 (T.H. Connell)
  11. Rowley, J.E.: Organizing knowledge : an introduction to information retrieval (1992) 0.02
    0.017463814 = product of:
      0.06985526 = sum of:
        0.06985526 = weight(_text_:services in 823) [ClassicSimilarity], result of:
          0.06985526 = score(doc=823,freq=2.0), product of:
            0.17221296 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046906993 = queryNorm
            0.405633 = fieldWeight in 823, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.078125 = fieldNorm(doc=823)
      0.25 = coord(1/4)
    
    Footnote
    Rez. in: Journal of academic librarianship 18(1993) no.6, S.389; Library resources and technical services 37(1993) S.451-453 (E. Crosby); Library review 42(1993) no.5, S.74-75 (D. Anderson)
  12. Ferguson, B.: Subject analysis (1998) 0.01
    0.013971052 = product of:
      0.05588421 = sum of:
        0.05588421 = weight(_text_:services in 642) [ClassicSimilarity], result of:
          0.05588421 = score(doc=642,freq=2.0), product of:
            0.17221296 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046906993 = queryNorm
            0.3245064 = fieldWeight in 642, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.0625 = fieldNorm(doc=642)
      0.25 = coord(1/4)
    
    Footnote
    Rez. in: Library collections, acquisitions and technical services 24(2000) S.519-520 (A. Cohen)
  13. Antoniou, G.; Harmelen, F. van: ¬A semantic Web primer (2004) 0.01
    0.012411843 = product of:
      0.024823686 = sum of:
        0.017463814 = weight(_text_:services in 468) [ClassicSimilarity], result of:
          0.017463814 = score(doc=468,freq=2.0), product of:
            0.17221296 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046906993 = queryNorm
            0.10140825 = fieldWeight in 468, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.01953125 = fieldNorm(doc=468)
        0.007359871 = product of:
          0.014719742 = sum of:
            0.014719742 = weight(_text_:management in 468) [ClassicSimilarity], result of:
              0.014719742 = score(doc=468,freq=2.0), product of:
                0.15810528 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.046906993 = queryNorm
                0.09310089 = fieldWeight in 468, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=468)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Footnote
     Rez. in: JASIST 57(2006) no.8, S.1132-1133 (H. Che): "The World Wide Web has been the main source of an important shift in the way people communicate with each other, get information, and conduct business. However, most of the current Web content is only suitable for human consumption. The main obstacle to providing better quality of service is that the meaning of Web content is not machine-accessible. The "Semantic Web" is envisioned by Tim Berners-Lee as a logical extension to the current Web that enables explicit representations of term meaning. It aims to bring the Web to its full potential via the exploration of these machine-processable metadata. To fulfill this, it provides some meta languages like RDF, OWL, DAML+OIL, and SHOE for expressing knowledge that has clear, unambiguous meanings. The first steps in weaving the Semantic Web into the current Web are successfully underway. In the forthcoming years, these efforts still remain highly focused in the research and development community. In the next phase, the Semantic Web will respond more intelligently to user queries. The first chapter gets started with an excellent introduction to the Semantic Web vision. At first, today's Web is introduced, and problems with some current applications like search engines are also covered. Subsequently, knowledge management, business-to-consumer electronic commerce, business-to-business electronic commerce, and personal agents are used as examples to show the potential requirements for the Semantic Web. Next comes the brief description of the underpinning technologies, including metadata, ontology, logic, and agent. The differences between the Semantic Web and Artificial Intelligence are also discussed in a later subsection. In section 1.4, the famous "layer-cake" diagram is given to show a layered view of the Semantic Web. From chapter 2, the book starts addressing some of the most important technologies for constructing the Semantic Web. In chapter 2, the authors discuss XML and its related technologies such as namespaces, XPath, and XSLT. XML is a simple, very flexible text format which is often used for the exchange of a wide variety of data on the Web and elsewhere. The W3C has defined various languages on top of XML, such as RDF. Although this chapter is very well planned and written, many details are not included because of the extensiveness of the XML technologies. Many other books on XML provide more comprehensive coverage.
     The next chapter introduces resource description framework (RDF) and RDF schema (RDFS). Unlike XML, RDF provides a foundation for expressing the semantics of data: it is a standard data model for machine-processable semantics. Resource description framework schema offers a number of modeling primitives for organizing RDF vocabularies in typed hierarchies. In addition to RDF and RDFS, a query language for RDF, i.e. RQL, is introduced. This chapter and the next chapter are two of the most important chapters in the book. Chapter 4 presents another language called Web Ontology Language (OWL). Because RDFS is quite primitive as a modeling language for the Web, more powerful languages are needed. A richer language, DAML+OIL, is thus proposed as a joint endeavor of the United States and Europe. OWL takes DAML+OIL as the starting point, and aims to be the standardized and broadly accepted ontology language. At the beginning of the chapter, the nontrivial relation with RDF/RDFS is discussed. Then the authors describe the various language elements of OWL in some detail. Moreover, Appendix A contains an abstract OWL syntax, which compresses OWL and makes OWL much easier to read. Chapter 5 covers both monotonic and nonmonotonic rules. Whereas the previous chapters mainly concentrate on specializations of knowledge representation, this chapter depicts the foundation of knowledge representation and inference. Two examples are also given to explain monotonic and non-monotonic rules, respectively. To get the most out of the chapter, readers had better gain a thorough understanding of predicate logic first. Chapter 6 presents several realistic application scenarios to which the Semantic Web technology can be applied, including horizontal information products at Elsevier, data integration at Audi, skill finding at Swiss Life, a think tank portal at EnerSearch, e-learning, Web services, multimedia collection indexing, online procurement, and device interoperability. These case studies give us some real feelings about the Semantic Web.
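     The triple data model and the RDFS class hierarchy that the review summarizes can be illustrated in a few lines. A minimal sketch using the Python rdflib package (the example namespace and vocabulary are invented for illustration and are not taken from the book):

       from rdflib import Graph, Literal, Namespace, URIRef
       from rdflib.namespace import RDF, RDFS

       EX = Namespace("http://example.org/vocab#")  # hypothetical vocabulary namespace
       g = Graph()

       # RDFS: modeling primitives organize the vocabulary in a typed hierarchy
       g.add((EX.Textbook, RDF.type, RDFS.Class))
       g.add((EX.Textbook, RDFS.subClassOf, EX.Book))

       # RDF: plain subject-predicate-object statements about a resource
       book = URIRef("http://example.org/books/semantic-web-primer")
       g.add((book, RDF.type, EX.Textbook))
       g.add((book, RDFS.label, Literal("A Semantic Web Primer")))

       print(g.serialize(format="turtle"))  # emit the triples as Turtle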
  14. Nohr, H.: Grundlagen der automatischen Indexierung : ein Lehrbuch (2003) 0.01
    0.012243148 = product of:
      0.04897259 = sum of:
        0.04897259 = sum of:
          0.023551589 = weight(_text_:management in 1767) [ClassicSimilarity], result of:
            0.023551589 = score(doc=1767,freq=2.0), product of:
              0.15810528 = queryWeight, product of:
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.046906993 = queryNorm
              0.14896142 = fieldWeight in 1767, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.03125 = fieldNorm(doc=1767)
          0.025421001 = weight(_text_:22 in 1767) [ClassicSimilarity], result of:
            0.025421001 = score(doc=1767,freq=2.0), product of:
              0.1642603 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046906993 = queryNorm
              0.15476047 = fieldWeight in 1767, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1767)
      0.25 = coord(1/4)
    
    Date
    22. 6.2009 12:46:51
    Footnote
     Rez. in: nfd 54(2003) H.5, S.314 (W. Ratzek): "In order to extract decision-relevant data from the constantly growing flood of more or less relevant documents, companies, public administrations, and specialist information institutions have to develop, deploy, and maintain effective and efficient filtering systems. Holger Nohr's textbook offers, for the first time, a fundamental introduction to the topic of automatic indexing. After all, "how one collects, manages, and uses information will determine whether one belongs to the winners or the losers" (Bill Gates), as the introduction puts it. The first chapter, "Einleitung" (introduction), focuses on the fundamentals. It describes the connections between document management systems, information retrieval, and indexing for planning, decision-making, and innovation processes, in both profit and non-profit organizations. At the end of the introductory chapter, Nohr takes up the debate about intellectual versus automatic indexing and thus leads over to the second chapter, "automatisches Indexieren" (automatic indexing). Here the author gives an overview of, among other things, problems of automatic language processing and indexing, and various methods of automatic indexing, e.g. simple keyword extraction / full-text inversion, statistical methods, and pattern-matching methods. Nohr then treats the methods of automatic indexing in depth, with many examples, in the most extensive third chapter. The fourth chapter, "Keyphrase Extraction", takes on a passe-partout status: "Approaches that extract key phrases from documents (keyphrase extraction) represent an intermediate stage on the way from automatic indexing to the automatic generation of textual summaries (automatic text summarization). The boundaries between automatic indexing methods and those of text summarization are fluid." (p. 91). Using NCR's Extractor / Copernic Summarizer as an example, Nohr describes how this works.
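     To give the "simple keyword extraction" and statistical methods mentioned in the review a concrete shape, here is a minimal sketch in Python (the stop-word list, length filter, and frequency ranking are arbitrary illustration choices, not Nohr's algorithms):

       import re
       from collections import Counter

       STOPWORDS = {"the", "a", "an", "of", "and", "in", "to", "is", "by"}  # tiny illustrative list

       def extract_keywords(text, top_n=5):
           # tokenize, drop stop words and very short tokens, rank remaining terms by frequency
           tokens = re.findall(r"[a-zA-Z]+", text.lower())
           counts = Counter(t for t in tokens if t not in STOPWORDS and len(t) > 2)
           return counts.most_common(top_n)

       sample = ("Automatic indexing assigns index terms to documents; "
                 "statistical indexing ranks candidate terms by frequency.")
       print(extract_keywords(sample))
       # e.g. [('indexing', 2), ('terms', 2), ('automatic', 1), ('assigns', 1), ('index', 1)]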
  15. Bates, M.J.: Where should the person stop and the information search interface start? (1990) 0.01
    0.011775794 = product of:
      0.047103178 = sum of:
        0.047103178 = product of:
          0.094206356 = sum of:
            0.094206356 = weight(_text_:management in 155) [ClassicSimilarity], result of:
              0.094206356 = score(doc=155,freq=2.0), product of:
                0.15810528 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.046906993 = queryNorm
                0.5958457 = fieldWeight in 155, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.125 = fieldNorm(doc=155)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 26(1990), S.575-591
  16. Chan, L.M.; Mitchell, J.S.: Dewey Decimal Classification : principles and applications (2003) 0.01
    0.0111216875 = product of:
      0.04448675 = sum of:
        0.04448675 = product of:
          0.0889735 = sum of:
            0.0889735 = weight(_text_:22 in 3247) [ClassicSimilarity], result of:
              0.0889735 = score(doc=3247,freq=2.0), product of:
                0.1642603 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046906993 = queryNorm
                0.5416616 = fieldWeight in 3247, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3247)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Object
    DDC-22
  17. Lancaster, F.W.: Indexing and abstracting in theory and practice (1998) 0.01
    0.010478289 = product of:
      0.041913155 = sum of:
        0.041913155 = weight(_text_:services in 4141) [ClassicSimilarity], result of:
          0.041913155 = score(doc=4141,freq=2.0), product of:
            0.17221296 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046906993 = queryNorm
            0.2433798 = fieldWeight in 4141, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046875 = fieldNorm(doc=4141)
      0.25 = coord(1/4)
    
    Content
     Contains the chapters: Indexing principles, Indexing practice, Precoordinate indexes, Consistency of indexing, Quality of indexing, Abstracts: types and functions, Writing the Abstract, Evaluation aspects, Approaches used in indexing and abstracting services, Enhancing the indexing, On the indexing and abstracting of imaginative works, Indexing multimedia sources, Text searching, Automatic indexing, automatic abstracting, and related procedures, Indexing and the Internet, The future of indexing and abstracting, Exercises in indexing and abstracting
  18. Kaiser, U.: Handbuch Internet und Online Dienste : der kompetente Reiseführer für das digitale Netz (1996) 0.01
    0.009532874 = product of:
      0.038131498 = sum of:
        0.038131498 = product of:
          0.076262996 = sum of:
            0.076262996 = weight(_text_:22 in 4589) [ClassicSimilarity], result of:
              0.076262996 = score(doc=4589,freq=2.0), product of:
                0.1642603 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046906993 = queryNorm
                0.46428138 = fieldWeight in 4589, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4589)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Series
    Heyne Business; 22/1019
  19. Kumar, K.: Theory of classification (1989) 0.01
    0.009532874 = product of:
      0.038131498 = sum of:
        0.038131498 = product of:
          0.076262996 = sum of:
            0.076262996 = weight(_text_:22 in 6774) [ClassicSimilarity], result of:
              0.076262996 = score(doc=6774,freq=2.0), product of:
                0.1642603 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046906993 = queryNorm
                0.46428138 = fieldWeight in 6774, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6774)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    25. 3.2019 18:15:22
  20. Langridge, D.W.: Classification: its kinds, systems, elements and application (1992) 0.01
    0.008987681 = product of:
      0.035950724 = sum of:
        0.035950724 = product of:
          0.07190145 = sum of:
            0.07190145 = weight(_text_:22 in 770) [ClassicSimilarity], result of:
              0.07190145 = score(doc=770,freq=4.0), product of:
                0.1642603 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046906993 = queryNorm
                0.4377287 = fieldWeight in 770, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=770)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    26. 7.2002 14:01:22
    Footnote
     Rez. in: Journal of documentation 49(1993) no.1, S.68-70 (A. Maltby); Journal of librarianship and information science 1993, S.108-109 (A.G. Curwen); Herald of library science 33(1994) nos.1/2, S.85 (P.N. Kaula); Knowledge organization 22(1995) no.1, S.45 (M.P. Satija)

Languages

  • e 39
  • d 15

Types

  • m 50
  • a 3
  • s 2
  • el 1
