Search (25 results, page 1 of 2)

  • theme_ss:"Grundlagen u. Einführungen: Allgemeine Literatur"
  • year_i:[2000 TO 2010}
  1. Belew, R.K.: Finding out about : a cognitive perspective on search engine technology and the WWW (2001) 0.14
    0.13851598 = product of:
      0.18468797 = sum of:
        0.06581937 = weight(_text_:web in 3346) [ClassicSimilarity], result of:
          0.06581937 = score(doc=3346,freq=16.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.4079388 = fieldWeight in 3346, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=3346)
        0.07465562 = weight(_text_:search in 3346) [ClassicSimilarity], result of:
          0.07465562 = score(doc=3346,freq=16.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.43445963 = fieldWeight in 3346, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=3346)
        0.044212975 = product of:
          0.08842595 = sum of:
            0.08842595 = weight(_text_:engine in 3346) [ClassicSimilarity], result of:
              0.08842595 = score(doc=3346,freq=4.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.3343436 = fieldWeight in 3346, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3346)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
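     The indented breakdown above (and in the entries that follow) is Lucene/Solr "explain" output for ClassicSimilarity, i.e. TF-IDF ranking. As a cross-check, the sketch below re-derives the "_text_:web" contribution and the final score for result 1 from the factors listed; it is an illustrative re-computation in Python, not the engine's own code.

```python
from math import sqrt, log

# Illustrative re-derivation of the "_text_:web" clause for result 1 (doc 3346),
# following Lucene's ClassicSimilarity (TF-IDF) factors shown in the explain tree.
freq       = 16.0         # termFreq of "web" in the indexed field
doc_freq   = 4597         # documents containing "web"
max_docs   = 44218        # documents in the index
query_norm = 0.049439456
field_norm = 0.03125      # per-field length/boost normalisation

tf  = sqrt(freq)                             # 4.0
idf = 1.0 + log(max_docs / (doc_freq + 1))   # 3.2635105...

query_weight = idf * query_norm              # 0.16134618...
field_weight = tf * idf * field_norm         # 0.4079388...
term_score   = query_weight * field_weight   # 0.06581937...

# Final score: sum of the matching clause scores, scaled by coord(3/4) because
# three of the four query clauses matched. The third value below already
# includes its own inner coord(1/2) for the "engine" clause.
final_score = (0.06581937 + 0.07465562 + 0.044212975) * 0.75   # ~0.13851598
print(term_score, final_score)
```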
    
    Abstract
     The World Wide Web is rapidly filling with more text than anyone could have imagined even a short time ago, but the task of isolating relevant parts of this vast body of information has become just that much more daunting. Richard Belew brings a cognitive perspective to the study of information retrieval as a discipline within computer science. He introduces the idea of Finding Out About (FOA) as the process of actively seeking out information relevant to a topic of interest and describes its many facets - ranging from creating a good characterization of what the user seeks, to what documents actually mean, to methods of inferring semantic clues about each document, to the problem of evaluating whether our search engines are performing as we have intended. Finding Out About explains how to build the tools that are useful for searching collections of text and other media. In the process it takes a close look at the properties of textual documents that do not become clear until very large collections of them are brought together and shows that the construction of effective search engines requires knowledge of the statistical and mathematical properties of linguistic phenomena, as well as an appreciation for the cognitive foundation we bring to the task as language users. The unique approach of this book is its even-handed treatment of the phenomena of both numbers and words, making it accessible to a wide audience. The textbook is usable in both undergraduate and graduate classes on information retrieval, library science, and computational linguistics. The text is accompanied by a CD-ROM that contains a hypertext version of the book, including additional topics and notes not present in the printed edition. In addition, the CD contains the full text of C.J. "Keith" van Rijsbergen's famous textbook, Information Retrieval (now out of print). Many active links from Belew's to van Rijsbergen's hypertexts help to unite the material. Several test corpora and indexing tools are provided, to support the design of your own search engine. Additional exercises using these corpora and code are available to instructors. Also supporting this book is a Web site that will include recent additions to the book, as well as links to sites of new topics and methods.
    LCSH
    Search engines / Programming
    World Wide Web / Computer programs
    Web search engines
    RSWK
    Suchmaschine / World Wide Web / Information Retrieval
    Subject
    Suchmaschine / World Wide Web / Information Retrieval
    Search engines / Programming
    World Wide Web / Computer programs
    Web search engines
  2. Antoniou, G.; Harmelen, F. van: ¬A semantic Web primer (2004) 0.05
    0.052482706 = product of:
      0.10496541 = sum of:
        0.08846869 = weight(_text_:web in 468) [ClassicSimilarity], result of:
          0.08846869 = score(doc=468,freq=74.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.548316 = fieldWeight in 468, product of:
              8.602325 = tf(freq=74.0), with freq of:
                74.0 = termFreq=74.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=468)
        0.016496718 = weight(_text_:search in 468) [ClassicSimilarity], result of:
          0.016496718 = score(doc=468,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.09600292 = fieldWeight in 468, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.01953125 = fieldNorm(doc=468)
      0.5 = coord(2/4)
    
    Abstract
     The development of the Semantic Web, with machine-readable content, has the potential to revolutionise the World Wide Web and its use. A Semantic Web Primer provides an introduction and guide to this emerging field, describing its key ideas, languages and technologies. Suitable for use as a textbook or for self-study by professionals, it concentrates on undergraduate-level fundamental concepts and techniques that will enable readers to proceed with building applications on their own. It includes exercises, project descriptions and annotated references to relevant online materials. A Semantic Web Primer is the only available book on the Semantic Web to include a systematic treatment of the different languages (XML, RDF, OWL and rules) and technologies (explicit metadata, ontologies, and logic and inference) that are central to Semantic Web development. The book also examines such crucial related topics as ontology engineering and application scenarios. After an introductory chapter, topics covered in succeeding chapters include XML and related technologies that support semantic interoperability; RDF and RDF Schema, the standard data model for machine-processable semantics; and OWL, the W3C-approved standard for a Web ontology language more extensive than RDF Schema; rules, both monotonic and nonmonotonic, in the framework of the Semantic Web; selected application domains and how the Semantic Web would benefit them; the development of ontology-based systems; and current debates on key issues and predictions for the future.
    Footnote
     Rev. in: JASIST 57(2006) no.8, S.1132-1133 (H. Che): "The World Wide Web has been the main source of an important shift in the way people communicate with each other, get information, and conduct business. However, most of the current Web content is only suitable for human consumption. The main obstacle to providing better quality of service is that the meaning of Web content is not machine-accessible. The "Semantic Web" is envisioned by Tim Berners-Lee as a logical extension to the current Web that enables explicit representations of term meaning. It aims to bring the Web to its full potential via the exploration of these machine-processable metadata. To fulfill this, it provides some metalanguages like RDF, OWL, DAML+OIL, and SHOE for expressing knowledge that has clear, unambiguous meanings. The first steps in weaving the Semantic Web into the current Web are successfully underway. In the forthcoming years, these efforts still remain highly focused in the research and development community. In the next phase, the Semantic Web will respond more intelligently to user queries. The first chapter gets started with an excellent introduction to the Semantic Web vision. At first, today's Web is introduced, and problems with some current applications like search engines are also covered. Subsequently, knowledge management, business-to-consumer electronic commerce, business-to-business electronic commerce, and personal agents are used as examples to show the potential requirements for the Semantic Web. Next comes the brief description of the underpinning technologies, including metadata, ontology, logic, and agents. The differences between the Semantic Web and Artificial Intelligence are also discussed in a later subsection. In section 1.4, the famous "layer-cake" diagram is given to show a layered view of the Semantic Web. From chapter 2, the book starts addressing some of the most important technologies for constructing the Semantic Web. In chapter 2, the authors discuss XML and its related technologies such as namespaces, XPath, and XSLT. XML is a simple, very flexible text format which is often used for the exchange of a wide variety of data on the Web and elsewhere. The W3C has defined various languages on top of XML, such as RDF. Although this chapter is very well planned and written, many details are not included because of the extensiveness of the XML technologies. Many other books on XML provide more comprehensive coverage.
     The next chapter introduces the resource description framework (RDF) and RDF Schema (RDFS). Unlike XML, RDF provides a foundation for expressing the semantics of data: it is a standard data model for machine-processable semantics. RDF Schema offers a number of modeling primitives for organizing RDF vocabularies in typed hierarchies. In addition to RDF and RDFS, a query language for RDF, i.e., RQL, is introduced. This chapter and the next chapter are two of the most important chapters in the book. Chapter 4 presents another language called Web Ontology Language (OWL). Because RDFS is quite primitive as a modeling language for the Web, more powerful languages are needed. A richer language, DAML+OIL, is thus proposed as a joint endeavor of the United States and Europe. OWL takes DAML+OIL as the starting point, and aims to be the standardized and broadly accepted ontology language. At the beginning of the chapter, the nontrivial relation with RDF/RDFS is discussed. Then the authors describe the various language elements of OWL in some detail. Moreover, Appendix A contains an abstract OWL syntax, which compresses OWL and makes it much easier to read. Chapter 5 covers both monotonic and nonmonotonic rules. Whereas the previous chapters mainly concentrate on specializations of knowledge representation, this chapter depicts the foundation of knowledge representation and inference. Two examples are also given to explain monotonic and non-monotonic rules, respectively. To get the most out of the chapter, readers had better gain a thorough understanding of predicate logic first. Chapter 6 presents several realistic application scenarios to which Semantic Web technology can be applied, including horizontal information products at Elsevier, data integration at Audi, skill finding at Swiss Life, a think-tank portal at EnerSearch, e-learning, Web services, multimedia collection indexing, online procurement, and device interoperability. These case studies give us a real feel for the Semantic Web.
     The chapter on ontology engineering describes the development of ontology-based systems for the Web using manual and semiautomatic methods. Ontology is a concept similar to taxonomy. As stated in the introduction, ontology engineering deals with some of the methodological issues that arise when building ontologies, in particular constructing ontologies manually, reusing existing ontologies, and using semiautomatic methods. A medium-scale project is included at the end of the chapter. Overall the book is a nice introduction to the key components of the Semantic Web. The reading is quite pleasant, in part due to the concise layout that allows just enough content per page to facilitate readers' comprehension. Furthermore, the book provides a large number of examples, code snippets, exercises, and annotated online materials. Thus, it is very suitable for use as a textbook for undergraduates and beginning graduate students, as the authors say in the preface. However, I believe that not only students but also professionals in both academia and industry will benefit from the book. The authors also built an accompanying Web site for the book at http://www.semanticwebprimer.org. On the main page, there are eight tabs, one for each of the eight chapters. For each tab, the following sections are included: overview, example, presentations, problems and quizzes, errata, and links. These contents will greatly facilitate readers: for example, readers can open the listed links to further their reading. The emptiness of the errata sections also speaks to the quality of the book."
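     The review above centres on RDF and RDF Schema as the Semantic Web's data-modelling layer. The following sketch shows the kind of typed class hierarchy it describes; it assumes the third-party Python rdflib package purely for illustration (the book works with the languages themselves, not with this library), and all names in the graph are invented.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

# A tiny RDF/RDFS graph in the spirit of the chapters the review describes:
# a small class hierarchy plus one typed, labelled resource (names invented).
EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.Textbook, RDF.type, RDFS.Class))
g.add((EX.SemanticWebTextbook, RDFS.subClassOf, EX.Textbook))
g.add((EX.primer, RDF.type, EX.SemanticWebTextbook))
g.add((EX.primer, RDFS.label, Literal("A Semantic Web Primer")))

print(g.serialize(format="turtle"))
```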
    LCSH
    Semantic Web
    Subject
    Semantic Web
    Theme
    Semantic Web
  3. Schwartz, C.: Sorting out the Web : approaches to subject access (2001) 0.05
    0.050187834 = product of:
      0.10037567 = sum of:
        0.05996712 = weight(_text_:web in 2050) [ClassicSimilarity], result of:
          0.05996712 = score(doc=2050,freq=34.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.37166741 = fieldWeight in 2050, product of:
              5.8309517 = tf(freq=34.0), with freq of:
                34.0 = termFreq=34.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2050)
        0.040408544 = weight(_text_:search in 2050) [ClassicSimilarity], result of:
          0.040408544 = score(doc=2050,freq=12.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.23515818 = fieldWeight in 2050, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2050)
      0.5 = coord(2/4)
    
    Footnote
     Rev. in: KO 50(2003) no.1, S.45-46 (L.M. Given): "In her own preface to this work, the author notes her lifelong fascination with classification and order, as well as her more recent captivation with the Internet - a place of "chaos in need of organization" (xi). Sorting out the Web examines current efforts to organize the Web and is well-informed by the author's academic and professional expertise in information organization, information retrieval, and Web development. Although the book's level and tone are particularly relevant to a student audience (or others interested in Web-based subject access at an introductory level), it will also appeal to information professionals developing subject access systems across a range of information contexts. There are six chapters in the book, each describing and analyzing one core concept related to the organization of Web content. All topics are presented in a manner ideal for newcomers to the area, with clear definitions, examples, and visuals that illustrate the principles under discussion. The first chapter provides a brief introduction to developments in information technology, including an historical overview of information services, users' needs, and libraries' responses to the Internet. Chapter two introduces metadata, including core concepts and metadata formats. Throughout this chapter the author presents a number of figures that aptly illustrate the application of metadata in HTML, SGML, and MARC record environments, and the use of metadata tools (e.g., XML, RDF). Chapter three begins with an overview of classification theory and specific schemes, but the author devotes most of the discussion to the application of classification systems in the Web environment (e.g., Dewey, LCC, UDC). Web screen captures illustrate the use of these schemes for information sources posted to sites around the world. The chapter closes with a discussion of the future of classification; this is a particularly useful section as the author presents a listing of core journal and conference venues where new approaches to Web classification are explored. In chapter four, the author extends the discussion of classification to the use of controlled vocabularies. As in the first few chapters, the author first presents core background material, including reasons to use controlled vocabularies and the differences between pre- and post-coordinate indexing, and then discusses the application of specific vocabularies in the Web environment (e.g., Infomine's use of LCSH). The final section of the chapter explores failure in subject searching and the limitations of controlled vocabularies for the Web. Chapter five discusses one of the most common and fast-growing topics related to subject access on the Web: search engines. The author presents a clear definition of the term that encompasses classified search lists (e.g., Yahoo) and query-based engines (e.g., Alta Vista). In addition to historical background on the development of search engines, Schwartz also examines search service types, features, results, and system performance.
     The chapter concludes with an appendix of search tips that even seasoned searchers will appreciate; these tips cover the complete search process, from preparation to the examination of results. Chapter six is appropriately entitled "Around the Corner," as it provides the reader with a glimpse of the future of subject access for the Web. Text mining, visualization, machine-aided indexing, and other topics are raised here to whet the reader's appetite for what is yet to come. As the author herself notes in these final pages, librarians will likely increase the depth of their collaboration with software engineers, knowledge managers and others outside of the traditional library community, and thereby push the boundaries of subject access for the digital world. This final chapter leaves this reviewer wanting a second volume of the book, one that might explore these additional topics as they evolve over the coming years. One characteristic of any book that addresses trends related to the Internet is how quickly the text becomes dated. However, as the author herself asserts, there are core principles related to subject analysis that stand the test of time, leaving the reader with a text that may be generalized well beyond the publication date. In this, Schwartz's text is similar to other recent publications (e.g., Jakob Nielsen's Web Usability, also published in 2001) that acknowledge the mutability of the Web, and therefore discuss core principles and issues that may be applied as the medium itself evolves. This approach to the writing makes this a useful book for those teaching in the areas of subject analysis, information retrieval and Web development, for possible consideration as a course text. Although the websites used here may need to be supplemented with more current examples in the classroom, the core content of the book will be relevant for many years to come. Although one might expect that any book taking subject access as its focus would, itself, be easy to navigate, this is not always the case. In this text, however, readers will be pleased to find that no small detail in content access has been spared. The subject index is thorough and well-crafted, and the inclusion of an exhaustive author index is particularly useful for quick reference. In addition, the table of contents includes sub-themes for each chapter, and a complete table of figures is provided. While the use of colour figures would greatly enhance the text, all black-and-white images are clear and sharp, a notable fact given that most of the figures are screen captures of websites or database entries. In addition, the inclusion of comprehensive reference lists at the close of each chapter makes this a highly readable text for students and instructors alike; each section of the book can stand as its own "expert review" of the topic at hand. In both content and structure this text is highly recommended. It certainly meets its intended goal of providing a timely introduction to the methods and problems of subject access in the Web environment, and does so in a way that is readable, interesting and engaging."
  4. Poetzsch, E.: Information Retrieval : Einführung in Grundlagen und Methoden (2001) 0.04
    0.037249055 = product of:
      0.07449811 = sum of:
        0.03490599 = weight(_text_:web in 1655) [ClassicSimilarity], result of:
          0.03490599 = score(doc=1655,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.21634221 = fieldWeight in 1655, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1655)
        0.03959212 = weight(_text_:search in 1655) [ClassicSimilarity], result of:
          0.03959212 = score(doc=1655,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.230407 = fieldWeight in 1655, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=1655)
      0.5 = coord(2/4)
    
    Content
     Part 1, Fundamentals of information retrieval: the key aspects of information retrieval that matter for conducting searches in practice: the steps of a search, prerequisites for online searching, an overview of database types and hosts, user aids, software tools, retrieval languages, and costs. Part 2, Methods of information retrieval: an introduction to the methods of information retrieval using selected examples of retrieval languages, Windows-based retrieval tools, and Web search options via host-specific search interfaces.
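     To make the retrieval methods listed above slightly more concrete, here is a minimal, purely illustrative sketch of Boolean command-style retrieval over an inverted index; the toy documents and the boolean_and helper are invented for the example and are not taken from the book.

```python
from collections import defaultdict

# Toy command-style Boolean retrieval over an inverted index (illustrative only).
docs = {
    1: "information retrieval grundlagen und methoden",
    2: "web search und retrieval",
    3: "datenbanken hosts retrievalsprachen kosten",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def boolean_and(*terms):
    """Documents containing every query term, i.e. 'term1 AND term2 ...'."""
    postings = [index[t] for t in terms]
    return set.intersection(*postings) if postings else set()

print(boolean_and("retrieval", "und"))   # {1, 2}
```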
  5. Poetzsch, E.: Information Retrieval : Einführung in Grundlagen und Methoden (2005) 0.04
    0.035118748 = product of:
      0.070237495 = sum of:
        0.032909684 = weight(_text_:web in 591) [ClassicSimilarity], result of:
          0.032909684 = score(doc=591,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.2039694 = fieldWeight in 591, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=591)
        0.03732781 = weight(_text_:search in 591) [ClassicSimilarity], result of:
          0.03732781 = score(doc=591,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.21722981 = fieldWeight in 591, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=591)
      0.5 = coord(2/4)
    
    Abstract
    Im ersten Teil "Grundlagen des Information Retrieval" werden Schwerpunkte des Information Retrieval unter dem Aspekt ihrer Relevanz für die praktische Recherchedurchführung behandelt. Im zweiten Teil "Methoden des Information Retrieval" erfolgt eine umfassende Einführung in die verschiedenen Methoden des Information Retrieval anhand ausgewählter Retrievalsprachen und Web-Search-Möglichkeiten mittels hostspezifischer Suchoberflächen. Im dritten Teil "Fachbezogenes Information Retrieval" wird erstmalig in dieser Auflage das fachbezogene Information Retrieval mit den Schwerpunkten "Wirtschaftsinformation" und "Naturwissenschaftlich-technische Information" einbezogen.
    Footnote
     Rev. in: Information: Wissenschaft & Praxis 56(2005) H.5/6, S.337 (W. Ratzek): "The central topic of this book is information retrieval in specialist information databases. Since the first edition of 1998, an updated fourth edition is now available. New, for example, is the chapter on subject-specific information retrieval, which had previously been covered in other books of the series. The book's three parts cover: the fundamentals of information retrieval, i.e. basic concepts, types and providers of databases, preparing and carrying out searches, and retrieval languages; the methods of information retrieval, essentially the application and operation of retrieval, i.e. command retrieval, Windows-based retrieval tools and Web search; and subject-specific information retrieval, with an emphasis on business information. On the design of the book it says (p. 6): "From the outset a condensed form of presentation was chosen, intended on the one hand to serve students as accompanying material for teaching in the printed edition and on the other to provide the basis for an online tutorial that is currently in its test phase." This names the aim and target audience of the volume. If the book is also meant to address non-student audiences, then the form of presentation seems to me, and to a number of colleagues, in need of improvement: the "condensed form" is reminiscent of uncommented lecture slides. Information retrieval as a tool for searching specialist databases appears, against the background of the discussion of information resources for knowledge management in organizations and their globalization tendencies, to be in need of broadening. The publisher's concept of issuing a series "Materialien zur Information und Dokumentation" is to be welcomed."
  6. Rowley, J.E.; Farrow, J.: Organizing knowledge : an introduction to managing access to information (2000) 0.03
    0.03104088 = product of:
      0.06208176 = sum of:
        0.029088326 = weight(_text_:web in 2463) [ClassicSimilarity], result of:
          0.029088326 = score(doc=2463,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.18028519 = fieldWeight in 2463, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2463)
        0.032993436 = weight(_text_:search in 2463) [ClassicSimilarity], result of:
          0.032993436 = score(doc=2463,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.19200584 = fieldWeight in 2463, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2463)
      0.5 = coord(2/4)
    
    Abstract
    For its third edition this standard text on knowledge organization and retrieval has been extensively revised and restructured to accommodate the increased significance of electronic information resources. With the help of many new sections on topics such as information retrieval via the Web, metadata and managing information retrieval systems, the book explains principles relating to hybrid print-based and electronic, networked environments experienced by today's users. Part I, Information Basics, explores the nature of information and knowledge and their incorporation into documents. Part II, Records, focuses specifically on electronic databases for accessing print or electronic media. Part III, Access, explores the range of tools for accessing information resources and covers interfaces, indexing and searching languages, classification, thesauri and catalogue and bibliographic access points. Finally, Part IV, Systems, describes the contexts through which knowledge can be organized and retrieved, including OPACs, the Internet, CD-ROMs, online search services and printed indexes and documents. This book is a comprehensive and accessible introduction to knowledge organization for both undergraduate and postgraduate students of information management and information systems
  7. Anderson, R.; Birbeck, M.; Kay, M.; Livingstone, S.; Loesgen, B.; Martin, D.; Mohr, S.; Ozu, N.; Peat, B.; Pinnock, J.; Stark, P.; Williams, K.: XML professionell : behandelt W3C DOM, SAX, CSS, XSLT, DTDs, XML Schemas, XLink, XPointer, XPath, E-Commerce, BizTalk, B2B, SOAP, WAP, WML (2000) 0.03
    0.02639924 = product of:
      0.05279848 = sum of:
        0.042750936 = weight(_text_:web in 729) [ClassicSimilarity], result of:
          0.042750936 = score(doc=729,freq=12.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.26496404 = fieldWeight in 729, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0234375 = fieldNorm(doc=729)
        0.010047545 = product of:
          0.02009509 = sum of:
            0.02009509 = weight(_text_:22 in 729) [ClassicSimilarity], result of:
              0.02009509 = score(doc=729,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.116070345 = fieldWeight in 729, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=729)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
     This book explains and demonstrates the fundamental techniques for creating, using and, not least, presenting XML documents. Its foremost task, however, is to present the foundations of XML as defined by the World Wide Web Consortium (W3C). The W3C not only initiated the development of XML and is the body responsible for all XML standards; XML specifications continue to be developed by the W3C. Even though more and more proposals for new XML-based techniques come from the wider community of those interested in XML, the W3C continues to play the central and most important role in the development of XML. The focus of this book is on learning how to use XML as a core technology in real, everyday applications. We want to introduce good design techniques and demonstrate how to connect XML-enabled applications with Web applications or with database systems. We want to explore the limits and possibilities of XML and take a look ahead at some nascent technologies. Whether your requirements lean more towards data exchange or towards visual presentation, this book covers all the relevant techniques. Each chapter contains an application example. Since XML is a platform-neutral technology, the examples cover a broad range of languages, parsers and servers. Each of the techniques and methods presented is relevant on all platforms and operating systems. In this way the examples give you important insights even if the concrete implementation was not carried out on your preferred system.
     This book is aimed at anyone who wants to develop applications based on XML. Website designers can learn new techniques for lifting their sites to a new technical level. Developers of more complex software systems and programmers can learn how XML fits into their system and how it can help integrate applications. XML applications are distributed by nature and generally Web-oriented. This book does not cover distributed systems or the development of Web applications as such, so you do not need deeper knowledge in these areas; a general understanding of distributed architectures and of how the Web works is entirely sufficient. The examples in this book use a range of programming languages and technologies. An important part of XML's appeal is its platform independence and its neutrality towards programming languages. If you have already developed Web applications, chances are good that you will find some examples in your preferred language. Do not be discouraged if you find no example specifically for your system. Tools for working with XML exist for Perl, C++, Java, JavaScript and any COM-capable language. Internet Explorer (from version 5.0) already has some built-in facilities for processing XML documents, and the Mozilla browser (the open-source successor to Netscape Navigator) is acquiring similar capabilities. XML tools are also increasingly appearing in large relational database systems, as well as on Web and application servers. Should your system not be covered in this book, learn the fundamentals and familiarize yourself with the techniques presented in the examples.
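     Since the abstract stresses XML's platform and language neutrality, a short sketch of the kind of processing it has in mind may help; it uses only Python's standard library, and the order document, namespace URI and values are invented for the example.

```python
import xml.etree.ElementTree as ET

# XML is platform- and language-neutral; the same document can be handled by any
# conforming parser. Here: Python's standard library, with invented sample data.
xml_doc = """<bestellung xmlns="http://example.org/b2b">
  <kunde>Beispiel GmbH</kunde>
  <artikel nummer="4711" menge="2"/>
</bestellung>"""

root = ET.fromstring(xml_doc)
ns = {"b": "http://example.org/b2b"}
print(root.find("b:kunde", ns).text)                # Beispiel GmbH
print(root.find("b:artikel", ns).attrib["nummer"])  # 4711
```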
    Date
    22. 6.2005 15:12:11
  8. Vonhoegen, H.: Einstieg in XML (2002) 0.02
    0.02349493 = product of:
      0.04698986 = sum of:
        0.03526772 = weight(_text_:web in 4002) [ClassicSimilarity], result of:
          0.03526772 = score(doc=4002,freq=6.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.21858418 = fieldWeight in 4002, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4002)
        0.011722136 = product of:
          0.023444273 = sum of:
            0.023444273 = weight(_text_:22 in 4002) [ClassicSimilarity], result of:
              0.023444273 = score(doc=4002,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.1354154 = fieldWeight in 4002, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4002)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Footnote
     Rev. in: XML Magazin und Web Services 2003, H.1, S.14 (S. Meyen): "The Resource Description Framework (RDF) has been available as a W3C recommendation since 22 February 1999. But what lies behind this standard, which is supposed to usher in the age of the Semantic Web? What RDF means, what it is used for, what advantages it has over XML and how to apply it is explained in this article. If you open the book and start browsing the introductory chapter, it immediately catches the eye that the reader is not lectured with lessons in the style of "in XML the angle brackets are very important", even though this is a book for beginners. On the contrary: it gets straight down to business and a healthy mix of prior knowledge is assumed. Anyone interested in XML today has, with 99 percent probability, already gathered the relevant experience with HTML and the Web and is no newbie in the realm of angle brackets and (more or less) well-formed documents. And here lies a clear strength of Helmut Vonhoegen's work: he judges his beginner reader quite well and therefore introduces the topic in a practical and comprehensible way. The third chapter deals with the Document Type Definition (DTD) and describes its purposes and uses. But the author constantly emphasizes the limitations of this approach, which makes the call for a new concept clear: XML Schema, which he presents in the following chapter. A fairly detailed chapter is then devoted to the relatively recent XML Schema concept and explains its advantages over the DTD (modelling of complex data structures, support for numerous datatypes, character restrictions and much more). XML Schema, the reader learns, defines, like the old DTD, the vocabulary and the permissible grammar of an XML document, but is itself an XML document and can (or should) be checked for well-formedness like any other XML. Further chapters cover the navigation standards XPath, XLink and XPointer, transformations with XSLT and XSL, and of course the XML programming interfaces DOM and SAX. Various implementations are used, and pleasingly Microsoft approaches on the one hand and Java/Apache projects on the other are presented in roughly comparable depth. In the final chapter Vonhoegen covers the obligatory Web services as an application case of XML and demonstrates a small C#- and ASP-based example (the Java equivalent with Apache Axis is unfortunately missing). "Einstieg in XML" presents its material in a clearly understandable form and knows how to meet its readers at a good level. It offers a good overview of the fundamentals of XML and can, at least for now, claim to be quite up to date."
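     The review's central point about XML Schema versus DTDs (support for datatypes) can be illustrated with a brief sketch; it assumes the third-party lxml package and an invented preis element, so it illustrates the concept rather than reproducing an example from the book.

```python
from lxml import etree  # third-party package, assumed available

# Unlike a DTD, an XML Schema can constrain datatypes: "preis" must be a decimal.
xsd = """<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="preis" type="xs:decimal"/>
</xs:schema>"""

schema = etree.XMLSchema(etree.fromstring(xsd))

print(schema.validate(etree.fromstring("<preis>19.90</preis>")))   # True
print(schema.validate(etree.fromstring("<preis>billig</preis>")))  # False
```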
  9. Stock, W.G.: Qualitätskriterien von Suchmaschinen : Checkliste für Retrievalsysteme (2000) 0.02
    0.022917118 = product of:
      0.045834236 = sum of:
        0.029088326 = weight(_text_:web in 5773) [ClassicSimilarity], result of:
          0.029088326 = score(doc=5773,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.18028519 = fieldWeight in 5773, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5773)
        0.01674591 = product of:
          0.03349182 = sum of:
            0.03349182 = weight(_text_:22 in 5773) [ClassicSimilarity], result of:
              0.03349182 = score(doc=5773,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.19345059 = fieldWeight in 5773, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5773)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
     Search engines on the World Wide Web are said to employ suboptimal methods and tools, especially in comparison with the retrieval software of commercial online archives. Elaborate command-oriented retrieval systems cannot be used by laypeople at all, and by professionals only if they work with them constantly. The search systems of some "independents", i.e. isolated information producers on the Internet, are characterized by a minimalism reminiscent of the command repertoire of the early 1970s. Retrieval software in intranets, if it is used at all, relies almost exclusively on automatic methods of indexing and retrieval and almost completely ignores documentary know-how. Search engines and retrieval systems - we use both terms synonymously - therefore cause difficulties wherever they occur, and their quality is in doubt. But what does quality of search engines actually mean? What distinguishes a good retrieval system, and what is a bad one missing? We want to develop a list of criteria that are essential for good searching (and finding!). The concern is therefore exclusively with the quantity and quality of the search options, not with further performance indicators such as speed or ergonomic user interfaces. Tacitly assumed, however, is the departure from purely command-oriented systems, i.e. we presuppose screen designs that present the commands in an intuitively comprehensible way. Our checklist contains only those options that are either already in use (in some system or other), and thus partly repeats what is well known, or whose technical feasibility has already been demonstrated in experimental settings. In this respect the list is a minimum requirement for retrieval systems, and one that can certainly be extended. The catalogue of criteria is organized into (1) the basic functions for searching individual records, (2) the informetric functions for characterizing certain result sets, and (3) the criteria for the power of automatic indexing and natural-language searching.
    Source
    Password. 2000, H.5, S.22-31
  10. Brühl, B.: Thesauri und Klassifikationen : Naturwissenschaften - Technik - Wirtschaft (2005) 0.02
    0.018333694 = product of:
      0.036667388 = sum of:
        0.023270661 = weight(_text_:web in 3487) [ClassicSimilarity], result of:
          0.023270661 = score(doc=3487,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.14422815 = fieldWeight in 3487, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=3487)
        0.013396727 = product of:
          0.026793454 = sum of:
            0.026793454 = weight(_text_:22 in 3487) [ClassicSimilarity], result of:
              0.026793454 = score(doc=3487,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.15476047 = fieldWeight in 3487, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3487)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Footnote
     Rev. in: Information: Wissenschaft & Praxis 56(2005) H.5/6, S.337 (W. Ratzek): "With "Thesauri und Klassifikationen" Bettina Brühl presents a work of great diligence. With its selection of more than 150 classifications and thesauri from natural science, technology, economics and the patent field, and with a comprehensive index by subject area, by database, and by classification and thesaurus, the book makes a usable reference work. After a 13-page introduction (chapters 1 and 2), chapter 3 presents the descriptions of classifications and thesauri, compiled essentially from the producers' own documentation. The documentation languages of the subject fields are presented as follows: the natural sciences (3.1) and their specializations, such as "life sciences and biotechnology", "chemistry" or "environment and economics", but also "mathematics and computer science" (?), on 189 pages; technology, with for example the "Fachordnung Technik" and the Subject Categories (INIS/ETDE), treated comparatively briefly on 17 pages; economics, with industry codes, product codes, country codes, subject classifications and thesauri, presented in detail on 57 pages; and patents and standards, with for example the European Patent Classification and the International Patent Classification, outlined on 33 pages. Each subfield is introduced with a short description, followed by the individual entries with the attributes "address of the producer", "subject area(s)", "language", "availability", "application" and "source(s)". "The book is aimed at all information professionals who build and use documentation languages," says the publisher's information. It is admittedly not necessary to treat the information-science aspects of classifications and thesauri, but a note on the significance of information and documentation and/or of information science would have been appropriate, in order to demonstrate to the world of the information and knowledge economy what contribution our profession makes. Otherwise the field of view remains restricted and the connection to newer developments is left out; such a point of connection could have been provided, for example, by an excursus on topic maps and the Semantic Web. With the publication of this compendium the publisher delivers a useful first building block towards a comprehensive directory of thesauri and classifications."
    Series
    Materialien zur Information und Dokumentation; Bd.22
  11. Batley, S.: Classification in theory and practice (2005) 0.02
    0.01667518 = product of:
      0.03335036 = sum of:
        0.020152984 = weight(_text_:web in 1170) [ClassicSimilarity], result of:
          0.020152984 = score(doc=1170,freq=6.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.12490524 = fieldWeight in 1170, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.015625 = fieldNorm(doc=1170)
        0.0131973745 = weight(_text_:search in 1170) [ClassicSimilarity], result of:
          0.0131973745 = score(doc=1170,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.076802336 = fieldWeight in 1170, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.015625 = fieldNorm(doc=1170)
      0.5 = coord(2/4)
    
    Abstract
     This book examines a core topic in traditional librarianship: classification. Classification has often been treated as a sub-set of cataloguing and indexing with relatively few basic textbooks concentrating solely on the theory and practice of classifying resources. This book attempts to redress the balance somewhat. The aim is to demystify a complex subject, by providing a sound theoretical underpinning, together with practical advice and promotion of practical skills. The text is arranged into five chapters: Chapter 1: Classification in theory and practice. This chapter explores theories of classification in broad terms and then focuses on the basic principles of library classification, introducing readers to technical terminology and different types of classification scheme. The next two chapters examine individual classification schemes in depth. Each scheme is explained using frequent examples to illustrate basic features. Working through the exercises provided should be enjoyable and will enable readers to gain practical skills in using the three most widely used general library classification schemes: Dewey Decimal Classification, Library of Congress Classification and Universal Decimal Classification. Chapter 2: Classification schemes for general collections. Dewey Decimal and Library of Congress classifications are the most useful and popular schemes for use in general libraries. The background, coverage and structure of each scheme are examined in detail in this chapter. Features of the schemes and their application are illustrated with examples. Chapter 3: Classification schemes for specialist collections. Dewey Decimal and Library of Congress may not provide sufficient depth of classification for specialist collections. In this chapter, classification schemes that cater to specialist needs are examined. Universal Decimal Classification is superficially very much like Dewey Decimal, but possesses features that make it a good choice for specialist libraries or special collections within general libraries. It is recognised that general schemes, no matter how deep their coverage, may not meet the classification needs of some collections. An answer may be to create a special classification scheme, and this process is examined in detail here. Chapter 4: Classifying electronic resources. Classification has been reborn in recent years with an increasing need to organise digital information resources. A lot of work in this area has been conducted within the computer science discipline, but uses basic principles of classification and thesaurus construction. This chapter takes a broad view of theoretical and practical issues involved in creating classifications for digital resources by examining subject trees, taxonomies and ontologies. Chapter 5: Summary. This chapter provides a brief overview of concepts explored in depth in previous chapters. Development of practical skills is emphasised throughout the text. It is only through using classification schemes that a deep understanding of their structure and unique features can be gained. Although all the major schemes covered in the text are available on the Web, it is recommended that hard-copy versions are used by those wishing to become acquainted with their overall structure. Recommended readings are supplied at the end of each chapter and provide useful sources of additional information and detail.
     Classification demands precision and the application of analytical skills; working carefully through the examples and the practical exercises should help readers to improve these faculties. Anyone who enjoys cryptic crosswords should recognise a parallel: classification often involves taking the meaning of something apart and then reassembling it in a different way.
    Footnote
     - Similarly, there is very little space provided to the thorny issue of subject analysis, which is at the conceptual core of classification work of any kind. The author's recommendations are practical, and do not address the subjective nature of this activity, nor the fundamental issues of how the classification schemes are interpreted and applied in diverse contexts, especially with respect to what a work "is about." - Finally, there is very little about practical problem solving - stories from the trenches as it were. How does a classifier choose one option over another when both seem plausible, even given that he or she has done a user and task analysis? How do classifiers respond to rapid or seemingly impulsive change? How do we evaluate the products of our work? How do we know what is the "correct" solution, even if we work, as most of us do, assuming that this is an elusive goal, but we try our best anyway? The least satisfying section of the book is the last, where the author proposes some approaches to organizing electronic resources. The suggestions seem to be, more or less, to transpose and adapt skills and procedures from the world of organizing books on shelves to the virtual hyperlinked world of the Web. For example, the author states (p. 153-54): Precise classification of documents is perhaps not as crucial in the electronic environment as it is in the traditional library environment. A single document can be linked to and retrieved via several different categories to allow for individual needs and expertise. However, it is not good practice to overload the system with links because that will affect its use. Effort must be made to ensure that inappropriate or redundant links are not included. The point is well taken: too much irrelevant information is not helpful. At the same time an important point concerning the electronic environment has been overlooked as well: redundancy is what relieves the user from making precise queries or knowing the "right" place for launching a search, and redundancy is what is so natural on the Web. These are small objections, however. Overall the book is a carefully crafted primer that gives the student a strong foundation on which to build further understanding. There are well-chosen and accessible references for further reading. I would recommend it to any instructor as an excellent starting place for deeper analysis in the classroom and to any student as an accompanying text to the schedules themselves."
  12. Chowdhury, G.G.; Chowdhury, S.: Introduction to digital libraries (2003) 0.02
    0.01663816 = product of:
      0.03327632 = sum of:
        0.010180915 = weight(_text_:web in 6119) [ClassicSimilarity], result of:
          0.010180915 = score(doc=6119,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.06309982 = fieldWeight in 6119, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.013671875 = fieldNorm(doc=6119)
        0.023095407 = weight(_text_:search in 6119) [ClassicSimilarity], result of:
          0.023095407 = score(doc=6119,freq=8.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.1344041 = fieldWeight in 6119, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.013671875 = fieldNorm(doc=6119)
      0.5 = coord(2/4)
    
    Footnote
     Chapter 2 examines the variety and breadth of DL implementations and collections through a well-balanced selection of 20 DLs. The authors make a useful classification of the various types of DLs into seven categories and give a brief synopsis of two or three examples from each category. These categories include historical, national, and university DLs, as well as DLs for special materials and research. Chapter 3 examines research efforts in digital libraries, concentrating on the three eLib initiatives in the UK and the two Digital Libraries Initiatives in the United States. The chapter also offers some details on joint research between the UK and the United States (the NSF/JISC jointly funded programs), Europe, Canada, Australia, and New Zealand. While both of these chapters do an admirable job of surveying the DL landscape, the breadth and variety of materials need to be encapsulated in a coherent summary that illustrates the commonality of their approaches and their key differences that have been driven by aspects of their collections and audience. Unfortunately, this summary aspect is lacking here and elsewhere in the book. Chapter 2 does an admirable job of DL selection that showcases the variety of existing DLs, but I feel that Chapter 3's selection of research projects could be improved. The chapter's emphasis is clearly on UK-based research, devoting nine pages to it compared to six for EU-funded projects. While this emphasis could be favorable for UK courses, it hampers the chances of the text's adoption in other courses internationally. Chapter 4 begins the core part of the book by examining the DL from a design perspective. As a well-designed DL encompasses various practical and theoretical considerations, the chapter introduces many of the concepts that are elaborated on in later chapters. The Kahn/Wilensky and Lagoze/Fielding architectures are summarized in bullet points, and specific aspects of these frameworks are elaborated on. These include the choice between a federated or centralized search architecture (referencing Virginia Tech's NDLTD and Waikato's Greenstone) and level of interoperability (discussing UNIMARC and metadata harvesting). Special attention is paid to hybrid library design, with references to UK projects. A useful summary of recommended standards for DL design concludes the chapter.
    Chapters 5 through 9 discuss the basic facets of DL implementation and use. Chapter 5, entitled "Collection management," distinguishes collection management from collection development. The authors give source selection criteria, distilled from Clayton and Gorman. The text then discusses the characteristics of several digital sources, including CD-ROMs, electronic books, electronic journals, and databases, and elaborates on the distribution and pricing issues involved in each. However, the following chapter, on digitization, is quite disappointing; I feel that its discussion is shallow and short, and offers only a glimpse of the difficulties of this task. The chapter contains a listing of multimedia file formats, which is explained clearly, omitting technical jargon. However, it could be improved by including more details about each format's optimal use. Chapter 7, "Information organization," surveys several DLs and highlights their adaptation of traditional classification and cataloging techniques. The chapter continues with a brief introduction to metadata, first defining it and then discussing major standards: the Dublin Core, the Warwick Framework, and EAD. A discussion of markup languages such as SGML, HTML, and XML rounds off the chapter. A more engaging chapter follows. Dealing with information access and user interfaces, it begins by examining information needs and the seeking process, with particular attention to the difficulties of translating search needs into an actual search query. Guidelines for user interface design are presented, distilled from recommendations by Shneiderman, Byrd, and Croft. Some research user interfaces are highlighted to hint at the future of information finding, and major features of browsing and searching interfaces are shown through case studies of a number of DLs. Chapter 9 gives a layman's introduction to the classic models of information retrieval, and is written to emphasize each model's usability and features; the mathematical foundations have been entirely dispensed with. Multimedia retrieval, Z39.50, and issues with OPAC integration are briefly sketched, but details on the approaches to these problems are omitted. A dissatisfying chapter on preservation begins the third part, on deployed DLs; it itemizes several preservation projects but does not identify the key points of each project. This weakness is offset by two solid chapters on DL services and on social, economic, and legal issues. Here, the writing style of the text is more effective in surveying the pertinent issues. Chowdhury and Chowdhury write, "The importance of [reference] services has grown over time with the introduction of new technologies and services in libraries" (p. 228), emphasizing the central role that reference services have in DLs, and go on to discuss both free and fee-based services, and those housed as part of libraries as well as commercial services. The chapter on social issues examines the digital divide and also gives examples of institutions working to undo the divide: "Blackwells is making all 600 of its journals freely available to institutions within the Russian Federation" (p. 252). Key points in cost models of electronic publishing and intellectual property rights are also discussed. Chowdhury and Chowdhury mention that "there is no legal deposit law to force the creators of digital information to submit a copy of every work to one or more designated institutions" for preservation (p. 265).
    Chapter 13, on DL evaluation, merges criteria from traditional library evaluation with criteria from user interface design and information retrieval. Quantitative, macro-evaluation techniques are emphasized, and again, some DL evaluation projects and reports are illustrated. A very brief chapter on the role of librarians in the DL follows, emphasizing that traditional reference skills are paramount to the success of the digital librarian, but that he should also be savvy in Web page and user interface design. A final chapter on research trends in digital libraries seems a bit incoherent. It mentions many of the previous chapters' topics, and would possibly be better organized if written as summary sections and distributed among the other chapters. The book's breadth is quite expansive, touching on both fundamental and advanced topics necessary to a well-rounded DL education. As the book is thoroughly referenced to DL and DL-related research projects, it serves as a useful starting point for those interested in more in-depth learning. However, this breadth is also a weakness. In my opinion, the sheer number of research projects and papers surveyed leaves the authors little space to critique and summarize key issues. Many of the case studies are presented as itemized lists and not used to exemplify specific points. I feel that an introductory text should exercise some editorial and evaluative rights to create structure and organization for the uninitiated. Case studies should be carefully chosen to exemplify the specific issues, differences, and strengths highlighted. It is lamentable that in many of the descriptions of research projects, the authors tend to give more historical and funding background than is necessary and miss out on giving a synthesis of the pertinent details.
    Another weakness of the book is its favoritism towards the authors' own works. To a large extent, this bias is natural, as the authors know their own works best. However, in an introductory text, it is critical to reference the most appropriate source and give a balanced view of the field. In this respect, I feel the book could be more objective in its selection of references and research projects. Introduction to Digital Libraries is definitely a book written for a purpose. LIS undergraduates and "practicing professionals who need to know about recent developments in the field of digital libraries" (p. xiv) will find this book a fine introduction, as it is clearly written and accessible to laymen, giving explanations without delving into terminology and math. As it surveys a large number of projects, it is also an ideal starting point for students to pick and investigate particular DL research projects. However, graduate LIS students who already have a solid understanding of library fundamentals, as well as computer science students, may find this volume lacking in details. Alternative texts such as Lesk (1999) and Arms (2000) are possibly more suitable for those who need to investigate topics in depth. For the experienced practitioner or researcher delving into the DL field for the first time, the recent 2002 ARIST chapter by Fox and Urs may also be a suitable alternative. In their introduction, the authors ask, "What are digital libraries? How do they differ from online databases and search services? Will they replace print libraries? What impact will they have on people and the society?" (p. 3). To answer these questions, Chowdhury and Chowdhury offer a multitude of case studies to let the audience draw their own conclusions. To this end, it is my opinion that Introduction to Digital Libraries serves a useful purpose as a supplemental text in the digital library curriculum but misses the mark of being an authoritative textbook."
  13. Bowman, J.H.: Essential Dewey (2005) 0.01
    0.014925784 = product of:
      0.029851569 = sum of:
        0.016454842 = weight(_text_:web in 359) [ClassicSimilarity], result of:
          0.016454842 = score(doc=359,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.1019847 = fieldWeight in 359, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.015625 = fieldNorm(doc=359)
        0.013396727 = product of:
          0.026793454 = sum of:
            0.026793454 = weight(_text_:22 in 359) [ClassicSimilarity], result of:
              0.026793454 = score(doc=359,freq=8.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.15476047 = fieldWeight in 359, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=359)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    In this book, John Bowman provides an introduction to the Dewey Decimal Classification suitable either for beginners or for librarians who are out of practice using Dewey. He outlines the content and structure of the scheme and then, through worked examples using real titles, shows readers how to use it. Most chapters include practice exercises, to which answers are given at the end of the book. A particular feature of the book is the chapter dealing with problems of specific parts of the scheme. Later chapters offer advice on how to cope with compound subjects, and a brief introduction to the Web version of Dewey.
    Content
    "The contents of the book cover: This book is intended as an introduction to the Dewey Decimal Classification, edition 22. It is not a substitute for it, and I assume that you have it, all four volumes of it, by you while reading the book. I have deliberately included only a short section an WebDewey. This is partly because WebDewey is likely to change more frequently than the printed version, but also because this book is intended to help you use the scheme regardless of the manifestation in which it appears. If you have a subscription to WebDewey and not the printed volumes you may be able to manage with that, but you may then find my references to volumes and page numbers baffling. All the examples and exercises are real; what is not real is the idea that you can classify something without seeing more than the title. However, there is nothing that I can do about this, and I have therefore tried to choose examples whose titles adequately express their subject-matter. Sometimes when you look at the 'answers' you may feel that you have been cheated, but I hope that this will be seldom. Two people deserve special thanks. My colleague Vanda Broughton has read drafts of the book and made many suggestions. Ross Trotter, chair of the CILIP Dewey Decimal Classification Committee, who knows more about Dewey than anyone in Britain today, has commented extensively an it and as far as possible has saved me from error, as well as suggesting many improvements. What errors remain are due to me alone. Thanks are also owed to OCLC Online Computer Library Center, for permission to reproduce some specimen pages of DDC 22. Excerpts from the Dewey Decimal Classification are taken from the Dewey Decimal Classification and Relative Index, Edition 22 which is Copyright 2003 OCLC Online Computer Library Center, Inc. DDC, Dewey, Dewey Decimal Classification and WebDewey are registered trademarks of OCLC Online Computer Library Center, Inc."
    Footnote
    "The title says it all. The book contains the essentials for a fundamental understanding of the complex world of the Dewey Decimal Classification. It is clearly written and captures the essence in a concise and readable style. Is it a coincidence that the mysteries of the Dewey Decimal System are revealed in ten easy chapters? The typography and layout are clear and easy to read and the perfect binding withstood heavy use. The exercises and answers are invaluable in illustrating the points of the several chapters. The book is well structured. Chapter 1 provides an "Introduction and background" to classification in general and Dewey in particular. Chapter 2 describes the "Outline of the scheme" and the conventions in the schedules and tables. Chapter 3 covers "Simple subjects" and introduces the first of the exercises. Chapters 4 and 5 describe "Number-building" with "standard subdivisions" in the former and "other methods" in the latter. Chapter 6 provides an excellent description of "Preference order" and Chapter 7 deals with "Exceptions and options." Chapter 8 "Special subjects," while no means exhaustive, gives a thorough analysis of problems with particular parts of the schedules from "100 Philosophy" to "910 Geography" with a particular discussion of "'Persons treatment"' and "Optional treatment of biography." Chapter 9 treats "Compound subjects." Chapter 10 briefly introduces WebDewey and provides the URL for the Web Dewey User Guide http://www.oclc.org/support/documentation/dewey/ webdewey_userguide/; the section for exercises says: "You are welcome to try using WebDewey an the exercises in any of the preceding chapters." Chapters 6 and 7 are invaluable at clarifying the options and bases for choice when a work is multifaceted or is susceptible of classification under different Dewey Codes. The recommendation "... not to adopt options, but use the scheme as instructed" (p. 71) is clearly sound. As is, "What is vital, of course, is that you keep a record of the decisions you make and to stick to them. Any option Chosen must be used consistently, and not the whim of the individual classifier" (p. 71). The book was first published in the UK and the British overtones, which may seem quite charming to a Canadian, may be more difficult for readers from the United States. The correction of Dewey's spelling of Labor to Labo [u] r (p. 54) elicited a smile for the championing of lost causes and some relief that we do not have to cope with 'simplified speling.' The down-to-earth opinions of the author, which usually agree with those of the reviewer, add savour to the text and enliven what might otherwise have been a tedious text indeed. However, in the case of (p. 82):
    Object
    DDC-22
  14. Chan, L.M.; Mitchell, J.S.: Dewey Decimal Classification : principles and applications (2003) 0.01
    0.011722136 = product of:
      0.046888545 = sum of:
        0.046888545 = product of:
          0.09377709 = sum of:
            0.09377709 = weight(_text_:22 in 3247) [ClassicSimilarity], result of:
              0.09377709 = score(doc=3247,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.5416616 = fieldWeight in 3247, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3247)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Object
    DDC-22
  15. Oberhauser, O.: Automatisches Klassifizieren : Verfahren zur Erschließung elektronischer Dokumente (2004) 0.01
    0.010076492 = product of:
      0.04030597 = sum of:
        0.04030597 = weight(_text_:web in 2487) [ClassicSimilarity], result of:
          0.04030597 = score(doc=2487,freq=6.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.24981049 = fieldWeight in 2487, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=2487)
      0.25 = coord(1/4)
    
    Abstract
    Automatic classification of text documents means the machine assignment of one or more notations from a given classification system to natural-language texts by means of a suitable algorithm. In the form of a comprehensive literature study, this work establishes the current state of knowledge on the possible uses of automatic classification for the subject indexing of electronic documents, in particular web resources. This concerns the methodological aspect on the one hand and, on the other, the experience gained in relevant projects and applications. Methodologically, the current state of the art is represented by statistical approaches based on machine learning, which use already classified example documents to build a model - a "classifier" - that can then be used to classify new documents. The four "large" projects on the automatic classification of web resources carried out in the 1990s at the universities of Lund, Wolverhampton and Oldenburg and at OCLC (Dublin, OH), which are analysed in detail in this work, still relied on simpler or older methodological approaches, however. Particularly because of their use of established library classification systems, these projects nevertheless represent an important gain in experience, even though they have not yet led to permanent services of satisfactory quality for the indexing of electronic resources. The analysis of the other relevant applications and projects shows that the most active efforts to put systems for the automatic classificatory indexing of electronic documents into routine operation currently lie in the fields of patent and media documentation. Semi-automatic systems that support human indexers with classification suggestions dominate, however, since the classification quality currently achievable is usually not yet sufficient for full automation. Further interesting applications and projects can be found in the area of web portals, search engines and (commercial) information services, whereas in the library sector there is hardly any notable interest in the automatic classification of books or bibliographic records. The study concludes with a discussion of the most important projects and applications as well as some questions and topics relevant in connection with automatic classification.
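    To make the supervised approach described above concrete, the following minimal sketch (not taken from the study; the toy corpus, the DDC-like class notations, and the use of scikit-learn are purely illustrative assumptions) trains a statistical classifier on already classified example documents and lets it suggest a notation for a new document:
      # Illustrative sketch of machine-learning-based text classification as
      # characterized in the abstract; all data below is made up.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.pipeline import make_pipeline

      train_docs = [
          "library catalogue subject indexing classification schemes",
          "dewey decimal classification schedules and notation",
          "patent claims prior art search chemical compounds",
          "patent application examiner citation analysis",
      ]
      train_labels = ["025.4", "025.4", "608", "608"]  # hypothetical notations

      model = make_pipeline(TfidfVectorizer(), MultinomialNB())
      model.fit(train_docs, train_labels)  # learn the "classifier" from examples

      new_doc = ["automatic classification of web resources with library schemes"]
      print(model.predict(new_doc))  # suggests a notation for the new document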
  16. Kaushik, S.K.: DDC 22 : a practical approach (2004) 0.01
    0.008861102 = product of:
      0.03544441 = sum of:
        0.03544441 = product of:
          0.07088882 = sum of:
            0.07088882 = weight(_text_:22 in 1842) [ClassicSimilarity], result of:
              0.07088882 = score(doc=1842,freq=14.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.4094577 = fieldWeight in 1842, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1842)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    A system of library classification that flashed across the inquiring mind of young Melvil Louis Kossuth Dewey (known as Melvil Dewey) in 1873 is still the most popular classification scheme. Modern library classification begins with the Dewey Decimal Classification (DDC), which Melvil Dewey devised in 1876. DDC has to its credit 128 years of boundless success: it is taught as a practical subject throughout the world and is used in the majority of libraries in about 150 countries. It is as a result of continuous revision that the 22nd edition of DDC was published in July 2003; no other classification scheme has published so many editions. Some welcome changes have been made in DDC 22. To reduce the Christian bias in 200 Religion, the numbers 201 to 209 have been devoted to specific aspects of religion; in previous editions these numbers were devoted to Christianity. To enhance the classifier's efficiency, Table 7 has been removed from DDC 22, and provision for adding groups of persons is made by direct use of notation already available in the schedules and of notation -08 from Table 1, Standard Subdivisions. The present book is an attempt to explain, with suitable examples, the salient provisions of DDC 22. The book is written in simple language so that students do not face any difficulty in understanding what is being explained, and the examples are worked through in a step-by-step procedure. It is hoped that this book will prove of great help and use to library professionals in general and to library and information science students in particular.
    Content
    1. Introduction to DDC 22 2. Major changes in DDC 22 3. Introduction to the schedules 4. Use of Table 1 : Standard Subdivisions 5. Use of Table 2 : Areas 6. Use of Table 3 : Subdivisions for the arts, for individual literatures, for specific literary forms 7. Use of Table 4 : Subdivisions of individual languages and language families 8. Use of Table 5 : Ethnic and national groups 9. Use of Table 6 : Languages 10. Treatment of Groups of Persons
    Object
    DDC-22
  17. Scott, M.L.: Dewey Decimal Classification, 22nd edition : a study manual and number building guide (2005) 0.01
    0.008372955 = product of:
      0.03349182 = sum of:
        0.03349182 = product of:
          0.06698364 = sum of:
            0.06698364 = weight(_text_:22 in 4594) [ClassicSimilarity], result of:
              0.06698364 = score(doc=4594,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.38690117 = fieldWeight in 4594, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4594)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Object
    DDC-22
  18. Understanding metadata (2004) 0.01
    0.0066983635 = product of:
      0.026793454 = sum of:
        0.026793454 = product of:
          0.053586908 = sum of:
            0.053586908 = weight(_text_:22 in 2686) [ClassicSimilarity], result of:
              0.053586908 = score(doc=2686,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.30952093 = fieldWeight in 2686, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2686)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    10. 9.2004 10:22:40
  19. Oberhauser, O.: Automatisches Klassifizieren : Entwicklungsstand - Methodik - Anwendungsbereiche (2005) 0.01
    0.006297807 = product of:
      0.025191229 = sum of:
        0.025191229 = weight(_text_:web in 38) [ClassicSimilarity], result of:
          0.025191229 = score(doc=38,freq=6.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.15613155 = fieldWeight in 38, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=38)
      0.25 = coord(1/4)
    
    Abstract
    Automatic classification of text documents means the machine assignment of one or more notations from a given classification system to natural-language texts by means of a suitable algorithm. In the form of a comprehensive literature study, this work establishes the current state of knowledge on the possible uses of automatic classification for the subject indexing of electronic documents, in particular web resources. This concerns the methodological aspect on the one hand and, on the other, the experience gained in relevant projects and applications. Methodologically, the current state of the art is represented by statistical approaches based on machine learning, which use already classified example documents to build a model - a "classifier" - that can then be used to classify new documents. The four "large" projects on the automatic classification of web resources carried out in the 1990s at the universities of Lund, Wolverhampton and Oldenburg and at OCLC (Dublin, OH), which are analysed in detail in this work, still relied on simpler or older methodological approaches, however. Particularly because of their use of established library classification systems, these projects nevertheless represent an important gain in experience, even though they have not yet led to permanent services of satisfactory quality for the indexing of electronic resources. The analysis of the other relevant applications and projects shows that the most active efforts to put systems for the automatic classificatory indexing of electronic documents into routine operation currently lie in the fields of patent and media documentation. Semi-automatic systems that support human indexers with classification suggestions dominate, however, since the classification quality currently achievable is usually not yet sufficient for full automation. Further interesting applications and projects can be found in the area of web portals, search engines and (commercial) information services, whereas in the library sector there is hardly any notable interest in the automatic classification of books or bibliographic records. The study concludes with a discussion of the most important projects and applications as well as some questions and topics relevant in connection with automatic classification.
  20. Poetzsch, E.: Information Retrieval : Einführung in Grundlagen und Methoden (2006) 0.00
    0.0043632486 = product of:
      0.017452994 = sum of:
        0.017452994 = weight(_text_:web in 592) [ClassicSimilarity], result of:
          0.017452994 = score(doc=592,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.108171105 = fieldWeight in 592, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0234375 = fieldNorm(doc=592)
      0.25 = coord(1/4)
    
    Footnote
    Rez. in: Online-Mitteilungen 2006, H.88, S.13-15 [=Mitteilungen VOEB 59(2006) H.4] (M. Katzmayr): "This textbook - now available in its fifth, completely revised edition - aims to provide a practice-oriented introduction to information retrieval (IR). Together with the subject-specific volumes "Wirtschaftsinformation: Online, CD-ROM, Internet" and "Naturwissenschaftlich-technische Information: Online, CD-ROM, Internet", written by the same author, it forms a three-part complete edition on IR. The introductory volume reviewed here is divided into foundations, methods, and subject-specific aspects (the last of these is treated in greater depth in the supplementary volumes mentioned). That this volume is a textbook is made clear not least by the review questions at the end of each chapter, the search exercises, and a number of homework assignments. The focus is on licensed online databases; web information retrieval is not covered. The first chapter, "Grundlagen des Information Retrieval" (foundations of information retrieval), conveys basic knowledge about search databases and their use: how databases can be organized and described in a uniform way, how records are typically structured depending on the information they store, which steps a search typically involves, and how the costs of an online search can be categorized. Finally, a brief market overview of important commercial database providers is given. In the following chapter, "Methoden des Information Retrieval" (methods of information retrieval), command retrieval is explained using the query language DataStar Online (DSO), which is used on the host Dialog DataStar. In addition to basic functions such as selecting and switching databases, the use of search and proximity operators, truncation and limiting, commands for displaying and outputting search results, and selected special functions are presented in detail. This is followed by a guide, documented with screenshots, to using the host's web search interfaces.

Languages

  • e 14
  • d 11

Types

  • m 22
  • a 1
  • el 1
  • x 1