Search (53 results, page 1 of 3)

  • theme_ss:"Grundlagen u. Einführungen: Allgemeine Literatur"
  1. Kaiser, U.: Handbuch Internet und Online Dienste : der kompetente Reiseführer für das digitale Netz (1996) 0.07
    0.06769611 = product of:
      0.23693638 = sum of:
        0.20333329 = weight(_text_:europe in 4589) [ClassicSimilarity], result of:
          0.20333329 = score(doc=4589,freq=2.0), product of:
            0.25178367 = queryWeight, product of:
              6.091085 = idf(docFreq=271, maxDocs=44218)
              0.041336425 = queryNorm
            0.8075714 = fieldWeight in 4589, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.091085 = idf(docFreq=271, maxDocs=44218)
              0.09375 = fieldNorm(doc=4589)
        0.033603087 = product of:
          0.067206174 = sum of:
            0.067206174 = weight(_text_:22 in 4589) [ClassicSimilarity], result of:
              0.067206174 = score(doc=4589,freq=2.0), product of:
                0.14475311 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041336425 = queryNorm
                0.46428138 = fieldWeight in 4589, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4589)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Object
    Europe Online
    Series
    Heyne Business; 22/1019
  2. Antoniou, G.; Harmelen, F. van: ¬A semantic Web primer (2004) 0.05
    0.049423523 = product of:
      0.08649116 = sum of:
        0.012971659 = weight(_text_:management in 468) [ClassicSimilarity], result of:
          0.012971659 = score(doc=468,freq=2.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.09310089 = fieldWeight in 468, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.01953125 = fieldNorm(doc=468)
        0.0423611 = weight(_text_:europe in 468) [ClassicSimilarity], result of:
          0.0423611 = score(doc=468,freq=2.0), product of:
            0.25178367 = queryWeight, product of:
              6.091085 = idf(docFreq=271, maxDocs=44218)
              0.041336425 = queryNorm
            0.16824403 = fieldWeight in 468, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.091085 = idf(docFreq=271, maxDocs=44218)
              0.01953125 = fieldNorm(doc=468)
        0.02206862 = weight(_text_:case in 468) [ClassicSimilarity], result of:
          0.02206862 = score(doc=468,freq=2.0), product of:
            0.18173204 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.041336425 = queryNorm
            0.121434934 = fieldWeight in 468, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.01953125 = fieldNorm(doc=468)
        0.009089783 = product of:
          0.018179566 = sum of:
            0.018179566 = weight(_text_:studies in 468) [ClassicSimilarity], result of:
              0.018179566 = score(doc=468,freq=2.0), product of:
                0.16494368 = queryWeight, product of:
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.041336425 = queryNorm
                0.110216804 = fieldWeight in 468, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=468)
          0.5 = coord(1/2)
      0.5714286 = coord(4/7)
    
    Footnote
    Rez. in: JASIST 57(2006) no.8, S.1132-1133 (H. Che): "The World Wide Web has been the main source of an important shift in the way people communicate with each other, get information, and conduct business. However, most of the current Web content is only suitable for human consumption. The main obstacle to providing better quality of service is that the meaning of Web content is not machine-accessible. The "Semantic Web" is envisioned by Tim Berners-Lee as a logical extension to the current Web that enables explicit representations of term meaning. It aims to bring the Web to its full potential via the exploitation of this machine-processable metadata. To fulfill this, it provides some meta languages like RDF, OWL, DAML+OIL, and SHOE for expressing knowledge that has clear, unambiguous meanings. The first steps in weaving the Semantic Web into the current Web are successfully underway. In the forthcoming years, these efforts still remain highly focused in the research and development community. In the next phase, the Semantic Web will respond more intelligently to user queries. The first chapter gets started with an excellent introduction to the Semantic Web vision. At first, today's Web is introduced, and problems with some current applications like search engines are also covered. Subsequently, knowledge management, business-to-consumer electronic commerce, business-to-business electronic commerce, and personal agents are used as examples to show the potential requirements for the Semantic Web. Next comes the brief description of the underpinning technologies, including metadata, ontology, logic, and agent. The differences between the Semantic Web and Artificial Intelligence are also discussed in a later subsection. In section 1.4, the famous "layer-cake" diagram is given to show a layered view of the Semantic Web. From chapter 2, the book starts addressing some of the most important technologies for constructing the Semantic Web. In chapter 2, the authors discuss XML and its related technologies such as namespaces, XPath, and XSLT. XML is a simple, very flexible text format which is often used for the exchange of a wide variety of data on the Web and elsewhere. The W3C has defined various languages on top of XML, such as RDF. Although this chapter is very well planned and written, many details are not included because of the extensiveness of the XML technologies. Many other books on XML provide more comprehensive coverage.
    The next chapter introduces the resource description framework (RDF) and RDF schema (RDFS). Unlike XML, RDF provides a foundation for expressing the semantics of data: it is a standard data model for machine-processable semantics. Resource description framework schema offers a number of modeling primitives for organizing RDF vocabularies in typed hierarchies. In addition to RDF and RDFS, a query language for RDF, i.e. RQL, is introduced. This chapter and the next chapter are two of the most important chapters in the book. Chapter 4 presents another language called Web Ontology Language (OWL). Because RDFS is quite primitive as a modeling language for the Web, more powerful languages are needed. A richer language, DAML+OIL, is thus proposed as a joint endeavor of the United States and Europe. OWL takes DAML+OIL as the starting point, and aims to be the standardized and broadly accepted ontology language. At the beginning of the chapter, the nontrivial relation with RDF/RDFS is discussed. Then the authors describe the various language elements of OWL in some detail. Moreover, Appendix A contains an abstract OWL syntax, which compresses OWL and makes OWL much easier to read. Chapter 5 covers both monotonic and nonmonotonic rules. Whereas the previous chapters mainly concentrate on specializations of knowledge representation, this chapter depicts the foundation of knowledge representation and inference. Two examples are also given to explain monotonic and non-monotonic rules, respectively. To get the most out of the chapter, readers had better gain a thorough understanding of predicate logic first. Chapter 6 presents several realistic application scenarios to which the Semantic Web technology can be applied, including horizontal information products at Elsevier, data integration at Audi, skill finding at Swiss Life, a think tank portal at EnerSearch, e-learning, Web services, multimedia collection indexing, online procurement, and device interoperability. These case studies give the reader a real feel for the Semantic Web.
  3. Chowdhury, G.G.; Chowdhury, S.: Introduction to digital libraries (2003) 0.05
    0.049209043 = product of:
      0.08611582 = sum of:
        0.012841288 = weight(_text_:management in 6119) [ClassicSimilarity], result of:
          0.012841288 = score(doc=6119,freq=4.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.09216518 = fieldWeight in 6119, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.013671875 = fieldNorm(doc=6119)
        0.02965277 = weight(_text_:europe in 6119) [ClassicSimilarity], result of:
          0.02965277 = score(doc=6119,freq=2.0), product of:
            0.25178367 = queryWeight, product of:
              6.091085 = idf(docFreq=271, maxDocs=44218)
              0.041336425 = queryNorm
            0.11777083 = fieldWeight in 6119, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.091085 = idf(docFreq=271, maxDocs=44218)
              0.013671875 = fieldNorm(doc=6119)
        0.030896068 = weight(_text_:case in 6119) [ClassicSimilarity], result of:
          0.030896068 = score(doc=6119,freq=8.0), product of:
            0.18173204 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.041336425 = queryNorm
            0.17000891 = fieldWeight in 6119, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.013671875 = fieldNorm(doc=6119)
        0.012725695 = product of:
          0.02545139 = sum of:
            0.02545139 = weight(_text_:studies in 6119) [ClassicSimilarity], result of:
              0.02545139 = score(doc=6119,freq=8.0), product of:
                0.16494368 = queryWeight, product of:
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.041336425 = queryNorm
                0.15430352 = fieldWeight in 6119, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.013671875 = fieldNorm(doc=6119)
          0.5 = coord(1/2)
      0.5714286 = coord(4/7)
    
    Footnote
    Chapter 2 examines the variety and breadth of DL implementations and collections through a well-balanced selection of 20 DLs. The authors make a useful classification of the various types of DLs into seven categories and give a brief synopsis of two or three examples from each category. These categories include historical, national, and university DLs, as well as DLs for special materials and research. Chapter 3 examines research efforts in digital libraries, concentrating on the three eLib initiatives in the UK and the two Digital Libraries Initiatives in the United States. The chapter also offers some details on joint research between the UK and the United States (the NSF/JISC jointly funded programs), Europe, Canada, Australia, and New Zealand. While both of these chapters do an admirable job of surveying the DL landscape, the breadth and variety of materials need to be encapsulated in a coherent summary that illustrates the commonality of their approaches and their key differences that have been driven by aspects of their collections and audience. Unfortunately, this summary aspect is lacking here and elsewhere in the book. Chapter 2 does an admirable job of DL selection that showcases the variety of existing DLs, but I feel that Chapter 3's selection of research projects could be improved. The chapter's emphasis is clearly on UK-based research, devoting nine pages to it compared to six for EU-funded projects. While this emphasis could be favorable for UK courses, it hampers the chances of the text's adoption in other courses internationally. Chapter 4 begins the core part of the book by examining the DL from a design perspective. As a well-designed DL encompasses various practical and theoretical considerations, the chapter introduces many of the concepts that are elaborated on in later chapters. The Kahn/Wilensky and Lagoze/Fielding architectures are summarized in bullet points, and specific aspects of these frameworks are elaborated on. These include the choice between a federated or centralized search architecture (referencing Virginia Tech's NDLTD and Waikato's Greenstone) and level of interoperability (discussing UNIMARC and metadata harvesting). Special attention is paid to hybrid library design, with references to UK projects. A useful summary of recommended standards for DL design concludes the chapter.
    Chapters 5 through 9 discuss the basic facets of DL implementation and use. Chapter 5, entitled "Collection management," distinguishes collection management from collection development. The authors give source selection criteria, distilled from Clayton and Gorman. The text then discusses the characteristics of several digital sources, including CD-ROMs, electronic books, electronic journals, and databases, and elaborates on the distribution and pricing issues involved in each. However, the following chapter on digitization is quite disappointing; I feel that its discussion is shallow and short, and offers only a glimpse of the difficulties of this task. The chapter contains a listing of multimedia file formats, which is explained clearly, omitting technical jargon. However, it could be improved by including more details about each format's optimal use. Chapter 7, "Information organization," surveys several DLs and highlights their adaptation of traditional classification and cataloging techniques. The chapter continues with a brief introduction to metadata, by first defining it and then discussing major standards: the Dublin Core, the Warwick Framework and EAD. A discussion of markup languages such as SGML, HTML, and XML rounds off the chapter. A more engaging chapter follows. Dealing with information access and user interfaces, it begins by examining information needs and the seeking process, with particular attention to the difficulties of translating search needs into an actual search query. Guidelines for user interface design are presented, distilled from recommendations from Shneiderman, Byrd, and Croft. Some research user interfaces are highlighted to hint at the future of information finding, and major features of browsing and searching interfaces are shown through case studies of a number of DLs. Chapter 9 gives a layman's introduction to the classic models of information retrieval, and is written to emphasize each model's usability and features; the mathematical foundations have entirely been dispensed with. Multimedia retrieval, Z39.50, and issues with OPAC integration are briefly sketched, but details on the approaches to these problems are omitted. A dissatisfying chapter on preservation begins the third part on deployed DLs, which itemizes several preservation projects but does not identify the key points of each project. This weakness is offset by two solid chapters on DL services and social, economic, and legal issues. Here, the writing style of the text is more effective in surveying the pertinent issues. Chowdhury and Chowdhury write, "The importance of [reference] services has grown over time with the introduction of new technologies and services in libraries" (p. 228), emphasizing the central role that reference services have in DLs, and go on to discuss both free and fee-based services, and those housed as part of libraries as well as commercial services. The chapter on social issues examines the digital divide and also gives examples of institutions working to undo the divide: "Blackwells is making all 600 of its journals freely available to institutions within the Russian Federation" (p. 252). Key points in cost-models of electronic publishing and intellectual property rights are also discussed. Chowdhury and Chowdhury mention that "there is no legal deposit law to force the creators of digital information to submit a copy of every work to one or more designated institutions" for preservation (p. 265).
    Chapter 13 on DL evaluation merges criteria from traditional library evaluation with criteria from user interface design and information retrieval. Quantitative, macro-evaluation techniques are emphasized, and again, some DL evaluation projects and reports are illustrated. A very brief chapter on the role of librarians in the DL follows, emphasizing that traditional reference skills are paramount to the success of the digital librarian, but that he should also be savvy in Web page and user interface design. A final chapter on research trends in digital libraries seems a bit incoherent. It mentions many of the previous chapters' topics, and would possibly be better organized if written as summary sections and distributed among the other chapters. The book's breadth is quite expansive, touching on both fundamental and advanced topics necessary to a well-rounded DL education. As the book is thoroughly referenced to DL and DL-related research projects, it serves as a useful starting point for those interested in more in-depth learning. However, this breadth is also a weakness. In my opinion, the sheer number of research projects and papers surveyed leaves the authors little space to critique and summarize key issues. Many of the case studies are presented as itemized lists and not used to exemplify specific points. I feel that an introductory text should exercise some editorial and evaluative rights to create structure and organization for the uninitiated. Case studies should be carefully chosen to exemplify the specific issues and differences and strengths highlighted. It is lamentable that in many of the descriptions of research projects, the authors tend to give more historical and funding background than is necessary and miss out on giving a synthesis of the pertinent details.
    Another weakness of the book is its favoritism towards the authors' own works. To a large extent, this bias is natural as the authors know their own works best. However, in an introductory text, it is critical to reference the most appropriate source and give a balanced view of the field. In this respect, I feel the book could be more objective in its selection of references and research projects. Introduction to Digital Libraries is definitely a book written for a purpose. LIS undergraduates and "practicing professionals who need to know about recent developments in the field of digital libraries" (p. xiv) will find this book a fine introduction, as it is clearly written and accessible to laymen, giving explanations without delving into terminology and math. As it surveys a large number of projects, it is also an ideal starting point for students to pick and investigate particular DL research projects. However, graduate LIS students who already have a solid understanding of library fundamentals as well as computer science students may find this volume lacking in details. Alternative texts such as Lesk (1999) and Arms (2000) are possibly more suitable for those who need to investigate topics in depth. For the experienced practitioner or researcher delving into the DL field for the first time, the recent 2002 ARIST chapter by Fox and Urs may also be a suitable alternative. In their introduction, the authors ask, "What are digital libraries? How do they differ from online databases and search services? Will they replace print libraries? What impact will they have on people and the society?" (p. 3). To answer these questions, Chowdhury and Chowdhury offer a multitude of case studies to let the audience draw their own conclusions. To this end, it is my opinion that Introduction to Digital Libraries serves a useful purpose as a supplemental text in the digital library curriculum but misses the mark of being an authoritative textbook."
  4. Jasper, D.: Alles über Online (1996) 0.03
    0.033888884 = product of:
      0.23722216 = sum of:
        0.23722216 = weight(_text_:europe in 4933) [ClassicSimilarity], result of:
          0.23722216 = score(doc=4933,freq=2.0), product of:
            0.25178367 = queryWeight, product of:
              6.091085 = idf(docFreq=271, maxDocs=44218)
              0.041336425 = queryNorm
            0.9421666 = fieldWeight in 4933, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.091085 = idf(docFreq=271, maxDocs=44218)
              0.109375 = fieldNorm(doc=4933)
      0.14285715 = coord(1/7)
    
    Object
    Europe Online
  5. Lancaster, F.W.: Vocabulary control for information retrieval (1986) 0.02
    0.023172883 = product of:
      0.08110509 = sum of:
        0.05870303 = weight(_text_:management in 217) [ClassicSimilarity], result of:
          0.05870303 = score(doc=217,freq=4.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.42132655 = fieldWeight in 217, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0625 = fieldNorm(doc=217)
        0.02240206 = product of:
          0.04480412 = sum of:
            0.04480412 = weight(_text_:22 in 217) [ClassicSimilarity], result of:
              0.04480412 = score(doc=217,freq=2.0), product of:
                0.14475311 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041336425 = queryNorm
                0.30952093 = fieldWeight in 217, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=217)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Classification
    ST 271 Informatik / Monographien / Software und -entwicklung / Datenbanken, Datenbanksysteme, Data base management, Informationssysteme / Einzelne Datenbanksprachen und Datenbanksysteme
    Date
    22. 4.2007 10:07:51
    RVK
    ST 271 Informatik / Monographien / Software und -entwicklung / Datenbanken, Datenbanksysteme, Data base management, Informationssysteme / Einzelne Datenbanksprachen und Datenbanksysteme
  6. Langridge, D.W.: Classification: its kinds, systems, elements and application (1992) 0.02
    0.017362457 = product of:
      0.12153719 = sum of:
        0.12153719 = sum of:
          0.058174606 = weight(_text_:studies in 770) [ClassicSimilarity], result of:
            0.058174606 = score(doc=770,freq=2.0), product of:
              0.16494368 = queryWeight, product of:
                3.9902744 = idf(docFreq=2222, maxDocs=44218)
                0.041336425 = queryNorm
              0.35269377 = fieldWeight in 770, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.9902744 = idf(docFreq=2222, maxDocs=44218)
                0.0625 = fieldNorm(doc=770)
          0.06336259 = weight(_text_:22 in 770) [ClassicSimilarity], result of:
            0.06336259 = score(doc=770,freq=4.0), product of:
              0.14475311 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041336425 = queryNorm
              0.4377287 = fieldWeight in 770, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=770)
      0.14285715 = coord(1/7)
    
    Date
    26. 7.2002 14:01:22
    Footnote
    Rez. in: Journal of documentation 49(1993) no.1, S.68-70. (A. Maltby); Journal of librarianship and information science 1993, S.108-109 (A.G. Curwen); Herald of library science 33(1994) nos.1/2, S.85 (P.N. Kaula); Knowledge organization 22(1995) no.1, S.45 (M.P. Satija)
    Series
    Topics in library and information studies
  7. Bates, M.J.: Where should the person stop and the information search interface start? (1990) 0.01
    0.011859803 = product of:
      0.08301862 = sum of:
        0.08301862 = weight(_text_:management in 155) [ClassicSimilarity], result of:
          0.08301862 = score(doc=155,freq=2.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.5958457 = fieldWeight in 155, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.125 = fieldNorm(doc=155)
      0.14285715 = coord(1/7)
    
    Source
    Information processing and management. 26(1990), S.575-591
  8. Nohr, H.: Grundlagen der automatischen Indexierung : ein Lehrbuch (2003) 0.01
    0.009130197 = product of:
      0.031955685 = sum of:
        0.020754656 = weight(_text_:management in 1767) [ClassicSimilarity], result of:
          0.020754656 = score(doc=1767,freq=2.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.14896142 = fieldWeight in 1767, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.03125 = fieldNorm(doc=1767)
        0.01120103 = product of:
          0.02240206 = sum of:
            0.02240206 = weight(_text_:22 in 1767) [ClassicSimilarity], result of:
              0.02240206 = score(doc=1767,freq=2.0), product of:
                0.14475311 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041336425 = queryNorm
                0.15476047 = fieldWeight in 1767, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1767)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Date
    22. 6.2009 12:46:51
    Footnote
    Rez. in: nfd 54(2003) H.5, S.314 (W. Ratzek): "To extract decision-relevant data from the constantly growing flood of more or less relevant documents, companies, public administrations and specialist information institutions have to develop, deploy and maintain effective and efficient filtering systems. Holger Nohr's textbook is the first to offer a fundamental introduction to the topic of automatic indexing. As the introduction puts it: "How you gather, manage, and use information will determine whether you win or lose" (Bill Gates). The first chapter, "Einleitung" (Introduction), focuses on the fundamentals, describing the relationships between document management systems, information retrieval and indexing for planning, decision-making and innovation processes, in both profit and non-profit organisations. At the end of the introductory chapter Nohr takes up the debate about intellectual versus automatic indexing and thereby leads into the second chapter on automatic indexing. Here the author gives an overview of, among other things, problems of automatic language processing and indexing, and various methods of automatic indexing, e.g. simple keyword extraction / full-text inversion, statistical methods, and pattern-matching methods. Nohr then treats the "methods of automatic indexing" in depth, and with many examples, in the third and most extensive chapter. The fourth chapter, "Keyphrase Extraction", plays the role of a passe-partout: "Approaches that extract key phrases from documents (keyphrase extraction) represent an intermediate stage on the way from automatic indexing to the automatic generation of textual summaries (automatic text summarization). The boundaries between automatic indexing methods and those of text summarization are fluid." (p. 91). Using NCR's Extractor / Copernic Summarizer as an example, Nohr describes how this works.
  9. Walker, G.; Janes, J.: Online retrieval : a dialogue of theory and practice (1999) 0.01
    0.008894852 = product of:
      0.062263966 = sum of:
        0.062263966 = weight(_text_:management in 1875) [ClassicSimilarity], result of:
          0.062263966 = score(doc=1875,freq=2.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.44688427 = fieldWeight in 1875, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.09375 = fieldNorm(doc=1875)
      0.14285715 = coord(1/7)
    
    Footnote
    Rez. in: International journal of information management 20(2000) no.3, S.243-244 (D. Bawden)
  10. Bowman, J.H.: Essential Dewey (2005) 0.01
    0.008244551 = product of:
      0.028855925 = sum of:
        0.017654896 = weight(_text_:case in 359) [ClassicSimilarity], result of:
          0.017654896 = score(doc=359,freq=2.0), product of:
            0.18173204 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.041336425 = queryNorm
            0.09714795 = fieldWeight in 359, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.015625 = fieldNorm(doc=359)
        0.01120103 = product of:
          0.02240206 = sum of:
            0.02240206 = weight(_text_:22 in 359) [ClassicSimilarity], result of:
              0.02240206 = score(doc=359,freq=8.0), product of:
                0.14475311 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041336425 = queryNorm
                0.15476047 = fieldWeight in 359, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=359)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Content
    "The contents of the book cover: This book is intended as an introduction to the Dewey Decimal Classification, edition 22. It is not a substitute for it, and I assume that you have it, all four volumes of it, by you while reading the book. I have deliberately included only a short section an WebDewey. This is partly because WebDewey is likely to change more frequently than the printed version, but also because this book is intended to help you use the scheme regardless of the manifestation in which it appears. If you have a subscription to WebDewey and not the printed volumes you may be able to manage with that, but you may then find my references to volumes and page numbers baffling. All the examples and exercises are real; what is not real is the idea that you can classify something without seeing more than the title. However, there is nothing that I can do about this, and I have therefore tried to choose examples whose titles adequately express their subject-matter. Sometimes when you look at the 'answers' you may feel that you have been cheated, but I hope that this will be seldom. Two people deserve special thanks. My colleague Vanda Broughton has read drafts of the book and made many suggestions. Ross Trotter, chair of the CILIP Dewey Decimal Classification Committee, who knows more about Dewey than anyone in Britain today, has commented extensively an it and as far as possible has saved me from error, as well as suggesting many improvements. What errors remain are due to me alone. Thanks are also owed to OCLC Online Computer Library Center, for permission to reproduce some specimen pages of DDC 22. Excerpts from the Dewey Decimal Classification are taken from the Dewey Decimal Classification and Relative Index, Edition 22 which is Copyright 2003 OCLC Online Computer Library Center, Inc. DDC, Dewey, Dewey Decimal Classification and WebDewey are registered trademarks of OCLC Online Computer Library Center, Inc."
    Footnote
    "The title says it all. The book contains the essentials for a fundamental understanding of the complex world of the Dewey Decimal Classification. It is clearly written and captures the essence in a concise and readable style. Is it a coincidence that the mysteries of the Dewey Decimal System are revealed in ten easy chapters? The typography and layout are clear and easy to read and the perfect binding withstood heavy use. The exercises and answers are invaluable in illustrating the points of the several chapters. The book is well structured. Chapter 1 provides an "Introduction and background" to classification in general and Dewey in particular. Chapter 2 describes the "Outline of the scheme" and the conventions in the schedules and tables. Chapter 3 covers "Simple subjects" and introduces the first of the exercises. Chapters 4 and 5 describe "Number-building" with "standard subdivisions" in the former and "other methods" in the latter. Chapter 6 provides an excellent description of "Preference order" and Chapter 7 deals with "Exceptions and options." Chapter 8 "Special subjects," while no means exhaustive, gives a thorough analysis of problems with particular parts of the schedules from "100 Philosophy" to "910 Geography" with a particular discussion of "'Persons treatment"' and "Optional treatment of biography." Chapter 9 treats "Compound subjects." Chapter 10 briefly introduces WebDewey and provides the URL for the Web Dewey User Guide http://www.oclc.org/support/documentation/dewey/ webdewey_userguide/; the section for exercises says: "You are welcome to try using WebDewey an the exercises in any of the preceding chapters." Chapters 6 and 7 are invaluable at clarifying the options and bases for choice when a work is multifaceted or is susceptible of classification under different Dewey Codes. The recommendation "... not to adopt options, but use the scheme as instructed" (p. 71) is clearly sound. As is, "What is vital, of course, is that you keep a record of the decisions you make and to stick to them. Any option Chosen must be used consistently, and not the whim of the individual classifier" (p. 71). The book was first published in the UK and the British overtones, which may seem quite charming to a Canadian, may be more difficult for readers from the United States. The correction of Dewey's spelling of Labor to Labo [u] r (p. 54) elicited a smile for the championing of lost causes and some relief that we do not have to cope with 'simplified speling.' The down-to-earth opinions of the author, which usually agree with those of the reviewer, add savour to the text and enliven what might otherwise have been a tedious text indeed. However, in the case of (p. 82):
    Object
    DDC-22
  11. Lancaster, F.W.; Warner, A.J.: Information retrieval today (1993) 0.01
    0.007412377 = product of:
      0.051886637 = sum of:
        0.051886637 = weight(_text_:management in 4607) [ClassicSimilarity], result of:
          0.051886637 = score(doc=4607,freq=2.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.37240356 = fieldWeight in 4607, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.078125 = fieldNorm(doc=4607)
      0.14285715 = coord(1/7)
    
    Footnote
    Rez. in: Information processing and management 30(1994) no.4, S.581-582 (L. Schamber); Journal of documentation 51(1995) no.1, S.76-77 (B. Frohmann)
  12. Rijsbergen, C.J. van: Information retrieval (1979) 0.01
    0.0073378794 = product of:
      0.051365152 = sum of:
        0.051365152 = weight(_text_:management in 856) [ClassicSimilarity], result of:
          0.051365152 = score(doc=856,freq=4.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.36866072 = fieldWeight in 856, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0546875 = fieldNorm(doc=856)
      0.14285715 = coord(1/7)
    
    Classification
    ST 271 Informatik / Monographien / Software und -entwicklung / Datenbanken, Datenbanksysteme, Data base management, Informationssysteme / Einzelne Datenbanksprachen und Datenbanksysteme
    RVK
    ST 271 Informatik / Monographien / Software und -entwicklung / Datenbanken, Datenbanksysteme, Data base management, Informationssysteme / Einzelne Datenbanksprachen und Datenbanksysteme
  13. Bawden, D.; Robinson, L.: ¬An introduction to information science (2012) 0.01
    0.0073378794 = product of:
      0.051365152 = sum of:
        0.051365152 = weight(_text_:management in 4966) [ClassicSimilarity], result of:
          0.051365152 = score(doc=4966,freq=4.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.36866072 = fieldWeight in 4966, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4966)
      0.14285715 = coord(1/7)
    
    Abstract
    Landmark textbook taking a whole subject approach to information science as a discipline. The authors' expert narrative guides you through each of the essential components of information science, offering a concise introduction and expertly chosen readings and resources. This is the definitive science textbook for students of this subject, and of information and knowledge management, librarianship, archives and records management worldwide.
  14. Rowley, J.E.; Farrow, J.: Organizing knowledge : an introduction to managing access to information (2000) 0.01
    0.006419307 = product of:
      0.04493515 = sum of:
        0.04493515 = weight(_text_:management in 2463) [ClassicSimilarity], result of:
          0.04493515 = score(doc=2463,freq=6.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.32251096 = fieldWeight in 2463, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2463)
      0.14285715 = coord(1/7)
    
    Abstract
    For its third edition this standard text on knowledge organization and retrieval has been extensively revised and restructured to accommodate the increased significance of electronic information resources. With the help of many new sections on topics such as information retrieval via the Web, metadata and managing information retrieval systems, the book explains principles relating to hybrid print-based and electronic, networked environments experienced by today's users. Part I, Information Basics, explores the nature of information and knowledge and their incorporation into documents. Part II, Records, focuses specifically on electronic databases for accessing print or electronic media. Part III, Access, explores the range of tools for accessing information resources and covers interfaces, indexing and searching languages, classification, thesauri and catalogue and bibliographic access points. Finally, Part IV, Systems, describes the contexts through which knowledge can be organized and retrieved, including OPACs, the Internet, CD-ROMs, online search services and printed indexes and documents. This book is a comprehensive and accessible introduction to knowledge organization for both undergraduate and postgraduate students of information management and information systems
    LCSH
    Information storage and retrieval systems / Management
    Subject
    Information storage and retrieval systems / Management
  15. Kowalski, G.J.; Maybury, M.T.: Information storage and retrieval systems : theory and implementation (2000) 0.01
    0.006289611 = product of:
      0.044027276 = sum of:
        0.044027276 = weight(_text_:management in 6727) [ClassicSimilarity], result of:
          0.044027276 = score(doc=6727,freq=4.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.31599492 = fieldWeight in 6727, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=6727)
      0.14285715 = coord(1/7)
    
    LCSH
    Database management
    Subject
    Database management
  16. Poetzsch, E.: Information Retrieval : Einführung in Grundlagen und Methoden (2001) 0.01
    0.006289611 = product of:
      0.044027276 = sum of:
        0.044027276 = weight(_text_:management in 1655) [ClassicSimilarity], result of:
          0.044027276 = score(doc=1655,freq=4.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.31599492 = fieldWeight in 1655, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=1655)
      0.14285715 = coord(1/7)
    
    Classification
    ST 270 Informatik / Monographien / Software und -entwicklung / Datenbanken, Datenbanksysteme, Data base management, Informationssysteme
    RVK
    ST 270 Informatik / Monographien / Software und -entwicklung / Datenbanken, Datenbanksysteme, Data base management, Informationssysteme
  17. Gebhardt, F.: Dokumentationssysteme (1981) 0.01
    0.006289611 = product of:
      0.044027276 = sum of:
        0.044027276 = weight(_text_:management in 1560) [ClassicSimilarity], result of:
          0.044027276 = score(doc=1560,freq=4.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.31599492 = fieldWeight in 1560, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=1560)
      0.14285715 = coord(1/7)
    
    Classification
    ST 271 Informatik / Monographien / Software und -entwicklung / Datenbanken, Datenbanksysteme, Data base management, Informationssysteme / Einzelne Datenbanksprachen und Datenbanksysteme
    RVK
    ST 271 Informatik / Monographien / Software und -entwicklung / Datenbanken, Datenbanksysteme, Data base management, Informationssysteme / Einzelne Datenbanksprachen und Datenbanksysteme
  18. Broughton, V.: Essential thesaurus construction (2006) 0.01
    0.0062707383 = product of:
      0.021947583 = sum of:
        0.014675758 = weight(_text_:management in 2924) [ClassicSimilarity], result of:
          0.014675758 = score(doc=2924,freq=4.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.10533164 = fieldWeight in 2924, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.015625 = fieldNorm(doc=2924)
        0.007271826 = product of:
          0.014543652 = sum of:
            0.014543652 = weight(_text_:studies in 2924) [ClassicSimilarity], result of:
              0.014543652 = score(doc=2924,freq=2.0), product of:
                0.16494368 = queryWeight, product of:
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.041336425 = queryNorm
                0.08817344 = fieldWeight in 2924, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2924)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Many information professionals working in small units today fail to find the published tools for subject-based organization that are appropriate to their local needs, whether they are archivists, special librarians, information officers, or knowledge or content managers. Large established standards for document description and organization are too unwieldy, unnecessarily detailed, or too expensive to install and maintain. In other cases the available systems are insufficient for a specialist environment, or don't bring things together in a helpful way. A purpose built, in-house system would seem to be the answer, but too often the skills necessary to create one are lacking. This practical text examines the criteria relevant to the selection of a subject-management system, describes the characteristics of some common types of subject tool, and takes the novice step by step through the process of creating a system for a specialist environment. The methodology employed is a standard technique for the building of a thesaurus that incidentally creates a compatible classification or taxonomy, both of which may be used in a variety of ways for document or information management. Key areas covered are: what is a thesaurus; tools for subject access and retrieval; what a thesaurus is used for; why use a thesaurus; examples of thesauri; the structure of a thesaurus; thesaural relationships; practical thesaurus construction; the vocabulary of the thesaurus; building the systematic structure; conversion to alphabetic format; forms of entry in the thesaurus; maintaining the thesaurus; thesaurus software; and the wider environment. Essential for the practising information professional, this guide is also valuable for students of library and information science.
    Footnote
    Rez. in: Mitt. VÖB 60(2007) H.1, S.98-101 (O. Oberhauser): "The author of Essential thesaurus construction (and essential taxonomy construction, as the implicit subtitle has it, cf. p. 1) is well qualified in this field through her teaching at the well-known School of Library, Archive and Information Studies of University College London and through her previous publications on (faceted) classification and thesauri. Following Essential classification, her thesaurus textbook is now available: with roughly 200 pages of text and almost 100 pages of appendices it is a handy volume that, as the short introductory chapter notes, owes its genesis largely to her teaching. The book follows the school of Jean Aitchison et al. and addresses "the indexer" in the broadest sense, i.e. everyone who wants or needs to build a structured, controlled specialist vocabulary for subject indexing and retrieval. It seeks to give this audience the necessary methodological tools for such a task, which it does, including the introduction and the concluding remarks, in twenty chapters, an appealing structure that allows the material to be worked through in well-measured portions. The exercises the author sets throughout (with solutions at the end of each chapter) also help. At the outset the "information retrieval thesaurus" is distinguished from the "reference thesaurus" that (at least in the English-speaking world) is far more often associated with the term thesaurus: a dictionary of synonyms arranged by conceptual similarity that is commonly used as an aid to style when writing (scholarly) texts. Without yet going into detail, the book presents the outward form and fields of application of thesauri, explains the thesaurus as a post-coordinate indexing language and mentions its closeness to faceted classification schemes. Broughton then contrasts systematically organised systems (classifications/taxonomies, concept/topic diagrams, ontologies) with alphabetically arranged, word-based ones (subject heading lists, thesaurus-like subject heading systems and thesauri proper), which gives the reader further help with orientation. The uses of thesauri for indexing (including as a source of metadata for electronic and Web documents) and for retrieval (query formulation, query expansion, browsing and navigation) are discussed, as are the problems that arise when natural-language indexing systems are used. Examples explicitly point to the more or less pronounced subject specialisation of most of these vocabularies, and sources of information about thesauri (e.g. www.taxonomywarehouse.com) as well as thesauri for non-textual resources are briefly touched upon.
  19. Henzler, R.G.: Information und Dokumentation : Sammeln, Speichern und Wiedergewinnen von Fachinformation in Datenbanken (1992) 0.01
    0.0059299017 = product of:
      0.04150931 = sum of:
        0.04150931 = weight(_text_:management in 4839) [ClassicSimilarity], result of:
          0.04150931 = score(doc=4839,freq=8.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.29792285 = fieldWeight in 4839, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.03125 = fieldNorm(doc=4839)
      0.14285715 = coord(1/7)
    
    Classification
    ST 271 Informatik / Monographien / Software und -entwicklung / Datenbanken, Datenbanksysteme, Data base management, Informationssysteme / Einzelne Datenbanksprachen und Datenbanksysteme
    ST 270 Informatik / Monographien / Software und -entwicklung / Datenbanken, Datenbanksysteme, Data base management, Informationssysteme
    RVK
    ST 271 Informatik / Monographien / Software und -entwicklung / Datenbanken, Datenbanksysteme, Data base management, Informationssysteme / Einzelne Datenbanksprachen und Datenbanksysteme
    ST 270 Informatik / Monographien / Software und -entwicklung / Datenbanken, Datenbanksysteme, Data base management, Informationssysteme
  20. Chan, L.M.; Mitchell, J.S.: Dewey Decimal Classification : principles and applications (2003) 0.01
    0.005600515 = product of:
      0.039203603 = sum of:
        0.039203603 = product of:
          0.078407206 = sum of:
            0.078407206 = weight(_text_:22 in 3247) [ClassicSimilarity], result of:
              0.078407206 = score(doc=3247,freq=2.0), product of:
                0.14475311 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041336425 = queryNorm
                0.5416616 = fieldWeight in 3247, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3247)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Object
    DDC-22
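
The indented breakdowns shown under each result above are Lucene "explain" traces for the ClassicSimilarity (TF-IDF) ranking model: each matched term contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = sqrt(termFreq) * idf * fieldNorm, partial sums are scaled by coord factors, and the product is the relevance score printed next to each title. As a minimal sketch (plain Python, no Lucene dependency; the constants are copied from the trace of result 1, doc 4589, and the helper names are my own), the first score can be reproduced like this:

```python
import math

# Factors copied from the explain trace of result 1 (doc 4589).
QUERY_NORM = 0.041336425

def term_weight(freq: float, idf: float, field_norm: float) -> float:
    """ClassicSimilarity term weight: queryWeight * fieldWeight."""
    query_weight = idf * QUERY_NORM                     # idf * queryNorm
    field_weight = math.sqrt(freq) * idf * field_norm   # tf * idf * fieldNorm
    return query_weight * field_weight

europe = term_weight(freq=2.0, idf=6.091085, field_norm=0.09375)   # ~0.20333329
term22 = term_weight(freq=2.0, idf=3.5018296, field_norm=0.09375)  # ~0.067206174

# The "22" clause carries a coord(1/2); the outer sum carries a coord(2/7).
score = (europe + term22 * 0.5) * (2 / 7)
print(f"{score:.8f}")  # ~0.06769611, the value shown for result 1
```

The same arithmetic applies to every other trace on this page; only the per-term idf, termFreq, fieldNorm and the coord fractions change.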

Languages

  • e 37
  • d 16

Types

  • m 48
  • a 4
  • s 3
  • el 1
