Search (126 results, page 1 of 7)

  • × theme_ss:"Grundlagen u. Einführungen: Allgemeine Literatur"
  1. Belew, R.K.: Finding out about : a cognitive perspective on search engine technology and the WWW (2001) 0.15
    0.15146695 = product of:
      0.24234712 = sum of:
        0.061760157 = weight(_text_:world in 3346) [ClassicSimilarity], result of:
          0.061760157 = score(doc=3346,freq=10.0), product of:
            0.16259687 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.042302497 = queryNorm
            0.37983608 = fieldWeight in 3346, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03125 = fieldNorm(doc=3346)
        0.08990064 = weight(_text_:wide in 3346) [ClassicSimilarity], result of:
          0.08990064 = score(doc=3346,freq=12.0), product of:
            0.18743214 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042302497 = queryNorm
            0.47964367 = fieldWeight in 3346, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=3346)
        0.056317843 = weight(_text_:web in 3346) [ClassicSimilarity], result of:
          0.056317843 = score(doc=3346,freq=16.0), product of:
            0.13805464 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042302497 = queryNorm
            0.4079388 = fieldWeight in 3346, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=3346)
        0.015243012 = weight(_text_:information in 3346) [ClassicSimilarity], result of:
          0.015243012 = score(doc=3346,freq=14.0), product of:
            0.0742611 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.042302497 = queryNorm
            0.20526241 = fieldWeight in 3346, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=3346)
        0.019125482 = product of:
          0.038250964 = sum of:
            0.038250964 = weight(_text_:retrieval in 3346) [ClassicSimilarity], result of:
              0.038250964 = score(doc=3346,freq=10.0), product of:
                0.12796146 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.042302497 = queryNorm
                0.29892567 = fieldWeight in 3346, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3346)
          0.5 = coord(1/2)
      0.625 = coord(5/8)
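    The expanded score above is Lucene's ClassicSimilarity (TF-IDF) explanation: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and the document score is the sum of these contributions multiplied by the coordination factor coord(matching clauses / total clauses). The short Python sketch below is not part of the database; it merely recomputes the figures shown for the term "world" in document 3346 to make the arithmetic explicit.

      import math

      # Values copied from the explain tree above for _text_:world in doc 3346.
      freq       = 10.0          # termFreq: occurrences of "world" in the field
      idf        = 3.8436708     # idf(docFreq=2573, maxDocs=44218) = 1 + ln(44218/2574)
      query_norm = 0.042302497   # queryNorm, shared by all terms of the query
      field_norm = 0.03125       # fieldNorm(doc=3346), encodes field length

      tf = math.sqrt(freq)                       # 3.1622777 = tf(freq=10.0)
      query_weight = idf * query_norm            # 0.16259687 = queryWeight
      field_weight = tf * idf * field_norm       # 0.37983608 = fieldWeight
      term_score = query_weight * field_weight   # 0.061760157 = weight(_text_:world ...)

      # The document score sums all matching term scores and applies coord.
      sum_of_terms = 0.24234712                  # the "sum of:" value above
      coord = 5 / 8                              # coord(5/8): 5 of 8 query clauses matched
      doc_score = sum_of_terms * coord           # 0.15146695, displayed as 0.15

      print(round(term_score, 9), round(doc_score, 8))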
    
    Abstract
    The World Wide Web is rapidly filling with more text than anyone could have imagined even a short time ago, but the task of isolating relevant parts of this vast store of information has become just that much more daunting. Richard Belew brings a cognitive perspective to the study of information retrieval as a discipline within computer science. He introduces the idea of Finding Out About (FOA) as the process of actively seeking out information relevant to a topic of interest and describes its many facets - ranging from creating a good characterization of what the user seeks, to what documents actually mean, to methods of inferring semantic clues about each document, to the problem of evaluating whether our search engines are performing as we have intended. Finding Out About explains how to build the tools that are useful for searching collections of text and other media. In the process it takes a close look at the properties of textual documents that do not become clear until very large collections of them are brought together and shows that the construction of effective search engines requires knowledge of the statistical and mathematical properties of linguistic phenomena, as well as an appreciation for the cognitive foundation we bring to the task as language users. The unique approach of this book is its even-handed treatment of the phenomena of both numbers and words, making it accessible to a wide audience. The textbook is usable in both undergraduate and graduate classes on information retrieval, library science, and computational linguistics. The text is accompanied by a CD-ROM that contains a hypertext version of the book, including additional topics and notes not present in the printed edition. In addition, the CD contains the full text of C.J. "Keith" van Rijsbergen's famous textbook, Information Retrieval (now out of print). Many active links from Belew's to van Rijsbergen's hypertexts help to unite the material. Several test corpora and indexing tools are provided, to support the design of your own search engine. Additional exercises using these corpora and code are available to instructors. Also supporting this book is a Web site that will include recent additions to the book, as well as links to sites of new topics and methods.
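    Belew's point that effective engines rest on the statistical properties of large text collections is easiest to see with the most basic tool of the trade, an inverted index. The sketch below is purely illustrative (it is not code from the book or its CD-ROM; the corpus and names are invented): it indexes a toy collection and ranks documents by how many query terms they contain.

      import re
      from collections import defaultdict

      def tokenize(text):
          # Crude tokenizer: lowercase and split on non-letters.
          return re.findall(r"[a-z]+", text.lower())

      # Toy corpus standing in for the test collections shipped with the book.
      docs = {
          1: "Finding out about information on the World Wide Web",
          2: "Cognitive foundations of search engine technology",
          3: "Statistical properties of large text collections",
      }

      # Inverted index: term -> set of document ids containing it.
      index = defaultdict(set)
      for doc_id, text in docs.items():
          for term in tokenize(text):
              index[term].add(doc_id)

      def search(query):
          # Rank documents by the number of query terms they contain.
          scores = defaultdict(int)
          for term in tokenize(query):
              for doc_id in index.get(term, set()):
                  scores[doc_id] += 1
          return sorted(scores.items(), key=lambda item: item[1], reverse=True)

      # Doc 1 matches "on", "the" and "web"; doc 2 matches "search".
      print(search("search engines on the web"))   # [(1, 3), (2, 1)]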
    LCSH
    World Wide Web / Computer programs
    Web search engines
    RSWK
    Suchmaschine / World Wide Web / Information Retrieval
    Subject
    Suchmaschine / World Wide Web / Information Retrieval
    World Wide Web / Computer programs
    Web search engines
  2. Stolpmann, M.: Internet & WWW für Studenten : WWW, FTP, E-Mail und andere Dienste (1997) 0.11
    0.11167841 = product of:
      0.2978091 = sum of:
        0.09765138 = weight(_text_:world in 3438) [ClassicSimilarity], result of:
          0.09765138 = score(doc=3438,freq=4.0), product of:
            0.16259687 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.042302497 = queryNorm
            0.60057354 = fieldWeight in 3438, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.078125 = fieldNorm(doc=3438)
        0.1297604 = weight(_text_:wide in 3438) [ClassicSimilarity], result of:
          0.1297604 = score(doc=3438,freq=4.0), product of:
            0.18743214 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042302497 = queryNorm
            0.69230604 = fieldWeight in 3438, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.078125 = fieldNorm(doc=3438)
        0.07039731 = weight(_text_:web in 3438) [ClassicSimilarity], result of:
          0.07039731 = score(doc=3438,freq=4.0), product of:
            0.13805464 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042302497 = queryNorm
            0.5099235 = fieldWeight in 3438, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.078125 = fieldNorm(doc=3438)
      0.375 = coord(3/8)
    
    RSWK
    World wide web / Studium / Ratgeber (213)
    Subject
    World wide web / Studium / Ratgeber (213)
  3. Stock, W.G.: Qualitätskriterien von Suchmaschinen : Checkliste für Retrievalsysteme (2000) 0.08
    0.07766569 = product of:
      0.15533137 = sum of:
        0.034524977 = weight(_text_:world in 5773) [ClassicSimilarity], result of:
          0.034524977 = score(doc=5773,freq=2.0), product of:
            0.16259687 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.042302497 = queryNorm
            0.21233483 = fieldWeight in 5773, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5773)
        0.045877226 = weight(_text_:wide in 5773) [ClassicSimilarity], result of:
          0.045877226 = score(doc=5773,freq=2.0), product of:
            0.18743214 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042302497 = queryNorm
            0.24476713 = fieldWeight in 5773, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5773)
        0.024889207 = weight(_text_:web in 5773) [ClassicSimilarity], result of:
          0.024889207 = score(doc=5773,freq=2.0), product of:
            0.13805464 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042302497 = queryNorm
            0.18028519 = fieldWeight in 5773, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5773)
        0.050039962 = sum of:
          0.02138294 = weight(_text_:retrieval in 5773) [ClassicSimilarity], result of:
            0.02138294 = score(doc=5773,freq=2.0), product of:
              0.12796146 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.042302497 = queryNorm
              0.16710453 = fieldWeight in 5773, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5773)
          0.028657023 = weight(_text_:22 in 5773) [ClassicSimilarity], result of:
            0.028657023 = score(doc=5773,freq=2.0), product of:
              0.14813614 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.042302497 = queryNorm
              0.19345059 = fieldWeight in 5773, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5773)
      0.5 = coord(4/8)
    
    Abstract
    Search engines on the World Wide Web are said to employ suboptimal methods and tools, especially in comparison with the retrieval software of commercial online archives. Elaborate command-oriented retrieval systems cannot be operated by lay users at all, and by professionals only if they work with them constantly. The search systems of some "independents", i.e. isolated information producers on the Internet, are marked by a minimalism reminiscent of the command repertoires of the early 1970s. Retrieval software in intranets, where it is used at all, relies almost exclusively on automatic methods of indexing and retrieval and thereby almost completely ignores documentary know-how. Search engines and retrieval systems - we will use the two terms synonymously - thus cause difficulties wherever they occur, and their quality is in doubt. But what does quality of search engines actually mean? What distinguishes a good retrieval system, and what is a bad one lacking? We want to develop a list of criteria that are essential for good searching (and finding!). The concern is therefore exclusively with the quantity and quality of the search options, not with further performance indicators such as speed or ergonomic user interfaces. Tacitly assumed, however, is a departure from purely command-oriented systems, i.e. we presuppose screen designs that present the commands in an intuitively comprehensible way. Our checklist contains only those options that are either already in use (in some system or other, and thus in part repeats familiar material) or whose technical feasibility has already been demonstrated in experimental settings. In this respect the list is a minimal requirement for retrieval systems and is certainly open to extension. The catalogue of criteria is organized into (1) the basic functions for searching individual records, (2) the informetric functions for characterizing certain result sets, and (3) the criteria for the power of automatic indexing and natural-language searching.
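    Criterion group (2) asks a system to characterize a whole result set rather than merely list single records. A minimal, purely illustrative sketch of such an informetric function (not taken from the article; the result set is invented) is a ranked frequency distribution over a field of the hits:

      from collections import Counter

      # Hypothetical result set: each hit carries a publication year and descriptors.
      hits = [
          {"year": 1998, "descriptors": ["Suchmaschine", "World Wide Web"]},
          {"year": 2000, "descriptors": ["Information Retrieval", "Suchmaschine"]},
          {"year": 2000, "descriptors": ["Suchmaschine", "Evaluation"]},
      ]

      # Characterize the set as a whole: distributions instead of single records.
      year_distribution = Counter(hit["year"] for hit in hits)
      descriptor_ranking = Counter(d for hit in hits for d in hit["descriptors"])

      print(year_distribution.most_common())    # [(2000, 2), (1998, 1)]
      print(descriptor_ranking.most_common(2))  # [('Suchmaschine', 3), ('World Wide Web', 1)]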
    Source
    Password. 2000, H.5, S.22-31
  4. Antoniou, G.; Harmelen, F. van: ¬A semantic Web primer (2004) 0.07
    0.07303903 = product of:
      0.14607807 = sum of:
        0.024412844 = weight(_text_:world in 468) [ClassicSimilarity], result of:
          0.024412844 = score(doc=468,freq=4.0), product of:
            0.16259687 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.042302497 = queryNorm
            0.15014338 = fieldWeight in 468, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.01953125 = fieldNorm(doc=468)
        0.039730847 = weight(_text_:wide in 468) [ClassicSimilarity], result of:
          0.039730847 = score(doc=468,freq=6.0), product of:
            0.18743214 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042302497 = queryNorm
            0.21197456 = fieldWeight in 468, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.01953125 = fieldNorm(doc=468)
        0.07569757 = weight(_text_:web in 468) [ClassicSimilarity], result of:
          0.07569757 = score(doc=468,freq=74.0), product of:
            0.13805464 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042302497 = queryNorm
            0.548316 = fieldWeight in 468, product of:
              8.602325 = tf(freq=74.0), with freq of:
                74.0 = termFreq=74.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=468)
        0.006236809 = weight(_text_:information in 468) [ClassicSimilarity], result of:
          0.006236809 = score(doc=468,freq=6.0), product of:
            0.0742611 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.042302497 = queryNorm
            0.083984874 = fieldWeight in 468, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.01953125 = fieldNorm(doc=468)
      0.5 = coord(4/8)
    
    Abstract
    The development of the Semantic Web, with machine-readable content, has the potential to revolutionise the World Wide Web and its use. A Semantic Web Primer provides an introduction and guide to this emerging field, describing its key ideas, languages and technologies. Suitable for use as a textbook or for self-study by professionals, it concentrates on undergraduate-level fundamental concepts and techniques that will enable readers to proceed with building applications on their own. It includes exercises, project descriptions and annotated references to relevant online materials. A Semantic Web Primer is the only available book on the Semantic Web to include a systematic treatment of the different languages (XML, RDF, OWL and rules) and technologies (explicit metadata, ontologies, logic and inference) that are central to Semantic Web development. The book also examines such crucial related topics as ontology engineering and application scenarios. After an introductory chapter, topics covered in succeeding chapters include XML and related technologies that support semantic interoperability; RDF and RDF Schema, the standard data model for machine-processable semantics; OWL, the W3C-approved standard for a Web ontology language more extensive than RDF Schema; rules, both monotonic and nonmonotonic, in the framework of the Semantic Web; selected application domains and how the Semantic Web would benefit them; the development of ontology-based systems; and current debates on key issues and predictions for the future.
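    The abstract's technical core is the layering of RDF (the data model) and RDF Schema/OWL (the vocabulary languages) over XML syntax. As a small illustration only, using the third-party Python package rdflib and class names invented for the example rather than anything from the book, an RDFS typed hierarchy and a statement built on it look like this:

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, RDFS

      EX = Namespace("http://example.org/terms#")

      g = Graph()
      g.bind("ex", EX)

      # A tiny RDFS vocabulary: Textbook is declared a subclass of Book.
      g.add((EX.Book, RDF.type, RDFS.Class))
      g.add((EX.Textbook, RDFS.subClassOf, EX.Book))

      # RDF statements (triples) that use the vocabulary.
      g.add((EX.primer, RDF.type, EX.Textbook))
      g.add((EX.primer, RDFS.label, Literal("A Semantic Web Primer")))

      # Serialize the machine-processable metadata as Turtle.
      print(g.serialize(format="turtle"))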
    Footnote
    Rez. in: JASIST 57(2006) no.8, S.1132-1133 (H. Che): "The World Wide Web has been the main source of an important shift in the way people communicate with each other, get information, and conduct business. However, most of the current Web content is only suitable for human consumption. The main obstacle to providing better quality of service is that the meaning of Web content is not machine-accessible. The "Semantic Web" is envisioned by Tim Berners-Lee as a logical extension to the current Web that enables explicit representations of term meaning. It aims to bring the Web to its full potential via the exploration of these machine-processable metadata. To fulfill this, it provides some metalanguages like RDF, OWL, DAML+OIL, and SHOE for expressing knowledge that has clear, unambiguous meanings. The first steps in weaving the Semantic Web into the current Web are successfully underway. In the forthcoming years, these efforts still remain highly focused in the research and development community. In the next phase, the Semantic Web will respond more intelligently to user queries. The first chapter gets started with an excellent introduction to the Semantic Web vision. At first, today's Web is introduced, and problems with some current applications like search engines are also covered. Subsequently, knowledge management, business-to-consumer electronic commerce, business-to-business electronic commerce, and personal agents are used as examples to show the potential requirements for the Semantic Web. Next comes the brief description of the underpinning technologies, including metadata, ontology, logic, and agent. The differences between the Semantic Web and Artificial Intelligence are also discussed in a later subsection. In section 1.4, the famous "layer-cake" diagram is given to show a layered view of the Semantic Web. From chapter 2, the book starts addressing some of the most important technologies for constructing the Semantic Web. In chapter 2, the authors discuss XML and its related technologies such as namespaces, XPath, and XSLT. XML is a simple, very flexible text format which is often used for the exchange of a wide variety of data on the Web and elsewhere. The W3C has defined various languages on top of XML, such as RDF. Although this chapter is very well planned and written, many details are not included because of the extensiveness of the XML technologies. Many other books on XML provide more comprehensive coverage.
    The next chapter introduces the resource description framework (RDF) and RDF schema (RDFS). Unlike XML, RDF provides a foundation for expressing the semantics of data: it is a standard data model for machine-processable semantics. Resource description framework schema offers a number of modeling primitives for organizing RDF vocabularies in typed hierarchies. In addition to RDF and RDFS, a query language for RDF, i.e. RQL, is introduced. This chapter and the next chapter are two of the most important chapters in the book. Chapter 4 presents another language called Web Ontology Language (OWL). Because RDFS is quite primitive as a modeling language for the Web, more powerful languages are needed. A richer language, DAML+OIL, is thus proposed as a joint endeavor of the United States and Europe. OWL takes DAML+OIL as the starting point, and aims to be the standardized and broadly accepted ontology language. At the beginning of the chapter, the nontrivial relation with RDF/RDFS is discussed. Then the authors describe the various language elements of OWL in some detail. Moreover, Appendix A contains an abstract OWL syntax, which compresses OWL and makes OWL much easier to read. Chapter 5 covers both monotonic and nonmonotonic rules. Whereas the previous chapters mainly concentrate on specializations of knowledge representation, this chapter depicts the foundation of knowledge representation and inference. Two examples are also given to explain monotonic and non-monotonic rules, respectively. To get the most out of the chapter, readers had better gain a thorough understanding of predicate logic first. Chapter 6 presents several realistic application scenarios to which the Semantic Web technology can be applied, including horizontal information products at Elsevier, data integration at Audi, skill finding at Swiss Life, a think tank portal at EnerSearch, e-learning, Web services, multimedia collection indexing, online procurement, and device interoperability. These case studies give us some real feelings about the Semantic Web.
    The chapter on ontology engineering describes the development of ontology-based systems for the Web using manual and semiautomatic methods. Ontology is a concept similar to taxonomy. As stated in the introduction, ontology engineering deals with some of the methodological issues that arise when building ontologies, in particular, constructing ontologies manually, reusing existing ontologies, and using semiautomatic methods. A medium-scale project is included at the end of the chapter. Overall the book is a nice introduction to the key components of the Semantic Web. The reading is quite pleasant, in part due to the concise layout that allows just enough content per page to facilitate readers' comprehension. Furthermore, the book provides a large number of examples, code snippets, exercises, and annotated online materials. Thus, it is very suitable for use as a textbook for undergraduates and lower-level graduates, as the authors say in the preface. However, I believe that not only students but also professionals in both academia and industry will benefit from the book. The authors also built an accompanying Web site for the book at http://www.semanticwebprimer.org. On the main page, there are eight tabs for each of the eight chapters. For each tab, the following sections are included: overview, example, presentations, problems and quizzes, errata, and links. These contents will greatly facilitate readers: for example, readers can open the listed links to further their readings. The vacancy of the errata sections also proves the quality of the book."
    LCSH
    Semantic Web
    Series
    Cooperative information systems
    Subject
    Semantic Web
    Theme
    Semantic Web
  5. Schwartz, C.: Sorting out the Web : approaches to subject access (2001) 0.05
    0.051790033 = product of:
      0.103580065 = sum of:
        0.034524977 = weight(_text_:world in 2050) [ClassicSimilarity], result of:
          0.034524977 = score(doc=2050,freq=8.0), product of:
            0.16259687 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.042302497 = queryNorm
            0.21233483 = fieldWeight in 2050, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2050)
        0.051310413 = weight(_text_:web in 2050) [ClassicSimilarity], result of:
          0.051310413 = score(doc=2050,freq=34.0), product of:
            0.13805464 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042302497 = queryNorm
            0.37166741 = fieldWeight in 2050, product of:
              5.8309517 = tf(freq=34.0), with freq of:
                34.0 = termFreq=34.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2050)
        0.010184665 = weight(_text_:information in 2050) [ClassicSimilarity], result of:
          0.010184665 = score(doc=2050,freq=16.0), product of:
            0.0742611 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.042302497 = queryNorm
            0.13714671 = fieldWeight in 2050, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2050)
        0.00756001 = product of:
          0.01512002 = sum of:
            0.01512002 = weight(_text_:retrieval in 2050) [ClassicSimilarity], result of:
              0.01512002 = score(doc=2050,freq=4.0), product of:
                0.12796146 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.042302497 = queryNorm
                0.11816074 = fieldWeight in 2050, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=2050)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Footnote
    Rez. in: KO 50(2003) no.1, S.45-46 (L.M. Given): "In her own preface to this work, the author notes her lifelong fascination with classification and order, as well as her more recent captivation with the Internet - a place of "chaos in need of organization" (xi). Sorting out the Web examines current efforts to organize the Web and is well-informed by the author's academic and professional expertise in information organization, information retrieval, and Web development. Although the book's level and tone are particularly relevant to a student audience (or others interested in Web-based subject access at an introductory level), it will also appeal to information professionals developing subject access systems across a range of information contexts. There are six chapters in the book, each describing and analyzing one core concept related to the organization of Web content. All topics are presented in a manner ideal for newcomers to the area, with clear definitions, examples, and visuals that illustrate the principles under discussion. The first chapter provides a brief introduction to developments in information technology, including an historical overview of information services, users' needs, and libraries' responses to the Internet. Chapter two introduces metadata, including core concepts and metadata formats. Throughout this chapter the author presents a number of figures that aptly illustrate the application of metadata in HTML, SGML, and MARC record environments, and the use of metadata tools (e.g., XML, RDF). Chapter three begins with an overview of classification theory and specific schemes, but the author devotes most of the discussion to the application of classification systems in the Web environment (e.g., Dewey, LCC, UDC). Web screen captures illustrate the use of these schemes for information sources posted to sites around the world. The chapter closes with a discussion of the future of classification; this is a particularly useful section as the author presents a listing of core journal and conference venues where new approaches to Web classification are explored. In chapter four, the author extends the discussion of classification to the use of controlled vocabularies. As in the first few chapters, the author first presents core background material, including reasons to use controlled vocabularies and the differences between pre- and post-coordinate indexing, and then discusses the application of specific vocabularies in the Web environment (e.g., Infomine's use of LCSH). The final section of the chapter explores failure in subject searching and the limitations of controlled vocabularies for the Web. Chapter five discusses one of the most common and fast-growing topics related to subject access on the Web: search engines. The author presents a clear definition of the term that encompasses classified search lists (e.g., Yahoo) and query-based engines (e.g., Alta Vista). In addition to historical background on the development of search engines, Schwartz also examines search service types, features, results, and system performance.
    The chapter concludes with an appendix of search tips that even seasoned searchers will appreciate; these tips cover the complete search process, from preparation to the examination of results. Chapter six is appropriately entitled "Around the Corner," as it provides the reader with a glimpse of the future of subject access for the Web. Text mining, visualization, machine-aided indexing, and other topics are raised here to whet the reader's appetite for what is yet to come. As the author herself notes in these final pages, librarians will likely increase the depth of their collaboration with software engineers, knowledge managers and others outside of the traditional library community, and thereby push the boundaries of subject access for the digital world. This final chapter leaves this reviewer wanting a second volume of the book, one that might explore these additional topics, as they evolve over the coming years. One characteristic of any book that addresses trends related to the Internet is how quickly the text becomes dated. However, as the author herself asserts, there are core principles related to subject analysis that stand the test of time, leaving the reader with a text that may be generalized well beyond the publication date. In this, Schwartz's text is similar to other recent publications (e.g., Jakob Nielsen's Web Usability, also published in 2001) that acknowledge the mutability of the Web, and therefore discuss core principles and issues that may be applied as the medium itself evolves. This approach to the writing makes this a useful book for those teaching in the areas of subject analysis, information retrieval and Web development for possible consideration as a course text. Although the websites used here may need to be supplemented with more current examples in the classroom, the core content of the book will be relevant for many years to come. Although one might expect that any book taking subject access as its focus would, itself, be easy to navigate, this is not always the case. In this text, however, readers will be pleased to find that no small detail in content access has been spared. The subject index is thorough and well-crafted, and the inclusion of an exhaustive author index is particularly useful for quick reference. In addition, the table of contents includes sub-themes for each chapter, and a complete table of figures is provided. While the use of colour figures would greatly enhance the text, all black-and-white images are clear and sharp, a notable fact given that most of the figures are screen captures of websites or database entries. In addition, the inclusion of comprehensive reference lists at the close of each chapter makes this a highly readable text for students and instructors alike; each section of the book can stand as its own "expert review" of the topic at hand. In both content and structure this text is highly recommended. It certainly meets its intended goal of providing a timely introduction to the methods and problems of subject access in the Web environment, and does so in a way that is readable, interesting and engaging."
  6. Bekavac, B.: Suchverfahren und Suchdienste des World Wide Web (1996) 0.05
    0.047381133 = product of:
      0.12634969 = sum of:
        0.04142997 = weight(_text_:world in 4803) [ClassicSimilarity], result of:
          0.04142997 = score(doc=4803,freq=2.0), product of:
            0.16259687 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.042302497 = queryNorm
            0.25480178 = fieldWeight in 4803, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.046875 = fieldNorm(doc=4803)
        0.05505267 = weight(_text_:wide in 4803) [ClassicSimilarity], result of:
          0.05505267 = score(doc=4803,freq=2.0), product of:
            0.18743214 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042302497 = queryNorm
            0.29372054 = fieldWeight in 4803, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=4803)
        0.029867046 = weight(_text_:web in 4803) [ClassicSimilarity], result of:
          0.029867046 = score(doc=4803,freq=2.0), product of:
            0.13805464 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042302497 = queryNorm
            0.21634221 = fieldWeight in 4803, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4803)
      0.375 = coord(3/8)
    
  7. Anderson, R.; Birbeck, M.; Kay, M.; Livingstone, S.; Loesgen, B.; Martin, D.; Mohr, S.; Ozu, N.; Peat, B.; Pinnock, J.; Stark, P.; Williams, K.: XML professionell : behandelt W3C DOM, SAX, CSS, XSLT, DTDs, XML Schemas, XLink, XPointer, XPath, E-Commerce, BizTalk, B2B, SOAP, WAP, WML (2000) 0.05
    0.04670897 = product of:
      0.09341794 = sum of:
        0.020714985 = weight(_text_:world in 729) [ClassicSimilarity], result of:
          0.020714985 = score(doc=729,freq=2.0), product of:
            0.16259687 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.042302497 = queryNorm
            0.12740089 = fieldWeight in 729, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0234375 = fieldNorm(doc=729)
        0.027526336 = weight(_text_:wide in 729) [ClassicSimilarity], result of:
          0.027526336 = score(doc=729,freq=2.0), product of:
            0.18743214 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042302497 = queryNorm
            0.14686027 = fieldWeight in 729, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0234375 = fieldNorm(doc=729)
        0.036579516 = weight(_text_:web in 729) [ClassicSimilarity], result of:
          0.036579516 = score(doc=729,freq=12.0), product of:
            0.13805464 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042302497 = queryNorm
            0.26496404 = fieldWeight in 729, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0234375 = fieldNorm(doc=729)
        0.008597107 = product of:
          0.017194213 = sum of:
            0.017194213 = weight(_text_:22 in 729) [ClassicSimilarity], result of:
              0.017194213 = score(doc=729,freq=2.0), product of:
                0.14813614 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042302497 = queryNorm
                0.116070345 = fieldWeight in 729, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=729)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    This book aims to explain and demonstrate the fundamental techniques for creating, applying and, not least, presenting XML documents. Its foremost task, however, is to present the foundations of XML as laid down by the World Wide Web Consortium (W3C). The W3C not only initiated the development of XML and is the body responsible for all XML standards; XML specifications continue to be developed by the W3C. Even though more and more proposals for new XML-based techniques come from the wider community of those interested in XML, the W3C still plays the central and most important role in the development of XML. The focus of this book is on learning how to use XML as a load-bearing technology in real, everyday applications. We want to introduce good design techniques and demonstrate how XML-enabled applications are linked with applications for the WWW or with database systems. We want to explore the limits and possibilities of XML and offer a preview of some "nascent" technologies. Whether your requirements are oriented more towards the exchange of data or towards visual presentation, this book covers all the relevant techniques. Each chapter contains a worked application example. Since XML is a platform-neutral technology, the examples cover a broad range of languages, parsers and servers. Each of the techniques and methods presented is relevant on all platforms and operating systems, so the examples provide important insights even if the specific implementation was not carried out on your preferred system.
    This book is aimed at anyone who wants to develop applications based on XML. Website designers can learn new techniques for lifting their sites to a new technical level. Developers of more complex software systems and programmers can learn how XML fits into their systems and how it can help to integrate applications. XML applications are distributed by nature and generally Web-oriented. This book does not cover distributed systems or the development of Web applications as such, so you do not need deep knowledge in these areas; a general understanding of distributed architectures and of how the Web works will be quite sufficient. The examples in this book use a range of programming languages and technologies. An important part of XML's appeal is its platform independence and neutrality towards programming languages. If you have already developed Web applications, the chances are good that you will find some examples in your preferred language. Do not be discouraged if you do not find an example specifically for your system. Tools for working with XML exist for Perl, C++, Java, JavaScript and any COM-enabled language. Internet Explorer (from version 5.0) already has some built-in facilities for processing XML documents, and the Mozilla browser (the open-source successor to Netscape Navigator) is acquiring similar capabilities. XML tools are also appearing increasingly in large relational database systems, as well as on Web and application servers. If your system is not covered in this book, learn the fundamentals and familiarize yourself with the techniques presented in the examples.
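    Because the book is deliberately platform- and language-neutral, any XML-aware environment can stand in for its examples. The sketch below is not one of the book's own examples; it uses Python's standard library and an invented order document merely to show the basic parsing, namespace handling and attribute access that the chapters build on.

      import xml.etree.ElementTree as ET

      # A hypothetical order document of the B2B flavour discussed in the book.
      xml_doc = """
      <order xmlns:b="http://example.org/orders">
        <b:item sku="4711" qty="2">XML professionell</b:item>
        <b:item sku="0815" qty="1">A Semantic Web Primer</b:item>
      </order>
      """

      root = ET.fromstring(xml_doc)
      ns = {"b": "http://example.org/orders"}

      # Namespace-aware lookup of all item elements, then attribute and text access.
      for item in root.findall("b:item", ns):
          print(item.get("sku"), item.get("qty"), item.text)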
    Date
    22. 6.2005 15:12:11
  8. Broughton, V.: Essential classification (2004) 0.04
    0.04158792 = product of:
      0.06654067 = sum of:
        0.023919607 = weight(_text_:world in 2824) [ClassicSimilarity], result of:
          0.023919607 = score(doc=2824,freq=6.0), product of:
            0.16259687 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.042302497 = queryNorm
            0.14710988 = fieldWeight in 2824, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.015625 = fieldNorm(doc=2824)
        0.01835089 = weight(_text_:wide in 2824) [ClassicSimilarity], result of:
          0.01835089 = score(doc=2824,freq=2.0), product of:
            0.18743214 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042302497 = queryNorm
            0.09790685 = fieldWeight in 2824, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.015625 = fieldNorm(doc=2824)
        0.009955682 = weight(_text_:web in 2824) [ClassicSimilarity], result of:
          0.009955682 = score(doc=2824,freq=2.0), product of:
            0.13805464 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042302497 = queryNorm
            0.07211407 = fieldWeight in 2824, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.015625 = fieldNorm(doc=2824)
        0.005761317 = weight(_text_:information in 2824) [ClassicSimilarity], result of:
          0.005761317 = score(doc=2824,freq=8.0), product of:
            0.0742611 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.042302497 = queryNorm
            0.0775819 = fieldWeight in 2824, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.015625 = fieldNorm(doc=2824)
        0.008553175 = product of:
          0.01710635 = sum of:
            0.01710635 = weight(_text_:retrieval in 2824) [ClassicSimilarity], result of:
              0.01710635 = score(doc=2824,freq=8.0), product of:
                0.12796146 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.042302497 = queryNorm
                0.13368362 = fieldWeight in 2824, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2824)
          0.5 = coord(1/2)
      0.625 = coord(5/8)
    
    Abstract
    Classification is a crucial skill for all information workers involved in organizing collections, but it is a difficult concept to grasp - and is even more difficult to put into practice. Essential Classification offers full guidance on how to go about classifying a document from scratch. This much-needed text leads the novice classifier step by step through the basics of subject cataloguing, with an emphasis on practical document analysis and classification. It deals with fundamental questions of the purpose of classification in different situations, and the needs and expectations of end users. The novice is introduced to the ways in which document content can be assessed, and how this can best be expressed for translation into the language of specific indexing and classification systems. The characteristics of the major general schemes of classification are discussed, together with their suitability for different classification needs.
    Footnote
    Rez. in: KO 32(2005) no.1, S.47-49 (M. Hudon): "Vanda Broughton's Essential Classification is the most recent addition to a very small set of classification textbooks published over the past few years. The book's 21 chapters are based very closely on the cataloguing and classification module at the School of Library, Archive, and Information Studies at University College, London. The author's main objective is clear: this is "first and foremost a book about how to classify. The emphasis throughout is on the activity of classification rather than the theory, the practical problems of the organization of collections, and the needs of the users" (p. 1). This is not a theoretical work, but a basic course in classification and classification scheme application. For this reviewer, who also teaches "Classification 101," this is also a fascinating peek into how a colleague organizes content and structures her course. "Classification is everywhere" (p. 1): the first sentence of this book is also one of the first statements in my own course, and Professor Broughton's metaphors - the supermarket, canned peas, flowers, etc. - are those that are used by our colleagues around the world. The combination of tone, writing style and content display is reader-friendly; it is in fact what makes this book remarkable and what distinguishes it from more "formal" textbooks, such as The Organization of Information, the superb text written and recently updated (2004) by Professor Arlene Taylor (2nd ed. Westport, Conn.: Libraries Unlimited, 2004). Reading Essential Classification, at times, feels like being in a classroom, facing a teacher who assures you that "you don't need to worry about this at this stage" (p. 104), and reassures you that, although you now spend a long time looking for things, "you will soon speed up when you get to know the scheme better" (p. 137). This teacher uses redundancy in a productive fashion, and she is not afraid to express her own opinions ("I think that if these concepts are helpful they may be used" (p. 245); "It's annoying that LCC doesn't provide clearer instructions, but if you keep your head and take them one step at a time [i.e. the tables] they're fairly straightforward" (p. 174)). Chapters 1 to 7 present the essential theoretical concepts relating to knowledge organization and to bibliographic classification. The author is adept at making and explaining distinctions: known-item retrieval versus subject retrieval, personal versus public/shared/official classification systems, scientific versus folk classification systems, object versus aspect classification systems, semantic versus syntactic relationships, and so on. Chapters 8 and 9 discuss the practice of classification, through content analysis and subject description. A short discussion of difficult subjects, namely the treatment of unique concepts (persons, places, etc.) as subjects, seems a little advanced for a beginners' class.
    In Chapter 10, "Controlled indexing languages," Professor Broughton states that a classification scheme is truly a language "since it permits communication and the exchange of information" (p. 89), a statement with which this reviewer wholly agrees. Chapter 11, however, "Word-based approaches to retrieval," moves us to a different field altogether, offering only a narrow view of the whole world of controlled indexing languages such as thesauri, and presenting disconnected discussions of alphabetical filing, form and structure of subject headings, modern developments in alphabetical subject indexing, etc. Chapters 12 and 13 focus on the Library of Congress Subject Headings (LCSH), without even a passing reference to existing subject headings lists in other languages (French RAMEAU, German SWK, etc.). If it is not surprising to see a section on subject headings in a book on classification, the two subjects being taught together in most library schools, the location of this section in the middle of this particular book is more difficult to understand. Chapter 14 brings the reader back to classification, for a discussion of the essentials of classification scheme application. The following five chapters present in turn each one of the three major and currently used bibliographic classification schemes, in order of increasing complexity and difficulty of application. The Library of Congress Classification (LCC), the easiest to use, is covered in chapters 15 and 16. The Dewey Decimal Classification (DDC) deserves only a one-chapter treatment (Chapter 17), while the functionalities of the Universal Decimal Classification (UDC), which Professor Broughton knows extremely well, are described in chapters 18 and 19. Chapter 20 is a general discussion of faceted classification, on par with the first seven chapters for its theoretical content. Chapter 21, an interesting last chapter on managing classification, addresses down-to-earth matters such as the cost of classification, the need for re-classification, advantages and disadvantages of using print versions or e-versions of classification schemes, choice of classification scheme, general versus special scheme. But although the questions are interesting, the chapter provides only a very general overview of what appropriate answers might be. To facilitate reading and learning, summaries are strategically located at various places in the text, and always before switching to a related subject. Professor Broughton's choice of examples is always interesting, and sometimes even entertaining (see for example "Inside out: A brief history of underwear" (p. 71)). With many examples, however, and particularly those that appear in the five chapters on classification scheme applications, the novice reader would have benefited from more detailed explanations. On page 221, for example, "The history and social influence of the potato" results in this analysis of concepts: Potato - Sociology, and in the UDC class number: 635.21:316. What happened to the "history" aspect? Some examples are not very convincing: in Animals RT Reproduction and Art RT Reproduction (p. 102), the associative relationship is not appropriate as it is used to distinguish homographs and would do nothing to help either the indexer or the user at the retrieval stage.
    Essential Classification is also an exercise book. Indeed, it contains a number of practical exercises and activities in every chapter, along with suggested answers. Unfortunately, the answers are too often provided without the justifications and explanations that students would no doubt demand. The author has taken great care to explain all technical terms in her text, but formal definitions are also gathered in an extensive 172-term glossary; appropriately, these terms appear in bold type the first time they are used in the text. A short, very short, annotated bibliography of standard classification textbooks and of manuals for the use of major classification schemes is provided. A detailed 11-page index completes the set of learning aids which will be useful to an audience of students in their effort to grasp the basic concepts of the theory and the practice of document classification in a traditional environment. Essential Classification is a fine textbook. However, this reviewer deplores the fact that it presents only a very "traditional" view of classification, without much reference to newer environments such as the Internet, where classification also manifests itself in various forms. In Essential Classification, books are always used as examples, and we have to take the author's word that traditional classification practices and tools can also be applied to other types of documents and elsewhere than in the traditional library. Vanda Broughton writes, for example, that "Subject headings can't be used for physical arrangement" (p. 101), but this is not entirely true. Subject headings can be used for physical arrangement of vertical files, for example, with each folder bearing a simple or complex heading which is then used for internal organization. And if it is true that subject headings cannot be reproduced on the spine of [physical] books (p. 93), the situation is certainly different on the World Wide Web, where subject headings as metadata can be most useful in ordering a collection of hot links. The emphasis is also on the traditional paper-based, rather than on the electronic, version of classification schemes, with excellent justifications of course. The reality is, however, that supporting organizations (LC, OCLC, etc.) are now providing great quality services online, and that updates are now available only in electronic format and no longer on paper. E-based versions of classification schemes could be safely ignored in a theoretical text, but they have to be described and explained in a textbook published in 2005. One last comment: Professor Broughton tends to use the same term, "classification", to represent the process (as in classification is grouping) and the tool (as in constructing a classification, using a classification, etc.). Even in the Glossary, where classification is first well-defined as a process, and classification scheme as "a set of classes ...", the definition of classification scheme continues: "the classification consists of a vocabulary (...) and syntax..." (p. 296-297). Such an ambiguous use of the term classification seems unfortunate and unnecessarily confusing in an otherwise very good basic textbook on categorization of concepts and subjects, document organization and subject representation."
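    The reviewer's potato example turns on UDC's convention of joining the notations for separate facets with a colon. The toy sketch below is purely illustrative: only the two notations quoted in the review (635.21 for the potato, 316 for sociology) are taken from the text; the rest of the lookup table and the helper function are invented.

      # Hypothetical concept-to-notation table; only 635.21 and 316 come from the review.
      udc = {
          "Potato": "635.21",
          "Sociology": "316",
          "History": "94",
      }

      def udc_compound(*concepts):
          # Join the facet notations with UDC's relation sign ":".
          return ":".join(udc[c] for c in concepts)

      # "The history and social influence of the potato" analysed as Potato - Sociology:
      print(udc_compound("Potato", "Sociology"))             # 635.21:316
      # Adding the neglected "history" aspect the reviewer asks about:
      print(udc_compound("Potato", "Sociology", "History"))  # 635.21:316:94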
  9. Chowdhury, G.G.: Introduction to modern information retrieval (1999) 0.04
    0.040083565 = product of:
      0.16033426 = sum of:
        0.029936682 = weight(_text_:information in 4902) [ClassicSimilarity], result of:
          0.029936682 = score(doc=4902,freq=24.0), product of:
            0.0742611 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.042302497 = queryNorm
            0.40312737 = fieldWeight in 4902, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4902)
        0.13039757 = sum of:
          0.09600915 = weight(_text_:retrieval in 4902) [ClassicSimilarity], result of:
            0.09600915 = score(doc=4902,freq=28.0), product of:
              0.12796146 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.042302497 = queryNorm
              0.7502974 = fieldWeight in 4902, product of:
                5.2915025 = tf(freq=28.0), with freq of:
                  28.0 = termFreq=28.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.046875 = fieldNorm(doc=4902)
          0.034388427 = weight(_text_:22 in 4902) [ClassicSimilarity], result of:
            0.034388427 = score(doc=4902,freq=2.0), product of:
              0.14813614 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.042302497 = queryNorm
              0.23214069 = fieldWeight in 4902, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4902)
      0.25 = coord(2/8)
    
    Content
    Contains the chapters: 1. Basic concepts of information retrieval systems, 2. Database technology, 3. Bibliographic formats, 4. Subject analysis and representation, 5. Automatic indexing and file organization, 6. Vocabulary control, 7. Abstracts and abstracting, 8. Searching and retrieval, 9. Users of information retrieval, 10. Evaluation of information retrieval systems, 11. Evaluation experiments, 12. Online information retrieval, 13. CD-ROM information retrieval, 14. Trends in CD-ROM and online information retrieval, 15. Multimedia information retrieval, 16. Hypertext and hypermedia systems, 17. Intelligent information retrieval, 18. Natural language processing and information retrieval, 19. Natural language interfaces, 20. Natural language text processing and retrieval systems, 21. Problems and prospects of natural language processing systems, 22. The Internet and information retrieval, 23. Trends in information retrieval.
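    Chapters 10 and 11 cover evaluation, whose standard set-based measures are precision and recall over retrieved versus relevant documents. A minimal sketch of those two measures (illustrative only, not code from the book):

      def precision_recall(retrieved, relevant):
          # Classic set-based evaluation: overlap of retrieved and relevant documents.
          retrieved, relevant = set(retrieved), set(relevant)
          hits = retrieved & relevant
          precision = len(hits) / len(retrieved) if retrieved else 0.0
          recall = len(hits) / len(relevant) if relevant else 0.0
          return precision, recall

      # A query retrieves four documents, three of which are among the five relevant ones.
      p, r = precision_recall(retrieved=[1, 2, 3, 4], relevant=[2, 3, 4, 8, 9])
      print(p, r)  # 0.75 0.6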
  10. Poetzsch, E.: Information Retrieval : Einführung in Grundlagen und Methoden (1998) 0.04
    0.038611393 = product of:
      0.102963716 = sum of:
        0.03484489 = weight(_text_:web in 239) [ClassicSimilarity], result of:
          0.03484489 = score(doc=239,freq=2.0), product of:
            0.13805464 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042302497 = queryNorm
            0.25239927 = fieldWeight in 239, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=239)
        0.028517064 = weight(_text_:information in 239) [ClassicSimilarity], result of:
          0.028517064 = score(doc=239,freq=16.0), product of:
            0.0742611 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.042302497 = queryNorm
            0.3840108 = fieldWeight in 239, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=239)
        0.039601758 = product of:
          0.079203516 = sum of:
            0.079203516 = weight(_text_:retrieval in 239) [ClassicSimilarity], result of:
              0.079203516 = score(doc=239,freq=14.0), product of:
                0.12796146 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.042302497 = queryNorm
                0.61896384 = fieldWeight in 239, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=239)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Content
    Part 1, foundations of information retrieval: key aspects of information retrieval relevant to carrying out searches in practice: the steps of a search, prerequisites for online searching, an overview of database types and of hosts, user aids, software tools, retrieval languages, and costs; Part 2, methods of information retrieval: an introduction to the methods of information retrieval using selected examples of retrieval languages, Windows-based retrieval tools, and Web search options via host-specific search interfaces
    RSWK
    Information retrieval / Einführung
    Series
    Materialien zur Information und Dokumentation; Bd.5
    Subject
    Information retrieval / Einführung
  11. Lancaster, F.W.: Vocabulary control for information retrieval (1986) 0.04
    0.037644435 = product of:
      0.15057774 = sum of:
        0.028224573 = weight(_text_:information in 217) [ClassicSimilarity], result of:
          0.028224573 = score(doc=217,freq=12.0), product of:
            0.0742611 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.042302497 = queryNorm
            0.38007212 = fieldWeight in 217, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=217)
        0.12235317 = sum of:
          0.07650193 = weight(_text_:retrieval in 217) [ClassicSimilarity], result of:
            0.07650193 = score(doc=217,freq=10.0), product of:
              0.12796146 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.042302497 = queryNorm
              0.59785134 = fieldWeight in 217, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.0625 = fieldNorm(doc=217)
          0.045851234 = weight(_text_:22 in 217) [ClassicSimilarity], result of:
            0.045851234 = score(doc=217,freq=2.0), product of:
              0.14813614 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.042302497 = queryNorm
              0.30952093 = fieldWeight in 217, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=217)
      0.25 = coord(2/8)
    
    Date
    22. 4.2007 10:07:51
    Imprint
    Arlington, VA : Information Resources Pr.
    LCSH
    Information retrieval
    RSWK
    Information Retrieval / Terminologische Kontrolle
    Subject
    Information Retrieval / Terminologische Kontrolle
    Information retrieval
  12. Poetzsch, E.: Information Retrieval : Einführung in Grundlagen und Methoden (2001) 0.04
    0.03588175 = product of:
      0.09568466 = sum of:
        0.029867046 = weight(_text_:web in 1655) [ClassicSimilarity], result of:
          0.029867046 = score(doc=1655,freq=2.0), product of:
            0.13805464 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042302497 = queryNorm
            0.21634221 = fieldWeight in 1655, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1655)
        0.027328325 = weight(_text_:information in 1655) [ClassicSimilarity], result of:
          0.027328325 = score(doc=1655,freq=20.0), product of:
            0.0742611 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.042302497 = queryNorm
            0.36800325 = fieldWeight in 1655, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1655)
        0.03848929 = product of:
          0.07697858 = sum of:
            0.07697858 = weight(_text_:retrieval in 1655) [ClassicSimilarity], result of:
              0.07697858 = score(doc=1655,freq=18.0), product of:
                0.12796146 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.042302497 = queryNorm
                0.60157627 = fieldWeight in 1655, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1655)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Content
    Part 1, foundations of information retrieval: key aspects of information retrieval relevant to carrying out searches in practice: the steps of a search, prerequisites for online searching, an overview of database types and of hosts, user aids, software tools, retrieval languages, and costs; Part 2, methods of information retrieval: an introduction to the methods of information retrieval using selected examples of retrieval languages, Windows-based retrieval tools, and Web search options via host-specific search interfaces
    LCSH
    Information Retrieval / Einführung (SBPK)
    RSWK
    Information Retrieval
    Series
    Materialien zur Information und Dokumentation; Bd.5
    Subject
    Information Retrieval
    Information Retrieval / Einführung (SBPK)
  13. Understanding metadata (2004) 0.04
    0.03542289 = product of:
      0.094461046 = sum of:
        0.055239964 = weight(_text_:world in 2686) [ClassicSimilarity], result of:
          0.055239964 = score(doc=2686,freq=2.0), product of:
            0.16259687 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.042302497 = queryNorm
            0.33973572 = fieldWeight in 2686, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0625 = fieldNorm(doc=2686)
        0.016295465 = weight(_text_:information in 2686) [ClassicSimilarity], result of:
          0.016295465 = score(doc=2686,freq=4.0), product of:
            0.0742611 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.042302497 = queryNorm
            0.21943474 = fieldWeight in 2686, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=2686)
        0.022925617 = product of:
          0.045851234 = sum of:
            0.045851234 = weight(_text_:22 in 2686) [ClassicSimilarity], result of:
              0.045851234 = score(doc=2686,freq=2.0), product of:
                0.14813614 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042302497 = queryNorm
                0.30952093 = fieldWeight in 2686, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2686)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    Metadata (structured information about an object or collection of objects) is increasingly important to libraries, archives, and museums. Although librarians are familiar with a number of issues that apply to creating and using metadata (e.g., authority control and controlled vocabularies), the world of metadata is nonetheless different from library cataloging, with its own set of challenges. Therefore, whether you are new to these concepts or quite experienced with classic cataloging, this short (20-page) introductory paper on metadata can be helpful
    Date
    10. 9.2004 10:22:40
    Imprint
    Washington, DC : National Information Standards Organization
  14. McIlwaine, I.C.: The Universal Decimal Classification : a guide to its use (2000) 0.03
    0.033970077 = product of:
      0.09058687 = sum of:
        0.034524977 = weight(_text_:world in 161) [ClassicSimilarity], result of:
          0.034524977 = score(doc=161,freq=2.0), product of:
            0.16259687 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.042302497 = queryNorm
            0.21233483 = fieldWeight in 161, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=161)
        0.045877226 = weight(_text_:wide in 161) [ClassicSimilarity], result of:
          0.045877226 = score(doc=161,freq=2.0), product of:
            0.18743214 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042302497 = queryNorm
            0.24476713 = fieldWeight in 161, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=161)
        0.010184665 = weight(_text_:information in 161) [ClassicSimilarity], result of:
          0.010184665 = score(doc=161,freq=4.0), product of:
            0.0742611 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.042302497 = queryNorm
            0.13714671 = fieldWeight in 161, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=161)
      0.375 = coord(3/8)
    
    Abstract
    This book is an extension and total revision of the author's earlier Guide to the use of UDC. The original was written in 1993 and in the intervening years much has happened with the classification. In particular, a much more rigorous approach has been undertaken in revision to ensure that the scheme is able to handle the requirements of a networked world. The book outlines the history and development of the Universal Decimal Classification, provides practical hints on its application and works through all the auxiliary and main tables highlighting aspects that need to be noted in applying the scheme. It also provides guidance on the use of the Master Reference File and discusses the ways in which the classification is used in the 21st century and its suitability as an aid to subject description in tagging metadata and consequently for application on the Internet. It is intended as a source for information about the scheme, for practical usage by classifiers in their daily work and as a guide to the student learning how to apply the classification. It is amply provided with examples to illustrate the many ways in which the scheme can be applied and will be a useful source for a wide range of information workers
  15. Poetzsch, E.: Information Retrieval : Einführung in Grundlagen und Methoden (2005) 0.03
    0.03214816 = product of:
      0.08572842 = sum of:
        0.028158922 = weight(_text_:web in 591) [ClassicSimilarity], result of:
          0.028158922 = score(doc=591,freq=4.0), product of:
            0.13805464 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042302497 = queryNorm
            0.2039694 = fieldWeight in 591, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=591)
        0.024443198 = weight(_text_:information in 591) [ClassicSimilarity], result of:
          0.024443198 = score(doc=591,freq=36.0), product of:
            0.0742611 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.042302497 = queryNorm
            0.3291521 = fieldWeight in 591, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=591)
        0.033126306 = product of:
          0.06625261 = sum of:
            0.06625261 = weight(_text_:retrieval in 591) [ClassicSimilarity], result of:
              0.06625261 = score(doc=591,freq=30.0), product of:
                0.12796146 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.042302497 = queryNorm
                0.51775444 = fieldWeight in 591, product of:
                  5.477226 = tf(freq=30.0), with freq of:
                    30.0 = termFreq=30.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03125 = fieldNorm(doc=591)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    The first part, "Grundlagen des Information Retrieval" (foundations of information retrieval), treats key aspects of information retrieval from the point of view of their relevance to carrying out searches in practice. The second part, "Methoden des Information Retrieval" (methods of information retrieval), provides a comprehensive introduction to the various retrieval methods using selected retrieval languages and Web search options via host-specific search interfaces. The third part, "Fachbezogenes Information Retrieval" (subject-specific information retrieval), is included for the first time in this edition and focuses on business information and scientific-technical information.
    Footnote
    Review in: Information - Wissenschaft & Praxis 56(2005) H.5/6, p.337 (W. Ratzek): "The central topic of this book is information retrieval in specialist information databases. Since the first edition of 1998, an updated fourth edition is now available. New, for example, is the chapter "Fachbezogenes Information Retrieval" (subject-specific information retrieval), which had previously been covered in other volumes of the series. The three parts of the book deal with: the foundations of information retrieval, i.e. basic concepts, types and providers of databases, preparing and carrying out searches, and retrieval languages; the methods of information retrieval, essentially the application and functioning of retrieval, i.e. command-based retrieval, Windows-based retrieval tools, and Web search; and subject-specific information retrieval, with the emphasis on business information. On the design of the book it says (p. 6): "For the presentation of the contents a compact form was chosen from the outset, which on the one hand is to serve students as accompanying material for teaching in the printed edition and on the other hand provides the basis for an online tutorial that is currently in its test phase." This states the aim and the target group of the volume. If the book is also meant to address non-student audiences, then the form of presentation seems to me, and to a number of colleagues, in need of improvement: the "compact form" is reminiscent of unannotated lecture slides. Against the background of the discussion about information resources for knowledge management in organisations and their tendencies towards globalisation, information retrieval as a tool for searching specialist information databases appears in need of broadening. The publisher's concept of issuing a series "Materialien zur Information und Dokumentation" is to be welcomed."
    Series
    Materialien zur Information und Dokumentation; Bd.5
  16. Rowley, J.E.; Farrow, J.: Organizing knowledge : an introduction to managing access to information (2000) 0.03
    0.027255453 = product of:
      0.07268121 = sum of:
        0.024889207 = weight(_text_:web in 2463) [ClassicSimilarity], result of:
          0.024889207 = score(doc=2463,freq=2.0), product of:
            0.13805464 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042302497 = queryNorm
            0.18028519 = fieldWeight in 2463, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2463)
        0.023885157 = weight(_text_:information in 2463) [ClassicSimilarity], result of:
          0.023885157 = score(doc=2463,freq=22.0), product of:
            0.0742611 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.042302497 = queryNorm
            0.32163754 = fieldWeight in 2463, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2463)
        0.023906851 = product of:
          0.047813702 = sum of:
            0.047813702 = weight(_text_:retrieval in 2463) [ClassicSimilarity], result of:
              0.047813702 = score(doc=2463,freq=10.0), product of:
                0.12796146 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.042302497 = queryNorm
                0.37365708 = fieldWeight in 2463, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2463)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    For its third edition this standard text on knowledge organization and retrieval has been extensively revised and restructured to accommodate the increased significance of electronic information resources. With the help of many new sections on topics such as information retrieval via the Web, metadata and managing information retrieval systems, the book explains principles relating to hybrid print-based and electronic, networked environments experienced by today's users. Part I, Information Basics, explores the nature of information and knowledge and their incorporation into documents. Part II, Records, focuses specifically on electronic databases for accessing print or electronic media. Part III, Access, explores the range of tools for accessing information resources and covers interfaces, indexing and searching languages, classification, thesauri and catalogue and bibliographic access points. Finally, Part IV, Systems, describes the contexts through which knowledge can be organized and retrieved, including OPACs, the Internet, CD-ROMs, online search services and printed indexes and documents. This book is a comprehensive and accessible introduction to knowledge organization for both undergraduate and postgraduate students of information management and information systems
    LCSH
    Information storage and retrieval systems / Management
    Subject
    Information storage and retrieval systems / Management
  17. Chu, H.: Information representation and retrieval in the digital age (2010) 0.03
    0.025186658 = product of:
      0.10074663 = sum of:
        0.042775594 = weight(_text_:information in 377) [ClassicSimilarity], result of:
          0.042775594 = score(doc=377,freq=36.0), product of:
            0.0742611 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.042302497 = queryNorm
            0.5760162 = fieldWeight in 377, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=377)
        0.057971034 = product of:
          0.11594207 = sum of:
            0.11594207 = weight(_text_:retrieval in 377) [ClassicSimilarity], result of:
              0.11594207 = score(doc=377,freq=30.0), product of:
                0.12796146 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.042302497 = queryNorm
                0.90607023 = fieldWeight in 377, product of:
                  5.477226 = tf(freq=30.0), with freq of:
                    30.0 = termFreq=30.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=377)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Content
    Information representation and retrieval : an overview -- Information representation I : basic approaches -- Information representation II : related topics -- Language in information representation and retrieval -- Retrieval techniques and query representation -- Retrieval approaches -- Information retrieval models -- Information retrieval systems -- Retrieval of information unique in content or format -- The user dimension in information representation and retrieval -- Evaluation of information representation and retrieval -- Artificial intelligence in information representation and retrieval.
    Imprint
    Medford, NJ : Information Today
    LCSH
    Information organization
    Information retrieval
    Information storage and retrieval systems
    Subject
    Information organization
    Information retrieval
    Information storage and retrieval systems
  18. Grundlagen der praktischen Information und Dokumentation : Handbuch zur Einführung in die Informationswissenschaft und -praxis (2013) 0.02
    0.024197705 = product of:
      0.06452721 = sum of:
        0.03484489 = weight(_text_:web in 4382) [ClassicSimilarity], result of:
          0.03484489 = score(doc=4382,freq=8.0), product of:
            0.13805464 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042302497 = queryNorm
            0.25239927 = fieldWeight in 4382, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4382)
        0.01671961 = weight(_text_:information in 4382) [ClassicSimilarity], result of:
          0.01671961 = score(doc=4382,freq=22.0), product of:
            0.0742611 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.042302497 = queryNorm
            0.22514628 = fieldWeight in 4382, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4382)
        0.012962718 = product of:
          0.025925435 = sum of:
            0.025925435 = weight(_text_:retrieval in 4382) [ClassicSimilarity], result of:
              0.025925435 = score(doc=4382,freq=6.0), product of:
                0.12796146 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.042302497 = queryNorm
                0.20260347 = fieldWeight in 4382, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4382)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    For forty years this standard work has provided scientists, practitioners, and students with the foundations of professional, scientifically grounded information work. With the sixth, completely revised edition, the editors Rainer Kuhlen, Wolfgang Semar, and Dietmar Strauch respond to the substantial technical, methodological, and organisational changes in the field of information and documentation, taking account of the rapid development of the Internet and of information science. The more than fifty contributions are arranged in four parts: Fundamentals (A), Methods (B), Information organization (C), and Information infrastructures (D).
    Content
    Contains the contributions: A: Grundlegendes Rainer Kuhlen: Information - Informationswissenschaft - Ursula Georgy: Professionalisierung in der Informationsarbeit - Thomas Hoeren: Urheberrecht und Internetrecht - Stephan Holländer, Rolf A. Tobler: Schweizer Urheberrecht im digitalen Umfeld - Gerhard Reichmann: Urheberrecht und Internetrecht: Österreich - Rainer Kuhlen: Wissensökologie - Wissen und Information als Commons (Gemeingüter) - Rainer Hammwöhner: Hypertext - Christa Womser-Hacker, Thomas Mandl: Information Seeking Behaviour (ISB) - Hans-Christoph Hobohm: Informationsverhalten (Mensch und Information) - Urs Dahinden: Methoden empirischer Sozialforschung für die Informationspraxis - Michael Seadle: Ethnografische Verfahren der Datenerhebung - Hans-Christoph Hobohm: Erhebungsmethoden in der Informationsverhaltensforschung
    B: Methodisches Bernard Bekavac: Web-Technologien - Rolf Assfalg: Metadaten - Ulrich Reimer: Wissensorganisation - Thomas Mandl: Text Mining und Data Mining - Harald Reiterer, Hans-Christian Jetter: Informationsvisualisierung - Katrin Weller: Ontologien - Stefan Gradmann: Semantic Web und Linked Open Data - Isabella Peters: Benutzerzentrierte Erschließungsverfahre - Ulrich Reimer: Empfehlungssysteme - Udo Hahn: Methodische Grundlagen der Informationslinguistik - Klaus Lepsky: Automatische Indexierung - Udo Hahn: Automatisches Abstracting - Ulrich Heid: Maschinelle Übersetzung - Bernd Ludwig: Spracherkennung - Norbert Fuhr: Modelle im Information Retrieval - Christa Womser-Hacker: Kognitives Information Retrieval - Alexander Binder, Frank C. Meinecke, Felix Bießmann, Motoaki Kawanabe, Klaus-Robert Müller: Maschinelles Lernen, Mustererkennung in der Bildverarbeitung
    C: Informationsorganisation Helmut Krcmar: Informations- und Wissensmanagement - Eberhard R. Hilf, Thomas Severiens: Vom Open Access für Dokumente und Daten zu Open Content in der Wissenschaft - Christa Womser-Hacker: Evaluierung im Information Retrieval - Joachim Griesbaum: Online-Marketing - Nicola Döring: Modelle der Computervermittelten Kommunikation - Harald Reiterer, Florian Geyer: Mensch-Computer-Interaktion - Steffen Staab: Web Science - Michael Weller, Elena Di Rosa: Lizenzierungsformen - Wolfgang Semar, Sascha Beck: Sicherheit von Informationssystemen - Stefanie Haustein, Dirk Tunger: Sziento- und bibliometrische Verfahren
    D: Informationsinfrastruktur Dirk Lewandowski: Suchmaschinen - Ben Kaden: Elektronisches Publizieren - Jens Olf, Uwe Rosemann: Dokumentlieferung - Reinhard Altenhöner, Sabine Schrimpf: Langzeitarchivierung - Hermann Huemer: Normung und Standardisierung - Ulrike Spree: Wörterbücher und Enzyklopädien - Joachim Griesbaum: Social Web - Jens Klump, Roland Bertelmann: Forschungsdaten - Michael Kerres, Annabell Preussler, Mandy Schiefner-Rohs: Lernen mit Medien - Angelika Menne-Haritz: Archive - Axel Ermert, Karin Ludewig: Museen - Hans-Christoph Hobohm: Bibliothek im Wandel - Thomas Breyer-Mayländer: Medien, Medienwirtschaft - Helmut Wittenzellner: Transformation von Buchhandel, Verlag und Druck - Elke Thomä, Heike Schwanbeck: Patentinformation und Patentinformationssysteme
    RSWK
    Information und Dokumentation
    Subject
    Information und Dokumentation
  19. The discipline of organizing (2013) 0.02
    0.023904815 = product of:
      0.06374618 = sum of:
        0.028158922 = weight(_text_:web in 2172) [ClassicSimilarity], result of:
          0.028158922 = score(doc=2172,freq=4.0), product of:
            0.13805464 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042302497 = queryNorm
            0.2039694 = fieldWeight in 2172, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=2172)
        0.020772722 = weight(_text_:information in 2172) [ClassicSimilarity], result of:
          0.020772722 = score(doc=2172,freq=26.0), product of:
            0.0742611 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.042302497 = queryNorm
            0.2797255 = fieldWeight in 2172, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=2172)
        0.014814535 = product of:
          0.02962907 = sum of:
            0.02962907 = weight(_text_:retrieval in 2172) [ClassicSimilarity], result of:
              0.02962907 = score(doc=2172,freq=6.0), product of:
                0.12796146 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.042302497 = queryNorm
                0.23154683 = fieldWeight in 2172, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2172)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    Organizing is such a common activity that we often do it without thinking much about it. In our daily lives we organize physical things--books on shelves, cutlery in kitchen drawers--and digital things--Web pages, MP3 files, scientific datasets. Millions of people create and browse Web sites, blog, tag, tweet, and upload and download content of all media types without thinking "I'm organizing now" or "I'm retrieving now." This book offers a framework for the theory and practice of organizing that integrates information organization (IO) and information retrieval (IR), bridging the disciplinary chasms between Library and Information Science and Computer Science, each of which views and teaches IO and IR as separate topics and in substantially different ways. It introduces the unifying concept of an Organizing System--an intentionally arranged collection of resources and the interactions they support--and then explains the key concepts and challenges in the design and deployment of Organizing Systems in many domains, including libraries, museums, business information systems, personal information management, and social computing. Intended for classroom use or as a professional reference, the book covers the activities common to all organizing systems: identifying resources to be organized; organizing resources by describing and classifying them; designing resource-based interactions; and maintaining resources and organization over time. The book is extensively annotated with disciplinary-specific notes to ground it with relevant concepts and references of library science, computing, cognitive science, law, and business.
    LCSH
    Information organization
    Information resources management
    RSWK
    Information / Information Retrieval / Organisationslehre
    Subject
    Information / Information Retrieval / Organisationslehre
    Information organization
    Information resources management
  20. Kaushik, S.K.: DDC 22 : a practical approach (2004) 0.02
    0.02389089 = product of:
      0.06370904 = sum of:
        0.027619982 = weight(_text_:world in 1842) [ClassicSimilarity], result of:
          0.027619982 = score(doc=1842,freq=2.0), product of:
            0.16259687 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.042302497 = queryNorm
            0.16986786 = fieldWeight in 1842, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03125 = fieldNorm(doc=1842)
        0.005761317 = weight(_text_:information in 1842) [ClassicSimilarity], result of:
          0.005761317 = score(doc=1842,freq=2.0), product of:
            0.0742611 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.042302497 = queryNorm
            0.0775819 = fieldWeight in 1842, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=1842)
        0.030327743 = product of:
          0.060655486 = sum of:
            0.060655486 = weight(_text_:22 in 1842) [ClassicSimilarity], result of:
              0.060655486 = score(doc=1842,freq=14.0), product of:
                0.14813614 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042302497 = queryNorm
                0.4094577 = fieldWeight in 1842, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1842)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    A system of library classification that flashed across the inquiring mind of young Melvil Louis Kossuth Dewey (known as Melvil Dewey) in 1873 is still the most popular classification scheme. Modern library classification begins with the Dewey Decimal Classification (DDC), which Dewey devised in 1876. DDC has to its credit 128 years of boundless success. The DDC is taught as a practical subject throughout the world and is used in the majority of libraries in about 150 countries. It is as a result of this continuous revision that the 22nd edition of DDC was published in July 2003. No other classification scheme has published so many editions. Some welcome changes have been made in DDC 22. To reduce the Christian bias in 200 Religion, the numbers 201 to 209 have been devoted to specific aspects of religion; in previous editions these numbers were devoted to Christianity. To enhance the classifier's efficiency, Table 7 has been removed from DDC 22, and the provision for adding groups of persons is made by direct use of notation already available in the schedules and of notation -08 from Table 1 Standard Subdivisions. The present book is an attempt to explain, with suitable examples, the salient provisions of DDC 22. The book is written in simple language so that students will not face any difficulty in understanding what is being explained. The examples in the book are explained in a step-by-step procedure. It is hoped that this book will prove of great help and use to library professionals in general and to library and information science students in particular.
    Content
    1. Introduction to DDC 22 2. Major changes in DDC 22 3. Introduction to the schedules 4. Use of Table 1 : Standard Subdivisions 5. Use of Table 2 : Areas 6. Use of Table 3 : Subdivisions for the arts, for individual literatures, for specific literary forms 7. Use of Table 4 : Subdivisions of individual languages and language families 8. Use of Table 5 : Ethnic and National groups 9. Use of Table 6 : Languages 10. Treatment of Groups of Persons
    Object
    DDC-22
