Search (44 results, page 1 of 3)

  • × theme_ss:"Grundlagen u. Einführungen: Allgemeine Literatur"
  1. Stolpmann, M.: Internet & WWW für Studenten : WWW, FTP, E-Mail und andere Dienste (1997) 0.09
    0.08527205 = product of:
      0.21318012 = sum of:
        0.13820271 = weight(_text_:wide in 3438) [ClassicSimilarity], result of:
          0.13820271 = score(doc=3438,freq=4.0), product of:
            0.19962662 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.04505473 = queryNorm
            0.69230604 = fieldWeight in 3438, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.078125 = fieldNorm(doc=3438)
        0.07497741 = weight(_text_:web in 3438) [ClassicSimilarity], result of:
          0.07497741 = score(doc=3438,freq=4.0), product of:
            0.14703658 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04505473 = queryNorm
            0.5099235 = fieldWeight in 3438, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.078125 = fieldNorm(doc=3438)
      0.4 = coord(2/5)
    
    RSWK
    World wide web / Studium / Ratgeber (213)
    Subject
    World wide web / Studium / Ratgeber (213)
  2. Bekavac, B.: Suchverfahren und Suchdienste des World Wide Web (1996) 0.08
    0.08356267 = product of:
      0.13927111 = sum of:
        0.05863444 = weight(_text_:wide in 4803) [ClassicSimilarity], result of:
          0.05863444 = score(doc=4803,freq=2.0), product of:
            0.19962662 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.04505473 = queryNorm
            0.29372054 = fieldWeight in 4803, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=4803)
        0.03181022 = weight(_text_:web in 4803) [ClassicSimilarity], result of:
          0.03181022 = score(doc=4803,freq=2.0), product of:
            0.14703658 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04505473 = queryNorm
            0.21634221 = fieldWeight in 4803, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4803)
        0.048826456 = product of:
          0.09765291 = sum of:
            0.09765291 = weight(_text_:server in 4803) [ClassicSimilarity], result of:
              0.09765291 = score(doc=4803,freq=2.0), product of:
                0.25762302 = queryWeight, product of:
                  5.7180014 = idf(docFreq=394, maxDocs=44218)
                  0.04505473 = queryNorm
                0.37905353 = fieldWeight in 4803, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7180014 = idf(docFreq=394, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4803)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    Das WWW ermöglicht den einfachen Zugriff auf eine große und stark wachsende Menge an Informationen. Die gezielte Suche nach relevanten WWW-Dokumenten wird zunehmens zu einer zentralen Aufgabe innerhalb des WWW. Dieser Beitrag beschreibt verschiedene Verfahren, die zur lokalen und globalen Suche im WWW verwendet werden. Client-basierte Suchtools mit automatischer Navigation, die bei einigen wenigen WWW-Browsern fest implementiert sind, ermöglichen eine Suche ausgehend von einer Startseite. Weitaus breiter Anwendung finden aber Server-basierte Suchverfahren, die sowohl die lokale Suche innerhalb eines WWW-Servers als auch die weltweite Suche über WWW-Kataloge und roboterbasiertes Suchmaschinen ermöglichen. In diesem Beitrag werden gängige verzeichnis- und roboterbasierte Suchdienste des WWW von ihrer Funktionalität her untersucht und verglichen. Erwähnt werden aber auch Beispiele alternativer Suchmaschinen sowie erweiterter Suchdienste, die eine zusätzliche Suche auch außerhalb des Internet ermöglichen
  3. Belew, R.K.: Finding out about : a cognitive perspective on search engine technology and the WWW (2001) 0.06
    0.06229263 = product of:
      0.15573157 = sum of:
        0.09574965 = weight(_text_:wide in 3346) [ClassicSimilarity], result of:
          0.09574965 = score(doc=3346,freq=12.0), product of:
            0.19962662 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.04505473 = queryNorm
            0.47964367 = fieldWeight in 3346, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=3346)
        0.059981927 = weight(_text_:web in 3346) [ClassicSimilarity], result of:
          0.059981927 = score(doc=3346,freq=16.0), product of:
            0.14703658 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04505473 = queryNorm
            0.4079388 = fieldWeight in 3346, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=3346)
      0.4 = coord(2/5)
    
    Abstract
    The World Wide Web is rapidly filling with more text than anyone could have imagined even a short time ago, but the task of isolating relevant parts of this vast information has become just that much more daunting. Richard Belew brings a cognitive perspective to the study of information retrieval as a discipline within computer science. He introduces the idea of Finding Out About (FDA) as the process of actively seeking out information relevant to a topic of interest and describes its many facets - ranging from creating a good characterization of what the user seeks, to what documents actually mean, to methods of inferring semantic clues about each document, to the problem of evaluating whether our search engines are performing as we have intended. Finding Out About explains how to build the tools that are useful for searching collections of text and other media. In the process it takes a close look at the properties of textual documents that do not become clear until very large collections of them are brought together and shows that the construction of effective search engines requires knowledge of the statistical and mathematical properties of linguistic phenomena, as well as an appreciation for the cognitive foundation we bring to the task as language users. The unique approach of this book is its even handling of the phenomena of both numbers and words, making it accessible to a wide audience. The textbook is usable in both undergraduate and graduate classes on information retrieval, library science, and computational linguistics. The text is accompanied by a CD-ROM that contains a hypertext version of the book, including additional topics and notes not present in the printed edition. In addition, the CD contains the full text of C.J. "Keith" van Rijsbergen's famous textbook, Information Retrieval (now out of print). Many active links from Belew's to van Rijsbergen's hypertexts help to unite the material. Several test corpora and indexing tools are provided, to support the design of your own search engine. Additional exercises using these corpora and code are available to instructors. Also supporting this book is a Web site that will include recent additions to the book, as well as links to sites of new topics and methods.
    LCSH
    World Wide Web / Computer programs
    Web search engines
    RSWK
    Suchmaschine / World Wide Web / Information Retrieval
    Subject
    Suchmaschine / World Wide Web / Information Retrieval
    World Wide Web / Computer programs
    Web search engines
  4. Stock, W.G.: Qualitätskriterien von Suchmaschinen : Checkliste für Retrievalsysteme (2000) 0.05
    0.054378774 = product of:
      0.090631284 = sum of:
        0.048862036 = weight(_text_:wide in 5773) [ClassicSimilarity], result of:
          0.048862036 = score(doc=5773,freq=2.0), product of:
            0.19962662 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.04505473 = queryNorm
            0.24476713 = fieldWeight in 5773, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5773)
        0.026508518 = weight(_text_:web in 5773) [ClassicSimilarity], result of:
          0.026508518 = score(doc=5773,freq=2.0), product of:
            0.14703658 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04505473 = queryNorm
            0.18028519 = fieldWeight in 5773, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5773)
        0.015260735 = product of:
          0.03052147 = sum of:
            0.03052147 = weight(_text_:22 in 5773) [ClassicSimilarity], result of:
              0.03052147 = score(doc=5773,freq=2.0), product of:
                0.15777399 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04505473 = queryNorm
                0.19345059 = fieldWeight in 5773, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5773)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    Suchmaschinen im World Wide Web wird nachgesagt, dass sie - insbesondere im Vergleich zur Retrievalsoftware kommerzieller Online-Archive suboptimale Methoden und Werkzeuge einsetzen. Elaborierte befehlsorientierte Retrievalsysteme sind vom Laien gar nicht und vom Professional nur dann zu bedienen, wenn man stets damit arbeitet. Die Suchsysteme einiger "independents", also isolierter Informationsproduzenten im Internet, zeichnen sich durch einen Minimalismus aus, der an den Befehlsumfang anfangs der 70er Jahre erinnert. Retrievalsoftware in Intranets, wenn sie denn überhaupt benutzt wird, setzt fast ausnahmslos auf automatische Methoden von Indexierung und Retrieval und ignoriert dabei nahezu vollständig dokumentarisches Know how. Suchmaschinen bzw. Retrievalsysteme - wir wollen beide Bezeichnungen synonym verwenden - bereiten demnach, egal wo sie vorkommen, Schwierigkeiten. An ihrer Qualität wird gezweifelt. Aber was heißt überhaupt: Qualität von Suchmaschinen? Was zeichnet ein gutes Retrievalsystem aus? Und was fehlt einem schlechten? Wir wollen eine Liste von Kriterien entwickeln, die für gutes Suchen (und Finden!) wesentlich sind. Es geht also ausschließlich um Quantität und Qualität der Suchoptionen, nicht um weitere Leistungsindikatoren wie Geschwindigkeit oder ergonomische Benutzerschnittstellen. Stillschweigend vorausgesetzt wirdjedoch der Abschied von ausschließlich befehlsorientierten Systemen, d.h. wir unterstellen Bildschirmgestaltungen, die die Befehle intuitiv einleuchtend darstellen. Unsere Checkliste enthält nur solche Optionen, die entweder (bei irgendwelchen Systemen) schon im Einsatz sind (und wiederholt damit zum Teil Altbekanntes) oder deren technische Realisierungsmöglichkeit bereits in experimentellen Umgebungen aufgezeigt worden ist. insofern ist die Liste eine Minimalforderung an Retrievalsysteme, die durchaus erweiterungsfähig ist. Gegliedert wird der Kriterienkatalog nach (1.) den Basisfunktionen zur Suche singulärer Datensätze, (2.) den informetrischen Funktionen zur Charakterisierunggewisser Nachweismengen sowie (3.) den Kriterien zur Mächtigkeit automatischer Indexierung und natürlichsprachiger Suche
    Source
    Password. 2000, H.5, S.22-31
  5. Antoniou, G.; Harmelen, F. van: ¬A semantic Web primer (2004) 0.05
    0.04917531 = product of:
      0.122938275 = sum of:
        0.042315766 = weight(_text_:wide in 468) [ClassicSimilarity], result of:
          0.042315766 = score(doc=468,freq=6.0), product of:
            0.19962662 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.04505473 = queryNorm
            0.21197456 = fieldWeight in 468, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.01953125 = fieldNorm(doc=468)
        0.08062251 = weight(_text_:web in 468) [ClassicSimilarity], result of:
          0.08062251 = score(doc=468,freq=74.0), product of:
            0.14703658 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04505473 = queryNorm
            0.548316 = fieldWeight in 468, product of:
              8.602325 = tf(freq=74.0), with freq of:
                74.0 = termFreq=74.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=468)
      0.4 = coord(2/5)
    
    Abstract
    The development of the Semantic Web, with machine-readable content, has the potential to revolutionise the World Wide Web and its use. A Semantic Web Primer provides an introduction and guide to this emerging field, describing its key ideas, languages and technologies. Suitable for use as a textbook or for self-study by professionals, it concentrates on undergraduate-level fundamental concepts and techniques that will enable readers to proceed with building applications on their own. It includes exercises, project descriptions and annotated references to relevant online materials. A Semantic Web Primer is the only available book on the Semantic Web to include a systematic treatment of the different languages (XML, RDF, OWL and rules) and technologies (explicit metadata, ontologies and logic and interference) that are central to Semantic Web development. The book also examines such crucial related topics as ontology engineering and application scenarios. After an introductory chapter, topics covered in succeeding chapters include XML and related technologies that support semantic interoperability; RDF and RDF Schema, the standard data model for machine-processable semantics; and OWL, the W3C-approved standard for a Web ontology language more extensive than RDF Schema; rules, both monotonic and nonmonotonic, in the framework of the Semantic Web; selected application domains and how the Semantic Web would benefit them; the development of ontology-based systems; and current debates on key issues and predictions for the future.
    Footnote
    Rez. in: JASIST 57(2006) no.8, S.1132-1133 (H. Che): "The World Wide Web has been the main source of an important shift in the way people communicate with each other, get information, and conduct business. However, most of the current Web content is only suitable for human consumption. The main obstacle to providing better quality of service is that the meaning of Web content is not machine-accessible. The "Semantic Web" is envisioned by Tim Berners-Lee as a logical extension to the current Web that enables explicit representations of term meaning. It aims to bring the Web to its full potential via the exploration of these machine-processable metadata. To fulfill this, it pros ides some meta languages like RDF, OWL, DAML+OIL, and SHOE for expressing knowledge that has clear, unambiguous meanings. The first steps in searing the Semantic Web into the current Web are successfully underway. In the forthcoming years, these efforts still remain highly focused in the research and development community. In the next phase, the Semantic Web will respond more intelligently to user queries. The first chapter gets started with an excellent introduction to the Semantic Web vision. At first, today's Web is introduced, and problems with some current applications like search engines are also covered. Subsequently, knowledge management. business-to-consumer electronic commerce, business-to-business electronic commerce, and personal agents are used as examples to show the potential requirements for the Semantic Web. Next comes the brief description of the underpinning technologies, including metadata, ontology, logic, and agent. The differences between the Semantic Web and Artificial Intelligence are also discussed in a later subsection. In section 1.4, the famous "laser-cake" diagram is given to show a layered view of the Semantic Web. From chapter 2, the book starts addressing some of the most important technologies for constructing the Semantic Web. In chapter 2, the authors discuss XML and its related technologies such as namespaces, XPath, and XSLT. XML is a simple, very flexible text format which is often used for the exchange of a wide variety of data on the Web and elsewhere. The W3C has defined various languages on top of XML, such as RDF. Although this chapter is very well planned and written, many details are not included because of the extensiveness of the XML technologies. Many other books on XML provide more comprehensive coverage.
    The next chapter introduces resource description framework (RDF) and RDF schema (RDFS). Unlike XML, RDF provides a foundation for expressing the semantics of dada: it is a standard dada model for machine-processable semantics. Resource description framework schema offers a number of modeling primitives for organizing RDF vocabularies in typed hierarchies. In addition to RDF and RDFS, a query language for RDF, i.e. RQL. is introduced. This chapter and the next chapter are two of the most important chapters in the book. Chapter 4 presents another language called Web Ontology Language (OWL). Because RDFS is quite primitive as a modeling language for the Web, more powerful languages are needed. A richer language. DAML+OIL, is thus proposed as a joint endeavor of the United States and Europe. OWL takes DAML+OIL as the starting point, and aims to be the standardized and broadly accepted ontology language. At the beginning of the chapter, the nontrivial relation with RDF/RDFS is discussed. Then the authors describe the various language elements of OWL in some detail. Moreover, Appendix A contains an abstract OWL syntax. which compresses OWL and makes OWL much easier to read. Chapter 5 covers both monotonic and nonmonotonic rules. Whereas the previous chapter's mainly concentrate on specializations of knowledge representation, this chapter depicts the foundation of knowledge representation and inference. Two examples are also givwn to explain monotonic and non-monotonic rules, respectively. "To get the most out of the chapter. readers had better gain a thorough understanding of predicate logic first. Chapter 6 presents several realistic application scenarios to which the Semantic Web technology can be applied. including horizontal information products at Elsevier, data integration at Audi, skill finding at Swiss Life, a think tank portal at EnerSearch, e-learning. Web services, multimedia collection indexing, online procurement, raid device interoperability. These case studies give us some real feelings about the Semantic Web.
    The chapter on ontology engineering describes the development of ontology-based systems for the Web using manual and semiautomatic methods. Ontology is a concept similar to taxonomy. As stated in the introduction, ontology engineering deals with some of the methodological issues that arise when building ontologies, in particular, con-structing ontologies manually, reusing existing ontologies. and using semiautomatic methods. A medium-scale project is included at the end of the chapter. Overall the book is a nice introduction to the key components of the Semantic Web. The reading is quite pleasant, in part due to the concise layout that allows just enough content per page to facilitate readers' comprehension. Furthermore, the book provides a large number of examples, code snippets, exercises, and annotated online materials. Thus, it is very suitable for use as a textbook for undergraduates and low-grade graduates, as the authors say in the preface. However, I believe that not only students but also professionals in both academia and iudustry will benefit from the book. The authors also built an accompanying Web site for the book at http://www.semanticwebprimer.org. On the main page, there are eight tabs for each of the eight chapters. For each tabm the following sections are included: overview, example, presentations, problems and quizzes, errata, and links. These contents will greatly facilitate readers: for example, readers can open the listed links to further their readings. The vacancy of the errata sections also proves the quality of the book."
    LCSH
    Semantic Web
    Subject
    Semantic Web
    Theme
    Semantic Web
  6. Anderson, R.; Birbeck, M.; Kay, M.; Livingstone, S.; Loesgen, B.; Martin, D.; Mohr, S.; Ozu, N.; Peat, B.; Pinnock, J.; Stark, P.; Williams, K.: XML professionell : behandelt W3C DOM, SAX, CSS, XSLT, DTDs, XML Schemas, XLink, XPointer, XPath, E-Commerce, BizTalk, B2B, SOAP, WAP, WML (2000) 0.05
    0.046459846 = product of:
      0.07743307 = sum of:
        0.02931722 = weight(_text_:wide in 729) [ClassicSimilarity], result of:
          0.02931722 = score(doc=729,freq=2.0), product of:
            0.19962662 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.04505473 = queryNorm
            0.14686027 = fieldWeight in 729, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0234375 = fieldNorm(doc=729)
        0.038959406 = weight(_text_:web in 729) [ClassicSimilarity], result of:
          0.038959406 = score(doc=729,freq=12.0), product of:
            0.14703658 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04505473 = queryNorm
            0.26496404 = fieldWeight in 729, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0234375 = fieldNorm(doc=729)
        0.00915644 = product of:
          0.01831288 = sum of:
            0.01831288 = weight(_text_:22 in 729) [ClassicSimilarity], result of:
              0.01831288 = score(doc=729,freq=2.0), product of:
                0.15777399 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04505473 = queryNorm
                0.116070345 = fieldWeight in 729, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=729)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    In diesem Buch sollen die grundlegenden Techniken zur Erstellung, Anwendung und nicht zuletzt Darstellung von XML-Dokumenten erklärt und demonstriert werden. Die wichtigste und vornehmste Aufgabe dieses Buches ist es jedoch, die Grundlagen von XML, wie sie vom World Wide Web Consortium (W3C) festgelegt sind, darzustellen. Das W3C hat nicht nur die Entwicklung von XML initiiert und ist die zuständige Organisation für alle XML-Standards, es werden auch weiterhin XML-Spezifikationen vom W3C entwickelt. Auch wenn immer mehr Vorschläge für neue XML-basierte Techniken aus dem weiteren Umfeld der an XML Interessierten kommen, so spielt doch weiterhin das W3C die zentrale und wichtigste Rolle für die Entwicklung von XML. Der Schwerpunkt dieses Buches liegt darin, zu lernen, wie man XML als tragende Technologie in echten Alltags-Anwendungen verwendet. Wir wollen Ihnen gute Design-Techniken vorstellen und demonstrieren, wie man XML-fähige Anwendungen mit Applikationen für das WWW oder mit Datenbanksystemen verknüpft. Wir wollen die Grenzen und Möglichkeiten von XML ausloten und eine Vorausschau auf einige "nascent"-Technologien werfen. Egal ob Ihre Anforderungen sich mehr an dem Austausch von Daten orientieren oder bei der visuellen Gestaltung liegen, dieses Buch behandelt alle relevanten Techniken. jedes Kapitel enthält ein Anwendungsbeispiel. Da XML eine Plattform-neutrale Technologie ist, werden in den Beispielen eine breite Palette von Sprachen, Parsern und Servern behandelt. Jede der vorgestellten Techniken und Methoden ist auf allen Plattformen und Betriebssystemen relevant. Auf diese Weise erhalten Sie wichtige Einsichten durch diese Beispiele, auch wenn die konkrete Implementierung nicht auf dem von Ihnen bevorzugten System durchgeführt wurde.
    Dieses Buch wendet sich an alle, die Anwendungen auf der Basis von XML entwickeln wollen. Designer von Websites können neue Techniken erlernen, wie sie ihre Sites auf ein neues technisches Niveau heben können. Entwickler komplexerer Software-Systeme und Programmierer können lernen, wie XML in ihr System passt und wie es helfen kann, Anwendungen zu integrieren. XML-Anwendungen sind von ihrer Natur her verteilt und im Allgemeinen Web-orientiert. Dieses Buch behandelt nicht verteilte Systeme oder die Entwicklung von Web-Anwendungen, sie brauchen also keine tieferen Kenntnisse auf diesen Gebieten. Ein allgemeines Verständnis für verteilte Architekturen und Funktionsweisen des Web wird vollauf genügen. Die Beispiele in diesem Buch verwenden eine Reihe von Programmiersprachen und Technologien. Ein wichtiger Bestandteil der Attraktivität von XML ist seine Plattformunabhängigkeit und Neutralität gegenüber Programmiersprachen. Sollten Sie schon Web-Anwendungen entwickelt haben, stehen die Chancen gut, dass Sie einige Beispiele in Ihrer bevorzugten Sprache finden werden. Lassen Sie sich nicht entmutigen, wenn Sie kein Beispiel speziell für Ihr System finden sollten. Tools für die Arbeit mit XML gibt es für Perl, C++, Java, JavaScript und jede COM-fähige Sprache. Der Internet Explorer (ab Version 5.0) hat bereits einige Möglichkeiten zur Verarbeitung von XML-Dokumenten eingebaut. Auch der Mozilla-Browser (der Open-Source-Nachfolger des Netscape Navigators) bekommt ähnliche Fähigkeiten. XML-Tools tauchen auch zunehmend in großen relationalen Datenbanksystemen auf, genau wie auf Web- und Applikations-Servern. Sollte Ihr System nicht in diesem Buch behandelt werden, lernen Sie die Grundlagen und machen Sie sich mit den vorgestellten Techniken aus den Beispielen vertraut.
    Date
    22. 6.2005 15:12:11
  7. Vonhoegen, H.: Einstieg in XML (2002) 0.02
    0.017128954 = product of:
      0.042822383 = sum of:
        0.03213987 = weight(_text_:web in 4002) [ClassicSimilarity], result of:
          0.03213987 = score(doc=4002,freq=6.0), product of:
            0.14703658 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04505473 = queryNorm
            0.21858418 = fieldWeight in 4002, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4002)
        0.010682514 = product of:
          0.021365028 = sum of:
            0.021365028 = weight(_text_:22 in 4002) [ClassicSimilarity], result of:
              0.021365028 = score(doc=4002,freq=2.0), product of:
                0.15777399 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04505473 = queryNorm
                0.1354154 = fieldWeight in 4002, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4002)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Footnote
    Rez. in: XML Magazin und Web Services 2003, H.1, S.14 (S. Meyen): "Seit dem 22. Februar 1999 ist das Resource Description Framework (RDF) als W3C-Empfehlung verfügbar. Doch was steckt hinter diesem Standard, der das Zeitalter des Semantischen Webs einläuten soll? Was RDF bedeutet, wozu man es einsetzt, welche Vorteile es gegenüber XML hat und wie man RDF anwendet, soll in diesem Artikel erläutert werden. Schlägt man das Buch auf und beginnt, im EinleitungsKapitel zu schmökern, fällt sogleich ins Auge, dass der Leser nicht mit Lektionen im Stile von "bei XML sind die spitzen Klammern ganz wichtig" belehrt wird, obgleich es sich um ein Buch für Anfänger handelt. Im Gegenteil: Es geht gleich zur Sache und eine gesunde Mischung an Vorkenntnissen wird vorausgesetzt. Wer sich heute für XML interessiert, der hat ja mit 99-prozentiger Wahrscheinlichkeit schon seine einschlägigen Erfahrungen mit HTML und dem Web gemacht und ist kein Newbie in dem Reich der spitzen Klammern und der (einigermaßen) wohlformatierten Dokumente. Und hier liegt eine deutliche Stärke des Werkes Helmut Vonhoegens, der seinen Einsteiger-Leser recht gut einzuschätzen weiß und ihn daher praxisnah und verständlich ans Thema heranführt. Das dritte Kapitel beschäftigt sich mit der Document Type Definition (DTD) und beschreibt deren Einsatzziele und Verwendungsweisen. Doch betont der Autor hier unablässig die Begrenztheit dieses Ansatzes, welche den Ruf nach einem neuen Konzept deutlich macht: XML Schema, welches er im folgenden Kapitel darstellt. Ein recht ausführliches Kapitel widmet sich dann dem relativ aktuellen XML Schema-Konzept und erläutert dessen Vorzüge gegenüber der DTD (Modellierung komplexer Datenstrukturen, Unterstützung zahlreicher Datentypen, Zeichenbegrenzungen u.v.m.). XML Schema legt, so erfährt der Leser, wie die alte DTD, das Vokabular und die zulässige Grammatik eines XML-Dokuments fest, ist aber seinerseits ebenfalls ein XML-Dokument und kann (bzw. sollte) wie jedes andere XML auf Wohlgeformtheit überprüft werden. Weitere Kapitel behandeln die Navigations-Standards XPath, XLink und XPointer, Transformationen mit XSLT und XSL und natürlich die XML-Programmierschnittstellen DOM und SAX. Dabei kommen verschiedene Implementierungen zum Einsatz und erfreulicherweise werden Microsoft-Ansätze auf der einen und Java/Apache-Projekte auf der anderen Seite in ungefähr vergleichbarem Umfang vorgestellt. Im letzten Kapitel schließlich behandelt Vonhoegen die obligatorischen Web Services ("Webdienste") als Anwendungsfall von XML und demonstriert ein kleines C#- und ASP-basiertes Beispiel (das Java-Äquivalent mit Apache Axis fehlt leider). "Einstieg in XML" präsentiert seinen Stoff in klar verständlicher Form und versteht es, seine Leser auf einem guten Niveau "abzuholen". Es bietet einen guten Überblick über die Grundlagen von XML und kann - zumindest derzeit noch - mit recht hoher Aktualität aufwarten."
  8. Brühl, B.: Thesauri und Klassifikationen : Naturwissenschaften - Technik - Wirtschaft (2005) 0.01
    0.01336616 = product of:
      0.0334154 = sum of:
        0.021206813 = weight(_text_:web in 3487) [ClassicSimilarity], result of:
          0.021206813 = score(doc=3487,freq=2.0), product of:
            0.14703658 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04505473 = queryNorm
            0.14422815 = fieldWeight in 3487, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=3487)
        0.0122085875 = product of:
          0.024417175 = sum of:
            0.024417175 = weight(_text_:22 in 3487) [ClassicSimilarity], result of:
              0.024417175 = score(doc=3487,freq=2.0), product of:
                0.15777399 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04505473 = queryNorm
                0.15476047 = fieldWeight in 3487, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3487)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Footnote
    Rez. in: Information: Wissenschaft & Praxis 56(2005) H.5/6, S.337 (W. Ratzek): "Bettina Brühl legt mit "Thesauri und Klassifikationen" ein Fleißarbeit vor. Das Buch mit seiner Auswahl von über 150 Klassifikationen und Thesauri aus Naturwissenschaft, Technik, Wirtschaft und Patenwesen macht es zu einem brauchbaren Nachschlagewerk, zumal auch ein umfassender Index nach Sachgebieten, nach Datenbanken und nach Klassifikationen und Thesauri angeboten wird. Nach einer 13-seitigen Einführung (Kapitel 1 und 2) folgt mit dem 3. Kapitel die "Darstellung von Klassifikationen und Thesauri", im wesentlichen aus den Beschreibungen der Hersteller zusammengestellt. Hier werden Dokumentationssprachen der Fachgebiete - Naturwissenschaften (3.1) und deren Spezialisierungen wie zum Beispiel "Biowissenschaften und Biotechnologie", "Chemie" oder "Umwelt und Ökonomie", aber auch "Mathematik und Informatik" (?) auf 189 Seiten vorgestellt, - Technik mit zum Beispiel "Fachordnung Technik", "Subject Categories (INIS/ ETDE) mit 17 Seiten verhältnismäßig knapp abgehandelt, - Wirtschaft mit "Branchen-Codes", "Product-Codes", "Länder-Codes"",Fachklas-sifikationen" und "Thesauri" ausführlich auf 57 Seiten präsentiert, - Patente und Normen mit zum Beispiel "Europäische Patentklassifikation" oder "International Patent Classification" auf 33 Seiten umrissen. Jedes Teilgebiet wird mit einer kurzen Beschreibung eingeleitet. Danach folgen die jeweiligen Beschreibungen mit den Merkmalen: "Anschrift des Erstellers", "Themen-gebiet(e)", "Sprache", "Verfügbarkeit", "An-wendung" und "Ouelle(n)". "Das Buch wendet sich an alle Information Professionals, die Dokumentationssprachen aufbauen und nutzen" heißt es in der Verlagsinformation. Zwar ist es nicht notwendig, die informationswissenschaftlichen Aspekte der Klassifikationen und Thesauri abzuhandeln, aber ein Hinweis auf die Bedeutung der Information und Dokumentation und/oder der Informationswissenschaft wäre schon angebracht, um in der Welt der Informations- und Wissenswirtschaft zu demonstrieren, welchen Beitrag unsere Profession leistet. Andernfalls bleibt das Blickfeld eingeschränkt und der Anschluss an neuere Entwicklungen ausgeblendet. Dieser Anknüpfungspunkt wäre beispielsweise durch einen Exkurs über Topic Map/Semantic Web gegeben. Der Verlag liefert mit der Herausgabe die ses Kompendiums einen nützlichen ersten Baustein zu einem umfassenden Verzeichnis von Thesauri und Klassifikationen."
    Series
    Materialien zur Information und Dokumentation; Bd.22
  9. Broughton, V.: Essential classification (2004) 0.01
    0.012059288 = product of:
      0.03014822 = sum of:
        0.019544814 = weight(_text_:wide in 2824) [ClassicSimilarity], result of:
          0.019544814 = score(doc=2824,freq=2.0), product of:
            0.19962662 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.04505473 = queryNorm
            0.09790685 = fieldWeight in 2824, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.015625 = fieldNorm(doc=2824)
        0.0106034065 = weight(_text_:web in 2824) [ClassicSimilarity], result of:
          0.0106034065 = score(doc=2824,freq=2.0), product of:
            0.14703658 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04505473 = queryNorm
            0.07211407 = fieldWeight in 2824, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.015625 = fieldNorm(doc=2824)
      0.4 = coord(2/5)
    
    Footnote
    Essential Classification is also an exercise book. Indeed, it contains a number of practical exercises and activities in every chapter, along with suggested answers. Unfortunately, the answers are too often provided without the justifications and explanations that students would no doubt demand. The author has taken great care to explain all technical terms in her text, but formal definitions are also gathered in an extensive 172-term Glossary; appropriately, these terms appear in bold type the first time they are used in the text. A short, very short, annotated bibliography of standard classification textbooks and of manuals for the use of major classification schemes is provided. A detailed 11-page index completes the set of learning aids which will be useful to an audience of students in their effort to grasp the basic concepts of the theory and the practice of document classification in a traditional environment. Essential Classification is a fine textbook. However, this reviewer deplores the fact that it presents only a very "traditional" view of classification, without much reference to newer environments such as the Internet where classification also manifests itself in various forms. In Essential Classification, books are always used as examples, and we have to take the author's word that traditional classification practices and tools can also be applied to other types of documents and elsewhere than in the traditional library. Vanda Broughton writes, for example, that "Subject headings can't be used for physical arrangement" (p. 101), but this is not entirely true. Subject headings can be used for physical arrangement of vertical files, for example, with each folder bearing a simple or complex heading which is then used for internal organization. And if it is true that subject headings cannot be reproduced an the spine of [physical] books (p. 93), the situation is certainly different an the World Wide Web where subject headings as metadata can be most useful in ordering a collection of hot links. The emphasis is also an the traditional paperbased, rather than an the electronic version of classification schemes, with excellent justifications of course. The reality is, however, that supporting organizations (LC, OCLC, etc.) are now providing great quality services online, and that updates are now available only in an electronic format and not anymore on paper. E-based versions of classification schemes could be safely ignored in a theoretical text, but they have to be described and explained in a textbook published in 2005. One last comment: Professor Broughton tends to use the same term, "classification" to represent the process (as in classification is grouping) and the tool (as in constructing a classification, using a classification, etc.). Even in the Glossary where classification is first well-defined as a process, and classification scheme as "a set of classes ...", the definition of classification scheme continues: "the classification consists of a vocabulary (...) and syntax..." (p. 296-297). Such an ambiguous use of the term classification seems unfortunate and unnecessarily confusing in an otherwise very good basic textbook an categorization of concepts and subjects, document organization and subject representation."
  10. Ferl, T.E.; Millsap, L.: Subject cataloging : a how-to-do-it workbook (1991) 0.01
    0.011726889 = product of:
      0.05863444 = sum of:
        0.05863444 = weight(_text_:wide in 797) [ClassicSimilarity], result of:
          0.05863444 = score(doc=797,freq=2.0), product of:
            0.19962662 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.04505473 = queryNorm
            0.29372054 = fieldWeight in 797, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=797)
      0.2 = coord(1/5)
    
    Abstract
    This companion to the author's 'Descriptive cataloging' provides both the principles and the application of subject cataloging. For most libraries, there are two distinct features of this practice: subject classification and apllication of subject headings. This workbook presents a wide range of examples, including print and nonprint formats, as well as exercises for MARC tagging practice. The explanation of the rules applied are clear, with specific reference to the manual used. The section about subject cataloging strategies is excellent for all catalogers. Highly recommended for all libraries using either Dewey Decimal or Library of Congress classification, as well as the Library of Congress Subject Headings
  11. Schwartz, C.: Sorting out the Web : approaches to subject access (2001) 0.01
    0.010929741 = product of:
      0.054648705 = sum of:
        0.054648705 = weight(_text_:web in 2050) [ClassicSimilarity], result of:
          0.054648705 = score(doc=2050,freq=34.0), product of:
            0.14703658 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04505473 = queryNorm
            0.37166741 = fieldWeight in 2050, product of:
              5.8309517 = tf(freq=34.0), with freq of:
                34.0 = termFreq=34.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2050)
      0.2 = coord(1/5)
    
    Footnote
    Rez. in: KO 50(2003) no.1, S.45-46 (L.M. Given): "In her own preface to this work, the author notes her lifelong fascination with classification and order, as well as her more recent captivation with the Internet - a place of "chaos in need of organization" (xi). Sorting out the Web examines current efforts to organize the Web and is well-informed by the author's academic and professional expertise in information organization, information retrieval, and Web development. Although the book's level and tone are particularly relevant to a student audience (or others interested in Web-based subject access at an introductory level), it will also appeal to information professionals developing subject access systems across a range of information contexts. There are six chapters in the book, each describing and analyzing one core concept related to the organization of Web content. All topics are presented in a manner ideal for newcomers to the area, with clear definitions, examples, and visuals that illustrate the principles under discussion. The first chapter provides a brief introduction to developments in information technology, including an historical overview of information services, users' needs, and libraries' responses to the Internet. Chapter two introduces metadata, including core concepts and metadata formats. Throughout this chapter the author presents a number of figures that aptly illustrate the application of metadata in HTML, SGML, and MARC record environments, and the use of metadata tools (e.g., XML, RDF). Chapter three begins with an overview of classification theory and specific schemes, but the author devotes most of the discussion to the application of classification systems in the Web environment (e.g., Dewey, LCC, UDC). Web screen captures illustrate the use of these schemes for information sources posted to sites around the world. The chapter closes with a discussion of the future of classification; this is a particularly useful section as the author presents a listing of core journal and conference venues where new approaches to Web classification are explored. In chapter four, the author extends the discussion of classification to the use of controlled vocabularies. As in the first few chapters, the author first presents core background material, including reasons to use controlled vocabularies and the differences between preand post-coordinate indexing, and then discusses the application of specific vocabularies in the Web environment (e.g., Infomine's use of LCSH). The final section of the chapter explores failure in subject searching and the limitations of controlled vocabularies for the Web. Chapter five discusses one of the most common and fast-growing topics related to subject access an the Web: search engines. The author presents a clear definition of the term that encompasses classified search lists (e.g., Yahoo) and query-based engines (e.g., Alta Vista). In addition to historical background an the development of search engines, Schwartz also examines search service types, features, results, and system performance.
    The chapter concludes with an appendix of search tips that even seasoned searchers will appreciate; these tips cover the complete search process, from preparation to the examination of results. Chapter six is appropriately entitled "Around the Corner," as it provides the reader with a glimpse of the future of subject access for the Web. Text mining, visualization, machine-aided indexing, and other topics are raised here to whet the reader's appetite for what is yet to come. As the author herself notes in these final pages, librarians will likely increase the depth of their collaboration with software engineers, knowledge managers and others outside of the traditional library community, and thereby push the boundaries of subject access for the digital world. This final chapter leaves this reviewer wanting a second volume of the book, one that might explore these additional topics, as they evolve over the coming years. One characteristic of any book that addresses trends related to the Internet is how quickly the text becomes dated. However, as the author herself asserts, there are core principles related to subject analysis that stand the test of time, leaving the reader with a text that may be generalized well beyond the publication date. In this, Schwartz's text is similar to other recent publications (e.g., Jakob Nielsen's Web Usability, also published in 2001) that acknowledge the mutability of the Web, and therefore discuss core principles and issues that may be applied as the medium itself evolves. This approach to the writing makes this a useful book for those teaching in the areas of subject analysis, information retrieval and Web development for possible consideration as a course text. Although the websites used here may need to be supplemented with more current examples in the classroom, the core content of the book will be relevant for many years to come. Although one might expect that any book taking subject access as its focus world, itself, be easy to navigate, this is not always the case. In this text, however, readers will be pleased to find that no small detail in content access has been spared. The subject Index is thorough and well-crafted, and the inclusion of an exhaustive author index is particularly useful for quick reference. In addition, the table of contents includes sub-themes for each chapter, and a complete table of figures is provided. While the use of colour figures world greatly enhance the text, all black-andwhite images are clear and sharp, a notable fact given that most of the figures are screen captures of websites or database entries. In addition, the inclusion of comprehensive reference lists at the close of each chapter makes this a highly readable text for students and instructors alike; each section of the book can stand as its own "expert review" of the topic at hand. In both content and structure this text is highly recommended. It certainly meets its intended goal of providing a timely introduction to the methods and problems of subject access in the Web environment, and does so in a way that is readable, interesting and engaging."
  12. Bowman, J.H.: Essential Dewey (2005) 0.01
    0.010881628 = product of:
      0.02720407 = sum of:
        0.014995482 = weight(_text_:web in 359) [ClassicSimilarity], result of:
          0.014995482 = score(doc=359,freq=4.0), product of:
            0.14703658 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04505473 = queryNorm
            0.1019847 = fieldWeight in 359, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.015625 = fieldNorm(doc=359)
        0.0122085875 = product of:
          0.024417175 = sum of:
            0.024417175 = weight(_text_:22 in 359) [ClassicSimilarity], result of:
              0.024417175 = score(doc=359,freq=8.0), product of:
                0.15777399 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04505473 = queryNorm
                0.15476047 = fieldWeight in 359, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=359)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    In this book, John Bowman provides an introduction to the Dewey Decimal Classification suitable either for beginners or for librarians who are out of practice using Dewey. He outlines the content and structure of the scheme and then, through worked examples using real titles, Shows readers how to use it. Most chapters include practice exercises, to which answers are given at the end of the book. A particular feature of the book is the chapter dealing with problems of specific parts of the scheme. Later chapters offer advice and how to cope with compound subjects, and a brief introduction to the Web version of Dewey.
    Content
    "The contents of the book cover: This book is intended as an introduction to the Dewey Decimal Classification, edition 22. It is not a substitute for it, and I assume that you have it, all four volumes of it, by you while reading the book. I have deliberately included only a short section an WebDewey. This is partly because WebDewey is likely to change more frequently than the printed version, but also because this book is intended to help you use the scheme regardless of the manifestation in which it appears. If you have a subscription to WebDewey and not the printed volumes you may be able to manage with that, but you may then find my references to volumes and page numbers baffling. All the examples and exercises are real; what is not real is the idea that you can classify something without seeing more than the title. However, there is nothing that I can do about this, and I have therefore tried to choose examples whose titles adequately express their subject-matter. Sometimes when you look at the 'answers' you may feel that you have been cheated, but I hope that this will be seldom. Two people deserve special thanks. My colleague Vanda Broughton has read drafts of the book and made many suggestions. Ross Trotter, chair of the CILIP Dewey Decimal Classification Committee, who knows more about Dewey than anyone in Britain today, has commented extensively an it and as far as possible has saved me from error, as well as suggesting many improvements. What errors remain are due to me alone. Thanks are also owed to OCLC Online Computer Library Center, for permission to reproduce some specimen pages of DDC 22. Excerpts from the Dewey Decimal Classification are taken from the Dewey Decimal Classification and Relative Index, Edition 22 which is Copyright 2003 OCLC Online Computer Library Center, Inc. DDC, Dewey, Dewey Decimal Classification and WebDewey are registered trademarks of OCLC Online Computer Library Center, Inc."
    Footnote
    "The title says it all. The book contains the essentials for a fundamental understanding of the complex world of the Dewey Decimal Classification. It is clearly written and captures the essence in a concise and readable style. Is it a coincidence that the mysteries of the Dewey Decimal System are revealed in ten easy chapters? The typography and layout are clear and easy to read and the perfect binding withstood heavy use. The exercises and answers are invaluable in illustrating the points of the several chapters. The book is well structured. Chapter 1 provides an "Introduction and background" to classification in general and Dewey in particular. Chapter 2 describes the "Outline of the scheme" and the conventions in the schedules and tables. Chapter 3 covers "Simple subjects" and introduces the first of the exercises. Chapters 4 and 5 describe "Number-building" with "standard subdivisions" in the former and "other methods" in the latter. Chapter 6 provides an excellent description of "Preference order" and Chapter 7 deals with "Exceptions and options." Chapter 8 "Special subjects," while no means exhaustive, gives a thorough analysis of problems with particular parts of the schedules from "100 Philosophy" to "910 Geography" with a particular discussion of "'Persons treatment"' and "Optional treatment of biography." Chapter 9 treats "Compound subjects." Chapter 10 briefly introduces WebDewey and provides the URL for the Web Dewey User Guide http://www.oclc.org/support/documentation/dewey/ webdewey_userguide/; the section for exercises says: "You are welcome to try using WebDewey an the exercises in any of the preceding chapters." Chapters 6 and 7 are invaluable at clarifying the options and bases for choice when a work is multifaceted or is susceptible of classification under different Dewey Codes. The recommendation "... not to adopt options, but use the scheme as instructed" (p. 71) is clearly sound. As is, "What is vital, of course, is that you keep a record of the decisions you make and to stick to them. Any option Chosen must be used consistently, and not the whim of the individual classifier" (p. 71). The book was first published in the UK and the British overtones, which may seem quite charming to a Canadian, may be more difficult for readers from the United States. The correction of Dewey's spelling of Labor to Labo [u] r (p. 54) elicited a smile for the championing of lost causes and some relief that we do not have to cope with 'simplified speling.' The down-to-earth opinions of the author, which usually agree with those of the reviewer, add savour to the text and enliven what might otherwise have been a tedious text indeed. However, in the case of (p. 82):
    Object
    DDC-22
  13. Eversberg, B.: Was sind und was sollen bibliothekarische Datenformate (1994) 0.01
    0.010603407 = product of:
      0.053017035 = sum of:
        0.053017035 = weight(_text_:web in 1742) [ClassicSimilarity], result of:
          0.053017035 = score(doc=1742,freq=2.0), product of:
            0.14703658 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04505473 = queryNorm
            0.36057037 = fieldWeight in 1742, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.078125 = fieldNorm(doc=1742)
      0.2 = coord(1/5)
    
    Footnote
    Neuere Ausgaben nur über die Web-Seite: http://www.allegro-c.de/allegro/formate/formate.htm
  14. McIlwaine, I.C.: ¬The Universal Decimal Classification : a guide to its use (2000) 0.01
    0.009772408 = product of:
      0.048862036 = sum of:
        0.048862036 = weight(_text_:wide in 161) [ClassicSimilarity], result of:
          0.048862036 = score(doc=161,freq=2.0), product of:
            0.19962662 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.04505473 = queryNorm
            0.24476713 = fieldWeight in 161, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=161)
      0.2 = coord(1/5)
    
    Abstract
    This book is an extension and total revision of the author's earlier Guide to the use of UDC. The original was written in 1993 and in the intervening years much has happened with the classification. In particular, a much more rigorous approach has been undertaken in revision to ensure that the scheme is able to handle the requirements of a networked world. The book outlines the history and development of the Universal Decimal Classification, provides practical hints on its application and works through all the auxiliary and main tables highlighting aspects that need to be noted in applying the scheme. It also provides guidance on the use of the Master Reference File and discusses the ways in which the classification is used in the 21st century and its suitability as an aid to subject description in tagging metadata and consequently for application on the Internet. It is intended as a source for information about the scheme, for practical usage by classifiers in their daily work and as a guide to the student learning how to apply the classification. It is amply provided with examples to illustrate the many ways in which the scheme can be applied and will be a useful source for a wide range of information workers
  15. Chan, L.M.; Mitchell, J.S.: Dewey Decimal Classification : principles and applications (2003) 0.01
    0.0085460115 = product of:
      0.042730056 = sum of:
        0.042730056 = product of:
          0.08546011 = sum of:
            0.08546011 = weight(_text_:22 in 3247) [ClassicSimilarity], result of:
              0.08546011 = score(doc=3247,freq=2.0), product of:
                0.15777399 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04505473 = queryNorm
                0.5416616 = fieldWeight in 3247, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3247)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Object
    DDC-22
  16. Otto, M.: Suchstrategien im Internet : Search engines, Themenkataloge, Besprechungsdienste (1997) 0.01
    0.0074223853 = product of:
      0.037111927 = sum of:
        0.037111927 = weight(_text_:web in 2860) [ClassicSimilarity], result of:
          0.037111927 = score(doc=2860,freq=2.0), product of:
            0.14703658 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04505473 = queryNorm
            0.25239927 = fieldWeight in 2860, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2860)
      0.2 = coord(1/5)
    
    Abstract
    On the Internet, information can often be found within seconds - provided you do not get lost in the thicket of web links. The book shows readers how to reach their goal quickly and without detours using the search aids available on the Internet. The author describes how a search query is constructed. The next step is selecting the most effective search aid for the task at hand. Tables and overviews clearly illustrate the advantages and disadvantages of these aids and give an overview of the query options available.
  17. Poetzsch, E.: Information Retrieval : Einführung in Grundlagen und Methoden (1998) 0.01
    0.0074223853 = product of:
      0.037111927 = sum of:
        0.037111927 = weight(_text_:web in 239) [ClassicSimilarity], result of:
          0.037111927 = score(doc=239,freq=2.0), product of:
            0.14703658 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04505473 = queryNorm
            0.25239927 = fieldWeight in 239, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=239)
      0.2 = coord(1/5)
    
    Content
    Part 1: Foundations of information retrieval: key aspects of information retrieval relevant to conducting searches in practice: the steps of a search, prerequisites for online searching, an overview of database types and hosts, user aids, software tools, retrieval languages, and costs; Part 2: Methods of information retrieval: an introduction to the methods of information retrieval using selected examples of retrieval languages, Windows-based retrieval tools, and web search options via host-specific search interfaces
  18. Grundlagen der praktischen Information und Dokumentation : Handbuch zur Einführung in die Informationswissenschaft und -praxis (2013) 0.01
    0.0074223853 = product of:
      0.037111927 = sum of:
        0.037111927 = weight(_text_:web in 4382) [ClassicSimilarity], result of:
          0.037111927 = score(doc=4382,freq=8.0), product of:
            0.14703658 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04505473 = queryNorm
            0.25239927 = fieldWeight in 4382, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4382)
      0.2 = coord(1/5)
    
    Content
    B: Methodisches Bernard Bekavac: Web-Technologien - Rolf Assfalg: Metadaten - Ulrich Reimer: Wissensorganisation - Thomas Mandl: Text Mining und Data Mining - Harald Reiterer, Hans-Christian Jetter: Informationsvisualisierung - Katrin Weller: Ontologien - Stefan Gradmann: Semantic Web und Linked Open Data - Isabella Peters: Benutzerzentrierte Erschließungsverfahren - Ulrich Reimer: Empfehlungssysteme - Udo Hahn: Methodische Grundlagen der Informationslinguistik - Klaus Lepsky: Automatische Indexierung - Udo Hahn: Automatisches Abstracting - Ulrich Heid: Maschinelle Übersetzung - Bernd Ludwig: Spracherkennung - Norbert Fuhr: Modelle im Information Retrieval - Christa Womser-Hacker: Kognitives Information Retrieval - Alexander Binder, Frank C. Meinecke, Felix Bießmann, Motoaki Kawanabe, Klaus-Robert Müller: Maschinelles Lernen, Mustererkennung in der Bildverarbeitung
    C: Informationsorganisation Helmut Krcmar: Informations- und Wissensmanagement - Eberhard R. Hilf, Thomas Severiens: Vom Open Access für Dokumente und Daten zu Open Content in der Wissenschaft - Christa Womser-Hacker: Evaluierung im Information Retrieval - Joachim Griesbaum: Online-Marketing - Nicola Döring: Modelle der Computervermittelten Kommunikation - Harald Reiterer, Florian Geyer: Mensch-Computer-Interaktion - Steffen Staab: Web Science - Michael Weller, Elena Di Rosa: Lizenzierungsformen - Wolfgang Semar, Sascha Beck: Sicherheit von Informationssystemen - Stefanie Haustein, Dirk Tunger: Sziento- und bibliometrische Verfahren
    D: Informationsinfrastruktur Dirk Lewandowski: Suchmaschinen - Ben Kaden: Elektronisches Publizieren - Jens Olf, Uwe Rosemann: Dokumentlieferung - Reinhard Altenhöner, Sabine Schrimpf: Langzeitarchivierung - Hermann Huemer: Normung und Standardisierung - Ulrike Spree: Wörterbücher und Enzyklopädien - Joachim Griesbaum: Social Web - Jens Klump, Roland Bertelmann: Forschungsdaten - Michael Kerres, Annabell Preussler, Mandy Schiefner-Rohs: Lernen mit Medien - Angelika Menne-Haritz: Archive - Axel Ermert, Karin Ludewig: Museen - Hans-Christoph Hobohm: Bibliothek im Wandel - Thomas Breyer-Mayländer: Medien, Medienwirtschaft - Helmut Wittenzellner: Transformation von Buchhandel, Verlag und Druck - Elke Thomä, Heike Schwanbeck: Patentinformation und Patentinformationssysteme
  19. Oberhauser, O.: Automatisches Klassifizieren : Verfahren zur Erschließung elektronischer Dokumente (2004) 0.01
    0.007346256 = product of:
      0.03673128 = sum of:
        0.03673128 = weight(_text_:web in 2487) [ClassicSimilarity], result of:
          0.03673128 = score(doc=2487,freq=6.0), product of:
            0.14703658 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.04505473 = queryNorm
            0.24981049 = fieldWeight in 2487, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=2487)
      0.2 = coord(1/5)
    
    Abstract
    Automatic classification of text documents means the machine assignment of one or more notations from a given classification system to natural-language texts by means of a suitable algorithm. In the form of a comprehensive literature review, this study compiles the current state of knowledge on the potential uses of automatic classification for the subject indexing of electronic documents, in particular web resources. This concerns the methodological aspect on the one hand and, on the other, the experience gained in relevant projects and applications. Methodologically, the state of the art today is statistical approaches based on machine learning, which use already classified sample documents to build a model - a "classifier" - that can then be used to classify new documents. The four "large" projects on the automatic classification of web resources carried out in the 1990s at the universities of Lund, Wolverhampton and Oldenburg and at OCLC (Dublin, OH), all of which are analysed in detail in this study, still worked with simpler or older methodological approaches, however. Thanks not least to their use of established library classification systems, these projects represent an important gain in experience, even though they have so far not led to permanent services of satisfactory quality for the subject indexing of electronic resources. The analysis of the other relevant applications and projects shows that the most active efforts to put systems for the automatic classificatory indexing of electronic documents into routine operation are currently to be found in patent and media documentation. Semi-automatic systems that support human indexers with classification suggestions dominate here, however, since the classification quality achievable at present is usually not yet sufficient for full automation. Further interesting applications and projects exist in the area of web portals, search engines and (commercial) information services, whereas hardly any notable interest in the automatic classification of books or bibliographic records can be observed in the library sector. The study concludes with a discussion of the most important projects and applications and of several questions and topics relevant in connection with automatic classification.
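    As a purely illustrative sketch of the supervised approach the abstract describes as state of the art - building a "classifier" from already classified sample documents and using it to suggest notations for new ones - something like the following could be written in Python with scikit-learn. The training snippets and class notations are invented and stand in for the thousands of pre-classified documents a real system would require:

    # Toy corpus: pre-classified sample documents (invented for illustration)
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    train_docs = [
        "library classification and subject indexing of documents",
        "cataloguing rules and bibliographic records in libraries",
        "patent claims and prior art search in patent documentation",
        "patent classification schemes for technical inventions",
    ]
    train_labels = ["025", "608", "608", "608"][:2] + ["608", "608"]  # see below
    train_labels = ["025", "025", "608", "608"]   # invented class notations

    # Learn a statistical model (the "classifier") from the sample documents
    model = make_pipeline(TfidfVectorizer(), MultinomialNB())
    model.fit(train_docs, train_labels)

    # Propose a notation for a new, unseen document - in practice offered as a
    # suggestion to a human indexer, i.e. the semi-automatic mode that the
    # abstract notes still dominates in operational systems
    print(model.predict(["automatic indexing of electronic library resources"]))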
  20. Kaiser, U.: Handbuch Internet und Online Dienste : der kompetente Reiseführer für das digitale Netz (1996) 0.01
    0.0073251524 = product of:
      0.03662576 = sum of:
        0.03662576 = product of:
          0.07325152 = sum of:
            0.07325152 = weight(_text_:22 in 4589) [ClassicSimilarity], result of:
              0.07325152 = score(doc=4589,freq=2.0), product of:
                0.15777399 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04505473 = queryNorm
                0.46428138 = fieldWeight in 4589, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4589)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Series
    Heyne Business; 22/1019

Years

Languages

  • e 23
  • d 21

Types

  • m 39
  • a 3
  • s 2
  • el 1
  • x 1

Classifications