Search (27 results, page 1 of 2)

  • Active filter: theme_ss:"Auszeichnungssprachen"
  1. Holzheid, G.: Dublin Core, SGML und die Zukunft der Erschließung am Beispiel einer Studie zur Optimierung des Dokumentenmanagements einer großen Nichtregierungsorganisation (2005) 0.02
    0.024325678 = product of:
      0.09730271 = sum of:
        0.02000671 = weight(_text_:23 in 2192) [ClassicSimilarity], result of:
          0.02000671 = score(doc=2192,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.27719048 = fieldWeight in 2192, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2192)
        0.02000671 = weight(_text_:23 in 2192) [ClassicSimilarity], result of:
          0.02000671 = score(doc=2192,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.27719048 = fieldWeight in 2192, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2192)
        0.015301661 = weight(_text_:und in 2192) [ClassicSimilarity], result of:
          0.015301661 = score(doc=2192,freq=8.0), product of:
            0.044633795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02013827 = queryNorm
            0.34282678 = fieldWeight in 2192, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2192)
        0.02000671 = weight(_text_:23 in 2192) [ClassicSimilarity], result of:
          0.02000671 = score(doc=2192,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.27719048 = fieldWeight in 2192, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2192)
        0.021980919 = weight(_text_:der in 2192) [ClassicSimilarity], result of:
          0.021980919 = score(doc=2192,freq=16.0), product of:
            0.044984195 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02013827 = queryNorm
            0.4886365 = fieldWeight in 2192, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2192)
      0.25 = coord(5/20)
    
    Abstract
    More and more information objects are being published in digital form. This study describes the effects of digitization on library indexing and cataloguing methods: on the one hand the role of the markup languages SGML, HTML and XML as indexing instruments, on the other hand the Dublin Core metadata standard developed by the library community. Using the practical case "document management in a non-governmental organization", it examines whether subject access could be improved, for example through optimized full-text indexing, standardized metadata, or the integration of metadata with search engine technology. A user survey is used to test these approaches against practice and to derive concrete recommendations. The publication is based on a master's thesis in the postgraduate distance-learning programme Master of Arts (Library and Information Science) at Humboldt-Universität zu Berlin. Online version: http://www.ib.hu-berlin.de/~kumlau/handreichungen/h136/.
    Date
    14. 2.2008 20:23:24
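    The scoring tree above is Lucene "explain" output. As a rough cross-check, the printed figures can be recombined with the usual ClassicSimilarity formulas (tf = sqrt(freq), idf = ln(maxDocs/(docFreq+1)) + 1, per-term weight = queryWeight * fieldWeight); the sketch below is only an illustration using the numbers shown for doc 2192, not part of the record.

      import math

      # Reconstruction of the ClassicSimilarity branches in the explanation above.
      # docFreq, maxDocs, queryNorm, fieldNorm and freq are copied from the listing.

      def idf(doc_freq, num_docs):
          # ClassicSimilarity: idf = ln(numDocs / (docFreq + 1)) + 1
          return math.log(num_docs / (doc_freq + 1)) + 1

      def term_score(freq, doc_freq, num_docs, query_norm, field_norm):
          tf = math.sqrt(freq)                       # tf = sqrt(termFreq)
          term_idf = idf(doc_freq, num_docs)         # e.g. ~3.5840597 for "23"
          query_weight = term_idf * query_norm       # ~0.07217676
          field_weight = tf * term_idf * field_norm  # ~0.27719048
          return query_weight * field_weight         # ~0.02000671

      # Term "23" (freq=2, docFreq=3336), "und" (freq=8, docFreq=13101),
      # "der" (freq=16, docFreq=12875), all in field-normed doc 2192:
      w23  = term_score(2.0, 3336, 44218, 0.02013827, 0.0546875)
      w_und = term_score(8.0, 13101, 44218, 0.02013827, 0.0546875)
      w_der = term_score(16.0, 12875, 44218, 0.02013827, 0.0546875)

      total = (3 * w23 + w_und + w_der) * (5 / 20)   # coord(5/20) = 0.25
      print(round(total, 9))                         # ~0.02432568, matching the listed 0.024325678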
  2. Mayo, D.; Bowers, K.: ¬The devil's shoehorn : a case study of EAD to ArchivesSpace migration at a large university (2017) 0.01
    0.011881853 = product of:
      0.05940927 = sum of:
        0.017508736 = weight(_text_:software in 3373) [ClassicSimilarity], result of:
          0.017508736 = score(doc=3373,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.21915624 = fieldWeight in 3373, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3373)
        0.017508736 = weight(_text_:software in 3373) [ClassicSimilarity], result of:
          0.017508736 = score(doc=3373,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.21915624 = fieldWeight in 3373, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3373)
        0.0068830615 = product of:
          0.013766123 = sum of:
            0.013766123 = weight(_text_:29 in 3373) [ClassicSimilarity], result of:
              0.013766123 = score(doc=3373,freq=2.0), product of:
                0.070840135 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.02013827 = queryNorm
                0.19432661 = fieldWeight in 3373, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3373)
          0.5 = coord(1/2)
        0.017508736 = weight(_text_:software in 3373) [ClassicSimilarity], result of:
          0.017508736 = score(doc=3373,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.21915624 = fieldWeight in 3373, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3373)
      0.2 = coord(4/20)
    
    Abstract
    A band of archivists and IT professionals at Harvard took on a project to convert nearly two million descriptions of archival collection components from marked-up text into the ArchivesSpace archival metadata management system. Starting in the mid-1990s, Harvard was an alpha implementer of EAD, an SGML (later XML) text markup language for electronic inventories, indexes, and finding aids that archivists use to wend their way through the sometimes quirky filing systems that bureaucracies establish for their records or the utter chaos in which some individuals keep their personal archives. These pathfinder documents, designed to cope with messy reality, can themselves be difficult to classify. Portions of them are rigorously structured, while other parts are narrative. Early documents predate the establishment of the standard; many feature idiosyncratic encoding that had been through several machine conversions, while others were freshly encoded and fairly consistent. In this paper, we will cover the practical and technical challenges involved in preparing a large (900MiB) corpus of XML for ingest into an open-source archival information system (ArchivesSpace). This case study will give an overview of the project, discuss problem discovery and problem solving, and address the technical challenges, analysis, solutions, and decisions and provide information on the tools produced and lessons learned. The authors of this piece are Kate Bowers, Collections Services Archivist for Metadata, Systems, and Standards at the Harvard University Archive, and Dave Mayo, a Digital Library Software Engineer for Harvard's Library and Technology Services. Kate was heavily involved in both metadata analysis and later problem solving, while Dave was the sole full-time developer assigned to the migration project.
    Date
    31. 1.2017 13:29:56
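    The migration described above starts from EAD-encoded finding aids. As a purely illustrative sketch (the element names <c>, <did> and <unittitle> follow EAD conventions, but real finding aids are namespaced and far more deeply nested), extracting the component descriptions that a migration tool would map onto ArchivesSpace archival objects might look like this:

      import xml.etree.ElementTree as ET

      # A simplified, EAD-like inventory fragment, invented for illustration only.
      ead_fragment = """
      <ead>
        <archdesc level="collection">
          <dsc>
            <c level="series">
              <did><unittitle>Correspondence</unittitle><unitdate>1900-1950</unitdate></did>
              <c level="file">
                <did><unittitle>Letters to the registrar</unittitle></did>
              </c>
            </c>
          </dsc>
        </archdesc>
      </ead>
      """

      root = ET.fromstring(ead_fragment)
      # Walk every component <c> and collect title, date and level.
      for c in root.iter("c"):
          title = c.findtext("./did/unittitle", default="")
          date = c.findtext("./did/unitdate", default="")
          print(c.get("level"), "|", title, "|", date)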
  3. Salminen, A.: Markup languages (2009) 0.01
    0.009454718 = product of:
      0.06303145 = sum of:
        0.021010485 = weight(_text_:software in 3849) [ClassicSimilarity], result of:
          0.021010485 = score(doc=3849,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.2629875 = fieldWeight in 3849, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.046875 = fieldNorm(doc=3849)
        0.021010485 = weight(_text_:software in 3849) [ClassicSimilarity], result of:
          0.021010485 = score(doc=3849,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.2629875 = fieldWeight in 3849, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.046875 = fieldNorm(doc=3849)
        0.021010485 = weight(_text_:software in 3849) [ClassicSimilarity], result of:
          0.021010485 = score(doc=3849,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.2629875 = fieldWeight in 3849, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.046875 = fieldNorm(doc=3849)
      0.15 = coord(3/20)
    
    Abstract
    Current global communication of people and software applications over the Internet is facilitated by the use of markup languages. This entry introduces the principles and different types of markup, and the history behind the current markup languages. The basis of the modern markup languages is the Standard Generalized Markup Language (SGML) or its restricted form Extensible Markup Language (XML). This entry describes the markup techniques used in SGML and XML, gives examples of their use, and briefly describes some representative SGML and XML applications from different domains. An important factor in the success of XML has been the possibility to reuse markup vocabularies and combine vocabularies originating from different sources. This entry describes the concepts and methods facilitating the reuse of names from earlier defined vocabularies.
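    To make the vocabulary reuse mentioned above concrete, the sketch below combines a local element set with Dublin Core elements in one XML record via namespaces and reads it back; the "ex" namespace URI is invented for the example, while the Dublin Core URI is the real one.

      import xml.etree.ElementTree as ET

      # One record, two vocabularies: a made-up local vocabulary plus Dublin Core.
      doc = """
      <ex:record xmlns:ex="http://example.org/ns"
                 xmlns:dc="http://purl.org/dc/elements/1.1/">
        <dc:title>Markup languages</dc:title>
        <dc:creator>Salminen, A.</dc:creator>
        <ex:shelfmark>XML-2009-001</ex:shelfmark>
      </ex:record>
      """

      root = ET.fromstring(doc)
      DC = "{http://purl.org/dc/elements/1.1/}"
      print(root.findtext(DC + "title"))    # -> Markup languages
      print(root.findtext(DC + "creator"))  # -> Salminen, A.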
  4. Christophides, V.; Plexousakis, D.; Scholl, M.; Tourtounis, S.: On labeling schemes for the Semantic Web (2003) 0.01
    0.007716874 = product of:
      0.051445827 = sum of:
        0.017148608 = weight(_text_:23 in 3393) [ClassicSimilarity], result of:
          0.017148608 = score(doc=3393,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.23759183 = fieldWeight in 3393, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.046875 = fieldNorm(doc=3393)
        0.017148608 = weight(_text_:23 in 3393) [ClassicSimilarity], result of:
          0.017148608 = score(doc=3393,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.23759183 = fieldWeight in 3393, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.046875 = fieldNorm(doc=3393)
        0.017148608 = weight(_text_:23 in 3393) [ClassicSimilarity], result of:
          0.017148608 = score(doc=3393,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.23759183 = fieldWeight in 3393, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.046875 = fieldNorm(doc=3393)
      0.15 = coord(3/20)
    
    Date
    4. 7.1997 18:38:23
  5. Salgáné, M.M.: Our electronic era and bibliographic informations : computer-related bibliographic data formats, metadata formats and BDML (2005) 0.01
    0.0063031456 = product of:
      0.04202097 = sum of:
        0.014006989 = weight(_text_:software in 3005) [ClassicSimilarity], result of:
          0.014006989 = score(doc=3005,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.17532499 = fieldWeight in 3005, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03125 = fieldNorm(doc=3005)
        0.014006989 = weight(_text_:software in 3005) [ClassicSimilarity], result of:
          0.014006989 = score(doc=3005,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.17532499 = fieldWeight in 3005, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03125 = fieldNorm(doc=3005)
        0.014006989 = weight(_text_:software in 3005) [ClassicSimilarity], result of:
          0.014006989 = score(doc=3005,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.17532499 = fieldWeight in 3005, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03125 = fieldNorm(doc=3005)
      0.15 = coord(3/20)
    
    Abstract
    Using new communication technologies, libraries continually face new questions, possibilities and expectations. This study discusses library-related aspects of our electronic era and how computer-related data formats affect bibliographic data processing, and summarizes the most important results. The first bibliographic formats for exchanging bibliographic and related information in machine-readable form between different types of computer systems were created more than 30 years ago. The evolution of information technology has driven the improvement of computer systems. In addition to the development of computers and media types, the Internet has a great influence on data structures as well. Since the introduction of the MARC bibliographic format, the technology of data exchange between computers and between different computer systems has reached a very sophisticated stage and has contributed to the creation of new standards in this field. Today libraries work within this new infrastructure, which brings many challenges. One of the most significant challenges is moving from a relatively homogeneous bibliographic environment to a diverse one. Despite these challenges, such changes are achievable and necessary in order to exploit the possibilities of new metadata and technologies such as the Internet and XML (Extensible Markup Language). XML is an open standard, a universal language for data on the Web; it is a standard now nearly six years old, designed for the description and computer-based management of (semi-)structured data and structured texts. XML gives developers the power to deliver structured data from a wide variety of applications, and it is also an ideal format for server-to-server transfer of structured data. XML is not limited to Internet use and is an especially valuable tool for libraries. In fact, XML's main strength - organizing information - makes it perfect for exchanging data between different systems. Tools that work with XML can be used to process XML records without incurring the additional costs of in-house software development. In addition, XML is also a suitable format for library web services. The Department of Computer-related Graphic Design and Library and Information Sciences of Debrecen University launched the BDML (Bibliographic Description Markup Language) development project in order to standardize bibliographic description with the help of XML.
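    As a small illustration of XML as a system-neutral exchange format for bibliographic data (the element names below are invented for the example and are not the actual BDML schema), one system might serialize a record and another parse it back:

      import xml.etree.ElementTree as ET

      # Hypothetical bibliographic record to be exchanged between two systems.
      record = {"title": "Learning XML", "creator": "Ray, Erik T.", "year": "2003"}

      root = ET.Element("bibRecord")
      for field, value in record.items():
          ET.SubElement(root, field).text = value
      payload = ET.tostring(root, encoding="unicode")   # what one system would send

      received = ET.fromstring(payload)                  # what another system would parse
      print({child.tag: child.text for child in received})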
  6. Miller, E.; Schloss. B.; Lassila, O.; Swick, R.R.: Resource Description Framework (RDF) : model and syntax (1997) 0.01
    0.005515252 = product of:
      0.036768343 = sum of:
        0.012256115 = weight(_text_:software in 5903) [ClassicSimilarity], result of:
          0.012256115 = score(doc=5903,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.15340936 = fieldWeight in 5903, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02734375 = fieldNorm(doc=5903)
        0.012256115 = weight(_text_:software in 5903) [ClassicSimilarity], result of:
          0.012256115 = score(doc=5903,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.15340936 = fieldWeight in 5903, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02734375 = fieldNorm(doc=5903)
        0.012256115 = weight(_text_:software in 5903) [ClassicSimilarity], result of:
          0.012256115 = score(doc=5903,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.15340936 = fieldWeight in 5903, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02734375 = fieldNorm(doc=5903)
      0.15 = coord(3/20)
    
    Abstract
    RDF - the Resource Description Framework - is a foundation for processing metadata; it provides interoperability between applications that exchange machine-understandable information on the Web. RDF emphasizes facilities to enable automated processing of Web resources. RDF metadata can be used in a variety of application areas; for example: in resource discovery to provide better search engine capabilities; in cataloging for describing the content and content relationships available at a particular Web site, page, or digital library; by intelligent software agents to facilitate knowledge sharing and exchange; in content rating; in describing collections of pages that represent a single logical "document"; for describing intellectual property rights of Web pages, and in many others. RDF with digital signatures will be key to building the "Web of Trust" for electronic commerce, collaboration, and other applications. Metadata is "data about data" or specifically in the context of RDF "data describing web resources." The distinction between "data" and "metadata" is not an absolute one; it is a distinction created primarily by a particular application. Many times the same resource will be interpreted in both ways simultaneously. RDF encourages this view by using XML as the encoding syntax for the metadata. The resources being described by RDF are, in general, anything that can be named via a URI. The broad goal of RDF is to define a mechanism for describing resources that makes no assumptions about a particular application domain, nor defines the semantics of any application domain. The definition of the mechanism should be domain neutral, yet the mechanism should be suitable for describing information about any domain. This document introduces a model for representing RDF metadata and one syntax for expressing and transporting this metadata in a manner that maximizes the interoperability of independently developed web servers and clients. The syntax described in this document is best considered as a "serialization syntax" for the underlying RDF representation model. The serialization syntax is XML, XML being the W3C's work-in-progress to define a richer Web syntax for a variety of applications. RDF and XML are complementary; there will be alternate ways to represent the same RDF data model, some more suitable for direct human authoring. Future work may lead to including such alternatives in this document.
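    A minimal sketch of the kind of RDF metadata the abstract describes, serialized in XML. It assumes the Python rdflib package purely for illustration, and the example.org names are invented; any RDF toolkit would serve the same purpose.

      from rdflib import Graph, Literal, Namespace, URIRef
      from rdflib.namespace import DC

      # A few statements about a web resource, then serialized as RDF/XML.
      g = Graph()
      page = URIRef("http://example.org/report")
      EX = Namespace("http://example.org/terms/")

      g.add((page, DC.title, Literal("Resource Description Framework (RDF): model and syntax")))
      g.add((page, DC.creator, Literal("Miller, E.")))
      g.add((page, EX.status, Literal("W3C working draft")))

      print(g.serialize(format="xml"))  # RDF metadata encoded in XML, as the abstract describes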
  7. Patrick, D.A.: XML in der Praxis : Unternehmensübergreifende Vorteile durch Enterprise Content Management (1999) 0.01
    0.0052692858 = product of:
      0.03512857 = sum of:
        0.015301661 = weight(_text_:und in 1461) [ClassicSimilarity], result of:
          0.015301661 = score(doc=1461,freq=8.0), product of:
            0.044633795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02013827 = queryNorm
            0.34282678 = fieldWeight in 1461, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1461)
        0.0134605095 = weight(_text_:der in 1461) [ClassicSimilarity], result of:
          0.0134605095 = score(doc=1461,freq=6.0), product of:
            0.044984195 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02013827 = queryNorm
            0.29922754 = fieldWeight in 1461, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1461)
        0.006366401 = product of:
          0.019099202 = sum of:
            0.019099202 = weight(_text_:22 in 1461) [ClassicSimilarity], result of:
              0.019099202 = score(doc=1461,freq=2.0), product of:
                0.07052079 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02013827 = queryNorm
                0.2708308 = fieldWeight in 1461, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1461)
          0.33333334 = coord(1/3)
      0.15 = coord(3/20)
    
    Abstract
    As companies seek success in an increasingly competitive global market, content management has become attractive as an information solution. Content management systems can help reduce the enormous investments companies make in information. As with any new technology, there is as yet no clear understanding of what content management actually comprises. This article examines the problems and technologies associated with content management and describes the current state of the field. Content management is more than just a new technology. At its core, it allows companies to use information to build closer relationships along the value chain, connecting customers, distribution partners, suppliers and manufacturers.
    Date
    30. 3.2003 10:50:22
    Source
    nfd Information - Wissenschaft und Praxis. 50(1999) H.1, S.5-12
  8. Schröder, A.: Web der Zukunft : RDF - Der erste Schritt zum semantischen Web 0.01
    0.00525374 = product of:
      0.035024934 = sum of:
        0.0123656085 = weight(_text_:und in 1457) [ClassicSimilarity], result of:
          0.0123656085 = score(doc=1457,freq=4.0), product of:
            0.044633795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02013827 = queryNorm
            0.27704588 = fieldWeight in 1457, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=1457)
        0.015383439 = weight(_text_:der in 1457) [ClassicSimilarity], result of:
          0.015383439 = score(doc=1457,freq=6.0), product of:
            0.044984195 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02013827 = queryNorm
            0.34197432 = fieldWeight in 1457, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0625 = fieldNorm(doc=1457)
        0.007275887 = product of:
          0.02182766 = sum of:
            0.02182766 = weight(_text_:22 in 1457) [ClassicSimilarity], result of:
              0.02182766 = score(doc=1457,freq=2.0), product of:
                0.07052079 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02013827 = queryNorm
                0.30952093 = fieldWeight in 1457, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1457)
          0.33333334 = coord(1/3)
      0.15 = coord(3/20)
    
    Abstract
    The Resource Description Framework (RDF) has been available as a W3C Recommendation since 22 February 1999. But what lies behind this standard, which is supposed to usher in the age of the Semantic Web? This article explains what RDF means, what it is used for, what advantages it has over XML, and how RDF is applied.
    Source
    XML Magazin und Web Services. 2003, H.1, S.40-43
  9. XML in libraries (2002) 0.00
    0.0046392865 = product of:
      0.023196433 = sum of:
        0.0070034945 = weight(_text_:software in 3100) [ClassicSimilarity], result of:
          0.0070034945 = score(doc=3100,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.087662496 = fieldWeight in 3100, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.015625 = fieldNorm(doc=3100)
        0.0021859515 = weight(_text_:und in 3100) [ClassicSimilarity], result of:
          0.0021859515 = score(doc=3100,freq=2.0), product of:
            0.044633795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02013827 = queryNorm
            0.048975255 = fieldWeight in 3100, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.015625 = fieldNorm(doc=3100)
        0.0070034945 = weight(_text_:software in 3100) [ClassicSimilarity], result of:
          0.0070034945 = score(doc=3100,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.087662496 = fieldWeight in 3100, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.015625 = fieldNorm(doc=3100)
        0.0070034945 = weight(_text_:software in 3100) [ClassicSimilarity], result of:
          0.0070034945 = score(doc=3100,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.087662496 = fieldWeight in 3100, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.015625 = fieldNorm(doc=3100)
      0.2 = coord(4/20)
    
    Content
    Joint review covering: (1) The ABCs of XML: The Librarian's Guide to the eXtensible Markup Language. Norman Desmarais. Houston, TX: New Technology Press, 2000. 206 pp. $28.00. (ISBN: 0-9675942-0-0) and (2) Learning XML. Erik T. Ray. Sebastopol, CA: O'Reilly & Associates, 2003. 400 pp. $34.95. (ISBN: 0-596-00420-6)
    Footnote
    Tennant's collection covers a variety of well- and lesser-known XML-based pilot and prototype projects undertaken by libraries around the world. Some of the projects included are: Stanford's XMLMARC conversion, Oregon State's use of XML in interlibrary loaning, e-books (California Digital Library) and electronic scholarly publishing (University of Michigan), the Washington Research Library Consortium's XML-based Web Services, and using TEI Lite to support indexing (Halton Hills Public Library). Of the 13 projects presented, nine are sited in academe, three are state library endeavors, and one is an American public library initiative. The projects are gathered into sections grouped by seven library applications: the use of XML in library catalog records, interlibrary loan, cataloging and indexing, collection building, databases, data migration, and systems interoperability. Each project is introduced with a few paragraphs of background information. The project reports, averaging about 13 pages each, include project goals and justification, project description, challenges and lessons learned (successes and failures), future plans, implications of the work, contact information for the individual(s) responsible for the project, and relevant Web links and resources. The clear strengths of this collection are in the details and the consistency of presentation. The concise project write-ups flow well and encourage interested readers to follow up via personal contacts and URLs. The sole weakness is the price. XML in Libraries will excite and inspire institutions and organizations with technically adept staff resources and visionary leaders.
    Erik Ray has written a how-to book. Unlike most, Learning XML is not aimed at the professional programming community. The intended audience is readers familiar with structured markup (HTML, TeX, etc.) and Web concepts (hypertext links, data representation). In the first six chapters, Ray introduces XML's main concepts and tools for writing, viewing, testing, and transforming XML (chapter 1), describes basic syntax (chapter 2), discusses linking with XLink and XPointer (chapter 3), introduces Cascading Style Sheets for use with XML (chapter 4), explains document type definitions (DTDs) and schemas (chapter 5), and covers XSLT stylesheets and XPath (chapter 6). Chapter 7 introduces Unicode, internationalization and language support, including CSS and XSLT encoding. Chapter 8 is an overview of writing software for processing XML, and includes the Perl code for an XML syntax checker.
    This work is written very accessibly for nonprogrammers. Writers, designers, and students just starting to acquire Web technology skills will find Ray's style approachable. Concepts are introduced in a logical flow and explained clearly. Code samples (130+), illustrations and screen shots (50+), and numerous tables are distributed throughout the text. Ray uses a modified DocBook DTD and a checkbook example throughout, introducing concepts in early chapters and adding new ones to them. Readers become familiar with the code and its evolution through repeated exposure. The code for converting the "barebones DocBook" DTD (10 pages of code) to HTML via an XSLT stylesheet occupies 19 pages. Both code examples allow the learner to see an accumulation of snippets incorporated into a sensible whole. While experienced programmers might not need this type of support, nonprogrammers certainly do. Using the checkbook example is an inspired choice: most of us are familiar with personal checking, even if few of us would build an XML application for it. Learning XML is an excellent textbook. I've used it for several years as a recommended text for adult continuing education courses and workshops."
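    The review notes that chapter 8 of Learning XML includes Perl code for an XML syntax checker. A rough stand-in for the same idea (well-formedness checking only, written here in Python rather than the book's Perl) could look like this:

      import sys
      import xml.etree.ElementTree as ET

      # Report whether each file given on the command line parses as well-formed XML.
      def check_well_formed(path):
          try:
              ET.parse(path)
          except ET.ParseError as err:
              print(f"{path}: not well-formed XML ({err})")
              return False
          print(f"{path}: well-formed")
          return True

      if __name__ == "__main__":
          for filename in sys.argv[1:]:
              check_well_formed(filename)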
  10. Behme, H.; Mintert, S.: XML in der Praxis : Professionelles Web-Publishing mit der Extensible Markup Language (2000) 0.00
    0.003625589 = product of:
      0.03625589 = sum of:
        0.019056682 = weight(_text_:und in 1465) [ClassicSimilarity], result of:
          0.019056682 = score(doc=1465,freq=38.0), product of:
            0.044633795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02013827 = queryNorm
            0.42695636 = fieldWeight in 1465, product of:
              6.164414 = tf(freq=38.0), with freq of:
                38.0 = termFreq=38.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=1465)
        0.017199209 = weight(_text_:der in 1465) [ClassicSimilarity], result of:
          0.017199209 = score(doc=1465,freq=30.0), product of:
            0.044984195 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02013827 = queryNorm
            0.3823389 = fieldWeight in 1465, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.03125 = fieldNorm(doc=1465)
      0.1 = coord(2/20)
    
    Abstract
    XML is changing the Web like nothing before it. The book explains both the idea behind XML and its possible applications. Besides the complete specification, it offers above all practical tips for using XML on the WWW. Every web publisher who wants to shed the limitations of HTML will find an indispensable foundation and reference in this book. Explanations and tips on important aspects such as XSL, RDF and linking round out the book.
    Content
    XML in der Praxis was the first German book on what was then the brand-new meta-markup language XML. With the second, revised edition, the authors Behme and Mintert seamlessly continue the professionalism of the first edition. Up to date with the W3C's adopted specifications around XML, and with the introductory yet in-depth explanations and examples familiar from the first edition, XML in der Praxis is a largely reworked treasure chest for XML beginners and professionals that more than deserves the label "2nd, expanded edition". The existing chapters have been updated; the introduction to documents, XML on the Web, an XML quick start and the first DTD are still brilliant. New are chapters on namespaces and XPath, an expanded section on XML linking, and an overview of stylesheet languages and XSL transformation. Examples en masse, detailed and thorough. A chapter on XML and Apache, including Cocoon, is particularly interesting, and XHTML and WML are also covered adequately. As before, the book again contains the German translation of the XML 1.0 specification. Anyone who has still not had enough and wants to dive into the deepest depths of the meta-markup language is referred to Professional XML; before that, however, XML in der Praxis is obligatory introductory reading. XML is the afterburner the Web needs for its onward flight into the future, and Behme and Mintert open up a view of the machinery so that the technology can be put to use. Read the book and take off: markup has never been so boundless!
  11. XML & Co : Die W3C-Spezifikationen für Dokumenten- und Datenarchitektur (2002) 0.00
    0.0034037405 = product of:
      0.034037404 = sum of:
        0.018025815 = weight(_text_:und in 197) [ClassicSimilarity], result of:
          0.018025815 = score(doc=197,freq=34.0), product of:
            0.044633795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02013827 = queryNorm
            0.40386027 = fieldWeight in 197, product of:
              5.8309517 = tf(freq=34.0), with freq of:
                34.0 = termFreq=34.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=197)
        0.01601159 = weight(_text_:der in 197) [ClassicSimilarity], result of:
          0.01601159 = score(doc=197,freq=26.0), product of:
            0.044984195 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02013827 = queryNorm
            0.35593814 = fieldWeight in 197, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.03125 = fieldNorm(doc=197)
      0.1 = coord(2/20)
    
    Abstract
    The book was written primarily for advanced programmers and professional webmasters. Beginners should first get to grips with the basic syntax of XML and its related standards. As a reference and guide, XML & Co can be recommended in every respect.
    Footnote
    Review in: XML Magazin und Web Services 2003, no. 1, p. 13 (T. Weitzel): "The standards of the XML family are now also available in German translation with commentary. Under the direction of Stefan Mintert, who already translated the XML specification itself in 1998, a collection of all the important specifications from the XML family has been published by Addison-Wesley, together with a team of well-known XML specialists and with the support of the German-Austrian W3C office. More precisely, these are the annotated and translated versions of the standards from the W3C Activity Domain Architecture/XML, namely, in chronological order, XML, namespaces, associating stylesheets with XML documents, XPath, XSLT, XML Schema parts 0-3 (primer, structures and datatypes), XLink, XML Base and the XML Information Set. The goal of the "edition W3C" project was to make the content of the individual specifications easier to understand and thereby also to contribute to the spread of the standards. The translations are therefore not only available in book form; beyond the normative originals they have been annotated to varying degrees by the respective experts. XPath in particular is furnished so extensively and lovingly with explanations and illustrations that a comparable treatment of, say, the Schema specifications would certainly have gone beyond the scope of the book. The hoped-for feedback on the various translations and commentaries is intended to help align the planned second book of the edition W3C series, on XHTML and CSS, with readers' wishes. Interestingly, the book itself was produced in XML (XMLspec DTD). As in "XML in der Praxis" (together with Henning Behme), Stefan Mintert used XML for the edition W3C in textbook-style cross-media fashion. The keyword index (www.edition-w3c.de/gesamtindex.html), for example, could be generated from the individual specifications directly from a single source. A special feature of the project, initiated in 1999, is that it is the only print publication legitimized by the W3C at all since the WWW Journal (W3J) was discontinued at O'Reilly in 1997. Moreover, the edition W3C is the only official translation project, that is, one supported by the local W3C office. Considering that the W3C, besides Boston, has host sites in France and Japan, the question arises whether translations could be in even greater demand there and could thus also contribute to the further adoption of the XML family. The translations, though without the commentaries, are also linked from the project's website at www.edition-w3c.de/. Further translations, for example of XHTML and CSS Level 2, are available there as well."
  12. Learning XML (2003) 0.00
    0.0031515728 = product of:
      0.021010485 = sum of:
        0.0070034945 = weight(_text_:software in 3101) [ClassicSimilarity], result of:
          0.0070034945 = score(doc=3101,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.087662496 = fieldWeight in 3101, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.015625 = fieldNorm(doc=3101)
        0.0070034945 = weight(_text_:software in 3101) [ClassicSimilarity], result of:
          0.0070034945 = score(doc=3101,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.087662496 = fieldWeight in 3101, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.015625 = fieldNorm(doc=3101)
        0.0070034945 = weight(_text_:software in 3101) [ClassicSimilarity], result of:
          0.0070034945 = score(doc=3101,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.087662496 = fieldWeight in 3101, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.015625 = fieldNorm(doc=3101)
      0.15 = coord(3/20)
    
    Footnote
    See the joint review of Desmarais' "The ABCs of XML" and Ray's "Learning XML" under entry 9 (XML in libraries) above.
  13. ¬The ABCs of XML : the librarian's guide to the eXtensible Markup Language (2000) 0.00
    0.0031515728 = product of:
      0.021010485 = sum of:
        0.0070034945 = weight(_text_:software in 3102) [ClassicSimilarity], result of:
          0.0070034945 = score(doc=3102,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.087662496 = fieldWeight in 3102, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.015625 = fieldNorm(doc=3102)
        0.0070034945 = weight(_text_:software in 3102) [ClassicSimilarity], result of:
          0.0070034945 = score(doc=3102,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.087662496 = fieldWeight in 3102, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.015625 = fieldNorm(doc=3102)
        0.0070034945 = weight(_text_:software in 3102) [ClassicSimilarity], result of:
          0.0070034945 = score(doc=3102,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.087662496 = fieldWeight in 3102, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.015625 = fieldNorm(doc=3102)
      0.15 = coord(3/20)
    
    Footnote
    See the joint review of Desmarais' "The ABCs of XML" and Ray's "Learning XML" under entry 9 (XML in libraries) above.
  14. Michel, T.F.: XML kompakt : Eine praktische Einführung (1999) 0.00
    0.0030048136 = product of:
      0.030048136 = sum of:
        0.017487612 = weight(_text_:und in 1462) [ClassicSimilarity], result of:
          0.017487612 = score(doc=1462,freq=8.0), product of:
            0.044633795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02013827 = queryNorm
            0.39180204 = fieldWeight in 1462, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=1462)
        0.012560525 = weight(_text_:der in 1462) [ClassicSimilarity], result of:
          0.012560525 = score(doc=1462,freq=4.0), product of:
            0.044984195 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02013827 = queryNorm
            0.27922085 = fieldWeight in 1462, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0625 = fieldNorm(doc=1462)
      0.1 = coord(2/20)
    
    Abstract
    XML is a basic technology of information processing. It provides the rules for defining information types independently of applications, systems and media. The author therefore does not reduce XML, including its derivatives XLink and XPointer, to Internet technologies. Rather, he treats XML as a foundation for the exchange, management and presentation of information in networks and databases.
  15. Jackenkroll, M.: Sprache mit Potenzial : XML als Grundlage des Cross-Media-Publishing (2006) 0.00
    0.0028807095 = product of:
      0.028807094 = sum of:
        0.016394636 = weight(_text_:und in 1629) [ClassicSimilarity], result of:
          0.016394636 = score(doc=1629,freq=18.0), product of:
            0.044633795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02013827 = queryNorm
            0.3673144 = fieldWeight in 1629, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1629)
        0.012412459 = weight(_text_:der in 1629) [ClassicSimilarity], result of:
          0.012412459 = score(doc=1629,freq=10.0), product of:
            0.044984195 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02013827 = queryNorm
            0.27592933 = fieldWeight in 1629, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1629)
      0.1 = coord(2/20)
    
    Abstract
    The Extensible Markup Language (XML) is a meta-markup language that the World Wide Web Consortium (W3C) established in 1998 as a new recommendation for web applications. XML documents define the hierarchical structure and the content of documents but contain no layout information; layout is defined in so-called stylesheets. XML's great potential for cross-media publishing rests on this strict separation of structure and layout: with several stylesheets that all refer to the same XML document, different output products can be generated from a single body of data with relatively little effort. The author first gives a short overview of the basics, scope and functionality of XML and of some related specifications developed in connection with XML. She then addresses reference works such as dictionaries and encyclopedias, focusing on how publishers use XML today to produce such works and on the advantages and disadvantages this kind of data markup brings. Building on this theoretical part, the example of a geographical encyclopedia article is used to demonstrate in practice how different output products can be generated from a body of data captured only once. A worthwhile read for computer scientists, programmers, web designers and other interested readers who want to learn more about this language.
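    The stylesheet-based cross-media idea described above can be sketched as follows; this assumes the lxml package for XSLT support and uses a made-up one-element source document, so it is an illustration rather than the book's own example.

      from lxml import etree  # lxml is assumed here; the standard library has no XSLT support

      # One XML source, two stylesheets -> two output products.
      source = etree.XML("<entry><name>Berlin</name><population>3600000</population></entry>")

      to_html = etree.XSLT(etree.XML("""
      <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
        <xsl:template match="/entry">
          <p><b><xsl:value-of select="name"/></b> (<xsl:value-of select="population"/>)</p>
        </xsl:template>
      </xsl:stylesheet>"""))

      to_text = etree.XSLT(etree.XML("""
      <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
        <xsl:output method="text"/>
        <xsl:template match="/entry"><xsl:value-of select="name"/>: <xsl:value-of select="population"/></xsl:template>
      </xsl:stylesheet>"""))

      print(str(to_html(source)))  # HTML-flavoured output product
      print(str(to_text(source)))  # plain-text output product from the same source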
  16. Erbarth, M.: Wissensrepräsentation mit semantischen Netzen : Grundlagen mit einem Anwendungsbeispiel für das Multi-Channel-Publishing (2006) 0.00
    0.0028807095 = product of:
      0.028807094 = sum of:
        0.016394636 = weight(_text_:und in 714) [ClassicSimilarity], result of:
          0.016394636 = score(doc=714,freq=18.0), product of:
            0.044633795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02013827 = queryNorm
            0.3673144 = fieldWeight in 714, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=714)
        0.012412459 = weight(_text_:der in 714) [ClassicSimilarity], result of:
          0.012412459 = score(doc=714,freq=10.0), product of:
            0.044984195 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02013827 = queryNorm
            0.27592933 = fieldWeight in 714, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0390625 = fieldNorm(doc=714)
      0.1 = coord(2/20)
    
    Abstract
    "Wir ertrinken in Informationen, aber uns dürstet nach Wissen." Trendforscher John Naisbitt drückt hiermit aus, dass es dem Menschen heute nicht mehr möglich ist die Informationsflut, die sich über ihn ergießt, effizient zu verwerten. Er lebt in einer globalisierten Welt mit einem vielfältigen Angebot an Medien, wie Presse, Radio, Fernsehen und dem Internet. Die Problematik der mangelnden Auswertbarkeit von großen Informationsmengen ist dabei vor allem im Internet akut. Die Quantität, Verbreitung, Aktualität und Verfügbarkeit sind die großen Vorteile des World Wide Web (WWW). Die Probleme liegen in der Qualität und Dichte der Informationen. Das Information Retrieval muss effizienter gestaltet werden, um den wirtschaftlichen und kulturellen Nutzen einer vernetzten Welt zu erhalten.Matthias Erbarth beleuchtet zunächst genau diesen Themenkomplex, um im Anschluss ein Format für elektronische Dokumente, insbesondere kommerzielle Publikationen, zu entwickeln. Dieses Anwendungsbeispiel stellt eine semantische Inhaltsbeschreibung mit Metadaten mittels XML vor, wobei durch Nutzung von Verweisen und Auswertung von Zusammenhängen insbesondere eine netzförmige Darstellung berücksichtigt wird.
    Classification
    AP 15860 Allgemeines / Medien- und Kommunikationswissenschaften, Kommunikationsdesign / Formen der Kommunikation und des Kommunikationsdesigns / Kommunikationsdesign in elektronischen Medien
    RVK
    AP 15860 Allgemeines / Medien- und Kommunikationswissenschaften, Kommunikationsdesign / Formen der Kommunikation und des Kommunikationsdesigns / Kommunikationsdesign in elektronischen Medien
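    As announced in the abstract, here is a small illustrative sketch of a network-like content description, using only Python's standard library; the <topic> vocabulary and the "related" attribute are made up for the example and do not reproduce Erbarth's format.
# Hypothetical <topic> elements carry metadata and reference each other
# via idref-style "related" attributes; resolving those references
# turns the flat XML into a small semantic net (adjacency list).
import xml.etree.ElementTree as ET

DOC = ET.fromstring("""<topics>
  <topic id="xml" label="XML" related="xslt publishing"/>
  <topic id="xslt" label="XSLT" related="xml"/>
  <topic id="publishing" label="Multi-Channel-Publishing" related="xml"/>
</topics>""")

# Resolve the links into an adjacency list (the "net").
graph = {t.get("id"): t.get("related", "").split()
         for t in DOC.findall("topic")}

for node, neighbours in graph.items():
    print(node, "->", ", ".join(neighbours))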
  17. Stein, M.: Workshop XML (2001) 0.00
    0.002876217 = product of:
      0.02876217 = sum of:
        0.015301661 = weight(_text_:und in 1463) [ClassicSimilarity], result of:
          0.015301661 = score(doc=1463,freq=8.0), product of:
            0.044633795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02013827 = queryNorm
            0.34282678 = fieldWeight in 1463, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1463)
        0.0134605095 = weight(_text_:der in 1463) [ClassicSimilarity], result of:
          0.0134605095 = score(doc=1463,freq=6.0), product of:
            0.044984195 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02013827 = queryNorm
            0.29922754 = fieldWeight in 1463, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1463)
      0.1 = coord(2/20)
    
    Abstract
    XML grows more popular by the day. As a universal data format it is becoming ever more important, not only for storing data but also for transmitting information across the Internet. The sheer range of possibilities, however, makes it difficult to keep track of every aspect of XML. This book therefore puts the emphasis on practical know-how. The author not only has extensive knowledge of XML, he is also an experienced web developer who knows the pitfalls of everyday programming. He has put together a book that, with hands-on exercises and tips, guides you to elegant and sensible solutions when working with XML. Deepen your XML knowledge and learn, through practical exercises, more about XML Schema, namespaces, XPath, XLink, and XSLT.
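    As a taste of the workshop topics, the following sketch shows namespace-aware XPath matching with Python's standard library; the "bk" vocabulary, its URN, and the sample data are invented purely for illustration.
# ElementTree supports a useful subset of XPath, including attribute
# predicates, and resolves prefixes through a namespaces mapping.
import xml.etree.ElementTree as ET

DOC = ET.fromstring("""<bk:books xmlns:bk="urn:example:books">
  <bk:book year="2001"><bk:title>Workshop XML</bk:title></bk:book>
  <bk:book year="2006"><bk:title>Sprache mit Potenzial</bk:title></bk:book>
</bk:books>""")

NS = {"bk": "urn:example:books"}

# Select the titles of all books published in 2006.
for title in DOC.findall("bk:book[@year='2006']/bk:title", NS):
    print(title.text)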
  18. Bold, M.: ¬Die Zukunft des Web : Standards für das Web der Zukunft (2004) 0.00
    0.0026559052 = product of:
      0.026559051 = sum of:
        0.015457011 = weight(_text_:und in 1725) [ClassicSimilarity], result of:
          0.015457011 = score(doc=1725,freq=4.0), product of:
            0.044633795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02013827 = queryNorm
            0.34630734 = fieldWeight in 1725, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=1725)
        0.01110204 = weight(_text_:der in 1725) [ClassicSimilarity], result of:
          0.01110204 = score(doc=1725,freq=2.0), product of:
            0.044984195 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02013827 = queryNorm
            0.2467987 = fieldWeight in 1725, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.078125 = fieldNorm(doc=1725)
      0.1 = coord(2/20)
    
    Abstract
    New technologies and standards are set to shape the future of the Web. Internet Professionell explains what XML, XSLT, XHTML, XPath, and XLink are all about.
  19. as: XML: Extensible Markup Language : I: Was ist XML? (2001) 0.00
    0.0024551868 = product of:
      0.024551868 = sum of:
        0.015457011 = weight(_text_:und in 4950) [ClassicSimilarity], result of:
          0.015457011 = score(doc=4950,freq=4.0), product of:
            0.044633795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02013827 = queryNorm
            0.34630734 = fieldWeight in 4950, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=4950)
        0.009094859 = product of:
          0.027284576 = sum of:
            0.027284576 = weight(_text_:22 in 4950) [ClassicSimilarity], result of:
              0.027284576 = score(doc=4950,freq=2.0), product of:
                0.07052079 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02013827 = queryNorm
                0.38690117 = fieldWeight in 4950, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4950)
          0.33333334 = coord(1/3)
      0.1 = coord(2/20)
    
    Abstract
    What is it, then: a new programming language for the Internet, perhaps? And what is it needed for? Whether and how you can profit from XML is what our three-part workshop explains.
    Date
    30. 3.2003 11:06:22
  20. Geeb, F.: Lexikographische Informationsstrukturierung mit XML (2003) 0.00
    0.002402635 = product of:
      0.02402635 = sum of:
        0.015144716 = weight(_text_:und in 1842) [ClassicSimilarity], result of:
          0.015144716 = score(doc=1842,freq=6.0), product of:
            0.044633795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02013827 = queryNorm
            0.33931053 = fieldWeight in 1842, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=1842)
        0.008881632 = weight(_text_:der in 1842) [ClassicSimilarity], result of:
          0.008881632 = score(doc=1842,freq=2.0), product of:
            0.044984195 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02013827 = queryNorm
            0.19743896 = fieldWeight in 1842, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0625 = fieldNorm(doc=1842)
      0.1 = coord(2/20)
    
    Abstract
    Metalexicography develops theories and models for structuring lexicographic information in the form of reference works (printed or online). With the advent of XML, a further and particularly powerful tool for representing these structures has become available. The lexicographic markup language leXeML is an attempt to turn lexicographic theory into a concrete, usable tool for structuring information (see the sketch after this entry).
    Source
    Information - Wissenschaft und Praxis. 54(2003) H.7, S.415-420
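    The sketch referred to in the abstract above: a hypothetical dictionary entry structured in XML and read back into a Python record. The element names are illustrative only and are not taken from the actual leXeML vocabulary described in the article.
# A made-up lexicographic entry: the markup captures lemma, part of
# speech, and senses, which can then be queried as structured data.
import xml.etree.ElementTree as ET

ENTRY = ET.fromstring("""<entry lemma="Auszeichnungssprache">
  <pos>noun</pos>
  <sense n="1">
    <definition>formal language for annotating text with structure</definition>
    <example>XML ist eine Auszeichnungssprache.</example>
  </sense>
</entry>""")

record = {
    "lemma": ENTRY.get("lemma"),
    "pos": ENTRY.findtext("pos"),
    "senses": [{"definition": s.findtext("definition"),
                "example": s.findtext("example")}
               for s in ENTRY.findall("sense")],
}
print(record)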