Search (16 results, page 1 of 1)

  • theme_ss:"Auszeichnungssprachen" (markup languages)
  1. Fiander, D. J.: Applying XML to the bibliographic description (2001) 0.02
    
    Abstract
    Over the past few years there has been a significant amount of work in the area of cataloging Internet resources, primarily using new metadata standards like the Dublin Core, but there has been little work on applying new data description formats like SGML and XML to traditional cataloging practices. What little work has been done on using SGML and XML for traditional bibliographic description has primarily been based on the concept of converting MARC tagging into XML tagging. I suggest that, rather than attempting to convert existing MARC tagging into a new syntax based on SGML or XML, a more fruitful possibility is to return to the cataloging standards and describe their inherent structure, learning from how MARC has been used successfully in modern OPACs while attempting to avoid MARC's rigid field-based restrictions.
    Source
    Cataloging and classification quarterly. 33(2001) no.2, S.17-28
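    As a minimal, hypothetical sketch of the approach Fiander argues for - encoding the logical structure of a bibliographic description directly in XML instead of converting MARC tags - the following Python snippet builds a small record with the standard library's xml.etree.ElementTree. The element names are invented for illustration and are not taken from the article.

```python
# Hypothetical sketch: a bibliographic description whose elements mirror the
# logical areas of a cataloging standard rather than MARC field tags.
import xml.etree.ElementTree as ET

record = ET.Element("description")

title_area = ET.SubElement(record, "titleAndStatementOfResponsibility")
ET.SubElement(title_area, "titleProper").text = "Applying XML to the bibliographic description"
ET.SubElement(title_area, "statementOfResponsibility").text = "David J. Fiander"

publication = ET.SubElement(record, "publication")
ET.SubElement(publication, "journal").text = "Cataloging and classification quarterly"
ET.SubElement(publication, "year").text = "2001"

# Any XML-aware tool can now process the record without knowing MARC.
print(ET.tostring(record, encoding="unicode"))
```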
  2. Salgáné, M.M.: Our electronic era and bibliographic informations : computer-related bibliographic data formats, metadata formats and BDML (2005) 0.01
    
    Abstract
    Using new communication technologies, libraries must continuously face new questions, possibilities and expectations. This study discusses library-related aspects of our electronic era and how computer-related data formats affect bibliographic data processing, summarizing the most important results. The first bibliographic formats for exchanging bibliographic and related information in machine-readable form between different types of computer systems were created more than 30 years ago. The evolution of information technologies leads to the improvement of computer systems. In addition to the development of computers and media types, the Internet has a great influence on data structure as well. Since the introduction of the MARC bibliographic format, the technology of data exchange between computers and between different computer systems has reached a very sophisticated stage and has contributed to the creation of new standards in this field. Today libraries work with this new infrastructure, which brings many challenges. One of the most significant challenges is moving from a relatively homogeneous bibliographic environment to a diverse one. Despite these challenges, such changes are achievable and necessary in order to exploit the possibilities of new metadata and technologies like the Internet and XML (Extensible Markup Language). XML is an open standard, a universal language for data on the Web. XML is a nearly six-year-old standard designed for the description and computer-based management of (semi-)structured data and structured texts. XML gives developers the power to deliver structured data from a wide variety of applications, and it is also an ideal format for server-to-server transfer of structured data. XML is not limited to Internet use and is an especially valuable tool in the library field. In fact, XML's main strength - organizing information - makes it perfect for exchanging data between different systems. Tools that work with XML can be used to process XML records without incurring the additional costs of developing one's own software. In addition, XML is also a suitable format for library web services. The Department of Computer-related Graphic Design and Library and Information Sciences of Debrecen University launched the BDML (Bibliographic Description Markup Language) development project in order to standardize bibliographic description with the help of XML.
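    To make the exchange scenario described in the abstract concrete, here is a small, hypothetical Python sketch (not BDML itself): one system serializes a record to XML text, and another system parses it back using nothing but a standard XML parser. The element names are invented for the example.

```python
# Hypothetical sketch of XML-based record exchange between two library systems.
import xml.etree.ElementTree as ET

def export_record(author: str, title: str, year: str) -> str:
    """System A: serialize a bibliographic record as XML text."""
    rec = ET.Element("record")
    ET.SubElement(rec, "author").text = author
    ET.SubElement(rec, "title").text = title
    ET.SubElement(rec, "year").text = year
    return ET.tostring(rec, encoding="unicode")

def import_record(xml_text: str) -> dict:
    """System B: parse the XML back into its own internal structure."""
    return {child.tag: child.text for child in ET.fromstring(xml_text)}

payload = export_record("Salgáné, M.M.", "Our electronic era and bibliographic information", "2005")
print(import_record(payload))
```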
  3. Lee, M.; Baillie, S.; Dell'Oro, J.: TML: a Thesaural Markup Language (200?) 0.01
    
    Abstract
    Thesauri are used to provide controlled vocabularies for resource classification. Their use can greatly assist document discovery because thesauri mandate a consistent shared terminology for describing documents. A particular thesaurus classifies documents according to an information community's needs. As a result, there are many different thesaural schemas. This has led to a proliferation of schema-specific thesaural systems. In our research, we exploit schematic regularities to design a generic thesaural ontology and specify it as a markup language. The language provides a common representational framework in which to encode the idiosyncrasies of specific thesauri. This approach has several advantages: it offers consistent syntax and semantics in which to express thesauri; it allows general purpose thesaural applications to leverage many thesauri; and it supports a single thesaural user interface by which information communities can consistently organise, store and retrieve electronic documents.
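    The following Python sketch illustrates the general idea of encoding a thesaurus in a generic markup language; the element names are illustrative only and are not taken from the TML specification.

```python
# Illustrative sketch: a tiny thesaurus fragment in a generic XML markup with
# broader/narrower/related term relations (element names are invented).
import xml.etree.ElementTree as ET

THESAURUS = """
<thesaurus>
  <term id="markup-languages">
    <preferred>Markup languages</preferred>
    <narrower ref="xml"/>
    <related ref="metadata"/>
  </term>
  <term id="xml">
    <preferred>XML</preferred>
    <broader ref="markup-languages"/>
  </term>
</thesaurus>
"""

root = ET.fromstring(THESAURUS)
terms = {t.get("id"): t for t in root.findall("term")}

# A generic application can resolve relations without schema-specific code.
for nt in terms["markup-languages"].findall("narrower"):
    print("Narrower term:", terms[nt.get("ref")].findtext("preferred"))
```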
  4. Christophides, V.; Plexousakis, D.; Scholl, M.; Tourtounis, S.: On labeling schemes for the Semantic Web (2003) 0.00
    
    Abstract
    This paper focuses on the optimization of the navigation through voluminous subsumption hierarchies of topics employed by Portal Catalogs like Netscape Open Directory (ODP). We advocate for the use of labeling schemes for modeling these hierarchies in order to efficiently answer queries such as subsumption check, descendants, ancestors or nearest common ancestor, which usually require costly transitive closure computations. We first give a qualitative comparison of three main families of schemes, namely bit vector, prefix and interval based schemes. We then show that two labeling schemes are good candidates for an efficient implementation of label querying using standard relational DBMS, namely, the Dewey Prefix scheme [6] and an Interval scheme by Agrawal, Borgida and Jagadish [1]. We compare their storage and query evaluation performance for the 16 ODP hierarchies using the PostgreSQL engine.
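    The core idea of the schemes compared in the paper can be sketched in a few lines of Python: with Dewey-style prefix labels, an ancestor test is a string-prefix check, and with interval labels it is a containment test on two numbers - in both cases without computing a transitive closure. The tiny topic hierarchy and the labels below are invented for illustration.

```python
# Illustrative sketch of two labeling schemes for a topic hierarchy.

# Dewey-style prefix labels: a node's label extends its parent's label,
# so "is ancestor of" reduces to a string-prefix test.
prefix_label = {
    "Top":              "1",
    "Computers":        "1.1",
    "Markup_Languages": "1.1.1",
    "XML":              "1.1.1.1",
    "Libraries":        "1.2",
}

def is_ancestor_prefix(a: str, b: str) -> bool:
    return prefix_label[b].startswith(prefix_label[a] + ".")

# Interval labels: each node gets (start, end) numbers from a depth-first
# traversal; a node's interval contains the intervals of all its descendants.
interval_label = {
    "Top":              (1, 10),
    "Computers":        (2, 7),
    "Markup_Languages": (3, 6),
    "XML":              (4, 5),
    "Libraries":        (8, 9),
}

def is_ancestor_interval(a: str, b: str) -> bool:
    (a1, a2), (b1, b2) = interval_label[a], interval_label[b]
    return a1 < b1 and b2 < a2

print(is_ancestor_prefix("Computers", "XML"))    # True
print(is_ancestor_interval("Computers", "XML"))  # True
print(is_ancestor_interval("Libraries", "XML"))  # False
```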
  5. Vanhoutte, E.; Branden, R. van den: Text Encoding Initiative (TEI) (2009) 0.00
    
    Abstract
    The result of community efforts among computing humanists, the Text Encoding Initiative or TEI is the de facto standard for the encoding of texts in the humanities. This entry explains the historical context of the TEI, its fundamental principles, history, and organization.
  6. Miller, D.R.: XML: Libraries' strategic opportunity (2001) 0.00
    
    Abstract
    XML (eXtensible Markup Language) is fast gaining favor as the universal format for data and document exchange -- in effect becoming the lingua franca of the Information Age. Currently, "library information" is at a particular disadvantage on the rapidly evolving World Wide Web. Why? Despite libraries' explorations of web catalogs, scanning projects, digital data repositories, and creation of web pages galore, there remains a digital divide. The core of libraries' data troves is stored in proprietary formats of integrated library systems (ILS) and in the complex and arcane MARC formats -- both restricted chiefly to the province of technical services and systems librarians. Even they are hard-pressed to extract and integrate this wealth of data with resources from outside this rarefied environment. Segregation of library information underlies many difficulties: producing standard bibliographic citations from MARC data, automatically creating new materials lists (including new web resources) on a particular topic, exchanging data with our vendors, and even migrating from one ILS to another. Why do we continue to hobble our potential by embracing these self-imposed limitations? Most ILSs began in libraries, which soon recognized the pitfalls of do-it-yourself solutions. Thus, we wisely anticipated the necessity for standards. However, with the advent of the web, we soon found "our" collections and a flood of new resources appearing in digital format on opposite sides of the divide. If we do not act quickly to integrate library resources with mainstream web resources, we are in grave danger of becoming marginalized.
  7. as: XML: Extensible Markup Language : I: Was ist XML? (2001) 0.00
    
    Date
    30. 3.2003 11:06:22
  8. Schröder, A.: Web der Zukunft : RDF - Der erste Schritt zum semantischen Web 0.00
    
    Abstract
    The Resource Description Framework (RDF) has been available as a W3C Recommendation since 22 February 1999. But what lies behind this standard, which is meant to usher in the age of the Semantic Web? This article explains what RDF is, what it is used for, what advantages it has over XML, and how RDF is applied.
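    As a minimal sketch of what an RDF statement looks like when serialized as RDF/XML, the following Python snippet describes one invented resource URI with two Dublin Core properties, using only the standard library.

```python
# Illustrative sketch: one RDF description (subject + two properties) in RDF/XML.
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("rdf", RDF)
ET.register_namespace("dc", DC)

root = ET.Element(f"{{{RDF}}}RDF")
desc = ET.SubElement(root, f"{{{RDF}}}Description",
                     {f"{{{RDF}}}about": "http://example.org/articles/rdf-intro"})
ET.SubElement(desc, f"{{{DC}}}title").text = "Web der Zukunft : RDF"
ET.SubElement(desc, f"{{{DC}}}creator").text = "Schröder, A."

# The statements say: the resource at that URI has this title and this creator.
print(ET.tostring(root, encoding="unicode"))
```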
  9. Patrick, D.A.: XML in der Praxis : Unternehmensübergreifende Vorteile durch Enterprise Content Management (1999) 0.00
    
    Date
    30. 3.2003 10:50:22
  10. Trotman, A.: Searching structured documents (2004) 0.00
    
    Date
    14. 8.2004 10:39:22
  11. XML in libraries (2002) 0.00
    
    Footnote
    Chapter 4 introduces XML internal and external pointing and linking technologies. XML Link Language (XLL, now XLink) provides unidirectional, multi-ended, and typed linking. XPointer, used with XLink, provides addressing into the interior of XML documents. XPath operates on the logical structure of an XML document, creating a tree of nodes. Used with both XPointer and XSLT, it permits operations on strings, numbers, and Boolean expressions in the document. The final chapter, "Getting Started", argues for the adoption of a tool for XML production. The features and functionality of various tools for content development, application development, databases, and schema development provide an introduction to some of the available options. Roy Tennant is well known in the library community as an author (his column "Digital Libraries" has appeared in Library Journal since 1997 and he has published Current Cites each month for more than a decade), an electronic discussion list manager (Web4Lib and XML4Lib), and as the creator and manager of UC/Berkeley's Digital Library SunSITE. Librarians have wondered what use they might make of XML since its beginnings. Tennant suggests one answer: "The Extensible Markup Language (XML) has the potential to exceed the impact of MARC on librarianship. While MARC is limited to bibliographic description - and arguably a subset at that, as any archivist will tell you - XML provides a highly effective framework for encoding anything from a bibliographic record for a book to the book itself." (Tennant, p. vii) This slim paperback volume offers librarians and library managers concerned with automation projects "show and tell" examples of XML technologies used as solutions to everyday tasks and challenges. What distinguishes this work is the editor and contributors' commitment to providing messy details. This book's target audience is technically savvy. While not a "cookbook" per se, the information provided on each project serves as a draft blueprint complete with acronyms and jargon. The inclusion of "lessons learned" (including failures as well as successes) is refreshing and commendable. Experienced IT and automation project veterans will appreciate the technical specifics more fully than the general reader.
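    As a small illustration of the XPath idea described in the footnote above - addressing parts of an XML document through its tree of nodes - the following Python sketch uses the limited XPath subset supported by the standard library's ElementTree. The sample document is invented.

```python
# Illustrative sketch: addressing nodes in an XML tree with a small XPath subset.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<library>
  <book format="print"><title>XML in libraries</title></book>
  <book format="online"><title>Learning XML</title></book>
</library>
""")

# Select every title element anywhere below the root.
print([t.text for t in doc.findall(".//title")])

# Select only titles of books whose format attribute equals "online".
print([t.text for t in doc.findall(".//book[@format='online']/title")])
```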
  12. Learning XML (2003) 0.00
    
  13. The ABCs of XML : the librarian's guide to the eXtensible Markup Language (2000) 0.00
    
  14. Ioannides, D.: XML schema languages : beyond DTD (2000) 0.00
    
    Date
    28. 1.2006 19:01:22
  15. Vonhoegen, H.: Einstieg in XML (2002) 0.00
    
    Footnote
    Review in: XML Magazin und Web Services 2003, H.1, S.14 (S. Meyen): "Opening the book and browsing the introductory chapter, one immediately notices that the reader is not lectured with lessons in the style of 'in XML the angle brackets are very important', even though this is a book for beginners. On the contrary: it gets straight down to business, and a healthy mix of prior knowledge is assumed. Anyone interested in XML today has, with 99 percent probability, already gained the relevant experience with HTML and the Web and is no newbie in the realm of angle brackets and (more or less) well-formed documents. And here lies a clear strength of Helmut Vonhoegen's work: he judges his beginner readers quite well and therefore introduces them to the subject in a practical and understandable way. The third chapter deals with the Document Type Definition (DTD) and describes its purposes and uses. Here, however, the author constantly emphasizes the limitations of this approach, which make the call for a new concept clear: XML Schema, which he presents in the following chapter. A fairly detailed chapter is then devoted to the relatively recent XML Schema concept and explains its advantages over the DTD (modeling of complex data structures, support for numerous data types, character restrictions, and much more). XML Schema, the reader learns, defines, like the old DTD, the vocabulary and the permissible grammar of an XML document, but is itself an XML document and can (or rather should) be checked for well-formedness like any other XML. Further chapters cover the navigation standards XPath, XLink and XPointer, transformations with XSLT and XSL and, of course, the XML programming interfaces DOM and SAX. Various implementations are used, and, gratifyingly, Microsoft approaches on the one hand and Java/Apache projects on the other are presented in roughly comparable depth. In the final chapter Vonhoegen covers the obligatory web services ("Webdienste") as an application of XML and demonstrates a small C#- and ASP-based example (the Java equivalent with Apache Axis is unfortunately missing). "Einstieg in XML" presents its material in a clearly understandable form and knows how to meet its readers at a good level. It offers a good overview of the fundamentals of XML and is - at least for the time being - quite up to date."
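    Since the review mentions the DOM and SAX programming interfaces, here is a brief sketch of the contrast using Python's standard library: DOM parses the whole document into an in-memory tree, while SAX streams it past event callbacks. The sample document is invented.

```python
# Illustrative sketch: the same document read via DOM (tree) and SAX (events).
import xml.dom.minidom
import xml.sax

XML = b"<book><title>Einstieg in XML</title><author>Vonhoegen, H.</author></book>"

# DOM: build the full tree, then navigate it.
dom = xml.dom.minidom.parseString(XML)
print(dom.getElementsByTagName("title")[0].firstChild.data)

# SAX: register callbacks that fire while the parser streams the document.
class ElementLister(xml.sax.ContentHandler):
    def startElement(self, name, attrs):
        print("start of element:", name)

xml.sax.parseString(XML, ElementLister())
```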
  16. Anderson, R.; Birbeck, M.; Kay, M.; Livingstone, S.; Loesgen, B.; Martin, D.; Mohr, S.; Ozu, N.; Peat, B.; Pinnock, J.; Stark, P.; Williams, K.: XML professionell : behandelt W3C DOM, SAX, CSS, XSLT, DTDs, XML Schemas, XLink, XPointer, XPath, E-Commerce, BizTalk, B2B, SOAP, WAP, WML (2000) 0.00
    
    Date
    22. 6.2005 15:12:11