Search (17 results, page 1 of 1)

  Filter: theme_ss:"Auszeichnungssprachen"
  1. Salminen, A.: Markup languages (2009) 0.03
    Abstract
    Current global communication of people and software applications over the Internet is facilitated by the use of markup languages. This entry introduces the principles and different types of markup, and the history behind the current markup languages. The basis of the modern markup languages is the Standard Generalized Markup Language (SGML) or its restricted form Extensible Markup Language (XML). This entry describes the markup techniques used in SGML and XML, gives examples of their use, and briefly describes some representative SGML and XML applications from different domains. An important factor in the success of XML has been the possibility to reuse markup vocabularies and combine vocabularies originating from different sources. This entry describes the concepts and methods facilitating the reuse of names from earlier defined vocabularies.
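The vocabulary reuse the abstract refers to is realized in XML through namespaces: a document can mix element names from independently defined vocabularies by binding each to a URI, so equal local names never collide. A minimal sketch with Python's standard-library ElementTree (the `report` vocabulary and its URI are invented for illustration; the Dublin Core URI is a real, widely reused vocabulary):

```python
import xml.etree.ElementTree as ET

# A document mixing two vocabularies, distinguished by namespace URIs
# rather than by inventing globally unique element names.
doc = """<?xml version="1.0"?>
<report xmlns="http://example.org/report"
        xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Quarterly figures</dc:title>
  <dc:creator>A. Salminen</dc:creator>
  <body>Sales rose in Q3.</body>
</report>"""

root = ET.fromstring(doc)

# ElementTree expands prefixes to {uri}localname, so a "title" from
# Dublin Core can never be confused with a "title" from another vocabulary.
title = root.find("{http://purl.org/dc/elements/1.1/}title")
body = root.find("{http://example.org/report}body")
print(title.text)  # Quarterly figures
print(body.tag)    # {http://example.org/report}body
```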
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
  2. Vanhoutte, E.; Branden, R. van den: Text Encoding Initiative (TEI) (2009) 0.01
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
  3. Mayo, D.; Bowers, K.: The devil's shoehorn : a case study of EAD to ArchivesSpace migration at a large university (2017) 0.01
    Abstract
    A band of archivists and IT professionals at Harvard took on a project to convert nearly two million descriptions of archival collection components from marked-up text into the ArchivesSpace archival metadata management system. Starting in the mid-1990s, Harvard was an alpha implementer of EAD, an SGML (later XML) text markup language for electronic inventories, indexes, and finding aids that archivists use to wend their way through the sometimes quirky filing systems that bureaucracies establish for their records or the utter chaos in which some individuals keep their personal archives. These pathfinder documents, designed to cope with messy reality, can themselves be difficult to classify. Portions of them are rigorously structured, while other parts are narrative. Early documents predate the establishment of the standard; many feature idiosyncratic encoding that had been through several machine conversions, while others were freshly encoded and fairly consistent. In this paper, we will cover the practical and technical challenges involved in preparing a large (900MiB) corpus of XML for ingest into an open-source archival information system (ArchivesSpace). This case study will give an overview of the project, discuss problem discovery and problem solving, and address the technical challenges, analysis, solutions, and decisions and provide information on the tools produced and lessons learned. The authors of this piece are Kate Bowers, Collections Services Archivist for Metadata, Systems, and Standards at the Harvard University Archive, and Dave Mayo, a Digital Library Software Engineer for Harvard's Library and Technology Services. Kate was heavily involved in both metadata analysis and later problem solving, while Dave was the sole full-time developer assigned to the migration project.
  4. Wusteman, J.: Document Type Definition (DTD) (2009) 0.01
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
  5. Salminen, A.: Modeling documents in their context (2009) 0.01
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
  6. Clarke, K.S.: Extensible Markup Language (XML) (2009) 0.01
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
  7. Marcoux, Y.; Rizkallah, E.: Knowledge organization in the light of intertextual semantics : a natural-language analysis of controlled vocabularies (2008) 0.00
    Source
    Culture and identity in knowledge organization: Proceedings of the Tenth International ISKO Conference 5-8 August 2008, Montreal, Canada. Ed. by Clément Arsenault and Joseph T. Tennis
  8. XML in libraries (2002) 0.00
    Footnote
    Tennant's collection covers a variety of well- and lesser-known XML-based pilot and prototype projects undertaken by libraries around the world. Some of the projects included are: Stanford's XMLMARC conversion, Oregon State's use of XML in interlibrary loaning, e-books (California Digital Library) and electronic scholarly publishing (University of Michigan), the Washington Research Library Consortium's XML-based Web services, and using TEI Lite to support indexing (Halton Hills Public Library). Of the 13 projects presented, nine are sited in academe, three are state library endeavors, and one is an American public library initiative. The projects are gathered into sections grouped by seven library applications: the use of XML in library catalog records, interlibrary loan, cataloging and indexing, collection building, databases, data migration, and systems interoperability. Each project is introduced with a few paragraphs of background information. The project reports, averaging about 13 pages each, include project goals and justification, project description, challenges and lessons learned (successes and failures), future plans, implications of the work, contact information for the individual(s) responsible for the project, and relevant Web links and resources. The clear strengths of this collection are in the details and the consistency of presentation. The concise project write-ups flow well and encourage interested readers to follow up via personal contacts and URLs. The sole weakness is the price. XML in Libraries will excite and inspire institutions and organizations with technically adept staff resources and visionary leaders.
    Erik Ray has written a how-to book. Unlike most, Learning XML is not aimed at the professional programming community. The intended audience is readers familiar with structured markup (HTML, TeX, etc.) and Web concepts (hypertext links, data representation). In the first six chapters, Ray introduces XML's main concepts and the tools for writing, viewing, testing, and transforming XML (chapter 1), describes basic syntax (chapter 2), discusses linking with XLink and XPointer (chapter 3), introduces Cascading Style Sheets for use with XML (chapter 4), explains document type definitions (DTDs) and schemas (chapter 5), and covers XSLT stylesheets and XPath (chapter 6). Chapter 7 introduces Unicode, internationalization, and language support, including CSS and XSLT encoding. Chapter 8 is an overview of writing software for processing XML, and includes the Perl code for an XML syntax checker. This work is written very accessibly for nonprogrammers. Writers, designers, and students just starting to acquire Web technology skills will find Ray's style approachable. Concepts are introduced in a logical flow and explained clearly. Code samples (130+), illustrations and screen shots (50+), and numerous tables are distributed throughout the text. Ray uses a modified DocBook DTD and a checkbook example throughout, introducing concepts in early chapters and building on them. Readers become familiar with the code and its evolution through repeated exposure. The code for converting the "bare-bones DocBook" DTD (10 pages of code) to HTML via XSLT stylesheet occupies 19 pages. Both code examples allow the learner to see an accumulation of snippets incorporated into a sensible whole. While experienced programmers might not need this type of support, nonprogrammers certainly do. Using the checkbook example is an inspired choice: most of us are familiar with personal checking, even if few of us would build an XML application for it. Learning XML is an excellent textbook. I've used it for several years as a recommended text for adult continuing education courses and workshops."
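Ray's checkbook running example is not reproduced in the review, but its flavor can be sketched: a small, regular XML vocabulary that a beginner can parse and total. The element names below are guesses for illustration, not Ray's actual DTD, and Python's ElementTree stands in for the book's tooling:

```python
import xml.etree.ElementTree as ET

# A hypothetical checkbook document in the spirit of Ray's running example.
checkbook = """<checkbook>
  <entry type="debit"><payee>Grocer</payee><amount>42.50</amount></entry>
  <entry type="debit"><payee>Utility</payee><amount>60.00</amount></entry>
  <entry type="deposit"><payee>Employer</payee><amount>500.00</amount></entry>
</checkbook>"""

root = ET.fromstring(checkbook)

# Total deposits minus debits, reading both attribute and element content.
balance = 0.0
for entry in root.findall("entry"):
    amount = float(entry.findtext("amount"))
    balance += amount if entry.get("type") == "deposit" else -amount

print(f"{balance:.2f}")  # 397.50
```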
  9. Learning XML (2003) 0.00
  10. The ABCs of XML : the librarian's guide to the eXtensible Markup Language (2000) 0.00
  11. as: XML: Extensible Markup Language : I: Was ist XML? (2001) 0.00
    Date
    30. 3.2003 11:06:22
  12. Schröder, A.: Web der Zukunft : RDF - Der erste Schritt zum semantischen Web 0.00
    Abstract
    The Resource Description Framework (RDF) has been available as a W3C Recommendation since 22 February 1999. But what is behind this standard, which is meant to usher in the era of the Semantic Web? This article explains what RDF is, what it is used for, what advantages it offers over XML, and how RDF is applied.
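RDF's advantage over plain XML, hinted at in the abstract, is that it fixes a data model: every statement is a subject-predicate-object triple, so metadata produced independently can be merged mechanically. A minimal sketch of that model in Python (the resource URI and the abbreviated `dc:` property names are illustrative shorthand, not a real serialization):

```python
# RDF reduces all metadata to (subject, predicate, object) triples.
triples = {
    ("http://example.org/doc1", "dc:title", "Web der Zukunft"),
    ("http://example.org/doc1", "dc:creator", "A. Schroeder"),
}

# Merging statements from another source is just set union - no schema
# negotiation is needed, which is the point of a fixed data model.
more = {("http://example.org/doc1", "dc:date", "1999-02-22")}
graph = triples | more

# Query: every property asserted about doc1, from either source.
props = sorted(p for s, p, o in graph if s == "http://example.org/doc1")
print(props)  # ['dc:creator', 'dc:date', 'dc:title']
```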
  13. Patrick, D.A.: XML in der Praxis : Unternehmensübergreifende Vorteile durch Enterprise Content Management (1999) 0.00
    Date
    30. 3.2003 10:50:22
  14. Trotman, A.: Searching structured documents (2004) 0.00
    Date
    14. 8.2004 10:39:22
  15. Ioannides, D.: XML schema languages : beyond DTD (2000) 0.00
    Date
    28. 1.2006 19:01:22
  16. Vonhoegen, H.: Einstieg in XML (2002) 0.00
    Footnote
    Review in: XML Magazin und Web Services 2003, H.1, S.14 (S. Meyen): "Opening the book and browsing the introductory chapter, one immediately notices that the reader is not lectured in the style of 'in XML, the angle brackets are very important', even though this is a book for beginners. On the contrary: it gets straight down to business, and a healthy amount of prior knowledge is assumed. Anyone interested in XML today has almost certainly already gathered experience with HTML and the Web and is no newcomer to the realm of angle brackets and (more or less) well-formed documents. Here lies a clear strength of Helmut Vonhoegen's book: he judges his beginner readership well and leads them to the topic in a practical, comprehensible way. The third chapter deals with the Document Type Definition (DTD), describing its goals and uses, while the author repeatedly stresses the limitations of this approach, which make the case for a newer concept: XML Schema, which he presents in the following chapter. A fairly detailed chapter is then devoted to the relatively recent XML Schema concept, explaining its advantages over the DTD (modeling of complex data structures, support for numerous data types, character constraints, and much more). XML Schema, the reader learns, defines, like the old DTD, the vocabulary and permissible grammar of an XML document, but is itself an XML document and can (or rather should) be checked for well-formedness like any other XML. Further chapters treat the navigation standards XPath, XLink, and XPointer, transformations with XSLT and XSL, and of course the XML programming interfaces DOM and SAX. Various implementations are used, and, pleasingly, Microsoft approaches on the one hand and Java/Apache projects on the other are presented in roughly comparable depth. In the final chapter, Vonhoegen covers the obligatory Web services as an application of XML and demonstrates a small C#- and ASP-based example (the Java equivalent with Apache Axis is unfortunately missing). Einstieg in XML presents its material in a clearly understandable form and knows how to meet its readers at a good level. It offers a good overview of the fundamentals of XML and, at least for the moment, is quite up to date."
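The review's remark that an XML Schema is itself an XML document, checkable for well-formedness like any other, can be illustrated with Python's standard library (which checks well-formedness only, not schema validity; the schema snippet below is a trivial illustrative example):

```python
import xml.etree.ElementTree as ET

def well_formed(text: str) -> bool:
    """Return True when the text parses as well-formed XML."""
    try:
        ET.fromstring(text)
        return True
    except ET.ParseError:
        return False

# A schema document is itself ordinary XML and passes the same check ...
schema_snippet = """<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="title" type="xs:string"/>
</xs:schema>"""

print(well_formed(schema_snippet))  # True
# ... while mismatched tags fail it, schema or not.
print(well_formed("<a><b></a>"))    # False
```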
  17. Anderson, R.; Birbeck, M.; Kay, M.; Livingstone, S.; Loesgen, B.; Martin, D.; Mohr, S.; Ozu, N.; Peat, B.; Pinnock, J.; Stark, P.; Williams, K.: XML professionell : behandelt W3C DOM, SAX, CSS, XSLT, DTDs, XML Schemas, XLink, XPointer, XPath, E-Commerce, BizTalk, B2B, SOAP, WAP, WML (2000) 0.00
    Date
    22. 6.2005 15:12:11