Search (41 results, page 1 of 3)

  • theme_ss:"Auszeichnungssprachen"
  1. Trotman, A.: Searching structured documents (2004) 0.03
    0.029918438 = product of:
      0.074796095 = sum of:
        0.011678694 = weight(_text_:a in 2538) [ClassicSimilarity], result of:
          0.011678694 = score(doc=2538,freq=12.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.21843673 = fieldWeight in 2538, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2538)
        0.0631174 = sum of:
          0.019141505 = weight(_text_:information in 2538) [ClassicSimilarity], result of:
            0.019141505 = score(doc=2538,freq=6.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.23515764 = fieldWeight in 2538, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2538)
          0.043975897 = weight(_text_:22 in 2538) [ClassicSimilarity], result of:
            0.043975897 = score(doc=2538,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.2708308 = fieldWeight in 2538, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2538)
      0.4 = coord(2/5)
    
    Abstract
    Structured document interchange formats such as XML and SGML are ubiquitous; however, information retrieval systems supporting structured searching are not. Structured searching can result in increased precision. A search for the author "Smith" in an unstructured corpus of documents specializing in iron-working could have lower precision than a structured search for "Smith as author" in the same corpus. Analysis of XML retrieval languages identifies additional functionality that must be supported, including searching at, and broken across, multiple nodes in the document tree. A data structure is developed to support structured document searching. Application of this structure to information retrieval is then demonstrated. Document ranking is examined and adapted specifically for structured searching.
    Date
    14. 8.2004 10:39:22
    Source
    Information processing and management. 40(2004) no.4, S.619-632
    Type
    a
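    The relevance figures above are Lucene ClassicSimilarity "explain" output. As a reading aid, the following minimal Python sketch (an editorial assumption, not part of the database) re-traces the arithmetic for entry 1 using only the constants printed in the tree: per term, score = queryWeight * fieldWeight, with queryWeight = idf * queryNorm and fieldWeight = sqrt(freq) * idf * fieldNorm; the per-term scores are summed and multiplied by the coordination factor coord(2/5).

      import math

      query_norm = 0.046368346            # queryNorm from the listing
      field_norm = 0.0546875              # fieldNorm(doc=2538)

      def term_score(freq, idf):
          # score = queryWeight * fieldWeight, as in the explain tree
          query_weight = idf * query_norm
          field_weight = math.sqrt(freq) * idf * field_norm   # tf = sqrt(freq)
          return query_weight * field_weight

      s_a    = term_score(12.0, 1.153047)    # weight(_text_:a)            ~ 0.011678694
      s_info = term_score(6.0,  1.7554779)   # weight(_text_:information)  ~ 0.019141505
      s_22   = term_score(2.0,  3.5018296)   # weight(_text_:22)           ~ 0.043975897

      total = (s_a + s_info + s_22) * (2.0 / 5.0)   # coord(2/5) = 0.4
      print(total)                                  # ~ 0.029918438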
  2. Patrick, D.A.: XML in der Praxis : Unternehmensübergreifende Vorteile durch Enterprise Content Management (1999) 0.03
    0.027154082 = product of:
      0.067885205 = sum of:
        0.004767807 = weight(_text_:a in 1461) [ClassicSimilarity], result of:
          0.004767807 = score(doc=1461,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.089176424 = fieldWeight in 1461, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1461)
        0.0631174 = sum of:
          0.019141505 = weight(_text_:information in 1461) [ClassicSimilarity], result of:
            0.019141505 = score(doc=1461,freq=6.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.23515764 = fieldWeight in 1461, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1461)
          0.043975897 = weight(_text_:22 in 1461) [ClassicSimilarity], result of:
            0.043975897 = score(doc=1461,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.2708308 = fieldWeight in 1461, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1461)
      0.4 = coord(2/5)
    
    Abstract
    As companies seek success in an increasingly competitive world market, content management has become an attractive information solution. Content management systems can help reduce the enormous corporate investment in information. As with any new technology, there is as yet no clear consensus on what content management actually comprises. This article examines the problems and technologies associated with content management and describes the current state of the field. Content management is more than just a new technology: at its core, it allows companies to use information to build closer relationships along the value chain, connecting customers, distribution partners, suppliers and manufacturers.
    Date
    30. 3.2003 10:50:22
    Source
    nfd Information - Wissenschaft und Praxis. 50(1999) H.1, S.5-12
    Theme
    Information Resources Management
    Type
    a
  3. XML in libraries (2002) 0.03
    0.025292888 = product of:
      0.04215481 = sum of:
        0.006811153 = weight(_text_:a in 3100) [ClassicSimilarity], result of:
          0.006811153 = score(doc=3100,freq=50.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1273949 = fieldWeight in 3100, product of:
              7.071068 = tf(freq=50.0), with freq of:
                50.0 = termFreq=50.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.015625 = fieldNorm(doc=3100)
        0.031813435 = weight(_text_:91 in 3100) [ClassicSimilarity], result of:
          0.031813435 = score(doc=3100,freq=2.0), product of:
            0.25837386 = queryWeight, product of:
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.046368346 = queryNorm
            0.123129465 = fieldWeight in 3100, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.015625 = fieldNorm(doc=3100)
        0.0035302248 = product of:
          0.0070604496 = sum of:
            0.0070604496 = weight(_text_:information in 3100) [ClassicSimilarity], result of:
              0.0070604496 = score(doc=3100,freq=10.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.0867392 = fieldWeight in 3100, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3100)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Footnote
    Rez. in: JASIST 55(2004) no.14, S.1304-1305 (Z. Holbrooks): "The eXtensible Markup Language (XML) and its family of enabling technologies (XPath, XPointer, XLink, XSLT, et al.) were the new "new thing" only a couple of years ago. Happily, XML is now a W3C standard, and its enabling technologies are rapidly proliferating and maturing. Together, they are changing the way data is handled on the Web, how legacy data is accessed and leveraged in corporate archives, and offering the Semantic Web community a powerful toolset. Library and information professionals need a basic understanding of what XML is, and what its impacts will be on the library community as content vendors and publishers convert to the new standards. Norman Desmarais aims to provide librarians with an overview of XML and some potential library applications. The ABCs of XML contains the useful basic information that most general XML works cover. It is addressed to librarians, as evidenced by the occasional reference to periodical vendors, MARC, and OPACs. However, librarians without SGML, HTML, database, or programming experience may find the work daunting. The snippets of code-most incomplete and unattended by screenshots to illustrate the result of the code's execution-obscure more often than they enlighten. A single code sample (p. 91, a book purchase order) is immediately recognizable and sensible. There are no figures, illustrations, or screenshots. Subsection headings are used conservatively. Readers are confronted with page after page of unbroken technical text, and occasionally oddly formatted text (in some of the code samples). The author concentrates on commercial products and projects. Library and agency initiatives-for example, the National Institutes of Health HL-7 and the U.S. Department of Education's GEM project-are notable for their absence. The Library of Congress USMARC to SGML effort is discussed in chapter 1, which covers the relationship of XML to its parent SGML, the XML processor, and data type definitions, using MARC as its illustrative example. Chapter 3 addresses the stylesheet options for XML, including DSSSL, CSS, and XSL. The Document Style Semantics and Specification Language (DSSSL) was created for use with SGML, and pruned into DSSSL-Lite and further (DSSSL-online). Cascading Style Sheets (CSS) were created for use with HTML. Extensible Style Language (XSL) is a further revision (and extension) of DSSSL-o specifically for use with XML. Discussion of aural stylesheets and Synchronized Multimedia Integration Language (SMIL) rounds out the chapter.
    Chapter 4 introduces XML internal and external pointing and linking technologies. XML Link Language (XLL, now XLink) provides unidirectional, multi-ended, and typed linking. XPointer, used with XLink, provides addressing into the interior of XML documents. XPath operates on the logical structure of an XML document, creating a tree of nodes. Used with both XPointer and XSLT, it permits operations on strings, numbers, and Boolean expressions in the document. The final chapter, "Getting Started," argues for the adoption of a tool for XML production. The features and functionality of various tools for content development, application development, databases, and schema development provide an introduction to some of the available options. Roy Tennant is well known in the library community as an author (his column "Digital Libraries" has appeared in Library Journal since 1997 and he has published Current Cites each month for more than a decade), an electronic discussion list manager (Web4Lib and XML4Lib), and as the creator and manager of UC/Berkeley's Digital Library SunSITE. Librarians have wondered what use they might make of XML since its beginnings. Tennant suggests one answer: "The Extensible Markup Language (XML) has the potential to exceed the impact of MARC on librarianship. While MARC is limited to bibliographic description-and arguably a subset at that, as any archivist will tell you-XML provides a highly effective framework for encoding anything from a bibliographic record for a book to the book itself." (Tennant, p. vii) This slim paperback volume offers librarians and library managers concerned with automation projects "show and tell" examples of XML technologies used as solutions to everyday tasks and challenges. What distinguishes this work is the editor and contributors' commitment to providing messy details. This book's target audience is technically savvy. While not a "cookbook" per se, the information provided on each project serves as a draft blueprint complete with acronyms and jargon. The inclusion of "lessons learned" (including failures as well as successes) is refreshing and commendable. Experienced IT and automation project veterans will appreciate the technical specifics more fully than the general reader.
    Tennant's collection covers a variety of well- and lesser-known XML-based pilot and prototype projects undertaken by libraries around the world. Some of the projects included are: Stanford's XMLMARC conversion, Oregon State's use of XML in interlibrary loaning, e-books (California Digital Library) and electronic scholarly publishing (University of Michigan), the Washington Research Library Consortium's XML-based Web Services, and using TEI Lite to support indexing (Halton Hills Public Library). Of the 13 projects presented, nine are sited in academe, three are state library endeavors, and one is an American public library initiative. The projects are gathered into sections grouped by seven library applications: the use of XML in library catalog records, interlibrary loan, cataloging and indexing, collection building, databases, data migration, and systems interoperability. Each project is introduced with a few paragraphs of background information. The project reports-averaging about 13 pages each-include project goals and justification, project description, challenges and lessons learned (successes and failures), future plans, implications of the work, contact information for the individual(s) responsible for the project, and relevant Web links and resources. The clear strengths of this collection are in the details and the consistency of presentation. The concise project write-ups flow well and encourage interested readers to follow up via personal contacts and URLs. The sole weakness is the price. XML in Libraries will excite and inspire institutions and organizations with technically adept staff resources and visionary leaders. Erik Ray has written a how-to book. Unlike most, Learning XML is not aimed at the professional programming community. The intended audience is readers familiar with structured markup (HTML, TeX, etc.) and Web concepts (hypertext links, data representation). In the first six chapters, Ray introduces XML's main concepts and tools for writing, viewing, testing, and transforming XML (chapter 1), describes basic syntax (chapter 2), discusses linking with XLink and XPointer (chapter 3), introduces Cascading Style Sheets for use with XML (chapter 4), explains document type definitions (DTDs) and schemas (chapter 5), and covers XSLT stylesheets and XPath (chapter 6). Chapter 7 introduces Unicode, internationalization and language support, including CSS and XSLT encoding. Chapter 8 is an overview of writing software for processing XML, and includes the Perl code for an XML syntax checker. This work is written very accessibly for nonprogrammers. Writers, designers, and students just starting to acquire Web technology skills will find Ray's style approachable. Concepts are introduced in a logical flow, and explained clearly. Code samples (130+), illustrations and screen shots (50+), and numerous tables are distributed throughout the text. Ray uses a modified DocBook DTD and a checkbook example throughout, introducing concepts in early chapters and adding new concepts to them. Readers become familiar with the code and its evolution through repeated exposure. The code for converting the "barebones DocBook" DTD (10 pages of code) to HTML via XSLT stylesheet occupies 19 pages. Both code examples allow the learner to see an accumulation of snippets incorporated into a sensible whole. While experienced programmers might not need this type of support, nonprogrammers certainly do. Using the checkbook example is an inspired choice: Most of us are familiar with personal checking, even if few of us would build an XML application for it. Learning XML is an excellent textbook. I've used it for several years as a recommended text for adult continuing education courses and workshops."
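    The review repeatedly refers to transforming DocBook-style XML into HTML with an XSLT stylesheet. As a hedged illustration (not taken from the book; assumes the third-party lxml package, with invented element names), a minimal version of such a transformation looks like this:

      from lxml import etree

      # A DocBook-like source document (illustrative element names only)
      doc = etree.fromstring(
          "<book><title>Learning XML</title>"
          "<chapter><title>Syntax</title><para>Elements and attributes.</para></chapter>"
          "</book>")

      # An XSLT stylesheet that renders the book as simple HTML
      stylesheet = etree.fromstring("""
      <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
        <xsl:template match="/book">
          <html><body>
            <h1><xsl:value-of select="title"/></h1>
            <xsl:for-each select="chapter">
              <h2><xsl:value-of select="title"/></h2>
              <p><xsl:value-of select="para"/></p>
            </xsl:for-each>
          </body></html>
        </xsl:template>
      </xsl:stylesheet>""")

      transform = etree.XSLT(stylesheet)
      print(str(transform(doc)))   # serialized HTML result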
  4. Learning XML (2003) 0.03
    0.025292888 = product of:
      0.04215481 = sum of:
        0.006811153 = weight(_text_:a in 3101) [ClassicSimilarity], result of:
          0.006811153 = score(doc=3101,freq=50.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1273949 = fieldWeight in 3101, product of:
              7.071068 = tf(freq=50.0), with freq of:
                50.0 = termFreq=50.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.015625 = fieldNorm(doc=3101)
        0.031813435 = weight(_text_:91 in 3101) [ClassicSimilarity], result of:
          0.031813435 = score(doc=3101,freq=2.0), product of:
            0.25837386 = queryWeight, product of:
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.046368346 = queryNorm
            0.123129465 = fieldWeight in 3101, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.015625 = fieldNorm(doc=3101)
        0.0035302248 = product of:
          0.0070604496 = sum of:
            0.0070604496 = weight(_text_:information in 3101) [ClassicSimilarity], result of:
              0.0070604496 = score(doc=3101,freq=10.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.0867392 = fieldWeight in 3101, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3101)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Footnote
    Rez. in: JASIST 55(2004) no.14, S.1304-1305 (Z. Holbrooks) [the same joint review of all three titles, reproduced in full under entry 3]
  5. ¬The ABCs of XML : the librarian's guide to the eXtensible Markup Language (2000) 0.03
    0.025292888 = product of:
      0.04215481 = sum of:
        0.006811153 = weight(_text_:a in 3102) [ClassicSimilarity], result of:
          0.006811153 = score(doc=3102,freq=50.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1273949 = fieldWeight in 3102, product of:
              7.071068 = tf(freq=50.0), with freq of:
                50.0 = termFreq=50.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.015625 = fieldNorm(doc=3102)
        0.031813435 = weight(_text_:91 in 3102) [ClassicSimilarity], result of:
          0.031813435 = score(doc=3102,freq=2.0), product of:
            0.25837386 = queryWeight, product of:
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.046368346 = queryNorm
            0.123129465 = fieldWeight in 3102, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5722036 = idf(docFreq=456, maxDocs=44218)
              0.015625 = fieldNorm(doc=3102)
        0.0035302248 = product of:
          0.0070604496 = sum of:
            0.0070604496 = weight(_text_:information in 3102) [ClassicSimilarity], result of:
              0.0070604496 = score(doc=3102,freq=10.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.0867392 = fieldWeight in 3102, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3102)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Footnote
    Rez. in: JASIST 55(2004) no.14, S.1304-1305 (Z. Holbrooks) [the same joint review of all three titles, reproduced in full under entry 3]
  6. as: XML: Extensible Markup Language : I: Was ist XML? (2001) 0.02
    0.015289003 = product of:
      0.038222507 = sum of:
        0.0068111527 = weight(_text_:a in 4950) [ClassicSimilarity], result of:
          0.0068111527 = score(doc=4950,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12739488 = fieldWeight in 4950, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=4950)
        0.031411353 = product of:
          0.06282271 = sum of:
            0.06282271 = weight(_text_:22 in 4950) [ClassicSimilarity], result of:
              0.06282271 = score(doc=4950,freq=2.0), product of:
                0.16237405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046368346 = queryNorm
                0.38690117 = fieldWeight in 4950, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4950)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    30. 3.2003 11:06:22
    Type
    a
  7. Schröder, A.: Web der Zukunft : RDF - Der erste Schritt zum semantischen Web 0.01
    0.013134009 = product of:
      0.03283502 = sum of:
        0.00770594 = weight(_text_:a in 1457) [ClassicSimilarity], result of:
          0.00770594 = score(doc=1457,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.14413087 = fieldWeight in 1457, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=1457)
        0.025129084 = product of:
          0.050258167 = sum of:
            0.050258167 = weight(_text_:22 in 1457) [ClassicSimilarity], result of:
              0.050258167 = score(doc=1457,freq=2.0), product of:
                0.16237405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046368346 = queryNorm
                0.30952093 = fieldWeight in 1457, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1457)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The Resource Description Framework (RDF) has been available as a W3C recommendation since 22 February 1999. But what lies behind this standard, which is meant to usher in the era of the Semantic Web? This article explains what RDF means, what it is used for, what advantages it has over XML, and how RDF is applied.
    Type
    a
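    As a hedged illustration of the kind of RDF description the article introduces (assumes the third-party rdflib package; the resource URI and properties are invented for the example), a minimal RDF/XML document and the triples it yields:

      from rdflib import Graph

      rdf_xml = """<?xml version="1.0"?>
      <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
               xmlns:dc="http://purl.org/dc/elements/1.1/">
        <rdf:Description rdf:about="http://example.org/artikel/rdf-einfuehrung">
          <dc:title>Web der Zukunft : RDF</dc:title>
          <dc:creator>Schroeder, A.</dc:creator>
        </rdf:Description>
      </rdf:RDF>"""

      g = Graph()
      g.parse(data=rdf_xml, format="xml")        # "xml" selects the RDF/XML parser
      for subject, predicate, obj in g:          # every RDF statement is a triple
          print(subject, predicate, obj)
      print(len(g), "triples")                   # 2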
  8. Ioannides, D.: XML schema languages : beyond DTD (2000) 0.01
    0.010370068 = product of:
      0.02592517 = sum of:
        0.007078358 = weight(_text_:a in 720) [ClassicSimilarity], result of:
          0.007078358 = score(doc=720,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.13239266 = fieldWeight in 720, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=720)
        0.018846812 = product of:
          0.037693623 = sum of:
            0.037693623 = weight(_text_:22 in 720) [ClassicSimilarity], result of:
              0.037693623 = score(doc=720,freq=2.0), product of:
                0.16237405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046368346 = queryNorm
                0.23214069 = fieldWeight in 720, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=720)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The flexibility and extensibility of XML have largely contributed to its wide acceptance beyond the traditional realm of SGML. Yet one more obstacle must be overcome before XML can become the evangelized universal data/document format: the limitations of the legacy standard for constraining the contents of an XML document. The traditionally used DTD (document type definition) format does not lend itself to the wide variety of applications XML is capable of handling. The World Wide Web Consortium (W3C) has charged the XML Schema working group with the task of developing a schema language to replace the DTD. This XML schema language is evolving from early drafts of XML schema languages; each of these early efforts adopted a slightly different approach, but all of them were moving in the same direction.
    Date
    28. 1.2006 19:01:22
    Type
    a
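    A hedged sketch of the gap the abstract describes (assumes the third-party lxml package; element names are invented): a DTD can only say that a leaf element contains parsed character data, while an XML Schema can additionally constrain its datatype, here requiring a valid xs:date.

      from io import StringIO
      from lxml import etree

      doc = etree.fromstring("<record><title>XML schema languages</title>"
                             "<issued>2000-01-01</issued></record>")

      # DTD: structure only; every leaf element is just #PCDATA
      dtd = etree.DTD(StringIO("""
      <!ELEMENT record (title, issued)>
      <!ELEMENT title  (#PCDATA)>
      <!ELEMENT issued (#PCDATA)>
      """))

      # W3C XML Schema: same structure, but 'issued' must be a valid xs:date
      xsd = etree.XMLSchema(etree.fromstring("""
      <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
        <xs:element name="record">
          <xs:complexType><xs:sequence>
            <xs:element name="title"  type="xs:string"/>
            <xs:element name="issued" type="xs:date"/>
          </xs:sequence></xs:complexType>
        </xs:element>
      </xs:schema>"""))

      print(dtd.validate(doc), xsd.validate(doc))   # True True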
  9. Lee, M.; Baillie, S.; Dell'Oro, J.: TML: a Thesaural Markup Language (200?) 0.01
    0.007583283 = product of:
      0.018958207 = sum of:
        0.012260076 = weight(_text_:a in 1622) [ClassicSimilarity], result of:
          0.012260076 = score(doc=1622,freq=18.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.22931081 = fieldWeight in 1622, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=1622)
        0.0066981306 = product of:
          0.013396261 = sum of:
            0.013396261 = weight(_text_:information in 1622) [ClassicSimilarity], result of:
              0.013396261 = score(doc=1622,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16457605 = fieldWeight in 1622, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1622)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Thesauri are used to provide controlled vocabularies for resource classification. Their use can greatly assist document discovery because thesauri mandate a consistent shared terminology for describing documents. A particular thesaurus classifies documents according to an information community's needs. As a result, there are many different thesaural schemas, which has led to a proliferation of schema-specific thesaural systems. In our research, we exploit schematic regularities to design a generic thesaural ontology and specify it as a markup language. The language provides a common representational framework in which to encode the idiosyncrasies of specific thesauri. This approach has several advantages: it offers consistent syntax and semantics in which to express thesauri; it allows general-purpose thesaural applications to leverage many thesauri; and it supports a single thesaural user interface by which information communities can consistently organise, store and retrieve electronic documents.
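    The abstract does not reproduce TML's actual element set, so the following is a purely hypothetical sketch (standard-library Python; all element names invented) of how broader/narrower/related thesaurus relations might be encoded in a generic markup and then read back:

      import xml.etree.ElementTree as ET

      thesaurus = ET.fromstring("""
      <thesaurus>
        <term id="t1"><label>Markup languages</label><narrower ref="t2"/></term>
        <term id="t2"><label>XML</label><broader ref="t1"/><related ref="t3"/></term>
        <term id="t3"><label>SGML</label></term>
      </thesaurus>""")

      # Resolve the relation references into readable label pairs
      labels = {t.get("id"): t.findtext("label") for t in thesaurus.findall("term")}
      for term in thesaurus.findall("term"):
          for rel in term:
              if rel.tag in ("broader", "narrower", "related"):
                  print(labels[term.get("id")], rel.tag, labels[rel.get("ref")])
      # Markup languages narrower XML
      # XML broader Markup languages
      # XML related SGML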
  10. Chang, M.: ¬An electronic finding aid using extensible markup language (XML) and encoded archival description (EAD) (2000) 0.01
    0.0073902505 = product of:
      0.018475626 = sum of:
        0.010661141 = weight(_text_:a in 4886) [ClassicSimilarity], result of:
          0.010661141 = score(doc=4886,freq=10.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.19940455 = fieldWeight in 4886, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4886)
        0.007814486 = product of:
          0.015628971 = sum of:
            0.015628971 = weight(_text_:information in 4886) [ClassicSimilarity], result of:
              0.015628971 = score(doc=4886,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.1920054 = fieldWeight in 4886, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4886)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Increasingly, XML applications are appearing on the World Wide Web, from e-commerce to information management. In the case of libraries and archives, XML enables more flexible information management and retrieval than using MARC or a relational database management system. Describes a project to explore the use of XML and the EAD, and the development of a prototype electronic finding aid. It focuses on the technical aspects, and reviews the options available and the choices made. This is done within the setting of a small- to medium-sized archive with minimal tools and resources.
    Type
    a
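    As a hedged illustration of the EAD-encoded finding aid the abstract describes (standard-library Python; the element names follow the public EAD tag set - ead, archdesc, did, unittitle, dsc, c01 - but the content is invented):

      import xml.etree.ElementTree as ET

      ead = ET.fromstring("""
      <ead>
        <eadheader><eadid>example-0001</eadid></eadheader>
        <archdesc level="collection">
          <did><unittitle>Example Family Papers</unittitle><unitdate>1900-1950</unitdate></did>
          <dsc>
            <c01 level="series"><did><unittitle>Correspondence</unittitle></did></c01>
            <c01 level="series"><did><unittitle>Photographs</unittitle></did></c01>
          </dsc>
        </archdesc>
      </ead>""")

      # Render a minimal finding-aid display: collection title, then its series
      print(ead.findtext("archdesc/did/unittitle"))
      for series in ead.findall("archdesc/dsc/c01"):
          print(" -", series.findtext("did/unittitle"))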
  11. Qin, J.: Representation and organization of information in the Web space : from MARC to XML (2000) 0.01
    0.007058388 = product of:
      0.01764597 = sum of:
        0.008173384 = weight(_text_:a in 3918) [ClassicSimilarity], result of:
          0.008173384 = score(doc=3918,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15287387 = fieldWeight in 3918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=3918)
        0.009472587 = product of:
          0.018945174 = sum of:
            0.018945174 = weight(_text_:information in 3918) [ClassicSimilarity], result of:
              0.018945174 = score(doc=3918,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.23274569 = fieldWeight in 3918, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3918)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Type
    a
  12. Pharo, N.: ¬The effect of granularity and order in XML element retrieval (2008) 0.01
    0.006654713 = product of:
      0.016636781 = sum of:
        0.00770594 = weight(_text_:a in 2118) [ClassicSimilarity], result of:
          0.00770594 = score(doc=2118,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.14413087 = fieldWeight in 2118, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=2118)
        0.0089308405 = product of:
          0.017861681 = sum of:
            0.017861681 = weight(_text_:information in 2118) [ClassicSimilarity], result of:
              0.017861681 = score(doc=2118,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.21943474 = fieldWeight in 2118, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2118)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The article presents an analysis of the effect of granularity and order in an XML-encoded collection of full-text journal articles. Two hundred and eighteen sessions of searchers performing simulated work tasks in the collection have been analysed. The results show that searchers prefer to use smaller sections of the article as their source of information. In interaction sessions during which full articles are assessed, however, the articles are to a large degree evaluated as more important than their sections and subsections.
    Source
    Information processing and management. 44(2008) no.5, S.1732-1740
    Type
    a
  13. Clarke, K.S.: Extensible Markup Language (XML) (2009) 0.01
    0.006550755 = product of:
      0.016376887 = sum of:
        0.008173384 = weight(_text_:a in 3781) [ClassicSimilarity], result of:
          0.008173384 = score(doc=3781,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15287387 = fieldWeight in 3781, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3781)
        0.008203502 = product of:
          0.016407004 = sum of:
            0.016407004 = weight(_text_:information in 3781) [ClassicSimilarity], result of:
              0.016407004 = score(doc=3781,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.20156369 = fieldWeight in 3781, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3781)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    XML, the Extensible Markup Language, is a syntax for tagging, or marking up, textual information. It is a standard established by the World Wide Web Consortium (W3C) that many use when sharing or working with structured information. XML isn't used by itself, but as a tool to create other data-specific markup languages. One benefit of using XML is that it enables these languages to distinguish the content that is being marked up from its presentation, allowing for greater flexibility and data reuse. The library community has embraced XML and uses it as the foundation for many of its own data-specific markup languages. Perhaps the greatest strength of XML is that it is very easy to start working with and yet, in conjunction with many other XML-related standards and technologies, can also be used to develop complex applications.
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
    Type
    a
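    A hedged sketch of the content/presentation split described above (standard-library Python; element names invented): the same data-specific markup is rendered once as a citation string and once as HTML, without touching the underlying content.

      import xml.etree.ElementTree as ET

      record = ET.fromstring(
          "<holding><title>Extensible Markup Language (XML)</title>"
          "<author>Clarke, K.S.</author><year>2009</year></holding>")

      def as_citation(el):
          return f"{el.findtext('author')} ({el.findtext('year')}): {el.findtext('title')}"

      def as_html(el):
          return (f"<p><strong>{el.findtext('title')}</strong><br/>"
                  f"{el.findtext('author')}, {el.findtext('year')}</p>")

      print(as_citation(record))   # citation-style presentation
      print(as_html(record))       # HTML presentation of the same content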
  14. Salgáné, M.M.: Our electronic era and bibliographic information : computer-related bibliographic data formats, metadata formats and BDML (2005) 0.01
    0.0060935332 = product of:
      0.015233833 = sum of:
        0.008173384 = weight(_text_:a in 3005) [ClassicSimilarity], result of:
          0.008173384 = score(doc=3005,freq=18.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15287387 = fieldWeight in 3005, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=3005)
        0.0070604496 = product of:
          0.014120899 = sum of:
            0.014120899 = weight(_text_:information in 3005) [ClassicSimilarity], result of:
              0.014120899 = score(doc=3005,freq=10.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.1734784 = fieldWeight in 3005, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3005)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Using new communication technologies, libraries continually face new questions, possibilities and expectations. This study discusses library-related aspects of our electronic era and how computer-related data formats affect bibliographic data processing, summarizing the most important results. The first bibliographic formats for the exchange of bibliographic and related information in machine-readable form between different types of computer systems were created more than 30 years ago. The evolution of information technologies has led to the improvement of computer systems. In addition to the development of computers and media types, the Internet has a great influence on data structure as well. Since the introduction of the MARC bibliographic format, the technology of data exchange between computers and between different computer systems has reached a very sophisticated stage and has contributed to the creation of new standards in this field. Today libraries work with this new infrastructure, which brings many challenges. One of the most significant challenges is moving from a relatively homogeneous bibliographic environment to a diverse one. Despite these challenges, such changes are achievable and necessary to exploit the possibilities of new metadata and technologies like the Internet and XML (Extensible Markup Language). XML is an open standard, a universal language for data on the Web. XML is a roughly six-year-old standard designed for the description and computer-based management of (semi-)structured data and structured texts. XML gives developers the power to deliver structured data from a wide variety of applications, and it is also an ideal format for server-to-server transfer of structured data. XML is not limited to Internet use and is an especially valuable tool in the library field. In fact, XML's main strength - organizing information - makes it perfect for exchanging data between different systems. Tools that work with XML can be used to process XML records without incurring the additional costs associated with in-house software development. In addition, XML is also a suitable format for library web services. The Department of Computer-related Graphic Design and Library and Information Sciences of Debrecen University launched the BDML (Bibliographic Description Markup Language) development project in order to standardize bibliographic description with the help of XML.
    Source
    Librarianship in the information age: Proceedings of the 13th BOBCATSSS Symposium, 31 January - 2 February 2005 in Budapest, Hungary. Eds.: Marte Langeland et al.
    Type
    a
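    A minimal sketch of the kind of XML bibliographic record the BDML project described above aims at. The element names used here ("record", "title", "creator", "date") are illustrative assumptions only; the actual BDML schema is not given in the abstract.

    import xml.etree.ElementTree as ET

    def build_record(title, creator, year):
        # Build a tiny, hypothetical bibliographic record as an XML element tree.
        record = ET.Element("record")
        ET.SubElement(record, "title").text = title
        ET.SubElement(record, "creator").text = creator
        ET.SubElement(record, "date").text = str(year)
        return record

    # Serialize for server-to-server exchange or a library web service response.
    rec = build_record("Modeling documents in their context", "Salminen, A.", 2009)
    print(ET.tostring(rec, encoding="unicode"))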
  15. Peis, E.; Moya, F. de; Fernández-Molina, J.C.: Encoded archival description (EAD) conversion : a methodological proposal (2000) 0.01
    0.0060245167 = product of:
      0.015061291 = sum of:
        0.009535614 = weight(_text_:a in 4899) [ClassicSimilarity], result of:
          0.009535614 = score(doc=4899,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.17835285 = fieldWeight in 4899, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4899)
        0.005525676 = product of:
          0.011051352 = sum of:
            0.011051352 = weight(_text_:information in 4899) [ClassicSimilarity], result of:
              0.011051352 = score(doc=4899,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.13576832 = fieldWeight in 4899, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4899)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The eventual adaptation of archives to new technological possibilities could begin with the creation of digital versions of archival finding aids, which would allow the international diffusion of descriptive information. The Standard Generalized Markup Language (SGML) document type definition (DTD) for archival description, known as Encoded Archival Description (EAD), is an appropriate tool for this purpose. Presents a methodological strategy that begins with an analysis of EAD and the informational object to be marked up, allowing the semiautomatic creation of a digital version.
    Type
    a
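    The semiautomatic creation of digital finding aids proposed in the abstract above can be pictured as rule-based conversion of legacy text into EAD-style markup. The sketch below assumes a particular input format; the parsing rule and the reduced element set are illustrative, not the authors' method.

    import re
    import xml.etree.ElementTree as ET

    def line_to_component(line):
        # Assume a legacy finding-aid line such as "Box 3: Correspondence, 1921-1930".
        match = re.match(r"Box (\d+): (.+), (\d{4}-\d{4})", line)
        container, title, dates = match.groups()
        c = ET.Element("c", {"level": "file"})            # simplified EAD component
        did = ET.SubElement(c, "did")
        ET.SubElement(did, "container", {"type": "box"}).text = container
        ET.SubElement(did, "unittitle").text = title
        ET.SubElement(did, "unitdate").text = dates
        return c

    print(ET.tostring(line_to_component("Box 3: Correspondence, 1921-1930"), encoding="unicode"))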
  16. Mayo, D.; Bowers, K.: ¬The devil's shoehorn : a case study of EAD to ArchivesSpace migration at a large university (2017) 0.01
    0.0058368337 = product of:
      0.014592084 = sum of:
        0.009010308 = weight(_text_:a in 3373) [ClassicSimilarity], result of:
          0.009010308 = score(doc=3373,freq=14.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1685276 = fieldWeight in 3373, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3373)
        0.0055817757 = product of:
          0.011163551 = sum of:
            0.011163551 = weight(_text_:information in 3373) [ClassicSimilarity], result of:
              0.011163551 = score(doc=3373,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.13714671 = fieldWeight in 3373, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3373)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    A band of archivists and IT professionals at Harvard took on a project to convert nearly two million descriptions of archival collection components from marked-up text into the ArchivesSpace archival metadata management system. Starting in the mid-1990s, Harvard was an alpha implementer of EAD, an SGML (later XML) text markup language for electronic inventories, indexes, and finding aids that archivists use to wend their way through the sometimes quirky filing systems that bureaucracies establish for their records or the utter chaos in which some individuals keep their personal archives. These pathfinder documents, designed to cope with messy reality, can themselves be difficult to classify. Portions of them are rigorously structured, while other parts are narrative. Early documents predate the establishment of the standard; many feature idiosyncratic encoding that had been through several machine conversions, while others were freshly encoded and fairly consistent. In this paper, we will cover the practical and technical challenges involved in preparing a large (900MiB) corpus of XML for ingest into an open-source archival information system (ArchivesSpace). This case study will give an overview of the project, discuss problem discovery and problem solving, address the technical challenges, analysis, solutions, and decisions, and provide information on the tools produced and lessons learned. The authors of this piece are Kate Bowers, Collections Services Archivist for Metadata, Systems, and Standards at the Harvard University Archives, and Dave Mayo, a Digital Library Software Engineer for Harvard's Library and Technology Services. Kate was heavily involved in both metadata analysis and later problem solving, while Dave was the sole full-time developer assigned to the migration project.
    Type
    a
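    The paper above concerns preparing a large and uneven XML corpus for migration. As a hedged illustration (not the Harvard team's actual tooling), a pre-migration scan might stream-parse every file and flag those that are not well-formed or lack an expected element:

    import pathlib
    import xml.etree.ElementTree as ET

    def scan_corpus(root_dir):
        # Walk a directory of EAD XML files and collect problem reports.
        problems = []
        for path in pathlib.Path(root_dir).rglob("*.xml"):
            try:
                has_archdesc = any(
                    elem.tag.endswith("archdesc")        # tolerate namespaced documents
                    for _, elem in ET.iterparse(path, events=("start",))
                )
                if not has_archdesc:
                    problems.append((path, "no <archdesc> element"))
            except ET.ParseError as err:
                problems.append((path, f"not well-formed: {err}"))
        return problems

    # "ead_corpus" is a placeholder directory name.
    for path, reason in scan_corpus("ead_corpus"):
        print(path, "->", reason)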
  17. Miller, E.; Schloss. B.; Lassila, O.; Swick, R.R.: Resource Description Framework (RDF) : model and syntax (1997) 0.01
    0.005719375 = product of:
      0.014298437 = sum of:
        0.010391194 = weight(_text_:a in 5903) [ClassicSimilarity], result of:
          0.010391194 = score(doc=5903,freq=38.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.19435552 = fieldWeight in 5903, product of:
              6.164414 = tf(freq=38.0), with freq of:
                38.0 = termFreq=38.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.02734375 = fieldNorm(doc=5903)
        0.003907243 = product of:
          0.007814486 = sum of:
            0.007814486 = weight(_text_:information in 5903) [ClassicSimilarity], result of:
              0.007814486 = score(doc=5903,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.0960027 = fieldWeight in 5903, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=5903)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    RDF - the Resource Description Framework - is a foundation for processing metadata; it provides interoperability between applications that exchange machine-understandable information on the Web. RDF emphasizes facilities to enable automated processing of Web resources. RDF metadata can be used in a variety of application areas; for example: in resource discovery to provide better search engine capabilities; in cataloging for describing the content and content relationships available at a particular Web site, page, or digital library; by intelligent software agents to facilitate knowledge sharing and exchange; in content rating; in describing collections of pages that represent a single logical "document"; for describing intellectual property rights of Web pages, and in many others. RDF with digital signatures will be key to building the "Web of Trust" for electronic commerce, collaboration, and other applications. Metadata is "data about data" or specifically in the context of RDF "data describing web resources." The distinction between "data" and "metadata" is not an absolute one; it is a distinction created primarily by a particular application. Many times the same resource will be interpreted in both ways simultaneously. RDF encourages this view by using XML as the encoding syntax for the metadata. The resources being described by RDF are, in general, anything that can be named via a URI. The broad goal of RDF is to define a mechanism for describing resources that makes no assumptions about a particular application domain, nor defines the semantics of any application domain. The definition of the mechanism should be domain neutral, yet the mechanism should be suitable for describing information about any domain. This document introduces a model for representing RDF metadata and one syntax for expressing and transporting this metadata in a manner that maximizes the interoperability of independently developed web servers and clients. The syntax described in this document is best considered as a "serialization syntax" for the underlying RDF representation model. The serialization syntax is XML, XML being the W3C's work-in-progress to define a richer Web syntax for a variety of applications. RDF and XML are complementary; there will be alternate ways to represent the same RDF data model, some more suitable for direct human authoring. Future work may lead to including such alternatives in this document.
    Content
    RDF Data Model
    At the core of RDF is a model for representing named properties and their values. These properties serve both to represent attributes of resources (and in this sense correspond to usual attribute-value-pairs) and to represent relationships between resources. The RDF data model is a syntax-independent way of representing RDF statements. RDF statements that are syntactically very different could mean the same thing. This concept of equivalence in meaning is very important when performing queries, aggregation and a number of other tasks at which RDF is aimed. The equivalence is defined in a clean machine understandable way. Two pieces of RDF are equivalent if and only if their corresponding data model representations are the same.
    Table of contents:
    1. Introduction
    2. RDF Data Model
    3. RDF Grammar
    4. Signed RDF
    5. Examples
    6. Appendix A: Brief Explanation of XML Namespaces
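    The equivalence described above - syntactically different RDF meaning the same thing when its data model representations match - can be demonstrated with the rdflib Python library. A small sketch with made-up example URIs:

    from rdflib import Graph
    from rdflib.compare import isomorphic

    # The same statement serialized two ways: RDF/XML and Turtle.
    rdf_xml = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                          xmlns:dc="http://purl.org/dc/elements/1.1/">
      <rdf:Description rdf:about="http://example.org/doc">
        <dc:creator>Lassila, O.</dc:creator>
      </rdf:Description>
    </rdf:RDF>"""

    turtle = """@prefix dc: <http://purl.org/dc/elements/1.1/> .
    <http://example.org/doc> dc:creator "Lassila, O." ."""

    g1 = Graph().parse(data=rdf_xml, format="xml")
    g2 = Graph().parse(data=turtle, format="turtle")
    print(isomorphic(g1, g2))  # True: different syntax, same data model representation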
  18. Salminen, A.: Modeling documents in their context (2009) 0.01
    0.005513504 = product of:
      0.01378376 = sum of:
        0.008258085 = weight(_text_:a in 3847) [ClassicSimilarity], result of:
          0.008258085 = score(doc=3847,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1544581 = fieldWeight in 3847, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3847)
        0.005525676 = product of:
          0.011051352 = sum of:
            0.011051352 = weight(_text_:information in 3847) [ClassicSimilarity], result of:
              0.011051352 = score(doc=3847,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.13576832 = fieldWeight in 3847, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3847)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This entry describes notions and methods for analyzing and modeling documents in an organizational context. A model for the analysis process is provided and methods for data gathering, modeling, and user needs analysis are described. The methods were originally developed and tested during document standardization activities carried out in the Finnish Parliament and ministries. Later, the methods have been adopted and adapted by other Finnish organizations in their document management development projects. The methods are intended especially for cases where the goal is to develop an Extensible Markup Language (XML)-based solution for document management. This entry emphasizes the importance of analyzing and describing documents in their organizational context.
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
    Type
    a
  19. Miller, D.R.; Clarke, K.S.: Putting XML to work in the library : tools for improving access and management (2004) 0.01
    0.0055105956 = product of:
      0.013776489 = sum of:
        0.007078358 = weight(_text_:a in 1438) [ClassicSimilarity], result of:
          0.007078358 = score(doc=1438,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.13239266 = fieldWeight in 1438, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=1438)
        0.0066981306 = product of:
          0.013396261 = sum of:
            0.013396261 = weight(_text_:information in 1438) [ClassicSimilarity], result of:
              0.013396261 = score(doc=1438,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16457605 = fieldWeight in 1438, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1438)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The authors, hoping to stimulate interest in XML (Extensible Markup Language) and explain its value to the library community, offer a fine introduction to the topic. The opening chapter defines XML as "a system for electronically tagging or marking up documents in order to label, organize, and categorize their content" and then goes on to describe its origins and fundamental building blocks. Subsequent chapters address related technologies, schema development, XML-based tools, and current and future library uses. The authors argue persuasively for increased XML use, emphasizing its advantages over HTML in flexibility, interoperability, extensibility, and internationalization. Information is detailed, deftly written, and supported by numerous examples. Readers without a technological bent may find the text daunting, but their perseverance will be richly rewarded. Particularly recommended for webmasters and those working in library information systems and technical services.
  20. Miller, D.R.: XML: Libraries' strategic opportunity (2001) 0.01
    0.0054589617 = product of:
      0.0136474045 = sum of:
        0.0068111527 = weight(_text_:a in 1467) [ClassicSimilarity], result of:
          0.0068111527 = score(doc=1467,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12739488 = fieldWeight in 1467, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1467)
        0.006836252 = product of:
          0.013672504 = sum of:
            0.013672504 = weight(_text_:information in 1467) [ClassicSimilarity], result of:
              0.013672504 = score(doc=1467,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16796975 = fieldWeight in 1467, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1467)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    XML (eXtensible Markup Language) is fast gaining favor as the universal format for data and document exchange -- in effect becoming the lingua franca of the Information Age. Currently, "library information" is at a particular disadvantage on the rapidly evolving World Wide Web. Why? Despite libraries' explorations of web catalogs, scanning projects, digital data repositories, and creation of web pages galore, there remains a digital divide. The core of libraries' data troves is stored in proprietary formats of integrated library systems (ILS) and in the complex and arcane MARC formats -- both restricted chiefly to the province of technical services and systems librarians. Even they are hard-pressed to extract and integrate this wealth of data with resources from outside this rarefied environment. Segregation of library information underlies many difficulties: producing standard bibliographic citations from MARC data, automatically creating new materials lists (including new web resources) on a particular topic, exchanging data with our vendors, and even migrating from one ILS to another. Why do we continue to hobble our potential by embracing these self-imposed limitations? Most ILSs began in libraries, which soon recognized the pitfalls of do-it-yourself solutions. Thus, we wisely anticipated the necessity for standards. However, with the advent of the web, we soon found "our" collections and a flood of new resources appearing in digital format on opposite sides of the divide. If we do not act quickly to integrate library resources with mainstream web resources, we are in grave danger of becoming marginalized.
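    As a hedged sketch of the kind of bridge the article calls for - exposing MARC-bound data in a web-friendly XML form - the fragment below re-expresses a single title field as MARCXML-style markup using only the Python standard library. The field values are illustrative, and this is not a full MARC conversion.

    import xml.etree.ElementTree as ET

    MARCXML_NS = "http://www.loc.gov/MARC21/slim"

    def title_to_marcxml(title, subtitle):
        # Wrap a 245 (title statement) field in a minimal MARCXML record.
        ET.register_namespace("", MARCXML_NS)
        record = ET.Element(f"{{{MARCXML_NS}}}record")
        field = ET.SubElement(record, f"{{{MARCXML_NS}}}datafield",
                              {"tag": "245", "ind1": "0", "ind2": "0"})
        ET.SubElement(field, f"{{{MARCXML_NS}}}subfield", {"code": "a"}).text = title
        ET.SubElement(field, f"{{{MARCXML_NS}}}subfield", {"code": "b"}).text = subtitle
        return ET.tostring(record, encoding="unicode")

    print(title_to_marcxml("XML :", "libraries' strategic opportunity /"))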
