Search (96 results, page 1 of 5)

  • theme_ss:"Datenformate"
  1. Manguinhas, H.; Freire, N.; Machado, J.; Borbinha, J.: Supporting multilingual bibliographic resource discovery with Functional Requirements for Bibliographic Records (2012) 0.05
    0.049018458 = product of:
      0.098036915 = sum of:
        0.06504348 = weight(_text_:web in 133) [ClassicSimilarity], result of:
          0.06504348 = score(doc=133,freq=10.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.40312994 = fieldWeight in 133, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=133)
        0.032993436 = weight(_text_:search in 133) [ClassicSimilarity], result of:
          0.032993436 = score(doc=133,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.19200584 = fieldWeight in 133, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=133)
      0.5 = coord(2/4)
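The score breakdowns shown with each result follow Lucene's ClassicSimilarity (TF-IDF) formula. As a minimal sketch, the top-ranked document's score can be reproduced from the listed factors (term frequency, idf, queryNorm, fieldNorm and the coord factor); the values below are copied from the explain tree above.

```python
import math

# Factors copied from the explain tree above for weight(_text_:web in 133).
freq = 10.0                 # term frequency of "web" in the field
idf = 3.2635105             # idf(docFreq=4597, maxDocs=44218)
query_norm = 0.049439456    # queryNorm
field_norm = 0.0390625      # fieldNorm(doc=133)

tf = math.sqrt(freq)                       # 3.1622777 = tf(freq=10.0)
query_weight = idf * query_norm            # 0.16134618
field_weight = tf * idf * field_norm       # 0.40312994
web_score = query_weight * field_weight    # 0.06504348

# The second matching clause (_text_:search) contributes 0.032993436; the sum
# is then scaled by coord(2/4) = 0.5 because 2 of 4 query clauses matched.
search_score = 0.032993436
doc_score = 0.5 * (web_score + search_score)

print(round(web_score, 8))   # ~0.06504348
print(round(doc_score, 9))   # ~0.049018458, the score shown for result 1
```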
    
    Abstract
    This paper describes an experiment exploring the hypothesis that innovative application of the Functional Requirements for Bibliographic Records (FRBR) principles can complement traditional bibliographic resource discovery systems in order to improve the user experience. A specialized service was implemented that, when given a plain list of results from a regular online catalogue, was able to process, enrich and present that list in a more relevant way for the user. This service pre-processes the records of a traditional online catalogue in order to build a semantic structure following the FRBR model. The service also explores web search features that have been revolutionizing the way users conceptualize resource discovery, such as relevance ranking and metasearching. This work was developed in the context of the TELPlus project. We processed nearly one hundred thousand bibliographic and authority records, in multiple languages, and originating from twelve European national libraries. This paper describes the architecture of the service and the main challenges faced, especially concerning the extraction and linking of the relevant FRBR entities from the bibliographic metadata produced by the libraries. The service was evaluated by end users, who filled out a questionnaire after using a traditional online catalogue and the new service, both with the same bibliographic collection. The analysis of the results supports the hypothesis that FRBR can be implemented for resource discovery in a non-intrusive way, reusing the data of any existing traditional bibliographic system.
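The FRBR pre-processing described above essentially collocates catalogue records that realize the same work. The sketch below illustrates that idea by grouping a flat result list under a normalized author/title work key; the field names and the normalization rule are illustrative assumptions, not the TELPlus implementation.

```python
import re
import unicodedata
from collections import defaultdict

def work_key(record: dict) -> tuple:
    """Crude FRBR work key from author and title (illustrative normalization)."""
    def norm(value: str) -> str:
        value = unicodedata.normalize("NFKD", value).encode("ascii", "ignore").decode()
        return re.sub(r"[^a-z0-9 ]", "", value.lower()).strip()
    return (norm(record.get("author", "")), norm(record.get("title", "")))

def frbrize(records: list) -> dict:
    """Collocate flat catalogue records under the work they realize."""
    works = defaultdict(list)
    for rec in records:
        works[work_key(rec)].append(rec)
    return works

# A flat result list as a regular OPAC would return it (invented records).
results = [
    {"author": "Homer", "title": "Odyssey", "lang": "eng", "year": 1996},
    {"author": "Homér", "title": "Odyssey", "lang": "ger", "year": 1958},
    {"author": "Twain, Mark", "title": "Tom Sawyer", "lang": "eng", "year": 2004},
]

for key, editions in frbrize(results).items():
    print(key, "->", len(editions), "record(s)")
```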
    Content
    Contribution to a special issue on: Semantic Web and Reasoning for Cultural Heritage and Digital Libraries. See: http://www.semantic-web-journal.net/content/supporting-multilingual-bibliographic-resource-discovery-functional-requirements-bibliograph and http://www.semantic-web-journal.net/sites/default/files/swj145_2.pdf.
    Source
    Semantic Web journal. 3(2012) no.1, S.3-21
  2. Tennant, R.: ¬A bibliographic metadata infrastructure for the twenty-first century (2004) 0.04
    0.042216495 = product of:
      0.08443299 = sum of:
        0.046541322 = weight(_text_:web in 2845) [ClassicSimilarity], result of:
          0.046541322 = score(doc=2845,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.2884563 = fieldWeight in 2845, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=2845)
        0.037891667 = product of:
          0.075783335 = sum of:
            0.075783335 = weight(_text_:22 in 2845) [ClassicSimilarity], result of:
              0.075783335 = score(doc=2845,freq=4.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.4377287 = fieldWeight in 2845, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2845)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The current library bibliographic infrastructure was constructed in the early days of computers - before the Web, XML, and a variety of other technological advances that now offer new opportunities. General requirements of a modern metadata infrastructure for libraries are identified, including such qualities as versatility, extensibility, granularity, and openness. A new kind of metadata infrastructure is then proposed that exhibits at least some of those qualities. Some key challenges that must be overcome to implement a change of this magnitude are identified.
    Date
    9.12.2005 19:22:38
    Source
    Library hi tech. 22(2004) no.2, S.175-181
  3. Xu, A.; Hess, K.; Akerman, L.: From MARC to BIBFRAME 2.0 : Crosswalks (2018) 0.04
    0.03706527 = product of:
      0.07413054 = sum of:
        0.041137107 = weight(_text_:web in 5172) [ClassicSimilarity], result of:
          0.041137107 = score(doc=5172,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.25496176 = fieldWeight in 5172, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5172)
        0.032993436 = weight(_text_:search in 5172) [ClassicSimilarity], result of:
          0.032993436 = score(doc=5172,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.19200584 = fieldWeight in 5172, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5172)
      0.5 = coord(2/4)
    
    Abstract
    One of the big challenges facing academic libraries today is to increase their relevance to their user communities. If libraries can increase the visibility of their resources on the open web, they improve their chances of reaching their user communities via the user's first search experience. BIBFRAME and library Linked Data will enable libraries to publish their resources in a way that the Web understands, to consume Linked Data to enrich their resources with material relevant to their user communities, and to visualize networks across collections. However, one of the important steps in transitioning to BIBFRAME and library Linked Data involves crosswalks: mapping MARC fields and subfields across data models and performing the data reformatting necessary to comply with the specifications of the new model, currently BIBFRAME 2.0. This article looks into how the Library of Congress has mapped library bibliographic data from the MARC format to the BIBFRAME 2.0 model and vocabulary (published and updated since April 2016, and available from http://www.loc.gov/bibframe/docs/index.html), based on the recently released conversion specifications and converter developed by the Library of Congress with input from many community members. The BIBFRAME 2.0 standard and conversion tools will enable libraries to transform bibliographic data from MARC into BIBFRAME 2.0, which introduces a Linked Data model as the improved method of bibliographic control for the future, and to make bibliographic information more useful within and beyond library communities.
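As a rough illustration of what such a crosswalk does (not the Library of Congress converter itself), the sketch below maps a single MARC 245 field onto a bf:Instance with a nested bf:Title in BIBFRAME 2.0 Turtle; the input layout and the example instance URI are assumptions.

```python
# Toy crosswalk for one field only (not the LC converter): MARC 245 $a/$b to a
# bf:Instance with a nested bf:Title, emitted as Turtle.
marc_245 = {"tag": "245", "ind1": "1", "ind2": "0",
            "subfields": {"a": "From MARC to BIBFRAME 2.0 :", "b": "crosswalks"}}

def title_to_turtle(field: dict, instance_uri: str) -> str:
    main = field["subfields"].get("a", "").rstrip(" :/=")
    sub = field["subfields"].get("b", "").rstrip(" :/=")
    lines = [
        "@prefix bf: <http://id.loc.gov/ontologies/bibframe/> .",
        f"<{instance_uri}> a bf:Instance ;",
        "    bf:title [",
        "        a bf:Title ;",
        f'        bf:mainTitle "{main}" ;',
    ]
    if sub:
        lines.append(f'        bf:subtitle "{sub}" ;')
    lines.append("    ] .")
    return "\n".join(lines)

# The instance URI is a placeholder; real conversions mint identifiers differently.
print(title_to_turtle(marc_245, "http://example.org/instance/1"))
```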
  4. Guenther, R.S.: Using the Metadata Object Description Schema (MODS) for resource description : guidelines and applications (2004) 0.03
    0.032083966 = product of:
      0.06416793 = sum of:
        0.04072366 = weight(_text_:web in 2837) [ClassicSimilarity], result of:
          0.04072366 = score(doc=2837,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.25239927 = fieldWeight in 2837, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2837)
        0.023444273 = product of:
          0.046888545 = sum of:
            0.046888545 = weight(_text_:22 in 2837) [ClassicSimilarity], result of:
              0.046888545 = score(doc=2837,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.2708308 = fieldWeight in 2837, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2837)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This paper describes the Metadata Object Description Schema (MODS), its accompanying documentation and some of its applications. It reviews the MODS user guidelines provided by the Library of Congress and how they enable a user of the schema to apply MODS consistently as a metadata scheme. Because the schema itself could not fully document appropriate usage, the guidelines provide element definitions, history, relationships with other elements, usage conventions, and examples. Short descriptions of some MODS applications are given, along with a more detailed discussion of its use in the Library of Congress's Minerva project for Web archiving.
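For readers unfamiliar with the schema, the sketch below assembles a minimal MODS record with only the Python standard library, using core elements named in the LC guidelines (titleInfo/title, name/namePart, typeOfResource, originInfo/dateIssued); the sample data and the version attribute are illustrative.

```python
import xml.etree.ElementTree as ET

MODS_NS = "http://www.loc.gov/mods/v3"
ET.register_namespace("mods", MODS_NS)

def mods_record(title: str, author: str, date_issued: str) -> ET.Element:
    """Build a minimal MODS record with a few core elements (illustrative only)."""
    q = lambda tag: f"{{{MODS_NS}}}{tag}"
    mods = ET.Element(q("mods"), {"version": "3.7"})
    ET.SubElement(ET.SubElement(mods, q("titleInfo")), q("title")).text = title
    name = ET.SubElement(mods, q("name"), {"type": "personal"})
    ET.SubElement(name, q("namePart")).text = author
    ET.SubElement(mods, q("typeOfResource")).text = "text"
    origin = ET.SubElement(mods, q("originInfo"))
    ET.SubElement(origin, q("dateIssued")).text = date_issued
    return mods

record = mods_record("Using MODS for resource description", "Guenther, Rebecca S.", "2004")
print(ET.tostring(record, encoding="unicode"))
```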
    Source
    Library hi tech. 22(2004) no.1, S.89-98
  5. Mönch, C.; Aalberg, T.: Automatic conversion from MARC to FRBR (2003) 0.03
    0.03170284 = product of:
      0.06340568 = sum of:
        0.046659768 = weight(_text_:search in 2422) [ClassicSimilarity], result of:
          0.046659768 = score(doc=2422,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.27153727 = fieldWeight in 2422, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2422)
        0.01674591 = product of:
          0.03349182 = sum of:
            0.03349182 = weight(_text_:22 in 2422) [ClassicSimilarity], result of:
              0.03349182 = score(doc=2422,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.19345059 = fieldWeight in 2422, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2422)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Catalogs have for centuries been the main tool that enabled users to search for items in a library by author, title, or subject. A catalog can be interpreted as a set of bibliographic records, where each record acts as a surrogate for a publication. Every record describes a specific publication and contains the data that is used to create the indexes of search systems and the information that is presented to the user. Bibliographic records are often captured and exchanged by the use of the MARC format. Although there are numerous "dialects" of the MARC format in use, they are usually crafted on the same basis and are interoperable with each other, to a certain extent. The data model of a MARC-based catalog, however, is "[...] extremely non-normalized with excessive replication of data" [1]. For instance, a literary work that exists in numerous editions and translations is likely to yield a large result set, because each edition or translation is represented by an individual record that is unrelated to other records describing the same work.
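The non-normalization problem can be illustrated with a toy index: because each edition or translation is its own surrogate record, a simple title/author index returns one hit per edition for the same work. The records and the indexing rule below are invented for illustration.

```python
from collections import defaultdict

# Each edition/translation is its own surrogate record (invented sample data).
records = {
    1: {"title": "Faust", "author": "Goethe, Johann Wolfgang von", "lang": "ger", "year": 1972},
    2: {"title": "Faust", "author": "Goethe, Johann Wolfgang von", "lang": "eng", "year": 1988},
    3: {"title": "Faust", "author": "Goethe, Johann Wolfgang von", "lang": "fre", "year": 1995},
}

def build_index(recs: dict) -> dict:
    """Word -> record-id inverted index over title and author, as a search system might build it."""
    index = defaultdict(set)
    for rec_id, rec in recs.items():
        for word in (rec["title"] + " " + rec["author"]).lower().replace(",", " ").split():
            index[word].add(rec_id)
    return index

index = build_index(records)
# One literary work, but the flat record model yields one hit per edition:
print(sorted(index["faust"]))   # [1, 2, 3]
```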
    Source
    Research and advanced technology for digital libraries : 7th European Conference, proceedings / ECDL 2003, Trondheim, Norway, August 17-22, 2003
  6. McCallum, S.H.: Machine Readable Cataloging (MARC): 1975-2007 (2009) 0.03
    0.02750054 = product of:
      0.05500108 = sum of:
        0.03490599 = weight(_text_:web in 3841) [ClassicSimilarity], result of:
          0.03490599 = score(doc=3841,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.21634221 = fieldWeight in 3841, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3841)
        0.02009509 = product of:
          0.04019018 = sum of:
            0.04019018 = weight(_text_:22 in 3841) [ClassicSimilarity], result of:
              0.04019018 = score(doc=3841,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.23214069 = fieldWeight in 3841, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3841)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This entry describes the development of the MARC Communications format. After a brief overview of the initial 10 years, it describes the succeeding phases of development up to the present. This takes the reader through the expansion of the format for all types of bibliographic data and for multiple character scripts. At the same time, a large business community developed that offered products based on the format to the library community. The introduction of the Internet in the 1990s and of Web technology brought new opportunities and challenges, and the format was adapted to this new environment. There has been a great deal of international adoption of the format that has continued into the 2000s. More recently, new syntaxes for MARC 21 and new models are being explored.
    Date
    27. 8.2011 14:22:38
  7. Kaiser, M.; Lieder, H.J.; Majcen, K.; Vallant, H.: New ways of sharing and using authority information : the LEAF project (2003) 0.03
    0.026209105 = product of:
      0.05241821 = sum of:
        0.029088326 = weight(_text_:web in 1166) [ClassicSimilarity], result of:
          0.029088326 = score(doc=1166,freq=8.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.18028519 = fieldWeight in 1166, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1166)
        0.023329884 = weight(_text_:search in 1166) [ClassicSimilarity], result of:
          0.023329884 = score(doc=1166,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.13576864 = fieldWeight in 1166, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1166)
      0.5 = coord(2/4)
    
    Abstract
    This article presents an overview of the LEAF project (Linking and Exploring Authority Files)1, which has set out to provide a framework for international, collaborative work in the sector of authority data with respect to authority control. Elaborating the virtues of authority control in today's Web environment is an almost futile exercise, since so much has been said and written about it in the last few years.2 The World Wide Web is generally understood to be poorly structured, both with regard to content and to locating required information. Highly structured databases might be viewed as small islands of precision within this chaotic environment. Though the Web in general, or any particular structured database, would greatly benefit from increased authority control, it should be noted that the following considerations refer only to authority control with regard to databases of "memory institutions" (i.e., libraries, archives, and museums). Moreover, when talking about authority records, we refer exclusively to personal name authority records that describe a specific person. Although different types of authority records could indeed be used in similar ways to the ones presented in this article, discussing those different types is outside the scope of both the LEAF project and this article. Personal name authority records, like all other "authorities", are maintained as separate records and linked to various kinds of descriptive records. Name authority records are usually kept either in independent databases or in separate tables in the database containing the descriptive records. This practice points to a crucial benefit: by linking any number of descriptive records to an authorized name record, the records related to this entity are collocated in the database. Variant forms of the authorized name are referenced in the authority records and thus ensure the consistency of the database while enabling search and retrieval operations that produce accurate results. On the one hand, authority control may be viewed as a positive prerequisite of a consistent catalogue; on the other, the creation of new authority records is a very time-consuming and expensive undertaking. As a consequence, various models of providing access to existing authority records have emerged: the Library of Congress and the French National Library (Bibliothèque nationale de France), for example, make their authority records available to all via a web-based search service.3 In Germany, the Personal Name Authority File (PND, Personennamendatei4) maintained by the German National Library (Die Deutsche Bibliothek, Frankfurt/Main) offers a different approach to shared access: within a closed network, participating institutions have online access to their pooled data. The number of recent projects and initiatives that have addressed the issue of authority control in one way or another is considerable.5 Two important current initiatives should be mentioned here: the Name Authority Cooperative (NACO) and the Virtual International Authority File (VIAF).
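A minimal sketch of the linking model described here: an authority record carries the authorized name plus variant forms, and descriptive records point to it by identifier, so a search under any form collocates the related records. The identifiers and record layout are illustrative assumptions.

```python
# Illustrative data structures only: one personal name authority record with
# variant forms, linked by identifier from two descriptive records.
authorities = {
    "auth:0001": {
        "authorized": "Goethe, Johann Wolfgang von, 1749-1832",
        "variants": ["Goethe, J. W. von", "Gete, Iogann Vol'fgang"],
    }
}
bib_records = [
    {"title": "Faust", "creator": "auth:0001"},
    {"title": "Die Leiden des jungen Werthers", "creator": "auth:0001"},
]

def find_by_name(name: str) -> list:
    """Resolve an entered name (authorized or variant) and collocate linked records."""
    for auth_id, auth in authorities.items():
        if name == auth["authorized"] or name in auth["variants"]:
            return [rec for rec in bib_records if rec["creator"] == auth_id]
    return []

# A search under a variant form still retrieves every record for the entity:
print([rec["title"] for rec in find_by_name("Goethe, J. W. von")])
```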
  8. Miller, D.R.: XML: Libraries' strategic opportunity (2001) 0.02
    0.01781289 = product of:
      0.07125156 = sum of:
        0.07125156 = weight(_text_:web in 1467) [ClassicSimilarity], result of:
          0.07125156 = score(doc=1467,freq=12.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.4416067 = fieldWeight in 1467, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1467)
      0.25 = coord(1/4)
    
    Abstract
    XML (eXtensible Markup Language) is fast gaining favor as the universal format for data and document exchange -- in effect becoming the lingua franca of the Information Age. Currently, "library information" is at a particular disadvantage on the rapidly evolving World Wide Web. Why? Despite libraries' explorations of web catalogs, scanning projects, digital data repositories, and creation of web pages galore, there remains a digital divide. The core of libraries' data troves is stored in proprietary formats of integrated library systems (ILS) and in the complex and arcane MARC formats -- both restricted chiefly to the province of technical services and systems librarians. Even they are hard-pressed to extract and integrate this wealth of data with resources from outside this rarefied environment. Segregation of library information underlies many difficulties: producing standard bibliographic citations from MARC data, automatically creating new materials lists (including new web resources) on a particular topic, exchanging data with our vendors, and even migrating from one ILS to another. Why do we continue to hobble our potential by embracing these self-imposed limitations? Most ILSs began in libraries, which soon recognized the pitfalls of do-it-yourself solutions. Thus, we wisely anticipated the necessity for standards. However, with the advent of the web, we soon found "our" collections and a flood of new resources appearing in digital format on opposite sides of the divide. If we do not act quickly to integrate library resources with mainstream web resources, we are in grave danger of becoming marginalized.
  9. Holt, B.: Presentation of UNIMARC on the Web : new fields, including the one for electronic resources (1999) 0.02
    0.017452994 = product of:
      0.06981198 = sum of:
        0.06981198 = weight(_text_:web in 6020) [ClassicSimilarity], result of:
          0.06981198 = score(doc=6020,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.43268442 = fieldWeight in 6020, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.09375 = fieldNorm(doc=6020)
      0.25 = coord(1/4)
    
  10. Qin, J.: Representation and organization of information in the Web space : from MARC to XML (2000) 0.02
    0.017452994 = product of:
      0.06981198 = sum of:
        0.06981198 = weight(_text_:web in 3918) [ClassicSimilarity], result of:
          0.06981198 = score(doc=3918,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.43268442 = fieldWeight in 3918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.09375 = fieldNorm(doc=3918)
      0.25 = coord(1/4)
    
  11. Michard, A.; Pham Dac, D.: Description of collections and encyclopedias on the Web using XML (1998) 0.02
    0.016454842 = product of:
      0.06581937 = sum of:
        0.06581937 = weight(_text_:web in 3493) [ClassicSimilarity], result of:
          0.06581937 = score(doc=3493,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.4079388 = fieldWeight in 3493, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=3493)
      0.25 = coord(1/4)
    
    Abstract
    Cataloguing artworks relies on the availability of classification schemes, often represented by hierarchical thesauri. Comments on the limitations of current practices and tools and proposes a new approach for the cooperative production of multilingual and multicultural classification schemes, exploiting some features of the emerging Extensible Markup Language (XML) based Web.
  12. Scholz, M.: Wie können Daten im Web mit JSON nachgenutzt werden? (2023) 0.02
    0.016454842 = product of:
      0.06581937 = sum of:
        0.06581937 = weight(_text_:web in 5345) [ClassicSimilarity], result of:
          0.06581937 = score(doc=5345,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.4079388 = fieldWeight in 5345, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=5345)
      0.25 = coord(1/4)
    
    Abstract
    Martin Scholz is a computer scientist at the Universitätsbibliothek Erlangen-Nürnberg. As head of the library's Digital Development and Data Management (Digitale Entwicklung und Datenmanagement) group, he works extensively with web technologies and data transformation. Here he takes on the current ABI-Technik question: how can data on the web be reused with JSON?
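In the spirit of that question, the sketch below parses a JSON document with Python's standard library and pulls out a few fields for reuse; the response structure and the endpoint mentioned in the comment are hypothetical, not a specific API.

```python
import json

# Stand-in for a JSON response from some web API; the structure is hypothetical.
response_body = """
{
  "totalItems": 2,
  "member": [
    {"title": "Was sind und was sollen bibliothekarische Datenformate", "year": 1994},
    {"title": "Wie können Daten im Web mit JSON nachgenutzt werden?", "year": 2023}
  ]
}
"""

data = json.loads(response_body)                 # JSON text -> Python dicts/lists
titles = [item["title"] for item in data["member"]]
print(data["totalItems"], titles)

# Against a live endpoint the parsing is the same after fetching the body, e.g.:
#   import urllib.request
#   with urllib.request.urlopen("https://example.org/search?format=json") as resp:
#       data = json.load(resp)
```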
  13. Eversberg, B.: Was sind und was sollen bibliothekarische Datenformate (1994) 0.01
    0.014544163 = product of:
      0.05817665 = sum of:
        0.05817665 = weight(_text_:web in 1742) [ClassicSimilarity], result of:
          0.05817665 = score(doc=1742,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.36057037 = fieldWeight in 1742, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.078125 = fieldNorm(doc=1742)
      0.25 = coord(1/4)
    
    Footnote
    Neuere Ausgaben nur über die Web-Seite: http://www.allegro-c.de/allegro/formate/formate.htm
  14. Cranefield, S.: Networked knowledge representation and exchange using UML and RDF (2001) 0.01
    0.014397987 = product of:
      0.05759195 = sum of:
        0.05759195 = weight(_text_:web in 5896) [ClassicSimilarity], result of:
          0.05759195 = score(doc=5896,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.35694647 = fieldWeight in 5896, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5896)
      0.25 = coord(1/4)
    
    Abstract
    This paper proposes the use of the Unified Modeling Language (UML) as a language for modelling ontologies for Web resources and the knowledge contained within them. To provide a mechanism for serialising and processing object diagrams representing knowledge, a pair of XSLT stylesheets have been developed to map from XML Metadata Interchange (XMI) encodings of class diagrams to corresponding RDF schemas and to Java classes representing the concepts in the ontologies. The Java code includes methods for marshalling and unmarshalling object-oriented information between in-memory data structures and RDF serialisations of that information. This provides a convenient mechanism for Java applications to share knowledge on the Web.
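The paper's pipeline transforms XMI with XSLT; as a loose, language-shifted sketch of the same mapping idea (not the authors' stylesheets), the code below reads a heavily simplified XMI-like class fragment and emits an RDF Schema description in Turtle. The element names in the input are assumptions.

```python
import xml.etree.ElementTree as ET

# Heavily simplified stand-in for an XMI class-diagram fragment; real XMI is far
# more verbose and uses different element names.
xmi = """
<Model>
  <Class name="Person">
    <Attribute name="name" type="String"/>
    <Attribute name="homepage" type="String"/>
  </Class>
</Model>
"""

def xmi_to_rdfs(xml_text: str, ns: str = "http://example.org/onto#") -> str:
    """Emit RDF Schema (Turtle) classes and properties for each UML class found."""
    lines = [
        "@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .",
        "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .",
        f"@prefix ex:   <{ns}> .",
    ]
    for cls in ET.fromstring(xml_text).iter("Class"):
        lines.append(f'ex:{cls.get("name")} a rdfs:Class .')
        for attr in cls.iter("Attribute"):
            lines.append(f'ex:{attr.get("name")} a rdf:Property ; '
                         f'rdfs:domain ex:{cls.get("name")} .')
    return "\n".join(lines)

print(xmi_to_rdfs(xmi))
```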
  15. Johnson, B.C.: XML and MARC : which is "right"? (2001) 0.01
    0.014397987 = product of:
      0.05759195 = sum of:
        0.05759195 = weight(_text_:web in 5423) [ClassicSimilarity], result of:
          0.05759195 = score(doc=5423,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.35694647 = fieldWeight in 5423, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5423)
      0.25 = coord(1/4)
    
    Abstract
    This article explores recent discussions about appropriate mark-up conventions for library information intended for use on the World Wide Web. In particular, it examines whether the MARC 21 format will continue to be useful and whether the time is right for a full-fledged conversion effort to XML. The author concludes that the MARC format will be relevant well into the future, and its use will not hamper access to bibliographic information via the web. Early exploratory XML efforts carried out at Stanford University's Lane Medical Library are reported on. Although these efforts are a promising start, much more consultation and investigation are needed to arrive at broadly acceptable standards for XML library information encoding and retrieval.
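One concrete point of contact between the two formats is MARCXML, which carries MARC 21 content (tags, indicators, subfield codes) unchanged in an XML syntax. The sketch below renders one field that way; the namespace declaration and XML character escaping are omitted for brevity, and the sample field values are illustrative.

```python
# The same MARC 21 field as a MARCXML "slim" datafield: tag, indicators and
# subfield codes are carried over unchanged, only the carrier syntax differs.
field = {"tag": "245", "ind1": "1", "ind2": "0",
         "subfields": [("a", "XML and MARC :"), ("b", "which is right?")]}

def datafield_to_marcxml(f: dict) -> str:
    subs = "\n".join(
        f'  <subfield code="{code}">{text}</subfield>'
        for code, text in f["subfields"]
    )
    return (f'<datafield tag="{f["tag"]}" ind1="{f["ind1"]}" ind2="{f["ind2"]}">\n'
            f"{subs}\n</datafield>")

print(datafield_to_marcxml(field))
```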
  16. Jimenez, V.O.R.: Nuevas perspectivas para la catalogacion : metadatos ver MARC (1999) 0.01
    0.014209376 = product of:
      0.056837503 = sum of:
        0.056837503 = product of:
          0.113675006 = sum of:
            0.113675006 = weight(_text_:22 in 5743) [ClassicSimilarity], result of:
              0.113675006 = score(doc=5743,freq=4.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.6565931 = fieldWeight in 5743, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5743)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    30. 3.2002 19:45:22
    Source
    Revista Española de Documentación Científica. 22(1999) no.2, S.198-219
  17. Kernernman, V.Y.; Koenig, M.E.D.: USMARC as a standardized format for the Internet hypermedia document control/retrieval/delivery system design (1996) 0.01
    0.013677692 = product of:
      0.05471077 = sum of:
        0.05471077 = product of:
          0.10942154 = sum of:
            0.10942154 = weight(_text_:engine in 5565) [ClassicSimilarity], result of:
              0.10942154 = score(doc=5565,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.41372913 = fieldWeight in 5565, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5565)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Surveys how the USMARC integrated bibliographic format (UBIF) could be mapped onto a hypermedia document USMARC format (HDUF) to meet the requirements of a hypermedia document control/retrieval/delivery (HDRD) system for the Internet. Explores the characteristics of such a system using the example of the WWW directory and search engine Yahoo!. Discusses additional standard specifications for the UBIF's structure, content designation, and data content needed to map this format into the HDUF, which can serve as a proxy for the Net HDRD system.
  18. MARC and metadata : METS, MODS, and MARCXML: current and future implications (2004) 0.01
    0.013396727 = product of:
      0.053586908 = sum of:
        0.053586908 = product of:
          0.107173815 = sum of:
            0.107173815 = weight(_text_:22 in 2840) [ClassicSimilarity], result of:
              0.107173815 = score(doc=2840,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.61904186 = fieldWeight in 2840, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=2840)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Library hi tech. 22(2004) no.1
  19. Postlkethwaite, B.: LITA MARC Holdings Interest Group, American Library Association Conference, New Orleans, June 1993 (1994) 0.01
    0.0131973745 = product of:
      0.052789498 = sum of:
        0.052789498 = weight(_text_:search in 859) [ClassicSimilarity], result of:
          0.052789498 = score(doc=859,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.30720934 = fieldWeight in 859, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0625 = fieldNorm(doc=859)
      0.25 = coord(1/4)
    
    Abstract
    Discusses standards related to the USMARC holdings format. Considers issues of concern surrounding the following standards: Z39.71, the proposed standard for holdings statements for bibliographic items; Z39.50, the standard for intersystem search and retrieval; and X12, the national standard for the transmission of business data. Also discusses the relationship between EDI and the USMARC holdings format. Work is currently in progress to update the holdings format.
  20. Mueller, C.J.; Whittaker, M.A.: What is this thing called MARC(S)? (1990) 0.01
    0.0131973745 = product of:
      0.052789498 = sum of:
        0.052789498 = weight(_text_:search in 3588) [ClassicSimilarity], result of:
          0.052789498 = score(doc=3588,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.30720934 = fieldWeight in 3588, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0625 = fieldNorm(doc=3588)
      0.25 = coord(1/4)
    
    Abstract
    Contribution to an issue devoted to serials and reference services. Familiarity with the basic elements of the MARC format and their effect on the display and retrieval of bibliographic data is an essential element of public service in those libraries with MARC-based on-line catalogues. Describes the components of a MARC record. To successfully retrieve the information sought from an on-line catalogue, the catalogue user must know whether that information is in an indexed field and, if so, must be familiar with the search strategies required by the system.
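As a small illustration of those components (not tied to any particular system), the sketch below reduces a MARC record to leader, tags, indicators and subfields, and shows why retrieval depends on whether a term falls in a field the system indexes; the set of indexed tags is a hypothetical configuration.

```python
# Illustrative only: a MARC record reduced to leader, tags, indicators and
# subfields, plus a naive check of whether a term falls in an indexed field.
record = {
    "leader": "00000nam a2200000 a 4500",
    "fields": [
        {"tag": "100", "ind1": "1", "ind2": " ", "subfields": {"a": "Mueller, C.J."}},
        {"tag": "245", "ind1": "1", "ind2": "0", "subfields": {"a": "What is this thing called MARC(S)?"}},
        {"tag": "500", "ind1": " ", "ind2": " ", "subfields": {"a": "Issue on serials and reference services."}},
    ],
}

INDEXED_TAGS = {"100", "245", "650"}   # which tags feed the indexes is system-specific

def is_retrievable(rec: dict, term: str) -> bool:
    """True if the term occurs in a field that this hypothetical system indexes."""
    for field in rec["fields"]:
        if field["tag"] in INDEXED_TAGS:
            if any(term.lower() in value.lower() for value in field["subfields"].values()):
                return True
    return False

print(is_retrievable(record, "marc"))      # True: appears in the indexed 245 field
print(is_retrievable(record, "serials"))   # False: only in the unindexed 500 note
```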

Languages

  • e 67
  • d 23
  • f 3
  • pl 1
  • sp 1

Types

  • a 82
  • el 7
  • s 5
  • m 4
  • b 2
  • n 2
  • x 1