Search (70 results, page 1 of 4)

  • language_ss:"e"
  • theme_ss:"Datenformate"
  1. Tennant, R.: ¬A bibliographic metadata infrastructure for the twenty-first century (2004) 0.01
    0.00831066 = product of:
      0.041553296 = sum of:
        0.026934259 = weight(_text_:web in 2845) [ClassicSimilarity], result of:
          0.026934259 = score(doc=2845,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.2884563 = fieldWeight in 2845, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=2845)
        0.014619039 = product of:
          0.043857116 = sum of:
            0.043857116 = weight(_text_:22 in 2845) [ClassicSimilarity], result of:
              0.043857116 = score(doc=2845,freq=4.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.4377287 = fieldWeight in 2845, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2845)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    The current library bibliographic infrastructure was constructed in the early days of computers - before the Web, XML, and a variety of other technological advances that now offer new opportunities. General requirements of a modern metadata infrastructure for libraries are identified, including such qualities as versatility, extensibility, granularity, and openness. A new kind of metadata infrastructure is then proposed that exhibits at least some of those qualities. Some key challenges that must be overcome to implement a change of this magnitude are identified.
    Date
    9.12.2005 19:22:38
    Source
    Library hi tech. 22(2004) no.2, S.175-181
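    Score explanation
    The indented breakdown attached to this entry is Lucene "explain" output for a TF-IDF ranking model (ClassicSimilarity-style scoring is an assumption about the engine behind this page): each matched term contributes queryWeight * fieldWeight, where fieldWeight = tf * idf * fieldNorm and queryWeight = idf * queryNorm, and the coord() factors scale for the fraction of query clauses that matched. A minimal sketch, with the constants copied from the breakdown above, that reproduces the 0.00831066 score:

        from math import sqrt

        QUERY_NORM = 0.028611459  # queryNorm from the explain output

        def clause_score(freq, idf, field_norm):
            # ClassicSimilarity-style clause score: queryWeight * fieldWeight
            tf = sqrt(freq)                       # tf(freq=2.0) = 1.4142135
            field_weight = tf * idf * field_norm  # e.g. 0.2884563 for "web"
            query_weight = idf * QUERY_NORM       # e.g. 0.0933738 for "web"
            return query_weight * field_weight

        web = clause_score(2.0, 3.2635105, 0.0625)      # ~0.026934259
        t22 = clause_score(4.0, 3.5018296, 0.0625) / 3  # coord(1/3) -> ~0.014619039
        print((web + t22) * 2 / 10)                     # coord(2/10) -> ~0.00831066

    The same arithmetic underlies the breakdowns shown for the other entries below; only the frequencies, idf values and field norms differ.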
  2. Guenther, R.S.: Using the Metadata Object Description Schema (MODS) for resource description : guidelines and applications (2004) 0.01
    0.0065225097 = product of:
      0.032612547 = sum of:
        0.023567477 = weight(_text_:web in 2837) [ClassicSimilarity], result of:
          0.023567477 = score(doc=2837,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.25239927 = fieldWeight in 2837, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2837)
        0.009045068 = product of:
          0.027135205 = sum of:
            0.027135205 = weight(_text_:22 in 2837) [ClassicSimilarity], result of:
              0.027135205 = score(doc=2837,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.2708308 = fieldWeight in 2837, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2837)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    This paper describes the Metadata Object Description Schema (MODS), its accompanying documentation and some of its applications. It reviews the MODS user guidelines provided by the Library of Congress and how they enable a user of the schema to apply MODS consistently as a metadata scheme. Because the schema itself could not fully document appropriate usage, the guidelines provide element definitions, history, relationships with other elements, usage conventions, and examples. Short descriptions of some MODS applications are given, along with a more detailed discussion of its use in the Library of Congress's Minerva project for Web archiving.
    Source
    Library hi tech. 22(2004) no.1, S.89-98
  3. Miller, E.; Ogbuji, U.: Linked data design for the visible library (2015) 0.01
    0.005604797 = product of:
      0.028023984 = sum of:
        0.020200694 = weight(_text_:web in 2773) [ClassicSimilarity], result of:
          0.020200694 = score(doc=2773,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.21634221 = fieldWeight in 2773, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2773)
        0.007823291 = product of:
          0.023469873 = sum of:
            0.023469873 = weight(_text_:29 in 2773) [ClassicSimilarity], result of:
              0.023469873 = score(doc=2773,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.23319192 = fieldWeight in 2773, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2773)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    In response to libraries' frustration over their rich resources being invisible on the web, Zepheira, at the request of the Library of Congress, created BIBFRAME, a bibliographic metadata framework for cataloging. The model replaces MARC records with linked data, promoting resource visibility through a rich network of links. In place of formal taxonomies, a small but extensible vocabulary streamlines metadata efforts. Rather than using a unique bibliographic record to describe one item, BIBFRAME draws on the Dublin Core and the Functional Requirements for Bibliographic Records (FRBR) to generate formalized descriptions of Work, Instance, Authority and Annotation as well as associations between items. Zepheira trains librarians to transform MARC records to BIBFRAME resources and adapt the vocabulary for specialized needs, while subject matter experts and technical experts manage content, site design and usability. With a different approach toward data modeling and metadata, previously invisible resources gain visibility through linking.
    Source
    Bulletin of the Association for Information Science and Technology. 41(2015) no.4, S.23-29
  4. McCallum, S.H.: Machine Readable Cataloging (MARC): 1975-2007 (2009) 0.01
    0.005590722 = product of:
      0.02795361 = sum of:
        0.020200694 = weight(_text_:web in 3841) [ClassicSimilarity], result of:
          0.020200694 = score(doc=3841,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.21634221 = fieldWeight in 3841, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3841)
        0.0077529154 = product of:
          0.023258746 = sum of:
            0.023258746 = weight(_text_:22 in 3841) [ClassicSimilarity], result of:
              0.023258746 = score(doc=3841,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.23214069 = fieldWeight in 3841, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3841)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    This entry describes the development of the MARC Communications format. After a brief overview of the initial 10 years, it describes the succeeding phases of development up to the present. This takes the reader through the expansion of the format for all types of bibliographic data and for multiple character scripts. At the same time, a large business community developed that offered products based on the format to the library community. The introduction of the Internet and Web technology in the 1990s brought new opportunities and challenges, and the format was adapted to this new environment. International adoption of the format has been extensive and has continued into the 2000s. More recently, new syntaxes and models for MARC 21 are being explored.
    Date
    27. 8.2011 14:22:38
  5. Carini, P.; Shepherd, K.: ¬The MARC standard and encoded archival description (2004) 0.00
    0.0041536554 = product of:
      0.041536555 = sum of:
        0.041536555 = product of:
          0.06230483 = sum of:
            0.031293165 = weight(_text_:29 in 2830) [ClassicSimilarity], result of:
              0.031293165 = score(doc=2830,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.31092256 = fieldWeight in 2830, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2830)
            0.031011663 = weight(_text_:22 in 2830) [ClassicSimilarity], result of:
              0.031011663 = score(doc=2830,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.30952093 = fieldWeight in 2830, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2830)
          0.6666667 = coord(2/3)
      0.1 = coord(1/10)
    
    Date
    9.12.2005 19:29:32
    Source
    Library hi tech. 22(2004) no.1, S.18-27
  6. Coyle, K.: Future considerations : the functional library systems record (2004) 0.00
    0.0041536554 = product of:
      0.041536555 = sum of:
        0.041536555 = product of:
          0.06230483 = sum of:
            0.031293165 = weight(_text_:29 in 562) [ClassicSimilarity], result of:
              0.031293165 = score(doc=562,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.31092256 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=562)
            0.031011663 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.031011663 = score(doc=562,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.30952093 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=562)
          0.6666667 = coord(2/3)
      0.1 = coord(1/10)
    
    Date
    9.12.2005 19:21:29
    Source
    Library hi tech. 22(2004) no.2, S.166-174
  7. Miller, D.R.: XML: Libraries' strategic opportunity (2001) 0.00
    0.0041234493 = product of:
      0.041234493 = sum of:
        0.041234493 = weight(_text_:web in 1467) [ClassicSimilarity], result of:
          0.041234493 = score(doc=1467,freq=12.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.4416067 = fieldWeight in 1467, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1467)
      0.1 = coord(1/10)
    
    Abstract
    XML (eXtensible Markup Language) is fast gaining favor as the universal format for data and document exchange -- in effect becoming the lingua franca of the Information Age. Currently, "library information" is at a particular disadvantage on the rapidly evolving World Wide Web. Why? Despite libraries' explorations of web catalogs, scanning projects, digital data repositories, and creation of web pages galore, there remains a digital divide. The core of libraries' data troves is stored in proprietary formats of integrated library systems (ILS) and in the complex and arcane MARC formats -- both restricted chiefly to the province of technical services and systems librarians. Even they are hard-pressed to extract and integrate this wealth of data with resources from outside this rarefied environment. Segregation of library information underlies many difficulties: producing standard bibliographic citations from MARC data, automatically creating new materials lists (including new web resources) on a particular topic, exchanging data with our vendors, and even migrating from one ILS to another. Why do we continue to hobble our potential by embracing these self-imposed limitations? Most ILSs began in libraries, which soon recognized the pitfalls of do-it-yourself solutions. Thus, we wisely anticipated the necessity for standards. However, with the advent of the web, we soon found "our" collections and a flood of new resources appearing in digital format on opposite sides of the divide. If we do not act quickly to integrate library resources with mainstream web resources, we are in grave danger of becoming marginalized.
  8. Holt, B.: Presentation of UNIMARC on the Web : new fields, including the one for electronic resources (1999) 0.00
    0.004040139 = product of:
      0.040401388 = sum of:
        0.040401388 = weight(_text_:web in 6020) [ClassicSimilarity], result of:
          0.040401388 = score(doc=6020,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.43268442 = fieldWeight in 6020, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.09375 = fieldNorm(doc=6020)
      0.1 = coord(1/10)
    
  9. Qin, J.: Representation and organization of information in the Web space : from MARC to XML (2000) 0.00
    0.004040139 = product of:
      0.040401388 = sum of:
        0.040401388 = weight(_text_:web in 3918) [ClassicSimilarity], result of:
          0.040401388 = score(doc=3918,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.43268442 = fieldWeight in 3918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.09375 = fieldNorm(doc=3918)
      0.1 = coord(1/10)
    
  10. Michard, A.; Pham Dac, D.: Description of collections and encyclopedias on the Web using XML (1998) 0.00
    0.0038090795 = product of:
      0.038090795 = sum of:
        0.038090795 = weight(_text_:web in 3493) [ClassicSimilarity], result of:
          0.038090795 = score(doc=3493,freq=4.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.4079388 = fieldWeight in 3493, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=3493)
      0.1 = coord(1/10)
    
    Abstract
    Cataloguing artworks relies on the availability of classification schemes, often represented by hierarchical thesauri. Comments on the limitations of current practices and tools, and proposes a new approach for the cooperative production of multilingual and multicultural classification schemes that exploits some features of the emerging Extensible Markup Language (XML)-based Web.
  11. Scholz, M.: Wie können Daten im Web mit JSON nachgenutzt werden? (2023) 0.00
    0.0038090795 = product of:
      0.038090795 = sum of:
        0.038090795 = weight(_text_:web in 5345) [ClassicSimilarity], result of:
          0.038090795 = score(doc=5345,freq=4.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.4079388 = fieldWeight in 5345, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=5345)
      0.1 = coord(1/10)
    
    Abstract
    Martin Scholz is a computer scientist at the Universitätsbibliothek Erlangen-Nürnberg. As head of its Digitale Entwicklung und Datenmanagement (digital development and data management) group, he works extensively with web technologies and data transformation. Here he takes on the current ABI-Technik question: how can data on the web be reused with JSON?
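    As a minimal illustration of that question (not taken from the article), JSON published on the web can be fetched and reused with a few lines of standard-library Python; the URL and field names below are placeholders:

        import json
        import urllib.request

        URL = "https://example.org/api/records.json"  # hypothetical JSON endpoint

        with urllib.request.urlopen(URL) as response:
            data = json.load(response)                # parse the JSON payload

        # Reuse the data, e.g. extract title/year pairs for further processing.
        for record in data.get("items", []):
            print(record.get("title"), record.get("year"))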
  12. Manguinhas, H.; Freire, N.; Machado, J.; Borbinha, J.: Supporting multilingual bibliographic resource discovery with Functional Requirements for Bibliographic Records (2012) 0.00
    0.0037641774 = product of:
      0.037641775 = sum of:
        0.037641775 = weight(_text_:web in 133) [ClassicSimilarity], result of:
          0.037641775 = score(doc=133,freq=10.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.40312994 = fieldWeight in 133, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=133)
      0.1 = coord(1/10)
    
    Abstract
    This paper describes an experiment exploring the hypothesis that innovative application of the Functional Requirements for Bibliographic Records (FRBR) principles can complement traditional bibliographic resource discovery systems in order to improve the user experience. A specialized service was implemented that, when given a plain list of results from a regular online catalogue, was able to process, enrich and present that list in a more relevant way for the user. This service pre-processes the records of a traditional online catalogue in order to build a semantic structure following the FRBR model. The service also explores web search features that have been revolutionizing the way users conceptualize resource discovery, such as relevance ranking and metasearching. This work was developed in the context of the TELPlus project. We processed nearly one hundred thousand bibliographic and authority records, in multiple languages, and originating from twelve European national libraries. This paper describes the architecture of the service and the main challenges faced, especially concerning the extraction and linking of the relevant FRBR entities from the bibliographic metadata produced by the libraries. The service was evaluated by end users, who filled out a questionnaire after using a traditional online catalogue and the new service, both with the same bibliographic collection. The analysis of the results supports the hypothesis that FRBR can be implemented for resource discovery in a non-intrusive way, reusing the data of any existing traditional bibliographic system.
    Content
    Contribution to a special issue: Semantic Web and Reasoning for Cultural Heritage and Digital Libraries. Cf.: http://www.semantic-web-journal.net/content/supporting-multilingual-bibliographic-resource-discovery-functional-requirements-bibliograph http://www.semantic-web-journal.net/sites/default/files/swj145_2.pdf.
    Source
    Semantic Web journal. 3(2012) no.1, S.3-21
  13. Cranefield, S.: Networked knowledge representation and exchange using UML and RDF (2001) 0.00
    0.0033329446 = product of:
      0.033329446 = sum of:
        0.033329446 = weight(_text_:web in 5896) [ClassicSimilarity], result of:
          0.033329446 = score(doc=5896,freq=4.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.35694647 = fieldWeight in 5896, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5896)
      0.1 = coord(1/10)
    
    Abstract
    This paper proposes the use of the Unified Modeling Language (UML) as a language for modelling ontologies for Web resources and the knowledge contained within them. To provide a mechanism for serialising and processing object diagrams representing knowledge, a pair of XSLT stylesheets has been developed to map from XML Metadata Interchange (XMI) encodings of class diagrams to corresponding RDF schemas and to Java classes representing the concepts in the ontologies. The Java code includes methods for marshalling and unmarshalling object-oriented information between in-memory data structures and RDF serialisations of that information. This provides a convenient mechanism for Java applications to share knowledge on the Web.
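    The mapping step described here (XSLT applied to XMI class-diagram encodings to produce RDF schemas) can be reproduced with any XSLT processor; a sketch using lxml follows, with placeholder file names rather than the paper's actual stylesheets:

        from lxml import etree

        # Placeholder inputs: an XMI export of a UML class diagram and an XSLT
        # stylesheet mapping it to an RDF schema (file names are illustrative).
        xmi_doc = etree.parse("classes.xmi")
        transform = etree.XSLT(etree.parse("xmi_to_rdfs.xsl"))

        rdf_schema = transform(xmi_doc)   # apply the stylesheet
        print(str(rdf_schema))            # serialized RDF/XML result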
  14. Johnson, B.C.: XML and MARC : which is "right"? (2001) 0.00
    0.0033329446 = product of:
      0.033329446 = sum of:
        0.033329446 = weight(_text_:web in 5423) [ClassicSimilarity], result of:
          0.033329446 = score(doc=5423,freq=4.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.35694647 = fieldWeight in 5423, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5423)
      0.1 = coord(1/10)
    
    Abstract
    This article explores recent discussions about appropriate mark-up conventions for library information intended for use on the World Wide Web. In particular, it examines whether the MARC 21 format will continue to be useful and whether the time is right for a full-fledged conversion effort to XML. The author concludes that the MARC format will be relevant well into the future, and its use will not hamper access to bibliographic information via the web. Early XML exploratory efforts carried out at Stanford University's Lane Medical Library are reported on. Although these efforts are a promising start, much more consultation and investigation is needed to arrive at broadly acceptable standards for XML library information encoding and retrieval.
  15. Horah, J.L.: From cards to the Web : ¬The evolution of a library database (1998) 0.00
    0.0028568096 = product of:
      0.028568096 = sum of:
        0.028568096 = weight(_text_:web in 4842) [ClassicSimilarity], result of:
          0.028568096 = score(doc=4842,freq=4.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.3059541 = fieldWeight in 4842, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4842)
      0.1 = coord(1/10)
    
    Abstract
    The Jack Brause Library at New York University (NYU) is a special library supporting the curriculum of NYU's Real Estate Institute. The Jack Brause Library (JBL) Real Estate Periodical Index was established in 1990 and draws on the library's collection of over 140 real estate periodicals. Describes the conversion of the JBL Index from a 3x5 card index to an online resource. The database was originally created using Rbase for DOS, but this quickly became obsolete and in 1993 was replaced with InMagic. In 1997 the JBL Index was made available on NYU's telnet catalogue, BobCat, and the Internet database catalogue, BobCatPlus. The transition of InMagic data to USMARC-formatted records involved a 3-step process: data normalization, adding value, and data recording. The Index has been accessible via telnet since May 1997 and became available on the Web in October 1997.
  16. Willner, E.: Preparing data for the Web with SGML/XML (1998) 0.00
    0.0026934259 = product of:
      0.026934259 = sum of:
        0.026934259 = weight(_text_:web in 2894) [ClassicSimilarity], result of:
          0.026934259 = score(doc=2894,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.2884563 = fieldWeight in 2894, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=2894)
      0.1 = coord(1/10)
    
  17. Xu, A.; Hess, K.; Akerman, L.: From MARC to BIBFRAME 2.0 : Crosswalks (2018) 0.00
    0.0023806747 = product of:
      0.023806747 = sum of:
        0.023806747 = weight(_text_:web in 5172) [ClassicSimilarity], result of:
          0.023806747 = score(doc=5172,freq=4.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.25496176 = fieldWeight in 5172, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5172)
      0.1 = coord(1/10)
    
    Abstract
    One of the big challenges facing academic libraries today is to increase their relevance to their user communities. If libraries can increase the visibility of their resources on the open web, they improve their chances of reaching their user communities via the user's first search experience. BIBFRAME and library Linked Data will enable libraries to publish their resources in a way that the Web understands, to consume Linked Data to enrich resources relevant to their user communities, and to visualize networks across collections. However, one of the important steps in transitioning to BIBFRAME and library Linked Data involves crosswalks: mapping MARC fields and subfields across data models and performing the necessary data reformatting to comply with the specifications of the new model, which is currently BIBFRAME 2.0. This article looks into how the Library of Congress has mapped library bibliographic data from the MARC format to the BIBFRAME 2.0 model and vocabulary (published and updated since April 2016, available from http://www.loc.gov/bibframe/docs/index.html), based on the recently released conversion specifications and converter developed by the Library of Congress with input from many community members. The BIBFRAME 2.0 standard and conversion tools will enable libraries to transform bibliographic data from MARC into BIBFRAME 2.0, which introduces a Linked Data model as the improved method of bibliographic control for the future, and to make bibliographic information more useful within and beyond library communities. A toy sketch of the crosswalk idea follows this abstract.
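    The mapping table in this sketch is illustrative only and far simpler than the Library of Congress MARC-to-BIBFRAME 2.0 conversion specifications; the property names are placeholders:

        # Toy MARC (tag, subfield) -> target property mapping; illustrative only.
        CROSSWALK = {
            ("245", "a"): "title",
            ("100", "a"): "agent",
            ("020", "a"): "isbn",
        }

        def convert(marc_fields):
            """Map (tag, subfield, value) triples to a flat property dictionary."""
            record = {}
            for tag, code, value in marc_fields:
                prop = CROSSWALK.get((tag, code))
                if prop:  # real conversions also reformat and contextualize values
                    record.setdefault(prop, []).append(value.strip(" /"))
            return record

        print(convert([("245", "a", "Linked data design /"), ("100", "a", "Miller, E.")]))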
  18. Cantrall, D.: From MARC to Mosaic : progressing toward data interchangeability at the Oregon State Archives (1994) 0.00
    0.0023567479 = product of:
      0.023567477 = sum of:
        0.023567477 = weight(_text_:web in 8470) [ClassicSimilarity], result of:
          0.023567477 = score(doc=8470,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.25239927 = fieldWeight in 8470, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=8470)
      0.1 = coord(1/10)
    
    Abstract
    Explains the technology used by the Oregon State Archives to realize the goal of data interchangeability given the prescribed nature of the MARC format. Describes an emergent model of learning and information delivery focusing on the example of the World Wide Web, accessed most often through the client software Mosaic, which is the fastest-growing segment of the Internet information highway. Also describes The Data Magician, a flexible program which allows many combinations of input and output formats and will read unconventional formats such as the MARC communications format. Using Mosaic and The Data Magician, the Oregon State Archives is consequently able to present valuable electronic information to a variety of users.
  19. Galvão, R.M.: UNIMARC format relevance : maintenance or replacement? (2018) 0.00
    0.0023567479 = product of:
      0.023567477 = sum of:
        0.023567477 = weight(_text_:web in 5163) [ClassicSimilarity], result of:
          0.023567477 = score(doc=5163,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.25239927 = fieldWeight in 5163, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5163)
      0.1 = coord(1/10)
    
    Abstract
    This article presents an empirical study focused on a qualitative analysis of the UNIMARC format. The structural quality of the data provided by the format is evaluated to determine its current suitability for meeting the requirements and trends in data architecture for the information network and the Semantic Web. Driven by a set of quality characteristics that identify weaknesses in the data schema which cannot be bridged by simply converting data to MARC XML or RDF/XML, we conclude that the UNIMARC format is not compliant with current metadata schema desiderata and must be replaced.
  20. MARC and metadata : METS, MODS, and MARCXML: current and future implications (2004) 0.00
    0.0020674444 = product of:
      0.020674443 = sum of:
        0.020674443 = product of:
          0.062023327 = sum of:
            0.062023327 = weight(_text_:22 in 2840) [ClassicSimilarity], result of:
              0.062023327 = score(doc=2840,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.61904186 = fieldWeight in 2840, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=2840)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Source
    Library hi tech. 22(2004) no.1

Types

  • a 61
  • el 7
  • s 4
  • b 2
  • m 1
  • n 1