Search (176 results, page 1 of 9)

  • language_ss:"e"
  • theme_ss:"Metadaten"
  • year_i:[2000 TO 2010}
  1. Heery, R.: Information gateways : collaboration and content (2000) 0.03
    0.027844608 = product of:
      0.09745612 = sum of:
        0.044992477 = weight(_text_:wide in 4866) [ClassicSimilarity], result of:
          0.044992477 = score(doc=4866,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.342674 = fieldWeight in 4866, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4866)
        0.024409214 = weight(_text_:web in 4866) [ClassicSimilarity], result of:
          0.024409214 = score(doc=4866,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.25239927 = fieldWeight in 4866, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4866)
        0.01868631 = weight(_text_:information in 4866) [ClassicSimilarity], result of:
          0.01868631 = score(doc=4866,freq=14.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.3592092 = fieldWeight in 4866, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4866)
        0.009368123 = product of:
          0.028104367 = sum of:
            0.028104367 = weight(_text_:22 in 4866) [ClassicSimilarity], result of:
              0.028104367 = score(doc=4866,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.2708308 = fieldWeight in 4866, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4866)
          0.33333334 = coord(1/3)
      0.2857143 = coord(4/14)
    
    Abstract
    Information subject gateways provide targeted discovery services for their users, giving access to Web resources selected according to quality and subject coverage criteria. Information gateways recognise that they must collaborate on a wide range of issues relating to content to ensure continued success. This report is informed by discussion of content activities at the 1999 IMesh Workshop. The author considers the implications for subject-based gateways of co-operation regarding coverage policy, creation of metadata, and provision of searching and browsing across services. Other possibilities for co-operation include working more closely with information providers, and disclosure of information in joint metadata registries.
    Date
    22. 6.2002 19:38:54
    Source
    Online information review. 24(2000) no.1, S.40-45
    Theme
    Information Gateway
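    The indented breakdowns under each hit are Lucene explain trees for ClassicSimilarity (TF-IDF) scoring: every matching query clause contributes queryWeight times fieldWeight, and the sum of the clause contributions is scaled by a coordination factor. As a hedged illustration (not part of the search output), the following Python sketch recomputes the first clause of hit 1 and the hit's overall score from the values shown above, assuming ClassicSimilarity's standard formulas.

      import math

      # Recompute the first term weight of hit 1 ("wide" in doc 4866) from the
      # explain tree above, using ClassicSimilarity's TF-IDF formulas.
      max_docs, doc_freq = 44218, 1430
      freq = 2.0

      tf = math.sqrt(freq)                                # 1.4142135 = tf(freq=2.0)
      idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))   # 4.4307585 = idf(docFreq=1430, maxDocs=44218)
      query_norm = 0.029633347                            # queryNorm, shared by every clause
      field_norm = 0.0546875                              # fieldNorm(doc=4866), index-time length norm

      query_weight = idf * query_norm                     # 0.1312982
      field_weight = tf * idf * field_norm                # 0.342674
      weight = query_weight * field_weight                # 0.044992477 = weight(_text_:wide in 4866)

      # The hit score sums the four clause contributions shown above and scales
      # by the coordination factor coord(4/14): four of fourteen query clauses matched.
      clause_weights = [0.044992477, 0.024409214, 0.01868631, 0.009368123]
      score = sum(clause_weights) * (4.0 / 14.0)          # ~0.0278, shown rounded as 0.03 in the list
      print(round(weight, 9), round(score, 9))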
  2. Franklin, R.A.: Re-inventing subject access for the semantic web (2003) 0.02
    0.024975223 = product of:
      0.08741328 = sum of:
        0.055354897 = weight(_text_:web in 2556) [ClassicSimilarity], result of:
          0.055354897 = score(doc=2556,freq=14.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.57238775 = fieldWeight in 2556, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2556)
        0.0060537956 = weight(_text_:information in 2556) [ClassicSimilarity], result of:
          0.0060537956 = score(doc=2556,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.116372846 = fieldWeight in 2556, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2556)
        0.01797477 = weight(_text_:retrieval in 2556) [ClassicSimilarity], result of:
          0.01797477 = score(doc=2556,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.20052543 = fieldWeight in 2556, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=2556)
        0.008029819 = product of:
          0.024089456 = sum of:
            0.024089456 = weight(_text_:22 in 2556) [ClassicSimilarity], result of:
              0.024089456 = score(doc=2556,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.23214069 = fieldWeight in 2556, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2556)
          0.33333334 = coord(1/3)
      0.2857143 = coord(4/14)
    
    Abstract
    First generation scholarly research on the Web lacked a firm system of authority control. Second generation Web research is beginning to model subject access with library science principles of bibliographic control and cataloguing. Harnessing the Web and organising the intellectual content with standards and controlled vocabulary provides precise search and retrieval capability, increasing relevance and efficient use of technology. Dublin Core metadata standards permit a full evaluation and cataloguing of Web resources appropriate to highly specific research needs and discovery. Current research points to a type of structure based on a system of faceted classification. This system allows the semantic and syntactic relationships to be defined. Controlled vocabulary, such as the Library of Congress Subject Headings, can be assigned, not in a hierarchical structure, but rather as descriptive facets of relating concepts. Web design features such as this are adding value to discovery and filtering out data that lack authority. The system design allows for scalability and extensibility, two technical features that are integral to future development of the digital library and resource discovery.
    Date
    30.12.2008 18:22:46
    Source
    Online information review. 27(2003) no.2, S.94-101
    Theme
    Semantic Web
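    Franklin's point that controlled terms can be assigned as independent descriptive facets rather than as one hierarchy path can be made concrete with a small sketch; the record, facet names, and values below are invented for illustration.

      # Toy illustration of facet-style subject assignment: controlled terms are
      # attached under independent facets instead of a single hierarchical path.
      record = {
          "title": "Re-inventing subject access for the semantic web",
          "facets": {
              "topic": ["Subject access", "Metadata"],
              "vocabulary": ["Library of Congress Subject Headings"],
              "carrier": ["Web resource"],
          },
      }

      def matches(record, facet, term):
          """True if the record carries the controlled term under the given facet."""
          return term in record["facets"].get(facet, [])

      print(matches(record, "topic", "Metadata") and matches(record, "carrier", "Web resource"))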
  3. Chopey, M.: Planning and implementing a metadata-driven digital repository (2005) 0.02
    0.024006475 = product of:
      0.112030216 = sum of:
        0.07271883 = weight(_text_:wide in 5729) [ClassicSimilarity], result of:
          0.07271883 = score(doc=5729,freq=4.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.5538448 = fieldWeight in 5729, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0625 = fieldNorm(doc=5729)
        0.027896244 = weight(_text_:web in 5729) [ClassicSimilarity], result of:
          0.027896244 = score(doc=5729,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.2884563 = fieldWeight in 5729, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=5729)
        0.011415146 = weight(_text_:information in 5729) [ClassicSimilarity], result of:
          0.011415146 = score(doc=5729,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.21943474 = fieldWeight in 5729, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=5729)
      0.21428572 = coord(3/14)
    
    Abstract
    Metadata is used to organize and control a wide range of different types of information object collections, most of which are accessed via the World Wide Web. This chapter presents a brief introduction to the purpose of metadata and how it has developed, and an overview of the steps to be taken and the functional expertise required in planning for and implementing the creation, storage, and use of metadata for resource discovery in a local repository of information objects.
  4. Greenberg, J.: Metadata and the World Wide Web (2002) 0.02
    0.020255381 = product of:
      0.09452511 = sum of:
        0.045449268 = weight(_text_:wide in 4264) [ClassicSimilarity], result of:
          0.045449268 = score(doc=4264,freq=4.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.34615302 = fieldWeight in 4264, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4264)
        0.038986187 = weight(_text_:web in 4264) [ClassicSimilarity], result of:
          0.038986187 = score(doc=4264,freq=10.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.40312994 = fieldWeight in 4264, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4264)
        0.010089659 = weight(_text_:information in 4264) [ClassicSimilarity], result of:
          0.010089659 = score(doc=4264,freq=8.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.19395474 = fieldWeight in 4264, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4264)
      0.21428572 = coord(3/14)
    
    Abstract
    Metadata is of paramount importance for persons, organizations, and endeavors of every dimension that are increasingly turning to the World Wide Web (hereafter referred to as the Web) as a chief conduit for accessing and disseminating information. This is evidenced by the development and implementation of metadata schemas supporting projects ranging from restricted corporate intranets, data warehouses, and consumer-oriented electronic commerce enterprises to freely accessible digital libraries, educational initiatives, virtual museums, and other public Web sites. Today's metadata activities are unprecedented because they extend beyond the traditional library environment in an effort to deal with the Web's exponential growth. This article considers metadata in today's Web environment. The article defines metadata, examines the relationship between metadata and cataloging, provides definitions for key metadata vocabulary terms, and explores the topic of metadata generation. Metadata is an extensive and expanding subject that is prevalent in many environments. For practical reasons, this article has elected to concentrate on the information resource domain, which is defined by electronic textual documents, graphical images, archival materials, museum artifacts, and other objects found in both digital and physical information centers (e.g., libraries, museums, record centers, and archives). To show the extent and larger application of metadata, several examples are also drawn from the data warehouse, electronic commerce, open source, and medical communities.
    Source
    Encyclopedia of library and information science. Vol.72, [=Suppl.35]
  5. Coleman, A.S.: From cataloging to metadata : Dublin Core records for the library catalog (2005) 0.02
    0.020214265 = product of:
      0.09433324 = sum of:
        0.044992477 = weight(_text_:wide in 5722) [ClassicSimilarity], result of:
          0.044992477 = score(doc=5722,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.342674 = fieldWeight in 5722, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5722)
        0.042278 = weight(_text_:web in 5722) [ClassicSimilarity], result of:
          0.042278 = score(doc=5722,freq=6.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.43716836 = fieldWeight in 5722, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5722)
        0.0070627616 = weight(_text_:information in 5722) [ClassicSimilarity], result of:
          0.0070627616 = score(doc=5722,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.13576832 = fieldWeight in 5722, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5722)
      0.21428572 = coord(3/14)
    
    Abstract
    The Dublin Core is an international standard for describing and cataloging all kinds of information resources: books, articles, videos, and World Wide Web (web) resources. Sixteen Dublin Core (DC) elements and the steps for cataloging web resources using these elements and minimal controlled values are discussed, general guidelines for metadata creation are highlighted, a worksheet is provided to create the DC metadata records for the library catalog, and sample resource descriptions in DC are included.
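    As a rough illustration of the kind of record the chapter's worksheet leads to, a description of a web resource might be assembled as below. The element names come from the Dublin Core element set; the values, the oai_dc wrapper, and the identifier URL are placeholders for the sketch, not Coleman's examples.

      import xml.etree.ElementTree as ET

      # Build a simple Dublin Core description of a web resource.
      DC = "http://purl.org/dc/elements/1.1/"
      OAI_DC = "http://www.openarchives.org/OAI/2.0/oai_dc/"
      ET.register_namespace("dc", DC)
      ET.register_namespace("oai_dc", OAI_DC)

      record = ET.Element(f"{{{OAI_DC}}}dc")
      for element, value in [
          ("title", "From cataloging to metadata: Dublin Core records for the library catalog"),
          ("creator", "Coleman, A.S."),
          ("date", "2005"),
          ("type", "Text"),
          ("format", "text/html"),
          ("identifier", "http://example.org/article"),   # placeholder URL
          ("language", "en"),
      ]:
          ET.SubElement(record, f"{{{DC}}}{element}").text = value

      print(ET.tostring(record, encoding="unicode"))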
  6. Electronic cataloging : AACR2 and metadata for serials and monographs (2003) 0.02
    0.017480938 = product of:
      0.06118328 = sum of:
        0.022496238 = weight(_text_:wide in 3082) [ClassicSimilarity], result of:
          0.022496238 = score(doc=3082,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.171337 = fieldWeight in 3082, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3082)
        0.021139 = weight(_text_:web in 3082) [ClassicSimilarity], result of:
          0.021139 = score(doc=3082,freq=6.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.21858418 = fieldWeight in 3082, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3082)
        0.0070627616 = weight(_text_:information in 3082) [ClassicSimilarity], result of:
          0.0070627616 = score(doc=3082,freq=8.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.13576832 = fieldWeight in 3082, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3082)
        0.010485282 = weight(_text_:retrieval in 3082) [ClassicSimilarity], result of:
          0.010485282 = score(doc=3082,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.11697317 = fieldWeight in 3082, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3082)
      0.2857143 = coord(4/14)
    
    Abstract
    Electronic Cataloging is the undertaking of three pioneers in library science: Sheila S. Intner, Sally C. Tseng, and Mary L. Larsgaard, who co-edited Maps and Related Cartographic Materials: Cataloging, Classification, and Bibliographic Control (Haworth, 2000). With illustrations, references, additional reading lists, and case studies, this research tool offers you tips and strategies to make metadata work for you and your library. No one currently involved in information cataloging should be without this book! For a complete list of contents, visit our Web site at www.HaworthPress.com. Electronic Cataloging: AACR2 and Metadata for Serials and Monographs is a collection of papers about recent developments in metadata and its practical applications in cataloging. Acknowledged experts examine a wide variety of techniques for managing serials and monographs using standards and schemas like MARC, AACR2, ISSN, ISBD, and Dublin Core. From the broadest introduction of metadata usage to the revisions of AACR2 through 2000, this book offers vital analysis and strategy for achieving Universal Bibliographic Control. Electronic Cataloging is divided into three parts. The first is an introduction to metadata, what it is, and its relationship to the library in general. The second portion focuses more on how metadata can be utilized by a library system and the possibilities in the near future. The third portion is very specific, dealing with individual standards of metadata and elements, such as AACR2 and MARC, as well as current policies and prospects for the future. Information covered in Electronic Cataloging includes: an overview of metadata and seriality and why it is important to the cataloging community; Universal Bibliographic Control: what has succeeded so far in cataloging and how metadata will evolve; the step-by-step process for creating an effective metadata repository for the community; the inherent problems that accompany cataloging nonprint research materials, such as electronic serials and the Web; metadata schemas and the use of controlled vocabularies and classification systems; standards of metadata, including MARC, Dublin Core, RDF, and AACR2, with emphasis on the revisions and efforts made with AACR2 through 2000; an overview of the ISSN (International Standard Serial Number) and its relationship to current codes and metadata standards, including AACR2; and much more.
    Content
    Contains the contributions: Editors' Introduction (Sheila S. Intner, Sally C. Tseng, and Mary Lynette Larsgaard) PART 1. Cataloging in an Electronic Age (Michael Gorman) Why Metadata? Why Me? Why Now? (Brian E. C. Schottlaender) PART 2. Developing a Metadata Strategy (Grace Agnew) Practical Issues in Applying Metadata Schemas and Controlled Vocabularies to Cultural Heritage Information (Murtha Baca) Digital Resources and Metadata Application in the Shanghai Library (Yuanliang Ma and Wei Liu) Struggling Toward Retrieval: Alternatives to Standard Operating Procedures Can Help Librarians and the Public (Sheila S. Intner) PART 3. AACR2 and Other Metadata Standards: The Way Forward (Ann Huthwaite) AACR2 and Metadata: Library Opportunities in the Global Semantic Web (Barbara B. Tillett) Seriality: What Have We Accomplished? What's Next? (Jean Hirons) MARC and Mark-Up (Erik Jul) ISSN: Dumb Number, Smart Solution (Regina Romano Reynolds) Index Reference Notes Included
    Imprint
    Binghamton, NY : Haworth Information Press
  7. Cantara, L.: METS: the metadata encoding and transmission standard (2005) 0.02
    0.01643888 = product of:
      0.07671477 = sum of:
        0.03856498 = weight(_text_:wide in 5727) [ClassicSimilarity], result of:
          0.03856498 = score(doc=5727,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.29372054 = fieldWeight in 5727, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=5727)
        0.029588435 = weight(_text_:web in 5727) [ClassicSimilarity], result of:
          0.029588435 = score(doc=5727,freq=4.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.3059541 = fieldWeight in 5727, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=5727)
        0.00856136 = weight(_text_:information in 5727) [ClassicSimilarity], result of:
          0.00856136 = score(doc=5727,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.16457605 = fieldWeight in 5727, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5727)
      0.21428572 = coord(3/14)
    
    Abstract
    The Metadata Encoding and Transmission Standard (METS) is a data communication standard for encoding descriptive, administrative, and structural metadata regarding objects within a digital library, expressed using the XML Schema Language of the World Wide Web Consortium. An initiative of the Digital Library Federation, METS is under development by an international editorial board and is maintained in the Network Development and MARC Standards Office of the Library of Congress. Designed in conformance with the Open Archival Information System (OAIS) Reference Model, a METS document encapsulates digital objects and metadata as Information Packages for transmitting and/or exchanging digital objects to and from digital repositories, disseminating digital objects via the Web, and archiving digital objects for long-term preservation and access. This paper presents an introduction to the METS standard and through illustrated examples, demonstrates how to build a METS document.
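    A hedged sketch of the skeleton such a METS document shares (one descriptive metadata section, one file section, and one structural map) is shown below. The element names follow the METS schema; the IDs, file URL, and embedded Dublin Core title are invented for the sketch, and a real Information Package carries far more.

      import xml.etree.ElementTree as ET

      # Assemble a minimal, illustrative METS skeleton wrapping one digital object.
      METS = "http://www.loc.gov/METS/"
      XLINK = "http://www.w3.org/1999/xlink"
      DC = "http://purl.org/dc/elements/1.1/"
      for prefix, uri in (("mets", METS), ("xlink", XLINK), ("dc", DC)):
          ET.register_namespace(prefix, uri)

      mets = ET.Element(f"{{{METS}}}mets")

      # Descriptive metadata section wrapping a Dublin Core title.
      dmd = ET.SubElement(mets, f"{{{METS}}}dmdSec", ID="DMD1")
      wrap = ET.SubElement(dmd, f"{{{METS}}}mdWrap", MDTYPE="DC")
      xml_data = ET.SubElement(wrap, f"{{{METS}}}xmlData")
      ET.SubElement(xml_data, f"{{{DC}}}title").text = "Sample digital object"

      # File section pointing at one master image.
      file_sec = ET.SubElement(mets, f"{{{METS}}}fileSec")
      grp = ET.SubElement(file_sec, f"{{{METS}}}fileGrp", USE="master")
      f = ET.SubElement(grp, f"{{{METS}}}file", ID="FILE1", MIMETYPE="image/tiff")
      ET.SubElement(f, f"{{{METS}}}FLocat",
                    {f"{{{XLINK}}}href": "http://example.org/page1.tif", "LOCTYPE": "URL"})

      # Structural map tying the descriptive metadata and the file together.
      smap = ET.SubElement(mets, f"{{{METS}}}structMap", TYPE="physical")
      div = ET.SubElement(smap, f"{{{METS}}}div", DMDID="DMD1", LABEL="Page 1")
      ET.SubElement(div, f"{{{METS}}}fptr", FILEID="FILE1")

      print(ET.tostring(mets, encoding="unicode"))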
  8. Weibel, S.L.: Dublin Core Metadata Initiative (DCMI) : a personal history (2009) 0.02
    0.01643888 = product of:
      0.07671477 = sum of:
        0.03856498 = weight(_text_:wide in 3772) [ClassicSimilarity], result of:
          0.03856498 = score(doc=3772,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.29372054 = fieldWeight in 3772, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=3772)
        0.029588435 = weight(_text_:web in 3772) [ClassicSimilarity], result of:
          0.029588435 = score(doc=3772,freq=4.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.3059541 = fieldWeight in 3772, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3772)
        0.00856136 = weight(_text_:information in 3772) [ClassicSimilarity], result of:
          0.00856136 = score(doc=3772,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.16457605 = fieldWeight in 3772, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3772)
      0.21428572 = coord(3/14)
    
    Abstract
    This entry is a personal remembrance of the emergence and evolution of the Dublin Core Metadata Initiative from its inception in a 1994 invitational workshop to its current state as an international open standards community. It describes the context of resource description in the early days of the World Wide Web, and discusses both social and technical engineering brought to bear on its development. Notable in this development is the international character of the workshop and conference series, and the diverse spectrum of expertise from many countries that contributed to the effort. The Dublin Core began as a consensus-driven community that elaborated a set of resource description principles that served a broad spectrum of users and applications. The result has been an architecture for metadata that informs most Web-based resource description efforts. Equally important, the Dublin Core has become the leading community of expertise, practice, and discovery that continues to explore the borders between the ideal and the practical in the description of digital information assets.
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
  9. Metadata practices on the cutting edge (2004) 0.02
    0.01638524 = product of:
      0.07646445 = sum of:
        0.044992477 = weight(_text_:wide in 2335) [ClassicSimilarity], result of:
          0.044992477 = score(doc=2335,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.342674 = fieldWeight in 2335, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2335)
        0.024409214 = weight(_text_:web in 2335) [ClassicSimilarity], result of:
          0.024409214 = score(doc=2335,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.25239927 = fieldWeight in 2335, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2335)
        0.0070627616 = weight(_text_:information in 2335) [ClassicSimilarity], result of:
          0.0070627616 = score(doc=2335,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.13576832 = fieldWeight in 2335, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2335)
      0.21428572 = coord(3/14)
    
    Abstract
    The PowerPoint presentations from this one-day workshop on emerging metadata practices are available at this web site. Topics include metadata quality, interoperability, linking metadata, metadata for image collections, RSS, MODS, METS, and MPEG-21. Contributors include representatives from OCLC, CrossRef, the Library of Congress, universities and the private sector. Given the wide range of presentations, if you're interested in metadata you can likely find something of interest here, but no single topic is explored in much depth, and you are sometimes left wondering what the speaker said about a particular slide if there are no accompanying notes.
    Imprint
    Washington, DC : National Information Standards Organization
  10. Zhang, J.; Jastram, I.: ¬A study of the metadata creation behavior of different user groups on the Internet (2006) 0.02
    0.016174633 = product of:
      0.07548162 = sum of:
        0.042278 = weight(_text_:web in 982) [ClassicSimilarity], result of:
          0.042278 = score(doc=982,freq=6.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.43716836 = fieldWeight in 982, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=982)
        0.012233062 = weight(_text_:information in 982) [ClassicSimilarity], result of:
          0.012233062 = score(doc=982,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23515764 = fieldWeight in 982, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=982)
        0.020970564 = weight(_text_:retrieval in 982) [ClassicSimilarity], result of:
          0.020970564 = score(doc=982,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.23394634 = fieldWeight in 982, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=982)
      0.21428572 = coord(3/14)
    
    Abstract
    Metadata is designed to improve information organization and information retrieval effectiveness and efficiency on the Internet. The way web publishers respond to metadata and the way they use it when publishing their web pages, however, is still a mystery. The authors of this paper aim to solve this mystery by defining different professional publisher groups, examining the behaviors of these user groups, and identifying the characteristics of their metadata use. This study will enhance the current understanding of metadata application behavior and provide evidence useful to researchers, web publishers, and search engine designers.
    Source
    Information processing and management. 42(2006) no.4, S.1099-1122
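    One way metadata use by web publishers can be observed, though not necessarily the instrument the authors used, is simply to parse pages and record which <meta> names appear. A minimal sketch, with an invented sample page:

      from html.parser import HTMLParser

      # Count which <meta> names (keywords, description, DC.* ...) a page carries.
      class MetaTagCounter(HTMLParser):
          def __init__(self):
              super().__init__()
              self.names = []

          def handle_starttag(self, tag, attrs):
              if tag == "meta":
                  attrs = dict(attrs)
                  name = attrs.get("name") or attrs.get("http-equiv")
                  if name:
                      self.names.append(name.lower())

      page = """<html><head>
      <meta name="description" content="A study of metadata creation behavior">
      <meta name="keywords" content="metadata, web publishing">
      <meta name="DC.title" content="Example page">
      </head><body></body></html>"""

      parser = MetaTagCounter()
      parser.feed(page)
      print(sorted(parser.names))   # ['dc.title', 'description', 'keywords']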
  11. Intner, S.S.; Lazinger, S.S.; Weihs, J.: Metadata and its impact on libraries (2005) 0.02
    0.015243667 = product of:
      0.053352833 = sum of:
        0.020709297 = weight(_text_:elektronische in 339) [ClassicSimilarity], result of:
          0.020709297 = score(doc=339,freq=4.0), product of:
            0.14013545 = queryWeight, product of:
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.029633347 = queryNorm
            0.14778057 = fieldWeight in 339, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.015625 = fieldNorm(doc=339)
        0.01560879 = weight(_text_:bibliothek in 339) [ClassicSimilarity], result of:
          0.01560879 = score(doc=339,freq=4.0), product of:
            0.121660605 = queryWeight, product of:
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.029633347 = queryNorm
            0.12829782 = fieldWeight in 339, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.015625 = fieldNorm(doc=339)
        0.00856136 = weight(_text_:information in 339) [ClassicSimilarity], result of:
          0.00856136 = score(doc=339,freq=36.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.16457605 = fieldWeight in 339, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.015625 = fieldNorm(doc=339)
        0.008473387 = weight(_text_:retrieval in 339) [ClassicSimilarity], result of:
          0.008473387 = score(doc=339,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.09452859 = fieldWeight in 339, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.015625 = fieldNorm(doc=339)
      0.2857143 = coord(4/14)
    
    Content
    What is metadata? - Metadata schemas & their relationships to particular communities - Library and information-related metadata schemas - Creating library metadata for monographic materials - Creating library metadata for continuing materials - Integrating library metadata into local cataloging and bibliographic databases - Digital collections/digital libraries - Archiving & preserving digital materials - Impact of digital resources on library services - Future possibilities
    Footnote
    Rez. in: JASIST. 58(2007) no.6, S.909-910 (A.D. Petrou): "A division in metadata definitions for physical objects vs. those for digital resources offered in Chapter 1 is punctuated by the use of broader, more inclusive metadata definitions, such as data about data, as well as by the inclusion of more specific metadata definitions intended for networked resources. Intertwined with the book's subject matter, which is to "distinguish traditional cataloguing from metadata activity" (5), the authors' chosen metadata definition is also detailed on page 5 as follows: Thus while granting the validity of the inclusive definition, we concentrate primarily on metadata as it is most commonly thought of both inside and outside of the library community, as "structured information used to find, access, use and manage information resources primarily in a digital environment" (International Encyclopedia of Information and Library Science, 2003). Metadata principles discussed by the authors include modularity, extensibility, refinement and multilingualism. The latter set is followed by seven misconceptions about metadata. Two types of metadata discussed are automatically generated indexes and manually created records. In terms of categories of metadata, the authors present three sets of them as follows: descriptive, structural, and administrative metadata. Chapter 2 focuses on metadata for communities of practice, and is a prelude to content in Chapter 3 where metadata applications, use, and development are presented from the perspective of libraries. Chapter 2 discusses the emergence and impact of metadata on organization and access of online resources from the perspective of communities for which such standards exist and the need for mapping one standard to another. Discussion focuses on metalanguages, such as Standard Generalized Markup Language (SGML) and eXtensible Markup Language (XML), "capable of embedding descriptive elements within the document markup itself" (25). This discussion falls under syntactic interoperability. For semantic interoperability, HTML and other mark-up languages, such as Text Encoding Initiative (TEI) and Computer Interchange of Museum Information (CIMI), are covered. For structural interoperability, Dublin Core's 15 metadata elements are grouped into three areas: content (title, subject, description, type, source, relation, and coverage), intellectual property (creator, publisher, contributor and rights), and instantiation (date, format, identifier, and language) for discussion.
    Other selected specialized metadata element sets or schemas, such as Government Information Locator Service (GILS), are presented. Attention is brought to the different sets of elements and the need for linking up these elements across metadata schemes from a semantic point of view. It is no surprise, then, that after the presentation of additional specialized sets of metadata from the educational community and the arts sector, attention is turned to the discussion of crosswalks between metadata element sets, or the mapping of one metadata standard to another. Finally, the five appendices detailing elements found in Dublin Core, GILS, ARIADNE versions 3 and 3.1, and Categories for the Description of Works of Art are an excellent addition to this chapter's focus on metadata and communities of practice. Chapters 3-6 provide an up-to-date account of the use of metadata standards in libraries from the point of view of a community of practice. Some of the content standards included in these four chapters are AACR2, Dewey Decimal Classification (DDC), and Library of Congress Subject Classification. In addition, uses of MARC along with planned implementations of the archival community's encoding scheme, EAD, are covered in detail. In a way, content in these chapters can be considered a refresher course on the history, current state, importance, and usefulness of the above-mentioned standards in libraries. Application of the standards is offered for various types of materials, such as monographic materials, continuing resources, and integrating library metadata into local catalogs and databases. A review of current digital library projects takes place in Chapter 7. While details about these projects tend to become out of date fast, the sections on issues and problems encountered in digital projects and on successes and failures deserve any reader's close inspection. A suggested model is important enough to merit a specific mention below, in a short list format, as it encapsulates lessons learned from issues, problems, successes, and failures in digital projects. Before detailing the model, however, the various projects included in Chapter 7 should be mentioned. The projects are: Colorado Digitization Project, Cooperative Online Resource Catalog (an Office of Research project by OCLC, Inc.), California Digital Library, JSTOR, LC's National Digital Library Program, and VARIATIONS.
    Chapter 8 discusses issues of archiving and preserving digital materials. The chapter reiterates, "What is the point of all of this if the resources identified and catalogued are not preserved?" (Gorman, 2003, p. 16). Discussion about preservation and related issues is organized in five sections that successively ask why, what, who, how, and how much of the plethora of digital materials should be archived and preserved. These are not easy questions because of media instability and technological obsolescence. Stakeholders in communities with diverse interests compete in terms of which community or representative of a community has an authoritative say in what and how much get archived and preserved. In discussing the above-mentioned questions, the authors once again provide valuable information and lessons from a number of initiatives in Europe and Australia, and from other global initiatives. The Draft Charter on the Preservation of the Digital Heritage and the Guidelines for the Preservation of Digital Heritage, both published by UNESCO, are discussed and some of the preservation principles from the Guidelines are listed. The existing diversity in administrative arrangements for these new projects and resources notwithstanding, the impact on content produced for online reserves through work done in digital projects and from the use of metadata, the impact on levels of reference services, and the ensuing need for different models to train users and staff are undeniable. In terms of education and training, formal coursework, continuing education, and informal and on-the-job training are just some of the available options. The intensity of resources required for cataloguing digital materials, the questions over the quality of digital resources, and the threat of the new digital environment to the survival of the traditional library are all issues raised by critics and others who are concerned about balancing the planning and resources allocated to traditional or print-based resources and to newer digital resources. A number of questions are asked as part of the book's conclusions in Chapter 10. Of these questions, one that touches on all of the rest and upon much of the book's content is: What does the future hold for metadata in libraries? Metadata standards are alive and well in many communities of practice, as Chapters 2-6 have demonstrated. The usefulness of metadata continues to be high and innovation in various elements should keep information professionals engaged for decades to come. There is no doubt that metadata have had a tremendous impact on how we organize information for access and on who, how, when, and where contact is made with library services and collections online. Planning and commitment to a diversity of metadata to serve the plethora of needs in communities of practice are paramount for the continued success of many digital projects and for online preservation of our digital heritage."
    LCSH
    Information organization
    Cataloging of electronic information resources
    Information storage and retrieval systems
    Electronic information resources / Management
    RSWK
    Bibliothek / Elektronische Publikation / Metadaten
    Series
    Library and information science text series
    Subject
    Bibliothek / Elektronische Publikation / Metadaten
    Information organization
    Cataloging of electronic information resources
    Information storage and retrieval systems
    Electronic information resources / Management
  12. Baker, T.: ¬A grammar of Dublin Core (2000) 0.01
    0.014491113 = product of:
      0.050718892 = sum of:
        0.025709987 = weight(_text_:wide in 1236) [ClassicSimilarity], result of:
          0.025709987 = score(doc=1236,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.1958137 = fieldWeight in 1236, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=1236)
        0.013948122 = weight(_text_:web in 1236) [ClassicSimilarity], result of:
          0.013948122 = score(doc=1236,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.14422815 = fieldWeight in 1236, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=1236)
        0.005707573 = weight(_text_:information in 1236) [ClassicSimilarity], result of:
          0.005707573 = score(doc=1236,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.10971737 = fieldWeight in 1236, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=1236)
        0.0053532133 = product of:
          0.016059639 = sum of:
            0.016059639 = weight(_text_:22 in 1236) [ClassicSimilarity], result of:
              0.016059639 = score(doc=1236,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.15476047 = fieldWeight in 1236, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1236)
          0.33333334 = coord(1/3)
      0.2857143 = coord(4/14)
    
    Abstract
    Dublin Core is often presented as a modern form of catalog card -- a set of elements (and now qualifiers) that describe resources in a complete package. Sometimes it is proposed as an exchange format for sharing records among multiple collections. The founding principle that "every element is optional and repeatable" reinforces the notion that a Dublin Core description is to be taken as a whole. This paper, in contrast, is based on a much different premise: Dublin Core is a language. More precisely, it is a small language for making a particular class of statements about resources. Like natural languages, it has a vocabulary of word-like terms, the two classes of which -- elements and qualifiers -- function within statements like nouns and adjectives; and it has a syntax for arranging elements and qualifiers into statements according to a simple pattern. Whenever tourists order a meal or ask directions in an unfamiliar language, considerate native speakers will spontaneously limit themselves to basic words and simple sentence patterns along the lines of "I am so-and-so" or "This is such-and-such". Linguists call this pidginization. In such situations, a small phrase book or translated menu can be most helpful. By analogy, today's Web has been called an Internet Commons where users and information providers from a wide range of scientific, commercial, and social domains present their information in a variety of incompatible data models and description languages. In this context, Dublin Core presents itself as a metadata pidgin for digital tourists who must find their way in this linguistically diverse landscape. Its vocabulary is small enough to learn quickly, and its basic pattern is easily grasped. It is well-suited to serve as an auxiliary language for digital libraries. This grammar starts by defining terms. It then follows a 200-year-old tradition of English grammar teaching by focusing on the structure of single statements. It concludes by looking at the growing dictionary of Dublin Core vocabulary terms -- its registry, and at how statements can be used to build the metadata equivalent of paragraphs and compositions -- the application profile.
    Date
    26.12.2011 14:01:22
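    Baker's grammar metaphor, statements built from a noun-like element, an optional adjective-like qualifier, and a value, can be rendered with a toy sketch; the resource URL and values below are invented.

      # Represent Dublin Core statements as (resource, element, qualifier, value)
      # tuples and read them back as simple sentences, in the spirit of the paper.
      statements = [
          ("http://example.org/doc", "creator", None, "Baker, T."),
          ("http://example.org/doc", "date", "created", "2000"),
          ("http://example.org/doc", "subject", "LCSH", "Metadata"),
      ]

      for resource, element, qualifier, value in statements:
          element_phrase = f"{element} ({qualifier})" if qualifier else element
          print(f'<{resource}> has {element_phrase}: "{value}"')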
  13. Nichols, D.M.; Paynter, G.W.; Chan, C.-H.; Bainbridge, D.; McKay, D.; Twidale, M.B.; Blandford, A.: Experiences in deploying metadata analysis tools for institutional repositories (2009) 0.01
    0.0142095145 = product of:
      0.06631107 = sum of:
        0.02465703 = weight(_text_:web in 2986) [ClassicSimilarity], result of:
          0.02465703 = score(doc=2986,freq=4.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.25496176 = fieldWeight in 2986, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2986)
        0.03660921 = weight(_text_:elektronische in 2986) [ClassicSimilarity], result of:
          0.03660921 = score(doc=2986,freq=2.0), product of:
            0.14013545 = queryWeight, product of:
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.029633347 = queryNorm
            0.2612416 = fieldWeight in 2986, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2986)
        0.0050448296 = weight(_text_:information in 2986) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=2986,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 2986, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2986)
      0.21428572 = coord(3/14)
    
    Abstract
    Current institutional repository software provides few tools to help metadata librarians understand and analyse their collections. In this paper, we compare and contrast metadata analysis tools that were developed simultaneously, but independently, at two New Zealand institutions during a period of national investment in research repositories: the Metadata Analysis Tool (MAT) at The University of Waikato, and the Kiwi Research Information Service (KRIS) at the National Library of New Zealand. The tools have many similarities: they are convenient, online, on-demand services that harvest metadata using OAI-PMH, they were developed in response to feedback from repository administrators, and they both help pinpoint specific metadata errors as well as generating summary statistics. They also have significant differences: one is a dedicated tool while the other is part of a wider access tool; one gives a holistic view of the metadata while the other looks for specific problems; one seeks patterns in the data values while the other checks that those values conform to metadata standards. Both tools work in a complementary manner to existing web-based administration tools. We have observed that discovery and correction of metadata errors can be quickly achieved by switching web browser views from the analysis tool to the repository interface, and back. We summarise the findings from both tools' deployment into a checklist of requirements for metadata analysis tools.
    Form
    Elektronische Dokumente
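    Both tools follow the same harvest-and-check pattern: pull records over OAI-PMH, then test them against expectations. A minimal sketch of that pattern, assuming a placeholder repository endpoint and checking only for a missing dc:title, might look like this; real harvesting must also follow resumptionToken paging.

      import urllib.request
      import xml.etree.ElementTree as ET

      # Harvest one ListRecords response and flag records without a dc:title.
      OAI = "http://www.openarchives.org/OAI/2.0/"
      DC = "http://purl.org/dc/elements/1.1/"
      endpoint = "http://repository.example.org/oai"   # placeholder endpoint

      url = endpoint + "?verb=ListRecords&metadataPrefix=oai_dc"
      with urllib.request.urlopen(url) as response:
          tree = ET.parse(response)

      missing_title = []
      for record in tree.iter(f"{{{OAI}}}record"):
          header = record.find(f"{{{OAI}}}header")
          identifier = header.findtext(f"{{{OAI}}}identifier")
          if record.find(f".//{{{DC}}}title") is None:
              missing_title.append(identifier)

      print(f"{len(missing_title)} records lack dc:title")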
  14. Korb, N.; Wollschläger, T.: Koordinierungsstelle DissOnline auf dem 2. Bibliothekskongress in Leipzig : Strategien zur Lösung von technischen und Rechtsfragen bei Online-Hochschulschriften (2004) 0.01
    0.012965348 = product of:
      0.09075743 = sum of:
        0.043931052 = weight(_text_:elektronische in 2385) [ClassicSimilarity], result of:
          0.043931052 = score(doc=2385,freq=2.0), product of:
            0.14013545 = queryWeight, product of:
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.029633347 = queryNorm
            0.3134899 = fieldWeight in 2385, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.046875 = fieldNorm(doc=2385)
        0.046826374 = weight(_text_:bibliothek in 2385) [ClassicSimilarity], result of:
          0.046826374 = score(doc=2385,freq=4.0), product of:
            0.121660605 = queryWeight, product of:
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.029633347 = queryNorm
            0.38489348 = fieldWeight in 2385, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.046875 = fieldNorm(doc=2385)
      0.14285715 = coord(2/14)
    
    Abstract
    To support authors, libraries, publishers, and other institutions in publishing electronic theses and dissertations, and to promote their dissemination and use, the coordination office DissOnline was established at Die Deutsche Bibliothek in 2001 on the recommendation of the Deutsche Forschungsgemeinschaft (DFG) project "Dissertationen Online". The coordination office has since become well established in Germany and has held a session at every Bibliothekartag since its founding in 2001. At this year's 2nd Bibliothekskongress in Leipzig, Dr. Thomas Wollschläger (Die Deutsche Bibliothek, Frankfurt am Main) gave an introductory report on the office's current work. New developments in information provision via DissOnline were presented, and both growing use of the online publication option and increased access to the online theses themselves were recorded. The advantages of metadata for effective use of the online publications also became clear.
    Form
    Elektronische Dokumente
  15. Aldana, J.F.; Gómez, A.C.; Moreno, N.; Nebro, A.J.; Roldán, M.M.: Metadata functionality for semantic Web integration (2003) 0.01
    0.012886505 = product of:
      0.060137022 = sum of:
        0.03416578 = weight(_text_:web in 2731) [ClassicSimilarity], result of:
          0.03416578 = score(doc=2731,freq=12.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.35328537 = fieldWeight in 2731, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=2731)
        0.009024465 = weight(_text_:information in 2731) [ClassicSimilarity], result of:
          0.009024465 = score(doc=2731,freq=10.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.1734784 = fieldWeight in 2731, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=2731)
        0.016946774 = weight(_text_:retrieval in 2731) [ClassicSimilarity], result of:
          0.016946774 = score(doc=2731,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.18905719 = fieldWeight in 2731, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=2731)
      0.21428572 = coord(3/14)
    
    Abstract
    We propose an extension of a mediator architecture. This extension is oriented to ontology-driven data integration. In our architecture ontologies are not managed by an external component or service, but are integrated in the mediation layer. This approach implies rethinking the mediator design, but at the same time provides advantages from a database perspective. Some of these advantages include the application of optimization and evaluation techniques that use and combine information from all abstraction levels (physical schema, logical schema and semantic information defined by ontology). 1. Introduction Although the Web is probably the richest information repository in human history, users cannot specify what they want from it. Two major problems that arise in current search engines (Heflin, 2001) are: a) polysemy, when the same word is used with different meanings; b) synonymy, when two different words have the same meaning. Polysemy causes retrieval of irrelevant information. On the other hand, synonymy produces loss of useful documents. The lack of a capability to understand the context of the words and the relationships among required terms explains many of the missed and false results produced by search engines. The Semantic Web will bring structure to the meaningful content of Web pages, giving semantic relationships among terms and possibly avoiding the previous problems. Various proposals have appeared for meta-data representation and communication standards, and other services and tools that may eventually merge into the global Semantic Web (Berners-Lee, 2001). Hopefully, in the next few years we will see the universal adoption of open standards for representation and sharing of meta-information. In this environment, software agents roaming from page to page can readily carry out sophisticated tasks for users (Berners-Lee, 2001). In this context, ontologies can be seen as metadata that represent the semantics of data, providing a standard vocabulary for a knowledge domain, as DTDs and XML Schema do. If its pages were so structured, the Web could be seen as a heterogeneous collection of autonomous databases. This suggests that techniques developed in the Database area could be useful. Database research mainly deals with efficient storage and retrieval and with powerful query languages.
  16. Heidorn, P.B.; Wei, Q.: Automatic metadata extraction from museum specimen labels (2008) 0.01
    0.012614421 = product of:
      0.04415047 = sum of:
        0.017435152 = weight(_text_:web in 2624) [ClassicSimilarity], result of:
          0.017435152 = score(doc=2624,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.18028519 = fieldWeight in 2624, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2624)
        0.0050448296 = weight(_text_:information in 2624) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=2624,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 2624, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2624)
        0.014978974 = weight(_text_:retrieval in 2624) [ClassicSimilarity], result of:
          0.014978974 = score(doc=2624,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.16710453 = fieldWeight in 2624, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2624)
        0.0066915164 = product of:
          0.020074548 = sum of:
            0.020074548 = weight(_text_:22 in 2624) [ClassicSimilarity], result of:
              0.020074548 = score(doc=2624,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.19345059 = fieldWeight in 2624, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2624)
          0.33333334 = coord(1/3)
      0.2857143 = coord(4/14)
    
    Abstract
    This paper describes the information properties of museum specimen labels and machine learning tools to automatically extract Darwin Core (DwC) and other metadata from these labels processed through Optical Character Recognition (OCR). The DwC is a metadata profile describing the core set of access points for search and retrieval of natural history collections and observation databases. Using the HERBIS Learning System (HLS) we extract 74 independent elements from these labels. The automated text extraction tools are provided as a web service, so that users can reference digital images of specimens and receive back an extended Darwin Core XML representation of the content of the label. This automated extraction task is made more difficult by the high variability of museum label formats, OCR errors and the open-class nature of some elements. In this paper we introduce our overall system architecture and variability-robust solutions, including the application of Hidden Markov and Naïve Bayes machine learning models, data cleaning, the use of field element identifiers, and specialist learning models. The techniques developed here could be adapted to any metadata extraction situation with noisy text and weakly ordered elements. (A toy sketch of such a field classifier follows this record.)
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
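    As a hedged illustration of the learning approach named in the abstract of entry 16, the sketch below implements a tiny Naïve Bayes classifier that assigns OCR'd label lines to Darwin Core elements (scientificName, recordedBy, eventDate). It is not code from the HERBIS Learning System; the training lines, tokenisation and add-one smoothing are toy assumptions.

    import math
    from collections import Counter, defaultdict

    # Toy training data: (label line, Darwin Core element). Purely illustrative.
    TRAIN = [
        ("Quercus alba L.", "scientificName"),
        ("Acer rubrum L.", "scientificName"),
        ("Coll. J. Smith", "recordedBy"),
        ("Leg. M. Jones", "recordedBy"),
        ("12 May 1931", "eventDate"),
        ("3 June 1950", "eventDate"),
    ]

    def tokens(line):
        return line.lower().replace(".", " ").split()

    class NaiveBayesFieldTagger:
        def fit(self, examples):
            self.class_counts = Counter()
            self.word_counts = defaultdict(Counter)
            self.vocab = set()
            for line, label in examples:
                self.class_counts[label] += 1
                for t in tokens(line):
                    self.word_counts[label][t] += 1
                    self.vocab.add(t)
            self.total = sum(self.class_counts.values())

        def predict(self, line):
            # Pick the field label with the highest smoothed log-probability.
            best, best_lp = None, float("-inf")
            for label, count in self.class_counts.items():
                lp = math.log(count / self.total)
                denom = sum(self.word_counts[label].values()) + len(self.vocab)
                for t in tokens(line):
                    lp += math.log((self.word_counts[label][t] + 1) / denom)
                if lp > best_lp:
                    best, best_lp = label, lp
            return best

    tagger = NaiveBayesFieldTagger()
    tagger.fit(TRAIN)
    print(tagger.predict("Coll. A. Brown"))   # expected: recordedBy
    print(tagger.predict("7 May 1928"))       # expected: eventDate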
  17. Peereboom, M.: DutchESS : Dutch Electronic Subject Service - a Dutch national collaborative effort (2000) 0.01
    0.012552989 = product of:
      0.058580615 = sum of:
        0.013980643 = weight(_text_:information in 4869) [ClassicSimilarity], result of:
          0.013980643 = score(doc=4869,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.2687516 = fieldWeight in 4869, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=4869)
        0.033893548 = weight(_text_:retrieval in 4869) [ClassicSimilarity], result of:
          0.033893548 = score(doc=4869,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.37811437 = fieldWeight in 4869, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=4869)
        0.010706427 = product of:
          0.032119278 = sum of:
            0.032119278 = weight(_text_:22 in 4869) [ClassicSimilarity], result of:
              0.032119278 = score(doc=4869,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.30952093 = fieldWeight in 4869, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4869)
          0.33333334 = coord(1/3)
      0.21428572 = coord(3/14)
    
    Abstract
    This article gives an overview of the design and organisation of DutchESS, a Dutch information subject gateway created as a national collaborative effort of the National Library and a number of academic libraries. The combined centralised and distributed model of DutchESS is discussed, as well as its selection policy, metadata format, classification scheme and retrieval options. Some options for future collaboration at an international level are also explored.
    Date
    22. 6.2002 19:39:23
    Source
    Online information review. 24(2000) no.1, S.46-48
    Theme
    Information Gateway
    Klassifikationssysteme im Online-Retrieval
  18. Haslhofer, B.: ¬A Web-based mapping technique for establishing metadata interoperability (2008) 0.01
    0.011829601 = product of:
      0.0552048 = sum of:
        0.022724634 = weight(_text_:wide in 3173) [ClassicSimilarity], result of:
          0.022724634 = score(doc=3173,freq=4.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.17307651 = fieldWeight in 3173, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3173)
        0.028912932 = weight(_text_:web in 3173) [ClassicSimilarity], result of:
          0.028912932 = score(doc=3173,freq=22.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.29896918 = fieldWeight in 3173, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3173)
        0.0035672332 = weight(_text_:information in 3173) [ClassicSimilarity], result of:
          0.0035672332 = score(doc=3173,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.068573356 = fieldWeight in 3173, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3173)
      0.21428572 = coord(3/14)
    
    Abstract
    The integration of metadata from distinct, heterogeneous data sources requires metadata interoperability, which is a qualitative property of metadata information objects that is not given by default. The technique of metadata mapping allows domain experts to establish metadata interoperability in a certain integration scenario. Mapping solutions, as a technical manifestation of this technique, are already available for the intensively studied domain of database system interoperability, but they rarely exist for the Web. If we consider the amount of steadily increasing structured metadata and corresponding metadata schemes on the Web, we can observe a clear need for a mapping solution that can operate in a Web-based environment. To achieve that, we first need to build its technical core, which is a mapping model that provides the language primitives to define mapping relationships. Existing Semantic Web languages such as RDFS and OWL define some basic mapping elements (e.g., owl:equivalentProperty, owl:sameAs), but do not address the full spectrum of semantic and structural heterogeneities that can occur among distinct, incompatible metadata information objects. Furthermore, it is still unclear how to process defined mapping relationships at run-time in order to deliver metadata to the client in a uniform way. As the main contribution of this thesis, we present an abstract mapping model, which reflects the mapping problem on a generic level and provides the means for reconciling incompatible metadata. Instance transformation functions and URIs take a central role in that model. The former cover a broad spectrum of possible structural and semantic heterogeneities, while the latter bind the complete mapping model to the architecture of the World Wide Web. On the concrete, language-specific level we present a binding of the abstract mapping model for the RDF Vocabulary Description Language (RDFS), which allows us to create mapping specifications among incompatible metadata schemes expressed in RDFS. The mapping model is embedded in a cyclic process that categorises the requirements a mapping solution should fulfil into four subsequent phases: mapping discovery, mapping representation, mapping execution, and mapping maintenance. In this thesis, we mainly focus on mapping representation and on the transformation of mapping specifications into executable SPARQL queries. For mapping discovery support, the model provides an interface for plugging in schema and ontology matching algorithms. For mapping maintenance we introduce the concept of a simple but effective mapping registry. Based on the mapping model, we propose a Web-based mediator-wrapper architecture that allows domain experts to set up mediation endpoints that provide a uniform SPARQL query interface to a set of distributed metadata sources. The involved data sources are encapsulated by wrapper components that expose the contained metadata and the schema definitions on the Web and provide a SPARQL query interface to these metadata. In this thesis, we present the OAI2LOD Server, a wrapper component for integrating metadata that are accessible via the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH).
In a case study, we demonstrate how mappings can be created in a Web environment and how our mediator-wrapper architecture can easily be configured in order to integrate metadata from various heterogeneous data sources without the need to install any mapping solution or metadata integration solution in a local system environment. (A toy sketch of the mapping-to-SPARQL step follows this record.)
    Content
    Die Integration von Metadaten aus unterschiedlichen, heterogenen Datenquellen erfordert Metadaten-Interoperabilität, eine Eigenschaft, die nicht standardmäßig gegeben ist. Metadaten Mapping Verfahren ermöglichen es Domänenexperten, Metadaten-Interoperabilität in einem bestimmten Integrationskontext herzustellen. Mapping Lösungen sollen dabei die notwendige Unterstützung bieten. Während diese für den etablierten Bereich interoperabler Datenbanken bereits existieren, ist dies für Web-Umgebungen nicht der Fall. Betrachtet man das Ausmaß ständig wachsender strukturierter Metadaten und Metadatenschemata im Web, so zeichnet sich ein Bedarf nach Web-basierten Mapping Lösungen ab. Den Kern einer solchen Lösung bildet ein Mappingmodell, das die zur Spezifikation von Mappings notwendigen Sprachkonstrukte definiert. Existierende Semantic Web Sprachen wie beispielsweise RDFS oder OWL bieten zwar grundlegende Mappingelemente (z.B.: owl:equivalentProperty, owl:sameAs), adressieren jedoch nicht das gesamte Spektrum möglicher semantischer und struktureller Heterogenitäten, die zwischen unterschiedlichen, inkompatiblen Metadatenobjekten auftreten können. Außerdem fehlen technische Lösungsansätze zur Überführung zuvor definierter Mappings in ausführbare Abfragen. Als zentraler wissenschaftlicher Beitrag dieser Dissertation wird ein abstraktes Mappingmodell präsentiert, welches das Mappingproblem auf generischer Ebene reflektiert und Lösungsansätze zum Abgleich inkompatibler Schemata bietet. Instanztransformationsfunktionen und URIs nehmen in diesem Modell eine zentrale Rolle ein. Erstere überbrücken ein breites Spektrum möglicher semantischer und struktureller Heterogenitäten, während letztere das Mappingmodell in die Architektur des World Wide Webs einbinden. Auf einer konkreten, sprachspezifischen Ebene wird die Anbindung des abstrakten Modells an die RDF Vocabulary Description Language (RDFS) präsentiert, wodurch ein Mapping zwischen unterschiedlichen, in RDFS ausgedrückten Metadatenschemata ermöglicht wird. Das Mappingmodell ist in einen zyklischen Mappingprozess eingebunden, der die Anforderungen an Mappinglösungen in vier aufeinanderfolgende Phasen kategorisiert: mapping discovery, mapping representation, mapping execution und mapping maintenance. Im Rahmen dieser Dissertation beschäftigen wir uns hauptsächlich mit der Representation-Phase sowie mit der Transformation von Mappingspezifikationen in ausführbare SPARQL-Abfragen. Zur Unterstützung der Discovery-Phase bietet das Mappingmodell eine Schnittstelle zur Einbindung von Schema- oder Ontologymatching-Algorithmen. Für die Maintenance-Phase präsentieren wir ein einfaches, aber seinen Zweck erfüllendes Mapping-Registry Konzept. Auf Basis des Mappingmodells stellen wir eine Web-basierte Mediator-Wrapper Architektur vor, die Domänenexperten die Möglichkeit bietet, SPARQL-Mediationsschnittstellen zu definieren. Die zu integrierenden Datenquellen müssen dafür durch Wrapper-Komponenten gekapselt werden, welche die enthaltenen Metadaten im Web exponieren und SPARQL-Zugriff ermöglichen. Als beispielhafte Wrapper Komponente präsentieren wir den OAI2LOD Server, mit dessen Hilfe Datenquellen eingebunden werden können, die ihre Metadaten über das Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) exponieren.
Im Rahmen einer Fallstudie zeigen wir, wie Mappings in Web-Umgebungen erstellt werden können und wie unsere Mediator-Wrapper Architektur nach wenigen, einfachen Konfigurationsschritten Metadaten aus unterschiedlichen, heterogenen Datenquellen integrieren kann, ohne dass dadurch die Notwendigkeit entsteht, eine Mapping Lösung in einer lokalen Systemumgebung zu installieren.
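    The abstract of entry 18 describes transforming mapping specifications into executable SPARQL queries. The sketch below is a toy, hypothetical illustration of that step only: a small property-mapping table between two schemas is turned into a SPARQL CONSTRUCT query that rewrites instance data from the source vocabulary into the target vocabulary. The property URIs and the flat mapping structure are assumptions, not the thesis's mapping model, which also covers instance transformation functions and structural heterogeneities.

    # Turn a tiny property-mapping specification into a SPARQL CONSTRUCT
    # query. The example.org URIs are made up; the Dublin Core targets are
    # only used as familiar-looking placeholders.
    MAPPING = [
        # (source property, target property)
        ("http://example.org/src#title",   "http://purl.org/dc/elements/1.1/title"),
        ("http://example.org/src#creator", "http://purl.org/dc/elements/1.1/creator"),
    ]

    def mapping_to_construct(mapping):
        construct, where = [], []
        for i, (src, tgt) in enumerate(mapping):
            var = f"?v{i}"
            construct.append(f"  ?s <{tgt}> {var} .")
            # OPTIONAL keeps subjects that only carry some of the mapped properties.
            where.append(f"  OPTIONAL {{ ?s <{src}> {var} . }}")
        return ("CONSTRUCT {\n" + "\n".join(construct) +
                "\n} WHERE {\n" + "\n".join(where) + "\n}")

    print(mapping_to_construct(MAPPING))

    Executing the generated query against a wrapped source (e.g. via a SPARQL endpoint) would yield the instance data re-expressed in the target vocabulary, which is the run-time behaviour the abstract refers to.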
  19. Suleman, H.; Fox, E.A.: Leveraging OAI harvesting to disseminate theses (2003) 0.01
    0.011785148 = product of:
      0.08249603 = sum of:
        0.03856498 = weight(_text_:wide in 4779) [ClassicSimilarity], result of:
          0.03856498 = score(doc=4779,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.29372054 = fieldWeight in 4779, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=4779)
        0.043931052 = weight(_text_:elektronische in 4779) [ClassicSimilarity], result of:
          0.043931052 = score(doc=4779,freq=2.0), product of:
            0.14013545 = queryWeight, product of:
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.029633347 = queryNorm
            0.3134899 = fieldWeight in 4779, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.046875 = fieldNorm(doc=4779)
      0.14285715 = coord(2/14)
    
    Abstract
    NDLTD, the Networked Digital Library of Theses and Dissertations, supports and encourages the production and archiving of electronic theses and dissertations (ETDs). While many current NDLTD member institutions and consortia have individual collections accessible online, there has until recently been no single mechanism to aggregate all ETDs in order to provide NDLTD-wide services (e.g. searching). With the emergence of the Open Archives Initiative (OAI), that has changed. The OAI's Protocol for Metadata Harvesting is a robust interoperability solution that defines a standard method of exchanging metadata. While working with the OAI to develop and test the metadata harvesting standard, we have set up and actively maintain a central NDLTD metadata collection and multiple user portals. In this article we discuss our experiences in building this distributed digital library on the basis of the OAI's work. (A minimal harvesting sketch follows this record.)
    Form
    Elektronische Dokumente
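    Entry 19 relies on the OAI Protocol for Metadata Harvesting. The following minimal Python sketch shows the basic harvesting loop (ListRecords with the oai_dc metadata prefix, followed by resumptionToken paging) against a hypothetical endpoint URL; error handling, validation and polite request throttling are omitted.

    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    BASE_URL = "https://example.org/oai"  # hypothetical OAI-PMH endpoint

    NS = {
        "oai": "http://www.openarchives.org/OAI/2.0/",
        "dc":  "http://purl.org/dc/elements/1.1/",
    }

    def harvest_titles(base_url):
        """Yield Dublin Core titles from an OAI-PMH repository, page by page."""
        params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
        while True:
            url = base_url + "?" + urllib.parse.urlencode(params)
            with urllib.request.urlopen(url) as resp:
                root = ET.fromstring(resp.read())
            for record in root.iterfind(".//oai:record", NS):
                for title in record.iterfind(".//dc:title", NS):
                    yield title.text
            token = root.find(".//oai:resumptionToken", NS)
            if token is None or not (token.text or "").strip():
                break
            # A resumption request may only carry the verb and the token.
            params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

    # for t in harvest_titles(BASE_URL):
    #     print(t)

    A real NDLTD aggregator would additionally deal with sets, deleted records and richer metadata formats (such as the ETD-MS profile used for theses), none of which is modelled in this sketch.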
  20. Chapman, J.W.; Reynolds, D.; Shreeves, S.A.: Repository metadata : approaches and challenges (2009) 0.01
    0.011785148 = product of:
      0.08249603 = sum of:
        0.03856498 = weight(_text_:wide in 2980) [ClassicSimilarity], result of:
          0.03856498 = score(doc=2980,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.29372054 = fieldWeight in 2980, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=2980)
        0.043931052 = weight(_text_:elektronische in 2980) [ClassicSimilarity], result of:
          0.043931052 = score(doc=2980,freq=2.0), product of:
            0.14013545 = queryWeight, product of:
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.029633347 = queryNorm
            0.3134899 = fieldWeight in 2980, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.046875 = fieldNorm(doc=2980)
      0.14285715 = coord(2/14)
    
    Abstract
    Many institutional repositories have pursued a mixed metadata environment, relying on description by multiple workflows. Strategies may include metadata converted from other systems, metadata elicited from the document creator or manager, and metadata created by library or repository staff. Additional editing or proofing may or may not occur. The mixed environment brings challenges of creation, management, and access. In this paper, repository efforts at three major universities are discussed. All three repositories run on the DSpace software package, and the opportunities and limitations of that system are examined. The authors discuss local strategies in light of current thinking on metadata creation, user behavior, and the aggregation of heterogeneous metadata. The contrasts between the missions of the three repository efforts show the importance of local customization, while the experience of all three institutions forms the basis for recommendations on strategies of benefit to a wide range of librarians and repository planners.
    Form
    Elektronische Dokumente

Authors

Types

  • a 152
  • el 27
  • m 7
  • s 6
  • b 2
  • n 2
  • x 1