Search (111 results, page 1 of 6)

  • × language_ss:"e"
  • × theme_ss:"Metadaten"
  • × year_i:[2000 TO 2010}
  1. Hsieh-Yee, I.: Cataloging and metadata education in North American LIS programs (2004) 0.07
    0.07371121 = product of:
      0.22113362 = sum of:
        0.026132854 = weight(_text_:web in 138) [ClassicSimilarity], result of:
          0.026132854 = score(doc=138,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.18028519 = fieldWeight in 138, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=138)
        0.19500077 = sum of:
          0.16491184 = weight(_text_:programs in 138) [ClassicSimilarity], result of:
            0.16491184 = score(doc=138,freq=8.0), product of:
              0.25748047 = queryWeight, product of:
                5.79699 = idf(docFreq=364, maxDocs=44218)
                0.044416238 = queryNorm
              0.6404829 = fieldWeight in 138, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                5.79699 = idf(docFreq=364, maxDocs=44218)
                0.0390625 = fieldNorm(doc=138)
          0.030088935 = weight(_text_:22 in 138) [ClassicSimilarity], result of:
            0.030088935 = score(doc=138,freq=2.0), product of:
              0.1555381 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.044416238 = queryNorm
              0.19345059 = fieldWeight in 138, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=138)
      0.33333334 = coord(2/6)
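    The nested breakdown above (and in the entries below) is Lucene's "explain" output for ClassicSimilarity TF-IDF scoring. As a hedged reconstruction of the first term weight and of the final score for this record (the symbols w_web, w_programs, and w_22 are shorthand introduced here, not part of the output):

    ```latex
    % Reconstruction of the explain tree above (Lucene ClassicSimilarity):
    %   weight = queryWeight * fieldWeight,   queryWeight = idf * queryNorm,
    %   fieldWeight = sqrt(tf) * idf * fieldNorm
    \begin{aligned}
    w_{\text{web}} &= (3.2635105 \cdot 0.044416238)\cdot(\sqrt{2}\cdot 3.2635105 \cdot 0.0390625)
                    \approx 0.14495 \cdot 0.18029 \approx 0.02613 \\
    \operatorname{score}(q, d_{138}) &= \operatorname{coord}\!\left(\tfrac{2}{6}\right)\cdot
                    \left(w_{\text{web}} + w_{\text{programs}} + w_{22}\right)
                    \approx \tfrac{1}{3}\,(0.02613 + 0.16491 + 0.03009) \approx 0.0737
    \end{aligned}
    ```

    which matches the 0.07371121 shown for this record.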
    
    Abstract
    This paper presents findings of a survey on the state of cataloging and metadata education in ALA-accredited library and information science programs in North America. The survey was conducted in response to Action Item 5.1 of the "Bibliographic Control of Web Resources: A Library of Congress Action Plan," which focuses on providing metadata education to new LIS professionals. The study found LIS programs increased their reliance on introductory courses to cover cataloging and metadata, but fewer programs than before had a cataloging course requirement. The knowledge of cataloging delivered in introductory courses was basic, and the coverage of metadata was limited to an overview. Cataloging courses showed similarity in coverage and practice and focused on print materials. Few cataloging educators provided exercises in metadata record creation using non-AACR standards. Advanced cataloging courses provided in-depth coverage of subject cataloging and the cataloging of nonbook resources, but offered very limited coverage of metadata. Few programs offered full courses on metadata, and even fewer offered advanced metadata courses. Metadata topics were well integrated into LIS curricula, but coverage of metadata courses varied from program to program, depending on the interests of instructors. Educators were forward-looking and agreed on the inclusion of specific knowledge and skills in metadata instruction. A series of actions were proposed to assist educators in providing students with competencies in cataloging and metadata.
    Date
    10. 9.2000 17:38:22
  2. Heery, R.: Information gateways : collaboration and content (2000) 0.06
    0.06254284 = product of:
      0.12508568 = sum of:
        0.067437425 = weight(_text_:wide in 4866) [ClassicSimilarity], result of:
          0.067437425 = score(doc=4866,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.342674 = fieldWeight in 4866, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4866)
        0.036585998 = weight(_text_:web in 4866) [ClassicSimilarity], result of:
          0.036585998 = score(doc=4866,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.25239927 = fieldWeight in 4866, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4866)
        0.021062255 = product of:
          0.04212451 = sum of:
            0.04212451 = weight(_text_:22 in 4866) [ClassicSimilarity], result of:
              0.04212451 = score(doc=4866,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.2708308 = fieldWeight in 4866, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4866)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
    
    Abstract
    Information subject gateways provide targeted discovery services for their users, giving access to Web resources selected according to quality and subject coverage criteria. Information gateways recognise that they must collaborate on a wide range of issues relating to content to ensure continued success. This report is informed by discussion of content activities at the 1999 IMesh Workshop. The author considers the implications for subject-based gateways of co-operation regarding coverage policy, creation of metadata, and provision of searching and browsing across services. Other possibilities for co-operation include working more closely with information providers, and disclosure of information in joint metadata registries
    Date
    22. 6.2002 19:38:54
  3. Chopey, M.: Planning and implementing a metadata-driven digital repository (2005) 0.05
    0.050269302 = product of:
      0.1508079 = sum of:
        0.10899534 = weight(_text_:wide in 5729) [ClassicSimilarity], result of:
          0.10899534 = score(doc=5729,freq=4.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.5538448 = fieldWeight in 5729, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0625 = fieldNorm(doc=5729)
        0.041812565 = weight(_text_:web in 5729) [ClassicSimilarity], result of:
          0.041812565 = score(doc=5729,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.2884563 = fieldWeight in 5729, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=5729)
      0.33333334 = coord(2/6)
    
    Abstract
    Metadata is used to organize and control a wide range of different types of information object collections, most of which are accessed via the World Wide Web. This chapter presents a brief introduction to the purpose of metadata and how it has developed, and an overview of the steps to be taken and the functional expertise required in planning for and implementing the creation, storage, and use of metadata for resource discovery in a local repository of information objects.
  4. Coleman, A.S.: From cataloging to metadata : Dublin Core records for the library catalog (2005) 0.04
    0.04360208 = product of:
      0.13080624 = sum of:
        0.067437425 = weight(_text_:wide in 5722) [ClassicSimilarity], result of:
          0.067437425 = score(doc=5722,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.342674 = fieldWeight in 5722, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5722)
        0.063368805 = weight(_text_:web in 5722) [ClassicSimilarity], result of:
          0.063368805 = score(doc=5722,freq=6.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.43716836 = fieldWeight in 5722, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5722)
      0.33333334 = coord(2/6)
    
    Abstract
    The Dublin Core is an international standard for describing and cataloging all kinds of information resources: books, articles, videos, and World Wide Web (web) resources. Sixteen Dublin Core (DC) elements and the steps for cataloging web resources using these elements and minimal controlled values are discussed, general guidelines for metadata creation are highlighted, a worksheet is provided to create the DC metadata records for the library catalog, and sample resource descriptions in DC are included.
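    As a rough illustration of the kind of record the chapter describes (not Coleman's own worksheet), the following is a minimal sketch assuming the rdflib library; the URL and field values are invented:

    ```python
    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import DC

    g = Graph()
    resource = URIRef("http://example.org/sample-page")  # hypothetical web resource

    # A few unqualified DC elements with simple, minimally controlled values
    g.add((resource, DC.title, Literal("Sample Web Page")))
    g.add((resource, DC.creator, Literal("Doe, Jane")))
    g.add((resource, DC.subject, Literal("Metadata")))
    g.add((resource, DC.date, Literal("2005")))
    g.add((resource, DC.type, Literal("Text")))
    g.add((resource, DC.format, Literal("text/html")))

    print(g.serialize(format="xml"))  # emits the description as RDF/XML
    ```

    A record like this could then be loaded into, or linked from, the library catalog alongside conventional cataloging data.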
  5. Greenberg, J.: Metadata and the World Wide Web (2002) 0.04
    0.042185646 = product of:
      0.12655693 = sum of:
        0.06812209 = weight(_text_:wide in 4264) [ClassicSimilarity], result of:
          0.06812209 = score(doc=4264,freq=4.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.34615302 = fieldWeight in 4264, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4264)
        0.05843484 = weight(_text_:web in 4264) [ClassicSimilarity], result of:
          0.05843484 = score(doc=4264,freq=10.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.40312994 = fieldWeight in 4264, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4264)
      0.33333334 = coord(2/6)
    
    Abstract
    Metadata is of paramount importance for persons, organizations, and endeavors of every dimension that are increasingly turning to the World Wide Web (hereafter referred to as the Web) as a chief conduit for accessing and disseminating information. This is evidenced by the development and implementation of metadata schemas supporting projects ranging from restricted corporate intranets, data warehouses, and consumer-oriented electronic commerce enterprises to freely accessible digital libraries, educational initiatives, virtual museums, and other public Web sites. Today's metadata activities are unprecedented because they extend beyond the traditional library environment in an effort to deal with the Web's exponential growth. This article considers metadata in today's Web environment. The article defines metadata, examines the relationship between metadata and cataloging, provides definitions for key metadata vocabulary terms, and explores the topic of metadata generation. Metadata is an extensive and expanding subject that is prevalent in many environments. For practical reasons, this article has elected to concentrate on the information resource domain, which is defined by electronic textual documents, graphical images, archival materials, museum artifacts, and other objects found in both digital and physical information centers (e.g., libraries, museums, record centers, and archives). To show the extent and larger application of metadata, several examples are also drawn from the data warehouse, electronic commerce, open source, and medical communities.
  6. Baker, T.: ¬A grammar of Dublin Core (2000) 0.04
    0.035738762 = product of:
      0.071477525 = sum of:
        0.03853567 = weight(_text_:wide in 1236) [ClassicSimilarity], result of:
          0.03853567 = score(doc=1236,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.1958137 = fieldWeight in 1236, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=1236)
        0.020906283 = weight(_text_:web in 1236) [ClassicSimilarity], result of:
          0.020906283 = score(doc=1236,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.14422815 = fieldWeight in 1236, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=1236)
        0.012035574 = product of:
          0.024071148 = sum of:
            0.024071148 = weight(_text_:22 in 1236) [ClassicSimilarity], result of:
              0.024071148 = score(doc=1236,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.15476047 = fieldWeight in 1236, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1236)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
    
    Abstract
    Dublin Core is often presented as a modern form of catalog card -- a set of elements (and now qualifiers) that describe resources in a complete package. Sometimes it is proposed as an exchange format for sharing records among multiple collections. The founding principle that "every element is optional and repeatable" reinforces the notion that a Dublin Core description is to be taken as a whole. This paper, in contrast, is based on a much different premise: Dublin Core is a language. More precisely, it is a small language for making a particular class of statements about resources. Like natural languages, it has a vocabulary of word-like terms, the two classes of which -- elements and qualifiers -- function within statements like nouns and adjectives; and it has a syntax for arranging elements and qualifiers into statements according to a simple pattern. Whenever tourists order a meal or ask directions in an unfamiliar language, considerate native speakers will spontaneously limit themselves to basic words and simple sentence patterns along the lines of "I am so-and-so" or "This is such-and-such". Linguists call this pidginization. In such situations, a small phrase book or translated menu can be most helpful. By analogy, today's Web has been called an Internet Commons where users and information providers from a wide range of scientific, commercial, and social domains present their information in a variety of incompatible data models and description languages. In this context, Dublin Core presents itself as a metadata pidgin for digital tourists who must find their way in this linguistically diverse landscape. Its vocabulary is small enough to learn quickly, and its basic pattern is easily grasped. It is well-suited to serve as an auxiliary language for digital libraries. This grammar starts by defining terms. It then follows a 200-year-old tradition of English grammar teaching by focusing on the structure of single statements. It concludes by looking at the growing dictionary of Dublin Core vocabulary terms -- its registry, and at how statements can be used to build the metadata equivalent of paragraphs and compositions -- the application profile.
    Date
    26.12.2011 14:01:22
  7. Craven, T.: Changes in metatag descriptions over time (2001) 0.03
    0.034674477 = product of:
      0.10402343 = sum of:
        0.067437425 = weight(_text_:wide in 6601) [ClassicSimilarity], result of:
          0.067437425 = score(doc=6601,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.342674 = fieldWeight in 6601, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6601)
        0.036585998 = weight(_text_:web in 6601) [ClassicSimilarity], result of:
          0.036585998 = score(doc=6601,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.25239927 = fieldWeight in 6601, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6601)
      0.33333334 = coord(2/6)
    
    Abstract
    Four sets of Web pages previously visited in the summer of 2000 were revisited one year later. Of 707 pages containing metatag descriptions in 2000, 586 retained descriptions in 2001, and, of 1,230 pages lacking descriptions in 2000, 101 had descriptions in 2001. Home pages appeared to both lose and change descriptions more than other pages, with about 19% of descriptions changed in the two sets where home pages predominated versus about 12% in the other two sets. About two-thirds of changes involved minor revisions, and changes fell into a wide variety of categories. Some implications for software to assist in description revision are discussed
  8. Metadata practices on the cutting edge (2004) 0.03
    0.034674477 = product of:
      0.10402343 = sum of:
        0.067437425 = weight(_text_:wide in 2335) [ClassicSimilarity], result of:
          0.067437425 = score(doc=2335,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.342674 = fieldWeight in 2335, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2335)
        0.036585998 = weight(_text_:web in 2335) [ClassicSimilarity], result of:
          0.036585998 = score(doc=2335,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.25239927 = fieldWeight in 2335, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2335)
      0.33333334 = coord(2/6)
    
    Abstract
    The PowerPoint presentations from this one-day workshop on emerging metadata practices are available at this web site. Topics include metadata quality, interoperability, linking metadata, metadata for image collections, RSS, MODS, METS, and MPEG-21. Contributors include representatives from OCLC, CrossRef, the Library of Congress, universities and the private sector. Given the wide range of presentations, if you're interested in metadata you can likely find something of interest here, but no single topic is explored in much depth, and you are sometimes left wondering what the speaker said about a particular slide if there are no accompanying notes.
  9. Cantara, L.: METS: the metadata encoding and transmission standard (2005) 0.03
    0.034050807 = product of:
      0.10215242 = sum of:
        0.057803504 = weight(_text_:wide in 5727) [ClassicSimilarity], result of:
          0.057803504 = score(doc=5727,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.29372054 = fieldWeight in 5727, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=5727)
        0.04434892 = weight(_text_:web in 5727) [ClassicSimilarity], result of:
          0.04434892 = score(doc=5727,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.3059541 = fieldWeight in 5727, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=5727)
      0.33333334 = coord(2/6)
    
    Abstract
    The Metadata Encoding and Transmission Standard (METS) is a data communication standard for encoding descriptive, administrative, and structural metadata regarding objects within a digital library, expressed using the XML Schema Language of the World Wide Web Consortium. An initiative of the Digital Library Federation, METS is under development by an international editorial board and is maintained in the Network Development and MARC Standards Office of the Library of Congress. Designed in conformance with the Open Archival Information System (OAIS) Reference Model, a METS document encapsulates digital objects and metadata as Information Packages for transmitting and/or exchanging digital objects to and from digital repositories, disseminating digital objects via the Web, and archiving digital objects for long-term preservation and access. This paper presents an introduction to the METS standard and through illustrated examples, demonstrates how to build a METS document.
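    To make the structure of such a document more concrete, here is a hedged skeleton assembled with Python's standard xml.etree.ElementTree; it is not one of Cantara's illustrated examples, and the identifiers, dates, and file reference are placeholders:

    ```python
    import xml.etree.ElementTree as ET

    METS_NS = "http://www.loc.gov/METS/"
    ET.register_namespace("mets", METS_NS)

    def q(tag):
        """Qualify a tag name with the METS namespace."""
        return f"{{{METS_NS}}}{tag}"

    mets = ET.Element(q("mets"), OBJID="demo-object-001")          # placeholder object id
    ET.SubElement(mets, q("metsHdr"), CREATEDATE="2005-01-01T00:00:00")

    # Descriptive metadata section (would wrap e.g. a DC or MODS record)
    dmd = ET.SubElement(mets, q("dmdSec"), ID="DMD1")
    ET.SubElement(dmd, q("mdWrap"), MDTYPE="DC")

    # File section and structural map tie the content files together
    file_sec = ET.SubElement(mets, q("fileSec"))
    grp = ET.SubElement(file_sec, q("fileGrp"), USE="master")
    ET.SubElement(grp, q("file"), ID="FILE1", MIMETYPE="image/tiff")

    struct = ET.SubElement(mets, q("structMap"), TYPE="physical")
    div = ET.SubElement(struct, q("div"), LABEL="Page 1")
    ET.SubElement(div, q("fptr"), FILEID="FILE1")

    print(ET.tostring(mets, encoding="unicode"))
    ```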
  10. Weibel, S.L.: Dublin Core Metadata Initiative (DCMI) : a personal history (2009) 0.03
    0.034050807 = product of:
      0.10215242 = sum of:
        0.057803504 = weight(_text_:wide in 3772) [ClassicSimilarity], result of:
          0.057803504 = score(doc=3772,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.29372054 = fieldWeight in 3772, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=3772)
        0.04434892 = weight(_text_:web in 3772) [ClassicSimilarity], result of:
          0.04434892 = score(doc=3772,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.3059541 = fieldWeight in 3772, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3772)
      0.33333334 = coord(2/6)
    
    Abstract
    This entry is a personal remembrance of the emergence and evolution of the Dublin Core Metadata Initiative from its inception in a 1994 invitational workshop to its current state as an international open standards community. It describes the context of resource description in the early days of the World Wide Web, and discusses both social and technical engineering brought to bear on its development. Notable in this development is the international character of the workshop and conference series, and the diverse spectrum of expertise from many countries that contributed to the effort. The Dublin Core began as a consensus-driven community that elaborated a set of resource description principles that served a broad spectrum of users and applications. The result has been an architecture for metadata that informs most Web-based resource description efforts. Equally important, the Dublin Core has become the leading community of expertise, practice, and discovery that continues to explore the borders between the ideal and the practical in the description of digital information assets.
  11. Franklin, R.A.: Re-inventing subject access for the semantic web (2003) 0.03
    0.033674203 = product of:
      0.1010226 = sum of:
        0.08296924 = weight(_text_:web in 2556) [ClassicSimilarity], result of:
          0.08296924 = score(doc=2556,freq=14.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.57238775 = fieldWeight in 2556, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2556)
        0.01805336 = product of:
          0.03610672 = sum of:
            0.03610672 = weight(_text_:22 in 2556) [ClassicSimilarity], result of:
              0.03610672 = score(doc=2556,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.23214069 = fieldWeight in 2556, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2556)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    First generation scholarly research on the Web lacked a firm system of authority control. Second generation Web research is beginning to model subject access with library science principles of bibliographic control and cataloguing. Harnessing the Web and organising the intellectual content with standards and controlled vocabulary provides precise search and retrieval capability, increasing relevance and efficient use of technology. Dublin Core metadata standards permit a full evaluation and cataloguing of Web resources appropriate to highly specific research needs and discovery. Current research points to a type of structure based on a system of faceted classification. This system allows the semantic and syntactic relationships to be defined. Controlled vocabulary, such as the Library of Congress Subject Headings, can be assigned, not in a hierarchical structure, but rather as descriptive facets of relating concepts. Web design features such as this are adding value to discovery and filtering out data that lack authority. The system design allows for scalability and extensibility, two technical features that are integral to future development of the digital library and resource discovery.
    Date
    30.12.2008 18:22:46
    Theme
    Semantic Web
  12. Broughton, V.: Automatic metadata generation : Digital resource description without human intervention (2007) 0.03
    0.032941855 = product of:
      0.09882557 = sum of:
        0.062718846 = weight(_text_:web in 6048) [ClassicSimilarity], result of:
          0.062718846 = score(doc=6048,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.43268442 = fieldWeight in 6048, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.09375 = fieldNorm(doc=6048)
        0.03610672 = product of:
          0.07221344 = sum of:
            0.07221344 = weight(_text_:22 in 6048) [ClassicSimilarity], result of:
              0.07221344 = score(doc=6048,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.46428138 = fieldWeight in 6048, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6048)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Date
    22. 9.2007 15:41:14
    Theme
    Semantic Web
  13. Corby, O.; Dieng, R.; Hébért, C.: ¬A conceptual graph model for W3C resource description framework (2000) 0.03
    0.03253942 = product of:
      0.09761825 = sum of:
        0.05174041 = weight(_text_:web in 5086) [ClassicSimilarity], result of:
          0.05174041 = score(doc=5086,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.35694647 = fieldWeight in 5086, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5086)
        0.04587784 = weight(_text_:computer in 5086) [ClassicSimilarity], result of:
          0.04587784 = score(doc=5086,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.28263903 = fieldWeight in 5086, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5086)
      0.33333334 = coord(2/6)
    
    Abstract
    With the aim of building a "Semantic Web", the content of the documents must be explicitly represented through metadata in order to enable content-guided search. Our approach is to exploit a standard language (RDF, recommended by W3C) for expressing such metadata and to interpret these metadata as conceptual graphs (CG) in order to exploit the querying and inferencing capabilities enabled by the CG formalism. The paper presents our mapping of RDF into CG and its relevance in the context of the Semantic Web.
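    The paper's RDF-to-CG mapping itself is not reproduced here, but a minimal sketch (assuming the rdflib library; the resource URI and values are invented) shows the kind of RDF metadata statement being mapped, with a rough conceptual-graph reading given as a comment:

    ```python
    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import DC

    g = Graph()
    doc = URIRef("http://example.org/report-42")  # hypothetical document

    # One RDF metadata statement: the document was created by "Doe, Jane"
    g.add((doc, DC.creator, Literal("Doe, Jane")))

    # A rough conceptual-graph (linear form) reading of the same statement:
    #   [Document: http://example.org/report-42] -> (creator) -> [Person: "Doe, Jane"]

    # Content-guided search then amounts to querying such statements, e.g. with SPARQL
    for row in g.query(
        "SELECT ?d WHERE { ?d <http://purl.org/dc/elements/1.1/creator> ?name }"
    ):
        print(row.d)
    ```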
    Series
    Lecture notes in computer science; vol.1867: Lecture notes on artificial intelligence
  14. Hill, J.S.: Analog people for digital dreams : staffing and educational considerations for cataloging and metadata professionals (2005) 0.03
    0.030011963 = product of:
      0.18007177 = sum of:
        0.18007177 = sum of:
          0.13192947 = weight(_text_:programs in 126) [ClassicSimilarity], result of:
            0.13192947 = score(doc=126,freq=2.0), product of:
              0.25748047 = queryWeight, product of:
                5.79699 = idf(docFreq=364, maxDocs=44218)
                0.044416238 = queryNorm
              0.5123863 = fieldWeight in 126, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.79699 = idf(docFreq=364, maxDocs=44218)
                0.0625 = fieldNorm(doc=126)
          0.048142295 = weight(_text_:22 in 126) [ClassicSimilarity], result of:
            0.048142295 = score(doc=126,freq=2.0), product of:
              0.1555381 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.044416238 = queryNorm
              0.30952093 = fieldWeight in 126, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=126)
      0.16666667 = coord(1/6)
    
    Abstract
    As libraries attempt to incorporate increasing amounts of electronic resources into their catalogs, utilizing a growing variety of metadata standards, library and information science programs are grappling with how to educate catalogers to meet these challenges. In this paper, an employer considers the characteristics and skills that catalogers will need and how they might acquire them.
    Date
    10. 9.2000 17:38:22
  15. Haslhofer, B.: ¬A Web-based mapping technique for establishing metadata interoperability (2008) 0.03
    0.02579916 = product of:
      0.07739748 = sum of:
        0.034061044 = weight(_text_:wide in 3173) [ClassicSimilarity], result of:
          0.034061044 = score(doc=3173,freq=4.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.17307651 = fieldWeight in 3173, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3173)
        0.04333644 = weight(_text_:web in 3173) [ClassicSimilarity], result of:
          0.04333644 = score(doc=3173,freq=22.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.29896918 = fieldWeight in 3173, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3173)
      0.33333334 = coord(2/6)
    
    Abstract
    The integration of metadata from distinct, heterogeneous data sources requires metadata interoperability, which is a qualitative property of metadata information objects that is not given by default. The technique of metadata mapping allows domain experts to establish metadata interoperability in a certain integration scenario. Mapping solutions, as a technical manifestation of this technique, are already available for the intensively studied domain of database system interoperability, but they rarely exist for the Web. If we consider the amount of steadily increasing structured metadata and corresponding metadata schemes on the Web, we can observe a clear need for a mapping solution that can operate in a Web-based environment. To achieve that, we first need to build its technical core, which is a mapping model that provides the language primitives to define mapping relationships. Existing Semantic Web languages such as RDFS and OWL define some basic mapping elements (e.g., owl:equivalentProperty, owl:sameAs), but do not address the full spectrum of semantic and structural heterogeneities that can occur among distinct, incompatible metadata information objects. Furthermore, it is still unclear how to process defined mapping relationships during run-time in order to deliver metadata to the client in a uniform way. As the main contribution of this thesis, we present an abstract mapping model, which reflects the mapping problem on a generic level and provides the means for reconciling incompatible metadata. Instance transformation functions and URIs take a central role in that model. The former cover a broad spectrum of possible structural and semantic heterogeneities, while the latter bind the complete mapping model to the architecture of the World Wide Web. On the concrete, language-specific level we present a binding of the abstract mapping model for the RDF Vocabulary Description Language (RDFS), which allows us to create mapping specifications among incompatible metadata schemes expressed in RDFS. The mapping model is embedded in a cyclic process that categorises the requirements a mapping solution should fulfil into four subsequent phases: mapping discovery, mapping representation, mapping execution, and mapping maintenance. In this thesis, we mainly focus on mapping representation and on the transformation of mapping specifications into executable SPARQL queries. For mapping discovery support, the model provides an interface for plugging in schema and ontology matching algorithms. For mapping maintenance we introduce the concept of a simple, but effective mapping registry. Based on the mapping model, we propose a Web-based mediator-wrapper architecture that allows domain experts to set up mediation endpoints that provide a uniform SPARQL query interface to a set of distributed metadata sources. The involved data sources are encapsulated by wrapper components that expose the contained metadata and the schema definitions on the Web and provide a SPARQL query interface to these metadata. In this thesis, we present the OAI2LOD Server, a wrapper component for integrating metadata that are accessible via the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH).
    In a case study, we demonstrate how mappings can be created in a Web environment and how our mediator-wrapper architecture can easily be configured in order to integrate metadata from various heterogeneous data sources without the need to install any mapping solution or metadata integration solution in a local system environment.
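    As a loose sketch of the idea of turning a mapping specification into an executable SPARQL query (this is not the thesis's mapping model or the OAI2LOD Server; it assumes the rdflib library and the URIs are invented), one simple property-to-property mapping could be executed like this:

    ```python
    from rdflib import Graph, Literal, Namespace, URIRef

    SRC = Namespace("http://example.org/source-schema/")   # hypothetical source scheme

    # Source metadata using a local, non-DC property
    src = Graph()
    src.add((URIRef("http://example.org/item/1"), SRC.mainTitle, Literal("A sample record")))

    # A mapping relationship "src:mainTitle corresponds to dc:title",
    # expressed as an executable SPARQL CONSTRUCT query
    mapping_query = """
    PREFIX src: <http://example.org/source-schema/>
    PREFIX dc:  <http://purl.org/dc/elements/1.1/>
    CONSTRUCT { ?r dc:title ?t }
    WHERE     { ?r src:mainTitle ?t }
    """

    # Executing the mapping yields the metadata expressed in the target scheme
    target = Graph()
    for triple in src.query(mapping_query):
        target.add(triple)

    print(target.serialize(format="turtle"))
    ```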
    Content
    Integrating metadata from distinct, heterogeneous data sources requires metadata interoperability, a property that is not given by default. Metadata mapping techniques enable domain experts to establish metadata interoperability in a given integration context, and mapping solutions are meant to provide the necessary support for this. While such solutions already exist for the established field of interoperable databases, this is not the case for Web environments. Given the steadily growing amount of structured metadata and metadata schemes on the Web, a need for Web-based mapping solutions is emerging. The core of such a solution is a mapping model that defines the language constructs required to specify mappings. Existing Semantic Web languages such as RDFS and OWL provide basic mapping elements (e.g., owl:equivalentProperty, owl:sameAs), but do not address the full spectrum of semantic and structural heterogeneities that can occur between distinct, incompatible metadata objects. Moreover, technical approaches for transforming previously defined mappings into executable queries are lacking. As the central scientific contribution of this dissertation, an abstract mapping model is presented that reflects the mapping problem on a generic level and offers approaches for reconciling incompatible schemes. Instance transformation functions and URIs play a central role in this model: the former bridge a broad spectrum of possible semantic and structural heterogeneities, while the latter bind the mapping model to the architecture of the World Wide Web. On a concrete, language-specific level, a binding of the abstract model to the RDF Vocabulary Description Language (RDFS) is presented, which enables mappings between distinct metadata schemes expressed in RDFS. The mapping model is embedded in a cyclic mapping process that categorises the requirements for mapping solutions into four successive phases: mapping discovery, mapping representation, mapping execution, and mapping maintenance. This dissertation focuses mainly on the representation phase and on the transformation of mapping specifications into executable SPARQL queries. To support the discovery phase, the mapping model provides an interface for plugging in schema or ontology matching algorithms; for the maintenance phase, a simple but fit-for-purpose mapping registry concept is presented. Based on the mapping model, a Web-based mediator-wrapper architecture is proposed that allows domain experts to define SPARQL mediation endpoints. The data sources to be integrated must be encapsulated by wrapper components that expose the contained metadata on the Web and provide SPARQL access. As an exemplary wrapper component, the OAI2LOD Server is presented, which can be used to integrate data sources that expose their metadata via the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH).
    In a case study, it is shown how mappings can be created in Web environments and how the mediator-wrapper architecture can, after a few simple configuration steps, integrate metadata from distinct, heterogeneous data sources without the need to install a mapping solution in a local system environment.
  16. Tennant, R.: ¬A bibliographic metadata infrastructure for the twenty-first century (2004) 0.03
    0.02528477 = product of:
      0.07585431 = sum of:
        0.041812565 = weight(_text_:web in 2845) [ClassicSimilarity], result of:
          0.041812565 = score(doc=2845,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.2884563 = fieldWeight in 2845, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=2845)
        0.034041744 = product of:
          0.06808349 = sum of:
            0.06808349 = weight(_text_:22 in 2845) [ClassicSimilarity], result of:
              0.06808349 = score(doc=2845,freq=4.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.4377287 = fieldWeight in 2845, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2845)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    The current library bibliographic infrastructure was constructed in the early days of computers - before the Web, XML, and a variety of other technological advances that now offer new opportunities. General requirements of a modern metadata infrastructure for libraries are identified, including such qualities as versatility, extensibility, granularity, and openness. A new kind of metadata infrastructure is then proposed that exhibits at least some of those qualities. Some key challenges that must be overcome to implement a change of this magnitude are identified.
    Date
    9.12.2005 19:22:38
    Source
    Library hi tech. 22(2004) no.2, S.175-181
  17. Howarth, L.C.: Designing a "Human Understandable" metalevel ontology for enhancing resource discovery in knowledge bases (2000) 0.02
    0.02476748 = product of:
      0.07430244 = sum of:
        0.04816959 = weight(_text_:wide in 114) [ClassicSimilarity], result of:
          0.04816959 = score(doc=114,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.24476713 = fieldWeight in 114, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=114)
        0.026132854 = weight(_text_:web in 114) [ClassicSimilarity], result of:
          0.026132854 = score(doc=114,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.18028519 = fieldWeight in 114, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=114)
      0.33333334 = coord(2/6)
    
    Abstract
    With the explosion of digitized resources accessible via networked information systems, and the corresponding proliferation of general purpose and domain-specific schemes, metadata have assumed a special prominence. While recent work emanating from the World Wide Web Consortium (W3C) has focused on the Resource Description Framework (RDF) to support the interoperability of metadata standards - thus converting metatags from diverse domains from merely "machine-readable" to "machine-understandable" - the next iteration, to "human-understandable," remains a challenge. This apparent gap provides a framework for three-phase research (Howarth, 1999) to develop a tool which will provide a "human-understandable" front-end search assist to any XML-compliant metadata scheme. Findings from phase one, the analyses and mapping of seven metadata schemes, identify the particular challenges of designing a common "namespace", populated with element tags which are appropriately descriptive, yet readily understood by a lay searcher, when there is little congruence within, and a high degree of variability across, the metadata schemes under study. Implications for the subsequent design and testing of both the proposed "metalevel ontology" (phase two), and the prototype search assist tool (phase three) are examined
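    The following is only a toy illustration of the "common namespace" idea, not Howarth's actual metalevel ontology: a few element tags from different schemes grouped under a searcher-friendly label (the scheme fields shown are simplified and illustrative):

    ```python
    # Hypothetical metalevel crosswalk: lay-friendly labels mapped to roughly
    # equivalent element tags in a few metadata schemes (simplified).
    METALEVEL_CROSSWALK = {
        "Who created it?":    {"dc": "creator", "marc": "100",    "vra": "agent"},
        "What is it called?": {"dc": "title",   "marc": "245",    "vra": "title"},
        "When was it made?":  {"dc": "date",    "marc": "260$c",  "vra": "date"},
    }

    def elements_for(question: str) -> dict:
        """Return the scheme-specific tags behind a human-understandable question."""
        return METALEVEL_CROSSWALK.get(question, {})

    print(elements_for("Who created it?"))  # {'dc': 'creator', 'marc': '100', 'vra': 'agent'}
    ```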
  18. Martin, P.: Conventions and notations for knowledge representation and retrieval (2000) 0.02
    0.023561096 = product of:
      0.070683286 = sum of:
        0.031359423 = weight(_text_:web in 5070) [ClassicSimilarity], result of:
          0.031359423 = score(doc=5070,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.21634221 = fieldWeight in 5070, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=5070)
        0.039323866 = weight(_text_:computer in 5070) [ClassicSimilarity], result of:
          0.039323866 = score(doc=5070,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.24226204 = fieldWeight in 5070, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=5070)
      0.33333334 = coord(2/6)
    
    Abstract
    Much research has focused on the problem of knowledge accessibility, sharing and reuse. Specific languages (e.g. KIF, CG, RDF) and ontologies have been proposed. Common characteristics, conventions or ontological distinctions are beginning to emerge. Since knowledge providers (humans and software agents) must follow common conventions for the knowledge to be widely accessed and re-used, we propose lexical, structural, semantic and ontological conventions based on various knowledge representation projects and our own research. These are minimal conventions that can be followed by most and cover the most common knowledge representation cases. However, agreement and refinements are still required. We also show that a notation can be both readable and expressive by quickly presenting two new notations -- Formalized English (FE) and Frame-CG (FCG) -- derived from the CG linear form [9] and Frame-Logics [4]. These notations support the above conventions, and are implemented in our Web-based knowledge representation and document indexation tool, WebKB [7]
    Series
    Lecture notes in computer science; vol.1867: Lecture notes on artificial intelligence
  19. Lin, X.; Li, J.; Zhou, X.: Theme creation for digital collections (2008) 0.02
    0.022313368 = product of:
      0.0669401 = sum of:
        0.04587784 = weight(_text_:computer in 2635) [ClassicSimilarity], result of:
          0.04587784 = score(doc=2635,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.28263903 = fieldWeight in 2635, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2635)
        0.021062255 = product of:
          0.04212451 = sum of:
            0.04212451 = weight(_text_:22 in 2635) [ClassicSimilarity], result of:
              0.04212451 = score(doc=2635,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.2708308 = fieldWeight in 2635, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2635)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    This paper presents an approach for integrating multiple sources of semantics for creating metadata. A new framework is proposed to define topics and themes with both manually and automatically generated terms. The automatically generated terms include terms from a semantic analysis of the collections and terms from previous users' queries. An interface is developed to facilitate the creation and use of such topics and themes for metadata creation. The framework and the interface promote human-computer collaboration in metadata creation. Several principles underlying this approach are also discussed.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  20. Catarino, M.E.; Baptista, A.A.: Relating folksonomies with Dublin Core (2008) 0.02
    0.022179842 = product of:
      0.066539526 = sum of:
        0.045263432 = weight(_text_:web in 2652) [ClassicSimilarity], result of:
          0.045263432 = score(doc=2652,freq=6.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.3122631 = fieldWeight in 2652, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2652)
        0.02127609 = product of:
          0.04255218 = sum of:
            0.04255218 = weight(_text_:22 in 2652) [ClassicSimilarity], result of:
              0.04255218 = score(doc=2652,freq=4.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.27358043 = fieldWeight in 2652, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2652)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Folksonomy is the result of describing Web resources with tags created by Web users. Although it has become a popular application for the description of resources, in general terms folksonomies are not being conveniently integrated in metadata. However, if the appropriate metadata elements are identified, then further work may be conducted to automatically assign tags to these elements (RDF properties) and use them in Semantic Web applications. This article presents research carried out to continue the project Kinds of Tags, which intends to identify elements required for metadata originating from folksonomies and to propose an application profile for DC Social Tagging. The work provides information that may be used by software applications to assign tags to metadata elements and, therefore, means for tags to be conveniently gathered by metadata interoperability tools. Despite the unquestionably high value of DC and the significance of the already existing properties in DC Terms, the pilot study revealed a significant number of tags for which no corresponding properties yet existed. A need for new properties, such as Action, Depth, Rate, and Utility, was determined. Those potential new properties will have to be validated in a later stage by the DC Social Tagging Community.
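    Purely as a sketch of what "assigning tags to metadata elements" could look like downstream of such a profile (assuming the rdflib library; the property routing and the namespace for the proposed terms are hypothetical, not the article's application profile):

    ```python
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DCTERMS

    # Hypothetical namespace for the proposed DC Social Tagging properties
    DCST = Namespace("http://example.org/dc-social-tagging/")

    # Illustrative routing of user tags to properties: topical tags to dcterms:subject,
    # others to proposed properties such as Rate or Utility
    TAG_TO_PROPERTY = {
        "metadata":   DCTERMS.subject,
        "folksonomy": DCTERMS.subject,
        "5-stars":    DCST.rate,
        "useful":     DCST.utility,
    }

    g = Graph()
    resource = URIRef("http://example.org/tagged-page")  # hypothetical tagged resource
    for tag, prop in TAG_TO_PROPERTY.items():
        g.add((resource, prop, Literal(tag)))

    print(g.serialize(format="turtle"))
    ```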
    Pages
    S.14-22
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas

Types

  • a 94
  • el 17
  • m 5
  • s 5
  • b 2
  • r 1
  • x 1