Search (173 results, page 1 of 9)

  • Active filter: theme_ss:"Metadaten"
  1. Renear, A.H.; Wickett, K.M.; Urban, R.J.; Dubin, D.; Shreeves, S.L.: Collection/item metadata relationships (2008) 0.14
    0.14377543 = sum of:
      0.038515985 = product of:
        0.11554795 = sum of:
          0.11554795 = weight(_text_:objects in 2623) [ClassicSimilarity], result of:
            0.11554795 = score(doc=2623,freq=2.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.35234275 = fieldWeight in 2623, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.046875 = fieldNorm(doc=2623)
        0.33333334 = coord(1/3)
      0.105259456 = sum of:
        0.05510216 = weight(_text_:work in 2623) [ClassicSimilarity], result of:
          0.05510216 = score(doc=2623,freq=2.0), product of:
            0.22646447 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.061700378 = queryNorm
            0.2433148 = fieldWeight in 2623, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.046875 = fieldNorm(doc=2623)
        0.050157297 = weight(_text_:22 in 2623) [ClassicSimilarity], result of:
          0.050157297 = score(doc=2623,freq=2.0), product of:
            0.21606421 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.061700378 = queryNorm
            0.23214069 = fieldWeight in 2623, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=2623)
    
    Abstract
    Contemporary retrieval systems, which search across collections, usually ignore collection-level metadata. Alternative approaches, exploiting collection-level information, will require an understanding of the various kinds of relationships that can obtain between collection-level and item-level metadata. This paper outlines the problem and describes a project that is developing a logic-based framework for classifying collection/item metadata relationships. This framework will support (i) metadata specification developers defining metadata elements, (ii) metadata creators describing objects, and (iii) system designers implementing systems that take advantage of collection-level metadata. We present three examples of collection/item metadata relationship categories (attribute/value-propagation, value-propagation, and value-constraint) and show that even in these simple cases a precise formulation requires modal notions in addition to first-order logic. These formulations are related to recent work in information retrieval and ontology evaluation.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
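    The indented breakdown under each hit is standard Lucene "explain" output for a classic TF-IDF similarity: each term weight is the product of a query-side factor (idf x queryNorm) and a document-side factor (tf x idf x fieldNorm), scaled by a coordination factor when only some query clauses match. As a minimal sketch, assuming only the numbers printed in the breakdown for entry 1 (it is not code from this site), the following Python snippet reproduces the first "objects" weight and its contribution to the 0.14 total:

    import math

    # Sketch of a Lucene-style ClassicSimilarity (TF-IDF) term weight, using the
    # factors printed in the breakdown for "objects" in entry 1 (doc 2623).
    def term_weight(freq, doc_freq, max_docs, field_norm, query_norm):
        tf = math.sqrt(freq)                               # 1.4142135 for freq=2.0
        idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # 5.315071 for 590/44218
        query_weight = idf * query_norm                    # 0.3279419
        field_weight = tf * idf * field_norm               # 0.35234275
        return query_weight * field_weight                 # 0.11554795

    w = term_weight(freq=2.0, doc_freq=590, max_docs=44218,
                    field_norm=0.046875, query_norm=0.061700378)
    coord = 1.0 / 3.0            # only one of three clauses in this group matched
    print(round(w * coord, 9))   # first summand of the 0.14377543 total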
  2. Bueno-de-la-Fuente, G.; Hernández-Pérez, T.; Rodríguez-Mateos, D.; Méndez-Rodríguez, E.M.; Martín-Galán, B.: Study on the use of metadata for digital learning objects in University Institutional Repositories (MODERI) (2009) 0.12
    0.12189559 = sum of:
      0.09434451 = product of:
        0.28303352 = sum of:
          0.28303352 = weight(_text_:objects in 2981) [ClassicSimilarity], result of:
            0.28303352 = score(doc=2981,freq=12.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.86305994 = fieldWeight in 2981, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.046875 = fieldNorm(doc=2981)
        0.33333334 = coord(1/3)
      0.02755108 = product of:
        0.05510216 = sum of:
          0.05510216 = weight(_text_:work in 2981) [ClassicSimilarity], result of:
            0.05510216 = score(doc=2981,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.2433148 = fieldWeight in 2981, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.046875 = fieldNorm(doc=2981)
        0.5 = coord(1/2)
    
    Abstract
    Metadata is a core issue for the creation of repositories. Different institutional repositories have chosen and use different metadata models, elements and values for describing the range of digital objects they store. This paper analyzes the current use of metadata describing those Learning Objects that some open higher education institutions' repositories include in their collections. The goal of this work is to identify and analyze the different metadata models being used to describe educational features of those specific digital educational objects (such as audience, type of educational material, learning objectives, etc.). Also discussed are the concept and typology of Learning Objects (LO) and their use in university repositories. We will also examine the usefulness of specifically describing those learning objects, setting them apart from other kinds of documents included in the repository, mainly scholarly publications and research results of the higher education institution.
  3. Proffitt, M.: Pulling it all together : use of METS in RLG cultural materials service (2004) 0.11
    0.10606464 = sum of:
      0.07262644 = product of:
        0.21787931 = sum of:
          0.21787931 = weight(_text_:objects in 767) [ClassicSimilarity], result of:
            0.21787931 = score(doc=767,freq=4.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.6643839 = fieldWeight in 767, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0625 = fieldNorm(doc=767)
        0.33333334 = coord(1/3)
      0.0334382 = product of:
        0.0668764 = sum of:
          0.0668764 = weight(_text_:22 in 767) [ClassicSimilarity], result of:
            0.0668764 = score(doc=767,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.30952093 = fieldWeight in 767, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=767)
        0.5 = coord(1/2)
    
    Abstract
    RLG has used METS for a particular application, that is, as a wrapper for structural metadata. When RLG Cultural Materials was launched, there was no single way to deal with "complex digital objects". METS provides a standard means of encoding metadata regarding the digital objects represented in RCM, and METS has now been fully integrated into the workflow for this service.
    Source
    Library hi tech. 22(2004) no.1, S.65-68
  4. Rice, R.: Applying DC to institutional data repositories (2008) 0.10
    0.09585029 = sum of:
      0.025677323 = product of:
        0.07703197 = sum of:
          0.07703197 = weight(_text_:objects in 2664) [ClassicSimilarity], result of:
            0.07703197 = score(doc=2664,freq=2.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.23489517 = fieldWeight in 2664, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.03125 = fieldNorm(doc=2664)
        0.33333334 = coord(1/3)
      0.070172966 = sum of:
        0.03673477 = weight(_text_:work in 2664) [ClassicSimilarity], result of:
          0.03673477 = score(doc=2664,freq=2.0), product of:
            0.22646447 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.061700378 = queryNorm
            0.16220987 = fieldWeight in 2664, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03125 = fieldNorm(doc=2664)
        0.0334382 = weight(_text_:22 in 2664) [ClassicSimilarity], result of:
          0.0334382 = score(doc=2664,freq=2.0), product of:
            0.21606421 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.061700378 = queryNorm
            0.15476047 = fieldWeight in 2664, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03125 = fieldNorm(doc=2664)
    
    Abstract
    DISC-UK DataShare (2007-2009), a project led by the University of Edinburgh and funded by JISC (Joint Information Systems Committee, UK), arises from an existing consortium of academic data support professionals working in the domain of social science datasets (Data Information Specialists Committee-UK). We are working together across four universities with colleagues engaged in managing open access repositories for e-prints. Our project supports 'early adopter' academics who wish to openly share datasets and presents a model for depositing 'orphaned datasets' that are not being deposited in subject-domain data archives/centres. Outputs from the project are intended to help demystify data as complex objects in repositories, and to assist other institutional repository managers in overcoming barriers to incorporating research data. By building on lessons learned from recent JISC-funded data repository projects such as SToRe and GRADE, the project will help realize the vision of the Digital Repositories Roadmap, e.g. the milestone under Data, "Institutions need to invest in research data repositories" (Heery and Powell, 2006). Application of appropriate metadata is an important area of development for the project. Datasets are no different from other digital materials in that they need to be described, not just for discovery but also for preservation and re-use. The GRADE project found that for geo-spatial datasets, Dublin Core metadata (with geo-spatial enhancements such as a bounding box for the 'coverage' property) was sufficient for discovery within a DSpace repository, though more in-depth metadata or documentation was required for re-use after downloading. The project partners are examining other metadata schemas such as the Data Documentation Initiative (DDI) versions 2 and 3, used primarily by social science data archives (Martinez, 2008). Crosswalks from the DDI to qualified Dublin Core are important for describing research datasets at the study level (as opposed to the variable level, which is largely out of scope for this project). DataShare is benefiting from the work of the DRIADE project (application-profile development for evolutionary biology) (Carrier et al., 2007), eBank UK (which developed an application profile for crystallography data) and GAP (Geospatial Application Profile, in progress) in defining interoperable qualified Dublin Core metadata elements and their application to datasets for each partner repository. The solution devised at Edinburgh for DSpace will be covered in the poster.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
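    As a hedged illustration of the geo-spatial enhancement mentioned in the abstract above, the sketch below shows a qualified Dublin Core record for a hypothetical dataset whose coverage is expressed as a DCMI Box bounding box; the element choices, coordinates and place name are invented for the example and are not taken from the DataShare project:

    # Hypothetical qualified Dublin Core description of a dataset, held in a plain
    # dict. The dc:coverage value uses the DCMI Box encoding scheme
    # (northlimit/southlimit/westlimit/eastlimit); all values are invented.
    dataset_record = {
        "dc:title": "Example geo-referenced survey dataset",
        "dc:creator": "Example Research Group",
        "dc:type": "Dataset",
        "dc:coverage": ("northlimit=56.00; southlimit=55.90; "
                        "westlimit=-3.30; eastlimit=-3.10; "
                        "name=Edinburgh (example)"),
        "dcterms:accessRights": "open",
    }

    for element, value in dataset_record.items():
        print(f"{element}: {value}")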
  5. Hert, C.A.; Denn, S.O.; Gillman, D.W.; Oh, J.S.; Pattuelli, M.C.; Hernandez, N.: Investigating and modeling metadata use to support information architecture development in the statistical knowledge network (2007) 0.09
    0.08623585 = sum of:
      0.038515985 = product of:
        0.11554795 = sum of:
          0.11554795 = weight(_text_:objects in 422) [ClassicSimilarity], result of:
            0.11554795 = score(doc=422,freq=2.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.35234275 = fieldWeight in 422, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.046875 = fieldNorm(doc=422)
        0.33333334 = coord(1/3)
      0.047719866 = product of:
        0.09543973 = sum of:
          0.09543973 = weight(_text_:work in 422) [ClassicSimilarity], result of:
            0.09543973 = score(doc=422,freq=6.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.4214336 = fieldWeight in 422, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.046875 = fieldNorm(doc=422)
        0.5 = coord(1/2)
    
    Abstract
    Metadata and an appropriate metadata model are nontrivial components of information architecture conceptualization and implementation, particularly when disparate and dispersed systems are integrated. Metadata availability can enhance retrieval processes, improve information organization and navigation, and support management of digital objects. To support these activities efficiently, metadata need to be modeled appropriately for the tasks. The authors' work focuses on how to understand and model metadata requirements to support the work of end users of an integrative statistical knowledge network (SKN). They report on a series of user studies. These studies provide an understanding of metadata elements necessary for a variety of user-oriented tasks, related business rules associated with the use of these elements, and their relationship to other perspectives on metadata model development. This work demonstrates the importance of the user perspective in this type of design activity and provides a set of strategies by which the results of user studies can be systematically utilized to support that design.
  6. Understanding metadata (2004) 0.08
    0.084792845 = sum of:
      0.051354647 = product of:
        0.15406394 = sum of:
          0.15406394 = weight(_text_:objects in 2686) [ClassicSimilarity], result of:
            0.15406394 = score(doc=2686,freq=2.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.46979034 = fieldWeight in 2686, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0625 = fieldNorm(doc=2686)
        0.33333334 = coord(1/3)
      0.0334382 = product of:
        0.0668764 = sum of:
          0.0668764 = weight(_text_:22 in 2686) [ClassicSimilarity], result of:
            0.0668764 = score(doc=2686,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.30952093 = fieldWeight in 2686, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=2686)
        0.5 = coord(1/2)
    
    Abstract
    Metadata (structured information about an object or collection of objects) is increasingly important to libraries, archives, and museums. And although librarians are familiar with a number of issues that apply to creating and using metadata (e.g., authority control, controlled vocabularies, etc.), the world of metadata is nonetheless different from library cataloging, with its own set of challenges. Therefore, whether you are new to these concepts or quite experienced with classic cataloging, this short (20-page) introductory paper on metadata can be helpful.
    Date
    10. 9.2004 10:22:40
  7. METS: an overview & tutorial : Metadata Encoding & Transmission Standard (METS) (2001) 0.08
    0.08202091 = sum of:
      0.05446983 = product of:
        0.16340949 = sum of:
          0.16340949 = weight(_text_:objects in 1323) [ClassicSimilarity], result of:
            0.16340949 = score(doc=1323,freq=4.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.49828792 = fieldWeight in 1323, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.046875 = fieldNorm(doc=1323)
        0.33333334 = coord(1/3)
      0.02755108 = product of:
        0.05510216 = sum of:
          0.05510216 = weight(_text_:work in 1323) [ClassicSimilarity], result of:
            0.05510216 = score(doc=1323,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.2433148 = fieldWeight in 1323, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.046875 = fieldNorm(doc=1323)
        0.5 = coord(1/2)
    
    Abstract
    Maintaining a library of digital objects of necessity requires maintaining metadata about those objects. The metadata necessary for successful management and use of digital objects is both more extensive than and different from the metadata used for managing collections of printed works and other physical materials. While a library may record descriptive metadata regarding a book in its collection, the book will not dissolve into a series of unconnected pages if the library fails to record structural metadata regarding the book's organization, nor will scholars be unable to evaluate the book's worth if the library fails to note that the book was produced using a Ryobi offset press. The same cannot be said for a digital version of the same book. Without structural metadata, the page image or text files comprising the digital work are of little use, and without technical metadata regarding the digitization process, scholars may be unsure of how accurate a reflection of the original the digital version provides. For internal management purposes, a library must have access to appropriate technical metadata in order to periodically refresh and migrate the data, ensuring the durability of valuable resources.
  8. Yee, R.; Beaubien, R.: ¬A preliminary crosswalk from METS to IMS content packaging (2004) 0.08
    0.07954848 = sum of:
      0.05446983 = product of:
        0.16340949 = sum of:
          0.16340949 = weight(_text_:objects in 4752) [ClassicSimilarity], result of:
            0.16340949 = score(doc=4752,freq=4.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.49828792 = fieldWeight in 4752, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.046875 = fieldNorm(doc=4752)
        0.33333334 = coord(1/3)
      0.025078649 = product of:
        0.050157297 = sum of:
          0.050157297 = weight(_text_:22 in 4752) [ClassicSimilarity], result of:
            0.050157297 = score(doc=4752,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.23214069 = fieldWeight in 4752, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4752)
        0.5 = coord(1/2)
    
    Abstract
    As educational technology becomes pervasive, demand will grow for library content to be incorporated into courseware. Among the barriers impeding interoperability between libraries and educational tools is the difference in specifications commonly used for the exchange of digital objects and metadata. Among libraries, Metadata Encoding and Transmission Standard (METS) is a new but increasingly popular standard; the IMS content-package (IMS-CP) plays a parallel role in educational technology. This article describes how METS-encoded library content can be converted into digital objects for IMS-compliant systems through an XSLT-based crosswalk. The conceptual models behind METS and IMS-CP are compared, the design and limitations of an XSLT-based translation are described, and the crosswalks are related to other techniques to enhance interoperability.
    Source
    Library hi tech. 22(2004) no.1, S.69-81
  9. Wallis, R.; Isaac, A.; Charles, V.; Manguinhas, H.: Recommendations for the application of Schema.org to aggregated cultural heritage metadata to increase relevance and visibility to search engines : the case of Europeana (2017) 0.08
    0.07855227 = sum of:
      0.055593036 = product of:
        0.1667791 = sum of:
          0.1667791 = weight(_text_:objects in 3372) [ClassicSimilarity], result of:
            0.1667791 = score(doc=3372,freq=6.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.508563 = fieldWeight in 3372, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3372)
        0.33333334 = coord(1/3)
      0.022959232 = product of:
        0.045918465 = sum of:
          0.045918465 = weight(_text_:work in 3372) [ClassicSimilarity], result of:
            0.045918465 = score(doc=3372,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.20276234 = fieldWeight in 3372, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3372)
        0.5 = coord(1/2)
    
    Abstract
    Europeana provides access to more than 54 million cultural heritage objects through its portal, Europeana Collections. It is crucial for Europeana to be recognized by search engines as a trusted, authoritative repository of cultural heritage objects; indeed, even though its portal is the main entry point, most Europeana users come to it via search engines. Europeana Collections is fuelled by metadata describing cultural objects, represented in the Europeana Data Model (EDM). This paper presents the research and consequent recommendations for publishing Europeana metadata using the Schema.org vocabulary and best practices. Schema.org allows metadata embedded in HTML to be consumed by search engines to power rich services (such as the Google Knowledge Graph). Schema.org is an open and widely adopted initiative (used by over 12 million domains), backed by Google, Bing, Yahoo!, and Yandex, for sharing metadata across the web. It underpins the emergence of new web techniques, such as so-called semantic SEO. Our research addressed the representation of the embedded metadata as part of the Europeana HTML pages and sitemaps so that the re-use of this data can be optimized. The practical objective of our work is to produce a Schema.org representation of Europeana resources described in EDM that is as rich as possible and tailored to Europeana's realities and user needs, as well as to the search engines and their users.
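    As a rough illustration of the kind of embedded Schema.org markup discussed here (a sketch only, not the Europeana/EDM mapping the paper recommends; all types, properties and values are chosen for the example), a description of a cultural heritage object can be serialized as JSON-LD and dropped into an HTML page:

    # Sketch of schema.org metadata for a cultural heritage object, serialized as
    # JSON-LD for embedding in an HTML page. Illustration only; all identifiers
    # and values are invented.
    import json

    description = {
        "@context": "https://schema.org",
        "@type": "VisualArtwork",
        "name": "Example painting",
        "creator": {"@type": "Person", "name": "Example Painter"},
        "dateCreated": "1650",
        "isPartOf": {"@type": "Collection", "name": "Example museum collection"},
        "url": "https://example.org/object/123",
    }

    html_snippet = ('<script type="application/ld+json">\n'
                    + json.dumps(description, indent=2)
                    + "\n</script>")
    print(html_snippet)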
  10. Marchiori, M.: ¬The limits of Web metadata, and beyond (1998) 0.07
    0.07419373 = sum of:
      0.044935312 = product of:
        0.13480593 = sum of:
          0.13480593 = weight(_text_:objects in 3383) [ClassicSimilarity], result of:
            0.13480593 = score(doc=3383,freq=2.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.41106653 = fieldWeight in 3383, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3383)
        0.33333334 = coord(1/3)
      0.029258423 = product of:
        0.058516845 = sum of:
          0.058516845 = weight(_text_:22 in 3383) [ClassicSimilarity], result of:
            0.058516845 = score(doc=3383,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.2708308 = fieldWeight in 3383, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3383)
        0.5 = coord(1/2)
    
    Abstract
    Highlights two major problems of Web metadata: it will take some time before a reasonable number of people start using metadata to provide a better Web classification, and no one can guarantee that a majority of Web objects will ever be properly classified via metadata. Addresses the problem of how to cope with these intrinsic limits of Web metadata, proposes a method for doing so, and shows evidence of its effectiveness. Examines the important question of what critical mass of metadata the WWW requires for metadata to be really useful.
    Date
    1. 8.1996 22:08:06
  11. Lubas, R.L.; Wolfe, R.H.W.; Fleischman, M.: Creating metadata practices for MIT's OpenCourseWare Project (2004) 0.07
    0.07419373 = sum of:
      0.044935312 = product of:
        0.13480593 = sum of:
          0.13480593 = weight(_text_:objects in 2843) [ClassicSimilarity], result of:
            0.13480593 = score(doc=2843,freq=2.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.41106653 = fieldWeight in 2843, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2843)
        0.33333334 = coord(1/3)
      0.029258423 = product of:
        0.058516845 = sum of:
          0.058516845 = weight(_text_:22 in 2843) [ClassicSimilarity], result of:
            0.058516845 = score(doc=2843,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.2708308 = fieldWeight in 2843, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2843)
        0.5 = coord(1/2)
    
    Abstract
    The MIT libraries were called upon to recommend a metadata scheme for the resources contained in MIT's OpenCourseWare (OCW) project. The resources in OCW needed descriptive, structural, and technical metadata. The SCORM standard, which uses IEEE Learning Object Metadata for its descriptive standard, was selected for its focus on educational objects. However, it was clear that the Libraries would need to recommend how the standard would be applied and adapted to accommodate needs that were not addressed in the standard's specifications. The newly formed MIT Libraries Metadata Unit adapted established practices from AACR2 and MARC traditions when facing situations in which there were no precedents to follow.
    Source
    Library hi tech. 22(2004) no.2, S.138-143
  12. Alves dos Santos, E.; Mucheroni, M.L.: VIAF and OpenCitations : cooperative work as a strategy for information organization in the linked data era (2018) 0.07
    0.070172966 = product of:
      0.14034593 = sum of:
        0.14034593 = sum of:
          0.07346954 = weight(_text_:work in 4826) [ClassicSimilarity], result of:
            0.07346954 = score(doc=4826,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.32441974 = fieldWeight in 4826, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.0625 = fieldNorm(doc=4826)
          0.0668764 = weight(_text_:22 in 4826) [ClassicSimilarity], result of:
            0.0668764 = score(doc=4826,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.30952093 = fieldWeight in 4826, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=4826)
      0.5 = coord(1/2)
    
    Date
    18. 1.2019 19:13:22
  13. Mi, X.M.; Pollock, B.M.: Metadata schema to facilitate linked data for 3D digital models of cultural heritage collections : a University of South Florida Libraries case study (2018) 0.07
    0.06606706 = sum of:
      0.038515985 = product of:
        0.11554795 = sum of:
          0.11554795 = weight(_text_:objects in 5171) [ClassicSimilarity], result of:
            0.11554795 = score(doc=5171,freq=2.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.35234275 = fieldWeight in 5171, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.046875 = fieldNorm(doc=5171)
        0.33333334 = coord(1/3)
      0.02755108 = product of:
        0.05510216 = sum of:
          0.05510216 = weight(_text_:work in 5171) [ClassicSimilarity], result of:
            0.05510216 = score(doc=5171,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.2433148 = fieldWeight in 5171, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.046875 = fieldNorm(doc=5171)
        0.5 = coord(1/2)
    
    Abstract
    The University of South Florida Libraries house and provide access to a collection of cultural heritage and 3D digital models. In an effort to provide greater access to these collections, a linked data project has been implemented. A metadata schema for 3D cultural heritage objects that uses linked data is an excellent way to share these collections with other repositories, thus gaining global exposure and access to these valuable resources. This article will share the process of building the 3D cultural heritage metadata model as well as an assessment of the model and recommendations for future linked data projects.
    Footnote
    Beitrag in einem Heft: 'Setting standards to work and live by: A memorial Festschrift for Valerie Bross'.
  14. Catarino, M.E.; Baptista, A.A.: Relating folksonomies with Dublin Core (2008) 0.06
    0.062024727 = product of:
      0.124049455 = sum of:
        0.124049455 = sum of:
          0.064938515 = weight(_text_:work in 2652) [ClassicSimilarity], result of:
            0.064938515 = score(doc=2652,freq=4.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.28674924 = fieldWeight in 2652, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2652)
          0.05911094 = weight(_text_:22 in 2652) [ClassicSimilarity], result of:
            0.05911094 = score(doc=2652,freq=4.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.27358043 = fieldWeight in 2652, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2652)
      0.5 = coord(1/2)
    
    Abstract
    Folksonomy is the result of describing Web resources with tags created by Web users. Although it has become a popular application for the description of resources, in general terms folksonomies are not being conveniently integrated into metadata. However, if the appropriate metadata elements are identified, then further work may be conducted to automatically assign tags to these elements (RDF properties) and use them in Semantic Web applications. This article presents research carried out to continue the project Kinds of Tags, which intends to identify elements required for metadata originating from folksonomies and to propose an application profile for DC Social Tagging. The work provides information that may be used by software applications to assign tags to metadata elements and, therefore, means for tags to be conveniently gathered by metadata interoperability tools. Despite the unquestionably high value of DC and the significance of the already existing properties in DC Terms, the pilot study revealed a significant number of tags for which no corresponding properties yet existed. A need for new properties, such as Action, Depth, Rate, and Utility, was determined. Those potential new properties will have to be validated at a later stage by the DC Social Tagging community.
    Pages
    S.14-22
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  15. Brasethvik, T.: ¬A semantic modeling approach to metadata (1998) 0.06
    0.06140135 = product of:
      0.1228027 = sum of:
        0.1228027 = sum of:
          0.06428585 = weight(_text_:work in 5165) [ClassicSimilarity], result of:
            0.06428585 = score(doc=5165,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.28386727 = fieldWeight in 5165, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5165)
          0.058516845 = weight(_text_:22 in 5165) [ClassicSimilarity], result of:
            0.058516845 = score(doc=5165,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.2708308 = fieldWeight in 5165, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5165)
      0.5 = coord(1/2)
    
    Abstract
    States that heterogeneous project groups today may be expected to use the mechanisms of the Web for sharing information. Metadata has been proposed as a mechanism for expressing the semantics of information and, hence, facilitating information retrieval, understanding and use. Presents an approach to sharing information which aims to use a semantic modeling language as the basis for expressing the semantics of information and designing metadata schemes. Functioning on the borderline between human and computer understandability, the modeling language would be able to express the semantics of published Web documents. Reporting on work in progress, presents the overall framework and ideas.
    Date
    9. 9.2000 17:22:23
  16. Ilik, V.; Storlien, J.; Olivarez, J.: Metadata makeover (2014) 0.06
    0.06140135 = product of:
      0.1228027 = sum of:
        0.1228027 = sum of:
          0.06428585 = weight(_text_:work in 2606) [ClassicSimilarity], result of:
            0.06428585 = score(doc=2606,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.28386727 = fieldWeight in 2606, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2606)
          0.058516845 = weight(_text_:22 in 2606) [ClassicSimilarity], result of:
            0.058516845 = score(doc=2606,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.2708308 = fieldWeight in 2606, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2606)
      0.5 = coord(1/2)
    
    Abstract
    Catalogers have become fluent in information technologies such as web design, HyperText Markup Language (HTML), Cascading Style Sheets (CSS), eXtensible Markup Language (XML), and programming languages. The knowledge gained from learning information technology can be used to experiment with methods of transforming one metadata schema into another using various software solutions. This paper will discuss the use of eXtensible Stylesheet Language Transformations (XSLT) for repurposing, editing, and reformatting metadata. Catalogers have the requisite skills for working with any metadata schema, and if they are excluded from metadata work, libraries are wasting a valuable human resource.
    Date
    10. 9.2000 17:38:22
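    As a minimal sketch of the XSLT-based repurposing described in the abstract above (the one-rule stylesheet and sample record are invented, not the authors' own), the following snippet uses the lxml library to map a local <title> element onto dc:title:

    # Minimal XSLT-based metadata repurposing: a one-rule stylesheet maps a local
    # <title> element onto dc:title. Record and stylesheet are invented examples.
    from lxml import etree

    XSLT = b"""<xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:dc="http://purl.org/dc/elements/1.1/">
      <xsl:template match="/record">
        <dc:title><xsl:value-of select="title"/></dc:title>
      </xsl:template>
    </xsl:stylesheet>"""

    RECORD = b"<record><title>Metadata makeover</title></record>"

    transform = etree.XSLT(etree.XML(XSLT))
    result = transform(etree.XML(RECORD))
    print(str(result))  # serialized dc:title element produced by the transform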
  17. Hunter, J.: MetaNet - a metadata term thesaurus to enable semantic interoperability between metadata domains (2001) 0.06
    0.055055887 = sum of:
      0.032096654 = product of:
        0.09628996 = sum of:
          0.09628996 = weight(_text_:objects in 6471) [ClassicSimilarity], result of:
            0.09628996 = score(doc=6471,freq=2.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.29361898 = fieldWeight in 6471, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0390625 = fieldNorm(doc=6471)
        0.33333334 = coord(1/3)
      0.022959232 = product of:
        0.045918465 = sum of:
          0.045918465 = weight(_text_:work in 6471) [ClassicSimilarity], result of:
            0.045918465 = score(doc=6471,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.20276234 = fieldWeight in 6471, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.0390625 = fieldNorm(doc=6471)
        0.5 = coord(1/2)
    
    Abstract
    Metadata interoperability is a fundamental requirement for access to information within networked knowledge organization systems. The Harmony international digital library project [1] has developed a common underlying data model (the ABC model) to enable the scalable mapping of metadata descriptions across domains and media types. The ABC model [2] provides a set of basic building blocks for metadata modeling and recognizes the importance of 'events' to describe unambiguously metadata for objects with a complex history. To test and evaluate the interoperability capabilities of this model, we applied it to some real multimedia examples and analysed the results of mapping from the ABC model to various different metadata domains using XSLT [3]. This work revealed serious limitations in the ability of XSLT to support flexible dynamic semantic mapping. To overcome this, we developed MetaNet [4], a metadata term thesaurus which provides the additional semantic knowledge that is non-existent within declarative XML-encoded metadata descriptions. This paper describes MetaNet, its RDF Schema [5] representation and a hybrid mapping approach which combines the structural and syntactic mapping capabilities of XSLT with the semantic knowledge of MetaNet, to enable flexible and dynamic mapping among metadata standards.
  18. Belém, F.M.; Almeida, J.M.; Gonçalves, M.A.: ¬A survey on tag recommendation methods : a review (2017) 0.04
    0.043858107 = product of:
      0.087716214 = sum of:
        0.087716214 = sum of:
          0.045918465 = weight(_text_:work in 3524) [ClassicSimilarity], result of:
            0.045918465 = score(doc=3524,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.20276234 = fieldWeight in 3524, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3524)
          0.04179775 = weight(_text_:22 in 3524) [ClassicSimilarity], result of:
            0.04179775 = score(doc=3524,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.19345059 = fieldWeight in 3524, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3524)
      0.5 = coord(1/2)
    
    Abstract
    Tags (keywords freely assigned by users to describe web content) have become highly popular in Web 2.0 applications, because of the strong incentives and the ease with which users can create and describe their own content. This increase in tag popularity has led to a vast literature on tag recommendation methods. These methods aim at assisting users in the tagging process, possibly increasing the quality of the generated tags and, consequently, improving the quality of the information retrieval (IR) services that rely on tags as data sources. Despite the numerous and diversified previous studies on tag recommendation, to our knowledge no previous work has summarized and organized them into a single survey article. In this article, we propose a taxonomy for tag recommendation methods, classifying them according to the target of the recommendations, their objectives, exploited data sources, and underlying techniques. Moreover, we provide a critical overview of these methods, pointing out their advantages and disadvantages. Finally, we describe the main open challenges related to the field, such as tag ambiguity, cold start, and evaluation issues.
    Date
    16.11.2017 13:30:22
  19. Cantara, L.: METS: the metadata encoding and transmission standard (2005) 0.04
    0.04306218 = product of:
      0.08612436 = sum of:
        0.08612436 = product of:
          0.25837308 = sum of:
            0.25837308 = weight(_text_:objects in 5727) [ClassicSimilarity], result of:
              0.25837308 = score(doc=5727,freq=10.0), product of:
                0.3279419 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.061700378 = queryNorm
                0.7878624 = fieldWeight in 5727, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5727)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    The Metadata Encoding and Transmission Standard (METS) is a data communication standard for encoding descriptive, administrative, and structural metadata regarding objects within a digital library, expressed using the XML Schema Language of the World Wide Web Consortium. An initiative of the Digital Library Federation, METS is under development by an international editorial board and is maintained in the Network Development and MARC Standards Office of the Library of Congress. Designed in conformance with the Open Archival Information System (OAIS) Reference Model, a METS document encapsulates digital objects and metadata as Information Packages for transmitting and/or exchanging digital objects to and from digital repositories, disseminating digital objects via the Web, and archiving digital objects for long-term preservation and access. This paper presents an introduction to the METS standard and through illustrated examples, demonstrates how to build a METS document.
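    As a simplified sketch of the kind of document the paper walks through (a skeleton only: element values, identifiers and URLs are invented, and many METS sections are omitted), a minimal METS document can be assembled with the Python standard library:

    # Minimal METS skeleton: a dmdSec wrapping a Dublin Core title, a fileSec with
    # one file, and a structMap division pointing at that file. Identifiers and
    # URLs are invented examples, not content from the paper.
    import xml.etree.ElementTree as ET

    METS = "http://www.loc.gov/METS/"
    DC = "http://purl.org/dc/elements/1.1/"
    XLINK = "http://www.w3.org/1999/xlink"
    ET.register_namespace("mets", METS)
    ET.register_namespace("dc", DC)
    ET.register_namespace("xlink", XLINK)

    mets = ET.Element(f"{{{METS}}}mets", OBJID="example-object-001")

    dmd = ET.SubElement(mets, f"{{{METS}}}dmdSec", ID="DMD1")
    wrap = ET.SubElement(dmd, f"{{{METS}}}mdWrap", MDTYPE="DC")
    xml_data = ET.SubElement(wrap, f"{{{METS}}}xmlData")
    ET.SubElement(xml_data, f"{{{DC}}}title").text = "Example digital object"

    file_sec = ET.SubElement(mets, f"{{{METS}}}fileSec")
    file_grp = ET.SubElement(file_sec, f"{{{METS}}}fileGrp", USE="master")
    file_el = ET.SubElement(file_grp, f"{{{METS}}}file", ID="FILE1",
                            MIMETYPE="image/tiff")
    ET.SubElement(file_el, f"{{{METS}}}FLocat",
                  attrib={"LOCTYPE": "URL",
                          f"{{{XLINK}}}href": "https://example.org/obj/001/p1.tif"})

    struct_map = ET.SubElement(mets, f"{{{METS}}}structMap", TYPE="physical")
    div = ET.SubElement(struct_map, f"{{{METS}}}div", TYPE="page",
                        LABEL="Page 1", DMDID="DMD1")
    ET.SubElement(div, f"{{{METS}}}fptr", FILEID="FILE1")

    print(ET.tostring(mets, encoding="unicode"))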
  20. Lagoze, C.: Keeping Dublin Core simple : Cross-domain discovery or resource description? (2001) 0.04
    0.039276134 = sum of:
      0.027796518 = product of:
        0.08338955 = sum of:
          0.08338955 = weight(_text_:objects in 1216) [ClassicSimilarity], result of:
            0.08338955 = score(doc=1216,freq=6.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.2542815 = fieldWeight in 1216, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.01953125 = fieldNorm(doc=1216)
        0.33333334 = coord(1/3)
      0.011479616 = product of:
        0.022959232 = sum of:
          0.022959232 = weight(_text_:work in 1216) [ClassicSimilarity], result of:
            0.022959232 = score(doc=1216,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.10138117 = fieldWeight in 1216, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.01953125 = fieldNorm(doc=1216)
        0.5 = coord(1/2)
    
    Abstract
    Reality is messy. Individuals perceive or define objects differently. Objects may change over time, morphing into new versions of their former selves or into things altogether different. A book can give rise to a translation, derivation, or edition, and these resulting objects are related in complex ways to each other and to the people and contexts in which they were created or transformed. Providing a normalized view of such a messy reality is a precondition for managing information. From the first library catalogs, through Melvil Dewey's Decimal Classification system in the nineteenth century, to today's MARC encoding of AACR2 cataloging rules, libraries have epitomized the process of what David Levy calls "order making", whereby catalogers impose a veneer of regularity on the natural disorder of the artifacts they encounter. The pre-digital library within which the Catalog and its standards evolved was relatively self-contained and controlled. Creating and maintaining catalog records was, and still is, the task of professionals. Today's Web, in contrast, has brought together a diversity of information management communities, with a variety of order-making standards, into what Stuart Weibel has called the Internet Commons. The sheer scale of this context has motivated a search for new ways to describe and index information. Second-generation search engines such as Google can yield astonishingly good search results, while tools such as ResearchIndex for automatic citation indexing and techniques for inferring "Web communities" from constellations of hyperlinks promise even better methods for focusing queries on information from authoritative sources. Such "automated digital libraries," according to Bill Arms, promise to radically reduce the cost of managing information. Alongside the development of such automated methods, there is increasing interest in metadata as a means of imposing pre-defined order on Web content. While the size and changeability of the Web makes professional cataloging impractical, a minimal amount of information ordering, such as that represented by the Dublin Core (DC), may vastly improve the quality of an automatic index at low cost; indeed, recent work suggests that some types of simple description may be generated with little or no human intervention.

Years

Languages

  • e 161
  • d 9
  • pt 1
  • sp 1

Types

  • a 156
  • el 19
  • m 8
  • s 7
  • b 2
  • x 2