Search (172 results, page 1 of 9)

  • theme_ss:"Metadaten"
  1. Tallerås, C.; Dahl, J.H.B.; Pharo, N.: User conceptualizations of derivative relationships in the bibliographic universe (2018) 0.03
    0.027459566 = product of:
      0.096108474 = sum of:
        0.02770021 = weight(_text_:based in 4247) [ClassicSimilarity], result of:
          0.02770021 = score(doc=4247,freq=4.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.23539014 = fieldWeight in 4247, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4247)
        0.068408266 = weight(_text_:great in 4247) [ClassicSimilarity], result of:
          0.068408266 = score(doc=4247,freq=2.0), product of:
            0.21992016 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.03905679 = queryNorm
            0.31105953 = fieldWeight in 4247, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4247)
      0.2857143 = coord(2/7)
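    A note on the relevance scores: the breakdown printed under each result is a Lucene ClassicSimilarity (TF-IDF) explain tree. The Python sketch below reproduces the score of entry 1 from the figures above; it assumes the standard ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm), which are consistent with the printed values.

      import math

      def tf(freq):
          # term frequency factor: square root of the raw term frequency
          return math.sqrt(freq)

      def idf(doc_freq, max_docs):
          # inverse document frequency: 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
          # weight = queryWeight * fieldWeight, as in the explain tree above
          query_weight = idf(doc_freq, max_docs) * query_norm
          field_weight = tf(freq) * idf(doc_freq, max_docs) * field_norm
          return query_weight * field_weight

      QUERY_NORM = 0.03905679   # queryNorm from the explain output
      MAX_DOCS = 44218
      FIELD_NORM = 0.0390625    # fieldNorm(doc=4247)

      based = term_score(freq=4.0, doc_freq=5906, max_docs=MAX_DOCS,
                         query_norm=QUERY_NORM, field_norm=FIELD_NORM)
      great = term_score(freq=2.0, doc_freq=430, max_docs=MAX_DOCS,
                         query_norm=QUERY_NORM, field_norm=FIELD_NORM)

      # coord(2/7): only 2 of the 7 query clauses matched this document
      score = (based + great) * (2.0 / 7.0)
      print(based, great, score)   # approx. 0.0277002, 0.0684083, 0.0274596

    The same pattern applies to every entry on this page; only the per-term frequencies, document frequencies and field norms change.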
    
    Abstract
    Purpose: Considerable effort is devoted to developing new models for organizing bibliographic metadata. However, such models have been repeatedly criticized for their lack of proper user testing. The purpose of this paper is to present a study on how non-experts in bibliographic systems map the bibliographic universe and, in particular, how they conceptualize relationships between independent but strongly related entities.
    Design/methodology/approach: The study is based on an open concept-mapping task performed to externalize the conceptualizations of 98 novice students. The conceptualizations of the resulting concept maps are identified and analyzed statistically.
    Findings: The study shows that the participants' conceptualizations have great variety, differing in detail and granularity. These conceptualizations can be categorized into two main groups according to derivative relationships: those that apply a single-entity model directly relating document entities and those (the majority) that apply a multi-entity model relating documents through a high-level collocating node. These high-level nodes seem to be most adequately interpreted either as superwork devices collocating documents belonging to the same bibliographic family or as devices collocating documents belonging to a shared fictional world.
    Originality/value: The findings can guide the work to develop bibliographic standards. Based on the diversity of the conceptualizations, the findings also emphasize the need for more user testing of both conceptual models and the bibliographic end-user systems implementing those models.
  2. Niininen, S.; Nykyri, S.; Suominen, O.: ¬The future of metadata : open, linked, and multilingual - the YSO case (2017) 0.03
    0.02514151 = product of:
      0.087995276 = sum of:
        0.019587006 = weight(_text_:based in 3707) [ClassicSimilarity], result of:
          0.019587006 = score(doc=3707,freq=2.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.16644597 = fieldWeight in 3707, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3707)
        0.068408266 = weight(_text_:great in 3707) [ClassicSimilarity], result of:
          0.068408266 = score(doc=3707,freq=2.0), product of:
            0.21992016 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.03905679 = queryNorm
            0.31105953 = fieldWeight in 3707, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3707)
      0.2857143 = coord(2/7)
    
    Abstract
    Purpose: The purpose of this paper is threefold: to focus on the process of multilingual concept scheme construction and the challenges involved; to address the concrete challenges faced in the construction process, especially those related to equivalence between terms and concepts; and to briefly outline the translation strategies developed during the process of concept scheme construction.
    Design/methodology/approach: The analysis is based on experience acquired during the establishment of the Finnish thesaurus and ontology service Finto as well as the trilingual General Finnish Ontology YSO, both of which are being maintained and further developed at the National Library of Finland.
    Findings: Although uniform resource identifiers can be considered language-independent, they do not render concept schemes and their construction free of language-related challenges. The fundamental issue underlying all the challenges faced is how to maintain consistency and predictability when the nature of language requires each concept to be treated individually. The key to such challenges is to recognise the function of the vocabulary and the needs of its intended users.
    Social implications: Open science increases the transparency of not only research products, but also metadata tools. Gaining a deeper understanding of the challenges involved in their construction is important for a great variety of users - e.g. indexers, vocabulary builders and information seekers. Today, multilingualism is an essential aspect of the information society at both the national and international level.
    Originality/value: This paper draws on the practical challenges faced in concept scheme construction in a trilingual environment, with a focus on "concept scheme" as a translation and mapping unit.
  3. Weibel, S.; Miller, E.: Cataloging syntax and public policy meet in PICS (1997) 0.02
    0.015636176 = product of:
      0.10945322 = sum of:
        0.10945322 = weight(_text_:great in 1561) [ClassicSimilarity], result of:
          0.10945322 = score(doc=1561,freq=2.0), product of:
            0.21992016 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.03905679 = queryNorm
            0.49769527 = fieldWeight in 1561, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.0625 = fieldNorm(doc=1561)
      0.14285715 = coord(1/7)
    
    Content
    PICS, an initiative of the W3C, is a technology that supports the association of descriptive labels with Web resources. By providing a single common transport syntax for metadata, PICS will support the growth of metadata systems (including library cataloguing) that are interoperable and widely supported in Web information systems. Within the PICS framework, a great diversity of resource description models can be implemented, from simple rating schemes to complex data content standards.
  4. White, H.: Examining scientific vocabulary : mapping controlled vocabularies with free text keywords (2013) 0.02
    0.015001667 = product of:
      0.052505832 = sum of:
        0.03133921 = weight(_text_:based in 1953) [ClassicSimilarity], result of:
          0.03133921 = score(doc=1953,freq=2.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.26631355 = fieldWeight in 1953, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0625 = fieldNorm(doc=1953)
        0.021166623 = product of:
          0.042333245 = sum of:
            0.042333245 = weight(_text_:22 in 1953) [ClassicSimilarity], result of:
              0.042333245 = score(doc=1953,freq=2.0), product of:
                0.13677022 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03905679 = queryNorm
                0.30952093 = fieldWeight in 1953, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1953)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
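    Entries whose trees contain an inner coord(1/2), as this one does for the "_text_:22" clause, scale that clause's weight by 1/2 before it enters the outer sum, because the term sits inside a nested sub-query. A self-contained sketch using the same assumed ClassicSimilarity formulas and the constants read from the tree above:

      import math

      def weight(freq, doc_freq, field_norm, query_norm=0.03905679, max_docs=44218):
          # queryWeight * fieldWeight for a single term, as in the first sketch
          idf = 1.0 + math.log(max_docs / (doc_freq + 1))
          return (idf * query_norm) * (math.sqrt(freq) * idf * field_norm)

      based = weight(freq=2.0, doc_freq=5906, field_norm=0.0625)            # approx. 0.0313392
      nested_22 = 0.5 * weight(freq=2.0, doc_freq=3622, field_norm=0.0625)  # inner coord(1/2)
      print((based + nested_22) * 2.0 / 7.0)                                # approx. 0.015001667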
    
    Abstract
    Scientific repositories create a new environment for studying traditional information science issues. The interaction between indexing terms provided by users and controlled vocabularies continues to be an area of debate and study. This article reports and analyzes findings from a study that mapped the relationships between free text keywords and controlled vocabulary terms used in the sciences. Based on this study's findings, recommendations are made about which vocabularies may be better to use in scientific data repositories.
    Date
    29. 5.2015 19:09:22
  5. Yee, R.; Beaubien, R.: ¬A preliminary crosswalk from METS to IMS content packaging (2004) 0.01
    0.01403292 = product of:
      0.04911522 = sum of:
        0.03324025 = weight(_text_:based in 4752) [ClassicSimilarity], result of:
          0.03324025 = score(doc=4752,freq=4.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.28246817 = fieldWeight in 4752, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.046875 = fieldNorm(doc=4752)
        0.015874967 = product of:
          0.031749934 = sum of:
            0.031749934 = weight(_text_:22 in 4752) [ClassicSimilarity], result of:
              0.031749934 = score(doc=4752,freq=2.0), product of:
                0.13677022 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03905679 = queryNorm
                0.23214069 = fieldWeight in 4752, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4752)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    As educational technology becomes pervasive, demand will grow for library content to be incorporated into courseware. Among the barriers impeding interoperability between libraries and educational tools is the difference in the specifications commonly used for the exchange of digital objects and metadata. Among libraries, the Metadata Encoding and Transmission Standard (METS) is a new but increasingly popular standard; the IMS Content Packaging specification (IMS-CP) plays a parallel role in educational technology. This article describes how METS-encoded library content can be converted into digital objects for IMS-compliant systems through an XSLT-based crosswalk. The conceptual models behind METS and IMS-CP are compared, the design and limitations of an XSLT-based translation are described, and the crosswalks are related to other techniques for enhancing interoperability.
    Source
    Library hi tech. 22(2004) no.1, S.69-81
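    The article above describes an XSLT-based crosswalk from METS to IMS-CP but does not prescribe any tooling. Purely as an illustration, a stylesheet of that kind could be applied with lxml in Python; the stylesheet and file names below are hypothetical, and the crosswalk logic itself lives in the XSLT, not in this snippet.

      from lxml import etree

      # Hypothetical file names: the crosswalk stylesheet is what the article
      # describes; this only shows how such a transform would be applied.
      transform = etree.XSLT(etree.parse("mets_to_imscp.xsl"))
      mets_doc = etree.parse("sample_mets.xml")

      imscp_doc = transform(mets_doc)    # apply the METS-to-IMS-CP crosswalk
      print(etree.tostring(imscp_doc, pretty_print=True).decode())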
  6. Baker, T.: Dublin Core Application Profiles : current approaches (2010) 0.01
    0.01403292 = product of:
      0.04911522 = sum of:
        0.03324025 = weight(_text_:based in 3737) [ClassicSimilarity], result of:
          0.03324025 = score(doc=3737,freq=4.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.28246817 = fieldWeight in 3737, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.046875 = fieldNorm(doc=3737)
        0.015874967 = product of:
          0.031749934 = sum of:
            0.031749934 = weight(_text_:22 in 3737) [ClassicSimilarity], result of:
              0.031749934 = score(doc=3737,freq=2.0), product of:
                0.13677022 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03905679 = queryNorm
                0.23214069 = fieldWeight in 3737, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3737)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    The Dublin Core Metadata Initiative currently defines a Dublin Core Application Profile as a set of specifications about the metadata design of a particular application or for a particular domain or community of users. The current approach to application profiles is summarized in the Singapore Framework for Application Profiles [SINGAPORE-FRAMEWORK] (see Figure 1). While the approach originally developed as a means of specifying customized applications based on the fifteen elements of the Dublin Core Element Set (e.g., Title, Date, Subject), it has evolved into a generic approach to creating metadata that meets specific local requirements while integrating coherently with other RDF-based metadata.
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Hrsg.: J. Sieglerschmidt u. H.P.Ohly
  7. White, M.: ¬The value of taxonomies, thesauri and metadata in enterprise search (2016) 0.01
    0.013820557 = product of:
      0.0967439 = sum of:
        0.0967439 = weight(_text_:great in 2964) [ClassicSimilarity], result of:
          0.0967439 = score(doc=2964,freq=4.0), product of:
            0.21992016 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.03905679 = queryNorm
            0.43990463 = fieldWeight in 2964, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2964)
      0.14285715 = coord(1/7)
    
    Content
    Paper in a special issue: The Great Debate: "This House Believes that the Traditional Thesaurus has no Place in Modern Information Retrieval." [19 February 2015, 14:00-17:30, preceded by the ISKO UK AGM and followed by networking, wine and nibbles; cf. http://www.iskouk.org/content/great-debate].
  8. Lam, V.-T.: Cataloging Internet resources : Why, what, how (2000) 0.01
    0.013681654 = product of:
      0.09577157 = sum of:
        0.09577157 = weight(_text_:great in 967) [ClassicSimilarity], result of:
          0.09577157 = score(doc=967,freq=2.0), product of:
            0.21992016 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.03905679 = queryNorm
            0.43548337 = fieldWeight in 967, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=967)
      0.14285715 = coord(1/7)
    
    Abstract
    Internet resources have brought great excitement but also grave concerns to the library world, especially to the cataloging community. In spite of the various problematic aspects presented by Internet resources (poor organization, lack of stability, variable quality), catalogers have decided that they are worth cataloging, in particular those meeting library selection criteria. This paper tries to trace the decade-long history of the library community's efforts to provide an effective way to catalog Internet resources. Basically, its objective is to answer the following questions: Why catalog? What to catalog? And how to catalog? Some issues of cataloging electronic journals and developments of the Dublin Core Metadata system are also discussed.
  9. Liechti, O.; Sifer, M.J.; Ichikawa, T.: Structured graph format : XML metadata for describing Web site structure (1998) 0.01
    0.013126459 = product of:
      0.045942605 = sum of:
        0.02742181 = weight(_text_:based in 3597) [ClassicSimilarity], result of:
          0.02742181 = score(doc=3597,freq=2.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.23302436 = fieldWeight in 3597, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3597)
        0.018520795 = product of:
          0.03704159 = sum of:
            0.03704159 = weight(_text_:22 in 3597) [ClassicSimilarity], result of:
              0.03704159 = score(doc=3597,freq=2.0), product of:
                0.13677022 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03905679 = queryNorm
                0.2708308 = fieldWeight in 3597, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3597)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    To improve searching, filtering and processing of information on the Web, a common effort is made in the direction of metadata, defined as machine-understandable information about Web resources or other things. In particular, the eXtensible Markup Language (XML) aims at providing a common syntax for emerging metadata formats. Proposes the Structured Graph Format (SGF), an XML-compliant markup language based on structured graphs, for capturing Web sites' structure. Presents SGMapper, a client-side tool which aims to facilitate navigation in large Web sites by generating highly interactive site maps using SGF metadata.
    Date
    1. 8.1996 22:08:06
  10. Heery, R.: Information gateways : collaboration and content (2000) 0.01
    0.013126459 = product of:
      0.045942605 = sum of:
        0.02742181 = weight(_text_:based in 4866) [ClassicSimilarity], result of:
          0.02742181 = score(doc=4866,freq=2.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.23302436 = fieldWeight in 4866, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4866)
        0.018520795 = product of:
          0.03704159 = sum of:
            0.03704159 = weight(_text_:22 in 4866) [ClassicSimilarity], result of:
              0.03704159 = score(doc=4866,freq=2.0), product of:
                0.13677022 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03905679 = queryNorm
                0.2708308 = fieldWeight in 4866, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4866)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Information subject gateways provide targeted discovery services for their users, giving access to Web resources selected according to quality and subject coverage criteria. Information gateways recognise that they must collaborate on a wide range of issues relating to content to ensure continued success. This report is informed by discussion of content activities at the 1999 Imesh Workshop. The author considers the implications for subject-based gateways of co-operation regarding coverage policy, creation of metadata, and provision of searching and browsing across services. Other possibilities for co-operation include working more closely with information providers, and disclosure of information in joint metadata registries.
    Date
    22. 6.2002 19:38:54
  11. Jizba, L.; Hillmann, D.I.: Insights from Ithaca : an interview with Diane Hillmann on metadata, Dublin Core, the National Science Digital Library, and more (2004/05) 0.01
    0.013126459 = product of:
      0.045942605 = sum of:
        0.02742181 = weight(_text_:based in 637) [ClassicSimilarity], result of:
          0.02742181 = score(doc=637,freq=2.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.23302436 = fieldWeight in 637, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0546875 = fieldNorm(doc=637)
        0.018520795 = product of:
          0.03704159 = sum of:
            0.03704159 = weight(_text_:22 in 637) [ClassicSimilarity], result of:
              0.03704159 = score(doc=637,freq=2.0), product of:
                0.13677022 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03905679 = queryNorm
                0.2708308 = fieldWeight in 637, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=637)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    In an interview, Diane I. Hillmann, an expert in metadata for digital libraries and currently co-principal investigator for the National Science Digital Library Registry based at Cornell University, discusses her education and career, and provides overviews and insights on metadata initiatives, including standards and models such as the widely adopted Dublin Core schema. She shares her professional interests from the early part of her career in communications, cataloging, and database production services; highlights key issues; and provides ideas and resources for managing changes in metadata standards and digital projects.
    Date
    2.12.2007 19:35:22
  12. Sutton, S.A.: Conceptual design and deployment of a metadata framework for educational resources on the Internet (1999) 0.01
    0.011727133 = product of:
      0.08208992 = sum of:
        0.08208992 = weight(_text_:great in 4054) [ClassicSimilarity], result of:
          0.08208992 = score(doc=4054,freq=2.0), product of:
            0.21992016 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.03905679 = queryNorm
            0.37327147 = fieldWeight in 4054, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.046875 = fieldNorm(doc=4054)
      0.14285715 = coord(1/7)
    
    Abstract
    The metadata framework described in this article stems from a growing concern of the U.S. Department of Education and its National Library of Education that teachers, students, and parents are encountering increasing difficulty in accessing educational resources on the Internet even as those resources are becoming more abundant. This concern is joined by the realization that as the Internet matures as a publishing environment, the successful management of resource repositories will hinge to a great extent on the intelligent use of metadata. We first explicate the conceptual foundations for the Gateway to Educational Materials (GEM) framework, including the adoption of the Dublin Core Element Set as its base referent and the extension of that set to meet the needs of the domain. We then discuss the complex of decisions that must be made regarding the selection of the units of description and the structuring of an information space. The article concludes with a discussion of metadata generation, the association of metadata with the objects described, and a general description of the GEM system architecture.
  13. Borbinha, J.: Authority control in the world of metadata (2004) 0.01
    0.011727133 = product of:
      0.08208992 = sum of:
        0.08208992 = weight(_text_:great in 5666) [ClassicSimilarity], result of:
          0.08208992 = score(doc=5666,freq=2.0), product of:
            0.21992016 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.03905679 = queryNorm
            0.37327147 = fieldWeight in 5666, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.046875 = fieldNorm(doc=5666)
      0.14285715 = coord(1/7)
    
    Abstract
    This paper discusses the concept of "metadata" in the scope of the "digital library," two terms recently used in a great diversity of perspectives. The intent is not to privilege any particular view, but rather to help provide a better understanding of these multiple perspectives. The paper starts with a discussion of the concept of the digital library, followed by an analysis of the concept of metadata. It continues with a discussion of the relationship of this concept with technology, services, and scenarios of application. The concluding remarks stress the three main arguments assumed for the relevance of the concept of metadata: the growing number of heterogeneous genres of information resources, the new emerging scenarios for interoperability, and issues related to the cost and complexity of current technology.
  14. Franklin, R.A.: Re-inventing subject access for the semantic web (2003) 0.01
    0.01125125 = product of:
      0.039379373 = sum of:
        0.023504408 = weight(_text_:based in 2556) [ClassicSimilarity], result of:
          0.023504408 = score(doc=2556,freq=2.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.19973516 = fieldWeight in 2556, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.046875 = fieldNorm(doc=2556)
        0.015874967 = product of:
          0.031749934 = sum of:
            0.031749934 = weight(_text_:22 in 2556) [ClassicSimilarity], result of:
              0.031749934 = score(doc=2556,freq=2.0), product of:
                0.13677022 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03905679 = queryNorm
                0.23214069 = fieldWeight in 2556, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2556)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    First generation scholarly research on the Web lacked a firm system of authority control. Second generation Web research is beginning to model subject access with library science principles of bibliographic control and cataloguing. Harnessing the Web and organising the intellectual content with standards and controlled vocabulary provides precise search and retrieval capability, increasing relevance and efficient use of technology. Dublin Core metadata standards permit a full evaluation and cataloguing of Web resources appropriate to highly specific research needs and discovery. Current research points to a type of structure based on a system of faceted classification. This system allows the semantic and syntactic relationships to be defined. Controlled vocabulary, such as the Library of Congress Subject Headings, can be assigned, not in a hierarchical structure, but rather as descriptive facets of relating concepts. Web design features such as this are adding value to discovery and filtering out data that lack authority. The system design allows for scalability and extensibility, two technical features that are integral to future development of the digital library and resource discovery.
    Date
    30.12.2008 18:22:46
  15. Renear, A.H.; Wickett, K.M.; Urban, R.J.; Dubin, D.; Shreeves, S.L.: Collection/item metadata relationships (2008) 0.01
    0.01125125 = product of:
      0.039379373 = sum of:
        0.023504408 = weight(_text_:based in 2623) [ClassicSimilarity], result of:
          0.023504408 = score(doc=2623,freq=2.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.19973516 = fieldWeight in 2623, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.046875 = fieldNorm(doc=2623)
        0.015874967 = product of:
          0.031749934 = sum of:
            0.031749934 = weight(_text_:22 in 2623) [ClassicSimilarity], result of:
              0.031749934 = score(doc=2623,freq=2.0), product of:
                0.13677022 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03905679 = queryNorm
                0.23214069 = fieldWeight in 2623, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2623)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Contemporary retrieval systems, which search across collections, usually ignore collection-level metadata. Alternative approaches, exploiting collection-level information, will require an understanding of the various kinds of relationships that can obtain between collection-level and item-level metadata. This paper outlines the problem and describes a project that is developing a logic-based framework for classifying collection/item metadata relationships. This framework will support (i) metadata specification developers defining metadata elements, (ii) metadata creators describing objects, and (iii) system designers implementing systems that take advantage of collection-level metadata. We present three examples of collection/item metadata relationship categories (attribute/value-propagation, value-propagation, and value-constraint) and show that even in these simple cases a precise formulation requires modal notions in addition to first-order logic. These formulations are related to recent work in information retrieval and ontology evaluation.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  16. Banush, D.; Kurth, M.; Pajerek, J.: Rehabilitating killer serials : an automated strategy for maintaining E-journal metadata (2005) 0.01
    0.009376042 = product of:
      0.032816146 = sum of:
        0.019587006 = weight(_text_:based in 124) [ClassicSimilarity], result of:
          0.019587006 = score(doc=124,freq=2.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.16644597 = fieldWeight in 124, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0390625 = fieldNorm(doc=124)
        0.013229139 = product of:
          0.026458278 = sum of:
            0.026458278 = weight(_text_:22 in 124) [ClassicSimilarity], result of:
              0.026458278 = score(doc=124,freq=2.0), product of:
                0.13677022 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03905679 = queryNorm
                0.19345059 = fieldWeight in 124, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=124)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Cornell University Library (CUL) has developed a largely automated method for providing title-level catalog access to electronic journals made available through aggregator packages. CUL's technique for automated e-journal record creation and maintenance relies largely on the conversion of externally supplied metadata into streamlined, abbreviated-level MARC records. Unlike the Cooperative Online Serials Cataloging Program's recently implemented aggregator-neutral approach to e-journal cataloging, CUL's method involves the creation of a separate bibliographic record for each version of an e-journal title in order to facilitate automated record maintenance. An indexed local field indicates the aggregation to which each title belongs and enables machine manipulation of all the records associated with a specific aggregation. Information encoded in another locally defined field facilitates the identification of all of the library's e-journal titles and allows for the automatic generation of a Web-based title list of e-journals. CUL's approach to providing title-level catalog access to its e-journal aggregations involves a number of tradeoffs in which some elements of traditional bibliographic description (such as subject headings and linking fields) are sacrificed in the interest of timeliness and affordability. URLs (Uniform Resource Locators) and holdings information are updated on a regular basis by use of automated methods that save on staff costs.
    Date
    10. 9.2000 17:38:22
  17. Toth, M.B.; Emery, D.: Applying DCMI elements to digital images and text in the Archimedes Palimpsest Program (2008) 0.01
    0.009376042 = product of:
      0.032816146 = sum of:
        0.019587006 = weight(_text_:based in 2651) [ClassicSimilarity], result of:
          0.019587006 = score(doc=2651,freq=2.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.16644597 = fieldWeight in 2651, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2651)
        0.013229139 = product of:
          0.026458278 = sum of:
            0.026458278 = weight(_text_:22 in 2651) [ClassicSimilarity], result of:
              0.026458278 = score(doc=2651,freq=2.0), product of:
                0.13677022 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03905679 = queryNorm
                0.19345059 = fieldWeight in 2651, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2651)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    The digitized version of the only extant copy of Archimedes' key mathematical and scientific works contains over 6,500 images and 130 pages of transcriptions. Metadata is essential for managing, integrating and accessing these digital resources in the Web 2.0 environment. The Dublin Core Metadata Element Set meets many of our needs. It offers the needed flexibility and applicability to a variety of data sets containing different texts and images in a dynamic technical environment. The program team has continued to refine its data dictionary and elements based on the Dublin Core standard and feedback from the Dublin Core community since the 2006 Dublin Core Conference. This presentation cites the application and utility of the DCMI Standards during the final phase of this decade-long program. Since the 2006 conference, the amount of data has grown tenfold with new imaging techniques. Use of the DCMI Standards for integration across digital images and transcriptions will allow the hosting and integration of this data set and other cultural works across service providers, libraries and cultural institutions.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  18. Slavic, A.; Baiget, C.: Using Dublin Core in educational material : some practical considerations based on the EASEL experience (2001) 0.01
    0.007834803 = product of:
      0.05484362 = sum of:
        0.05484362 = weight(_text_:based in 1830) [ClassicSimilarity], result of:
          0.05484362 = score(doc=1830,freq=2.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.46604872 = fieldWeight in 1830, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.109375 = fieldNorm(doc=1830)
      0.14285715 = coord(1/7)
    
  19. DC-2013: International Conference on Dublin Core and Metadata Applications : Online Proceedings (2013) 0.01
    0.007818088 = product of:
      0.05472661 = sum of:
        0.05472661 = weight(_text_:great in 1076) [ClassicSimilarity], result of:
          0.05472661 = score(doc=1076,freq=2.0), product of:
            0.21992016 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.03905679 = queryNorm
            0.24884763 = fieldWeight in 1076, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.03125 = fieldNorm(doc=1076)
      0.14285715 = coord(1/7)
    
    Abstract
    The collocated conferences for DC-2013 and iPRES-2013 in Lisbon attracted 392 participants from over 37 countries. In addition to the Tuesday through Thursday conference days, comprising peer-reviewed paper and special sessions, 223 participants attended pre-conference tutorials and 246 participated in post-conference workshops for the collocated events. The peer-reviewed papers and presentations are available on the conference website Presentation page (URLs above). In sum, it was a great conference. In addition to links to PDFs of papers, project reports and posters (and their associated presentations), the published proceedings include presentation PDFs for the following:
    Keynotes: "Darling, we need to talk" (Gildas Illien).
    Tutorials: Ivan Herman, "Introduction to Linked Open Data (LOD)"; Steven Miller, "Introduction to Ontology Concepts and Terminology"; Kai Eckert, "Metadata Provenance"; Daniel Garijo, "The W3C Provenance Ontology".
    Special sessions: "Application Profiles as an Alternative to OWL Ontologies"; "Long-term Preservation and Governance of RDF Vocabularies (W3C Sponsored)"; "Data Enrichment and Transformation in the LOD Context: Poor & Popular vs Rich & Lonely--Can't we achieve both?"; "Why Schema.org?"
  20. Baker, T.: ¬A grammar of Dublin Core (2000) 0.01
    0.0075008334 = product of:
      0.026252916 = sum of:
        0.015669605 = weight(_text_:based in 1236) [ClassicSimilarity], result of:
          0.015669605 = score(doc=1236,freq=2.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.13315678 = fieldWeight in 1236, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03125 = fieldNorm(doc=1236)
        0.010583311 = product of:
          0.021166623 = sum of:
            0.021166623 = weight(_text_:22 in 1236) [ClassicSimilarity], result of:
              0.021166623 = score(doc=1236,freq=2.0), product of:
                0.13677022 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03905679 = queryNorm
                0.15476047 = fieldWeight in 1236, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1236)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Dublin Core is often presented as a modern form of catalog card -- a set of elements (and now qualifiers) that describe resources in a complete package. Sometimes it is proposed as an exchange format for sharing records among multiple collections. The founding principle that "every element is optional and repeatable" reinforces the notion that a Dublin Core description is to be taken as a whole. This paper, in contrast, is based on a much different premise: Dublin Core is a language. More precisely, it is a small language for making a particular class of statements about resources. Like natural languages, it has a vocabulary of word-like terms, the two classes of which -- elements and qualifiers -- function within statements like nouns and adjectives; and it has a syntax for arranging elements and qualifiers into statements according to a simple pattern. Whenever tourists order a meal or ask directions in an unfamiliar language, considerate native speakers will spontaneously limit themselves to basic words and simple sentence patterns along the lines of "I am so-and-so" or "This is such-and-such". Linguists call this pidginization. In such situations, a small phrase book or translated menu can be most helpful. By analogy, today's Web has been called an Internet Commons where users and information providers from a wide range of scientific, commercial, and social domains present their information in a variety of incompatible data models and description languages. In this context, Dublin Core presents itself as a metadata pidgin for digital tourists who must find their way in this linguistically diverse landscape. Its vocabulary is small enough to learn quickly, and its basic pattern is easily grasped. It is well-suited to serve as an auxiliary language for digital libraries. This grammar starts by defining terms. It then follows a 200-year-old tradition of English grammar teaching by focusing on the structure of single statements. It concludes by looking at the growing dictionary of Dublin Core vocabulary terms -- its registry, and at how statements can be used to build the metadata equivalent of paragraphs and compositions -- the application profile.
    Date
    26.12.2011 14:01:22

Languages

  • e 158
  • d 10
  • f 1
  • pt 1
  • sp 1

Types

  • a 154
  • el 20
  • s 9
  • m 8
  • b 2
  • x 2