Search (150 results, page 1 of 8)

  • theme_ss:"Metadaten"
  1. Alves dos Santos, E.; Mucheroni, M.L.: VIAF and OpenCitations : cooperative work as a strategy for information organization in the linked data era (2018) 0.07
    0.070709884 = product of:
      0.18855968 = sum of:
        0.12141637 = weight(_text_:cooperative in 4826) [ClassicSimilarity], result of:
          0.12141637 = score(doc=4826,freq=2.0), product of:
            0.23071818 = queryWeight, product of:
              5.953884 = idf(docFreq=311, maxDocs=44218)
              0.03875087 = queryNorm
            0.526254 = fieldWeight in 4826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.953884 = idf(docFreq=311, maxDocs=44218)
              0.0625 = fieldNorm(doc=4826)
        0.04614248 = weight(_text_:work in 4826) [ClassicSimilarity], result of:
          0.04614248 = score(doc=4826,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.32441974 = fieldWeight in 4826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0625 = fieldNorm(doc=4826)
        0.021000832 = product of:
          0.042001665 = sum of:
            0.042001665 = weight(_text_:22 in 4826) [ClassicSimilarity], result of:
              0.042001665 = score(doc=4826,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.30952093 = fieldWeight in 4826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4826)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Date
    18. 1.2019 19:13:22
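
The indented tree under each hit is Lucene's ClassicSimilarity (TF-IDF) "explain" output for this retrieval system: per query term, queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm with tf = sqrt(termFreq); the term scores are summed and damped by coord(matching clauses / total clauses). The sketch below simply re-multiplies the printed factors for entry 1 (doc 4826) to show how the displayed 0.0707 arises; it is an illustration in Python, not the system's own code.

```python
import math

def term_score(idf: float, query_norm: float, freq: float, field_norm: float) -> float:
    """ClassicSimilarity building blocks as printed in the explain tree:
    queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, tf = sqrt(freq)."""
    query_weight = idf * query_norm
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight

QUERY_NORM = 0.03875087  # value shown in the explain output

cooperative = term_score(idf=5.953884,  query_norm=QUERY_NORM, freq=2.0, field_norm=0.0625)
work        = term_score(idf=3.6703904, query_norm=QUERY_NORM, freq=2.0, field_norm=0.0625)
year_22     = term_score(idf=3.5018296, query_norm=QUERY_NORM, freq=2.0, field_norm=0.0625)

# The "22" clause sits in a nested query, hence its own coord(1/2);
# the outer coord(3/8) reflects 3 of 8 query clauses matching this document.
score = (cooperative + work + year_22 * 0.5) * (3 / 8)
print(round(score, 6))  # ~0.0707, matching the displayed 0.070709884 up to rounding
```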
  2. Keßler, M.: KIM - Kompetenzzentrum Interoperable Metadaten : Gemeinsamer Workshop der Deutschen Nationalbibliothek und des Arbeitskreises Elektronisches Publizieren (AKEP) (2007) 0.04
    0.037881188 = product of:
      0.15152475 = sum of:
        0.13577414 = weight(_text_:hochschule in 2406) [ClassicSimilarity], result of:
          0.13577414 = score(doc=2406,freq=4.0), product of:
            0.23689921 = queryWeight, product of:
              6.113391 = idf(docFreq=265, maxDocs=44218)
              0.03875087 = queryNorm
            0.57313037 = fieldWeight in 2406, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.113391 = idf(docFreq=265, maxDocs=44218)
              0.046875 = fieldNorm(doc=2406)
        0.015750622 = product of:
          0.031501245 = sum of:
            0.031501245 = weight(_text_:22 in 2406) [ClassicSimilarity], result of:
              0.031501245 = score(doc=2406,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.23214069 = fieldWeight in 2406, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2406)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    The Competence Centre for Interoperable Metadata (KIM) is an information and communication platform for metadata users and developers, aimed at improving the interoperability of metadata in the German-speaking countries. Through teaching materials, training, and consulting, KIM supports and promotes the development of metadata standards, the interoperable design of formats, and thus the optimal use of metadata in digital information environments. The competence centre is being established within the KIM project funded by the Deutsche Forschungsgemeinschaft (DFG), led by the Niedersächsische Staats- und Universitätsbibliothek Göttingen (SUB) in cooperation with the Deutsche Nationalbibliothek (DNB). Project partners are, in Switzerland, the Hochschule für Technik und Wirtschaft HTW Chur and the Eidgenössische Technische Hochschule (ETH) Zürich, and, in Austria, the Universität Wien. The centre's task is to improve the interoperability of metadata. Interoperability is the ability of heterogeneous systems to work together: data holdings of independent systems can be exchanged or merged, for example to enable cross-system searching or browsing. Data are sometimes stored in very different database systems; interoperability arises when those systems implement comprehensive interfaces that allow a largely lossless mapping of their internal data representations.
    Source
    Dialog mit Bibliotheken. 20(2008) H.1, S.22-24
  3. Cox, R.J.: More than diplomatic : functional requirements for evidence in recordkeeping (1997) 0.04
    0.03637277 = product of:
      0.14549108 = sum of:
        0.10511641 = weight(_text_:supported in 621) [ClassicSimilarity], result of:
          0.10511641 = score(doc=621,freq=2.0), product of:
            0.22949564 = queryWeight, product of:
              5.9223356 = idf(docFreq=321, maxDocs=44218)
              0.03875087 = queryNorm
            0.45803228 = fieldWeight in 621, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.9223356 = idf(docFreq=321, maxDocs=44218)
              0.0546875 = fieldNorm(doc=621)
        0.04037467 = weight(_text_:work in 621) [ClassicSimilarity], result of:
          0.04037467 = score(doc=621,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.28386727 = fieldWeight in 621, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0546875 = fieldNorm(doc=621)
      0.25 = coord(2/8)
    
    Abstract
    In the 1990s, North American archivists and records managers shifted some of their concern with electronic records and recordkeeping systems to conducting research about the nature of these records and systems. Describes a research project at the University of Pittsburgh School of Information Sciences, which considered how electronic records might be managed through the development of recordkeeping functional requirements. The work was supported with funding from the National Historical Publications and Records Commission. Focuses on the project's main products: the functional requirements, metadata specifications for recordkeeping, and the warrant reflecting professional and societal endorsement of the requirements.
  4. Weiß, B.: Dublin Core : Metadaten als Verzeichnungsform elektronischer Publikationen (2000) 0.02
    0.024001703 = product of:
      0.19201362 = sum of:
        0.19201362 = weight(_text_:hochschule in 3777) [ClassicSimilarity], result of:
          0.19201362 = score(doc=3777,freq=2.0), product of:
            0.23689921 = queryWeight, product of:
              6.113391 = idf(docFreq=265, maxDocs=44218)
              0.03875087 = queryNorm
            0.81052876 = fieldWeight in 3777, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.113391 = idf(docFreq=265, maxDocs=44218)
              0.09375 = fieldNorm(doc=3777)
      0.125 = coord(1/8)
    
    Source
    Wissenschaft online: Elektronisches Publizieren in Bibliothek und Hochschule. Hrsg. B. Tröger
  5. Banush, D.; Kurth, M.; Pajerek, J.: Rehabilitating killer serials : an automated strategy for maintaining E-journal metadata (2005) 0.02
    0.022252686 = product of:
      0.089010745 = sum of:
        0.07588523 = weight(_text_:cooperative in 124) [ClassicSimilarity], result of:
          0.07588523 = score(doc=124,freq=2.0), product of:
            0.23071818 = queryWeight, product of:
              5.953884 = idf(docFreq=311, maxDocs=44218)
              0.03875087 = queryNorm
            0.32890874 = fieldWeight in 124, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.953884 = idf(docFreq=311, maxDocs=44218)
              0.0390625 = fieldNorm(doc=124)
        0.01312552 = product of:
          0.02625104 = sum of:
            0.02625104 = weight(_text_:22 in 124) [ClassicSimilarity], result of:
              0.02625104 = score(doc=124,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.19345059 = fieldWeight in 124, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=124)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Cornell University Library (CUL) has developed a largely automated method for providing title-level catalog access to electronic journals made available through aggregator packages. CUL's technique for automated e-journal record creation and maintenance relies largely on the conversion of externally supplied metadata into streamlined, abbreviated-level MARC records. Unlike the Cooperative Online Serials Cataloging Program's recently implemented aggregator-neutral approach to e-journal cataloging, CUL's method involves the creation of a separate bibliographic record for each version of an e-journal title in order to facilitate automated record maintenance. An indexed local field indicates the aggregation to which each title belongs and enables machine manipulation of all the records associated with a specific aggregation. Information encoded in another locally defined field facilitates the identification of all of the library's e-journal titles and allows for the automatic generation of a Web-based title list of e-journals. CUL's approach to providing title-level catalog access to its e-journal aggregations involves a number of tradeoffs in which some elements of traditional bibliographic description (such as subject headings and linking fields) are sacrificed in the interest of timeliness and affordability. URLs (Uniform Resource Locators) and holdings information are updated on a regular basis by use of automated methods that save on staff costs.
    Date
    10. 9.2000 17:38:22
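
To make the workflow in entry 5 concrete: a minimal Python sketch of the kind of pipeline the abstract describes, converting supplier metadata into brief records, tagging each record with its aggregation in a locally defined field, and deriving a Web title list. The field tags (245, 856) and the local "950" aggregation field are illustrative assumptions, not CUL's actual configuration.

```python
# Hypothetical aggregator feed: one dict per e-journal title per package.
feed = [
    {"title": "Journal of Metadata Studies", "url": "https://example.org/jms", "package": "AggregatorA"},
    {"title": "Serials Quarterly",           "url": "https://example.org/sq",  "package": "AggregatorB"},
]

def to_brief_record(item: dict) -> dict:
    """Map supplier metadata onto an abbreviated, MARC-like record.
    245/856 are the usual title/URL tags; '950' is an invented local field
    naming the aggregation so all records of one package can be handled together."""
    return {
        "245": item["title"],
        "856": item["url"],
        "950": item["package"],   # local aggregation field (assumption)
    }

records = [to_brief_record(i) for i in feed]

# Batch maintenance by aggregation, as the abstract describes.
aggregator_a = [r for r in records if r["950"] == "AggregatorA"]

# Auto-generated Web title list.
for r in sorted(records, key=lambda r: r["245"]):
    print(f'<li><a href="{r["856"]}">{r["245"]}</a></li>')
```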
  6. Weibel, S.; Miller, E.: Cataloging syntax and public policy meet in PICS (1997) 0.02
    0.01501663 = product of:
      0.12013304 = sum of:
        0.12013304 = weight(_text_:supported in 1561) [ClassicSimilarity], result of:
          0.12013304 = score(doc=1561,freq=2.0), product of:
            0.22949564 = queryWeight, product of:
              5.9223356 = idf(docFreq=321, maxDocs=44218)
              0.03875087 = queryNorm
            0.52346545 = fieldWeight in 1561, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.9223356 = idf(docFreq=321, maxDocs=44218)
              0.0625 = fieldNorm(doc=1561)
      0.125 = coord(1/8)
    
    Content
    PICS (Platform for Internet Content Selection), an initiative of the W3C, is a technology that supports the association of descriptive labels with Web resources. By providing a single common transport syntax for metadata, PICS will support the growth of metadata systems (including library cataloguing) that are interoperable and widely supported in Web information systems. Within the PICS framework, a great diversity of resource description models can be implemented, from simple rating schemes to complex data content standards.
  7. Catarino, M.E.; Baptista, A.A.: Relating folksonomies with Dublin Core (2008) 0.01
    0.0148367155 = product of:
      0.059346862 = sum of:
        0.040784575 = weight(_text_:work in 2652) [ClassicSimilarity], result of:
          0.040784575 = score(doc=2652,freq=4.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.28674924 = fieldWeight in 2652, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2652)
        0.018562287 = product of:
          0.037124574 = sum of:
            0.037124574 = weight(_text_:22 in 2652) [ClassicSimilarity], result of:
              0.037124574 = score(doc=2652,freq=4.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.27358043 = fieldWeight in 2652, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2652)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Folksonomy is the result of describing Web resources with tags created by Web users. Although it has become a popular approach to describing resources, folksonomies are generally not being conveniently integrated into metadata. However, if the appropriate metadata elements are identified, then further work may be conducted to automatically assign tags to these elements (RDF properties) and use them in Semantic Web applications. This article presents research carried out to continue the project Kinds of Tags, which intends to identify the elements required for metadata originating from folksonomies and to propose an application profile for DC Social Tagging. The work provides information that may be used by software applications to assign tags to metadata elements and thus a means for tags to be conveniently gathered by metadata interoperability tools. Despite the unquestionably high value of DC and the significance of the already existing properties in DC Terms, the pilot study revealed a significant number of tags for which no corresponding properties yet existed. A need for new properties, such as Action, Depth, Rate, and Utility, was determined. Those potential new properties will have to be validated at a later stage by the DC Social Tagging Community.
    Pages
    S.14-22
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
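
A hedged sketch of where the research in entry 7 points: once a tag has been matched to a metadata element, software can attach it to the resource description as an RDF property. The tag-to-property table below is invented for illustration (the paper's proposed properties such as Action, Depth, Rate, and Utility do not exist in DCTERMS), and rdflib is assumed to be available.

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS

# Illustrative mapping only; not the paper's DC Social Tagging application profile.
TAG_TO_PROPERTY = {
    "linked-data": DCTERMS.subject,
    "2008":        DCTERMS.date,
    "tutorial":    DCTERMS.type,
}

def tags_to_metadata(resource_uri: str, tags: list[str]) -> Graph:
    """Attach each tag to the resource via its mapped DC property (default bucket: subject)."""
    g = Graph()
    res = URIRef(resource_uri)
    for tag in tags:
        prop = TAG_TO_PROPERTY.get(tag, DCTERMS.subject)
        g.add((res, prop, Literal(tag)))
    return g

g = tags_to_metadata("http://example.org/doc/1", ["linked-data", "tutorial", "folksonomy"])
print(g.serialize(format="turtle"))
```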
  8. Brasethvik, T.: ¬A semantic modeling approach to metadata (1998) 0.01
    0.0146876 = product of:
      0.0587504 = sum of:
        0.04037467 = weight(_text_:work in 5165) [ClassicSimilarity], result of:
          0.04037467 = score(doc=5165,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.28386727 = fieldWeight in 5165, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5165)
        0.018375728 = product of:
          0.036751457 = sum of:
            0.036751457 = weight(_text_:22 in 5165) [ClassicSimilarity], result of:
              0.036751457 = score(doc=5165,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.2708308 = fieldWeight in 5165, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5165)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    States that heterogeneous project groups today may be expected to use the mechanisms of the Web for sharing information. Metadata has been proposed as a mechanism for expressing the semantics of information and, hence, facilitating information retrieval, understanding, and use. Presents an approach to sharing information which aims to use a semantic modeling language as the basis for expressing the semantics of information and designing metadata schemes. Functioning on the borderline between human and computer understandability, the modeling language would be able to express the semantics of published Web documents. Reporting on work in progress, the paper presents the overall framework and ideas.
    Date
    9. 9.2000 17:22:23
  9. Ilik, V.; Storlien, J.; Olivarez, J.: Metadata makeover (2014) 0.01
    0.0146876 = product of:
      0.0587504 = sum of:
        0.04037467 = weight(_text_:work in 2606) [ClassicSimilarity], result of:
          0.04037467 = score(doc=2606,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.28386727 = fieldWeight in 2606, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2606)
        0.018375728 = product of:
          0.036751457 = sum of:
            0.036751457 = weight(_text_:22 in 2606) [ClassicSimilarity], result of:
              0.036751457 = score(doc=2606,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.2708308 = fieldWeight in 2606, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2606)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Catalogers have become fluent in information technologies such as web design, HyperText Markup Language (HTML), Cascading Style Sheets (CSS), eXtensible Markup Language (XML), and programming languages. The knowledge gained from learning these technologies can be used to experiment with methods of transforming one metadata schema into another using various software solutions. This paper discusses the use of eXtensible Stylesheet Language Transformations (XSLT) for repurposing, editing, and reformatting metadata. Catalogers have the requisite skills for working with any metadata schema, and if they are excluded from metadata work, libraries are wasting a valuable human resource.
    Date
    10. 9.2000 17:38:22
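
A minimal, hypothetical example of the XSLT repurposing described in entry 9: a stylesheet that re-labels an element from a local schema as a Dublin Core element, applied here with Python and lxml. Both the source record and the stylesheet are invented for illustration and are not the authors' materials.

```python
from lxml import etree

# Toy source record in a local schema (invented for illustration).
source = etree.fromstring(b"<record><maintitle>Metadata makeover</maintitle></record>")

# Minimal XSLT 1.0: rename <maintitle> to <dc:title>.
xslt = etree.fromstring(b"""
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:dc="http://purl.org/dc/elements/1.1/">
  <xsl:template match="/record">
    <metadata>
      <dc:title><xsl:value-of select="maintitle"/></dc:title>
    </metadata>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(xslt)
print(str(transform(source)))  # prints the re-labelled record with a dc:title element
```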
  10. Boydston, J.M.K.; Leysen, J.M.: Observations on the catalogers' role in descriptive metadata creation in academic libraries (2006) 0.01
    0.0131395515 = product of:
      0.10511641 = sum of:
        0.10511641 = weight(_text_:supported in 232) [ClassicSimilarity], result of:
          0.10511641 = score(doc=232,freq=2.0), product of:
            0.22949564 = queryWeight, product of:
              5.9223356 = idf(docFreq=321, maxDocs=44218)
              0.03875087 = queryNorm
            0.45803228 = fieldWeight in 232, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.9223356 = idf(docFreq=321, maxDocs=44218)
              0.0546875 = fieldNorm(doc=232)
      0.125 = coord(1/8)
    
    Abstract
    This article examines the case for the participation of catalogers in the creation of descriptive metadata. Metadata creation is an extension of the catalogers' existing skills, abilities, and knowledge. As such, it should be encouraged and supported. However, issues in this process, such as cost, the supply of catalogers, and the need for further training, will also be examined. The authors use examples from the literature and their own experiences in descriptive metadata creation. Suggestions for future research on the topic are included.
  11. Renear, A.H.; Wickett, K.M.; Urban, R.J.; Dubin, D.; Shreeves, S.L.: Collection/item metadata relationships (2008) 0.01
    0.012589371 = product of:
      0.050357483 = sum of:
        0.034606863 = weight(_text_:work in 2623) [ClassicSimilarity], result of:
          0.034606863 = score(doc=2623,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.2433148 = fieldWeight in 2623, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.046875 = fieldNorm(doc=2623)
        0.015750622 = product of:
          0.031501245 = sum of:
            0.031501245 = weight(_text_:22 in 2623) [ClassicSimilarity], result of:
              0.031501245 = score(doc=2623,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.23214069 = fieldWeight in 2623, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2623)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Contemporary retrieval systems, which search across collections, usually ignore collection-level metadata. Alternative approaches, exploiting collection-level information, will require an understanding of the various kinds of relationships that can obtain between collection-level and item-level metadata. This paper outlines the problem and describes a project that is developing a logic-based framework for classifying collection/item metadata relationships. This framework will support (i) metadata specification developers defining metadata elements, (ii) metadata creators describing objects, and (iii) system designers implementing systems that take advantage of collection-level metadata. We present three examples of collection/item metadata relationship categories (attribute/value-propagation, value-propagation, and value-constraint) and show that even in these simple cases a precise formulation requires modal notions in addition to first-order logic. These formulations are related to recent work in information retrieval and ontology evaluation.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
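
As a reading aid for entry 11 only: one possible first-order rendering of the value-propagation category, using simplified predicates of my own (IsGatheredInto, A). The paper's actual formalization is richer, and its point is precisely that such plain first-order statements still need modal qualification.

```latex
% Illustrative value-propagation reading (simplified predicates, not the authors' formalization):
% every value v that a collection c bears for attribute A is also borne by
% each item x gathered into c.
\forall c\,\forall x\,\forall v\;\bigl(\mathrm{IsGatheredInto}(x,c)\wedge A(c,v)\rightarrow A(x,v)\bigr)
```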
  12. Lange, H.R.; Winkler, B.J.: Taming the Internet : metadata, a work in progress (1997) 0.01
    0.01153562 = product of:
      0.09228496 = sum of:
        0.09228496 = weight(_text_:work in 4705) [ClassicSimilarity], result of:
          0.09228496 = score(doc=4705,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.6488395 = fieldWeight in 4705, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.125 = fieldNorm(doc=4705)
      0.125 = coord(1/8)
    
  13. Greenberg, J.: Understanding metadata and metadata schemes (2005) 0.01
    0.011262473 = product of:
      0.09009978 = sum of:
        0.09009978 = weight(_text_:supported in 5725) [ClassicSimilarity], result of:
          0.09009978 = score(doc=5725,freq=2.0), product of:
            0.22949564 = queryWeight, product of:
              5.9223356 = idf(docFreq=321, maxDocs=44218)
              0.03875087 = queryNorm
            0.3925991 = fieldWeight in 5725, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.9223356 = idf(docFreq=321, maxDocs=44218)
              0.046875 = fieldNorm(doc=5725)
      0.125 = coord(1/8)
    
    Abstract
    Although the development and implementation of metadata schemes over the last decade have been extensive, research examining the sum of these activities is limited. This limitation is likely due to the massive scope of the topic. A framework is needed to study the full extent of, and functionalities supported by, metadata schemes. Metadata schemes developed for information resources are analyzed. To begin, I present a review of the definition of metadata, metadata functions, and several metadata typologies. Next, a conceptualization for metadata schemes is presented. The emphasis is on semantic container-like metadata schemes (data structures). The last part of this paper introduces the MODAL (Metadata Objectives and principles, Domains, and Architectural Layout) framework as an approach for studying metadata schemes. The paper concludes with a brief discussion of the value of frameworks for examining metadata schemes, including different types of metadata schemes.
  14. Belém, F.M.; Almeida, J.M.; Gonçalves, M.A.: ¬A survey on tag recommendation methods : a review (2017) 0.01
    0.010491143 = product of:
      0.041964572 = sum of:
        0.028839052 = weight(_text_:work in 3524) [ClassicSimilarity], result of:
          0.028839052 = score(doc=3524,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.20276234 = fieldWeight in 3524, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3524)
        0.01312552 = product of:
          0.02625104 = sum of:
            0.02625104 = weight(_text_:22 in 3524) [ClassicSimilarity], result of:
              0.02625104 = score(doc=3524,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.19345059 = fieldWeight in 3524, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3524)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Tags (keywords freely assigned by users to describe web content) have become highly popular in Web 2.0 applications because of the strong incentives and the ease with which users can create and describe their own content. This increase in tag popularity has led to a vast literature on tag recommendation methods. These methods aim at assisting users in the tagging process, possibly increasing the quality of the generated tags and, consequently, improving the quality of the information retrieval (IR) services that rely on tags as data sources. Despite the numerous and diverse previous studies on tag recommendation, to our knowledge no previous work has summarized and organized them into a single survey article. In this article, we propose a taxonomy for tag recommendation methods, classifying them according to the target of the recommendations, their objectives, the exploited data sources, and the underlying techniques. Moreover, we provide a critical overview of these methods, pointing out their advantages and disadvantages. Finally, we describe the main open challenges related to the field, such as tag ambiguity, cold start, and evaluation issues.
    Date
    16.11.2017 13:30:22
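
The survey in entry 14 classifies tag recommenders by data source and technique; as a toy illustration of one of the simplest techniques (tag co-occurrence), here is a hedged Python sketch with invented data. It is not any specific method from the survey.

```python
from collections import Counter, defaultdict

# Toy tagging history: the tag sets users assigned to three resources (invented data).
history = [
    {"metadata", "dublin-core", "cataloguing"},
    {"metadata", "rdf", "linked-data"},
    {"folksonomy", "tagging", "metadata"},
]

# Count how often tags co-occur on the same resource.
co_occurrence: dict[str, Counter] = defaultdict(Counter)
for tags in history:
    for t in tags:
        for other in tags - {t}:
            co_occurrence[t][other] += 1

def recommend(partial_tags: set[str], k: int = 3) -> list[str]:
    """Suggest the k tags that most often co-occur with the tags entered so far."""
    scores = Counter()
    for t in partial_tags:
        scores.update(co_occurrence[t])
    for t in partial_tags:          # never re-suggest what is already there
        scores.pop(t, None)
    return [tag for tag, _ in scores.most_common(k)]

print(recommend({"metadata"}))      # e.g. ['dublin-core', 'cataloguing', 'rdf']
```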
  15. Intner, S.S.; Lazinger, S.S.; Weihs, J.: Metadata and its impact on libraries (2005) 0.01
    0.010472428 = product of:
      0.041889712 = sum of:
        0.030354092 = weight(_text_:cooperative in 339) [ClassicSimilarity], result of:
          0.030354092 = score(doc=339,freq=2.0), product of:
            0.23071818 = queryWeight, product of:
              5.953884 = idf(docFreq=311, maxDocs=44218)
              0.03875087 = queryNorm
            0.1315635 = fieldWeight in 339, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.953884 = idf(docFreq=311, maxDocs=44218)
              0.015625 = fieldNorm(doc=339)
        0.01153562 = weight(_text_:work in 339) [ClassicSimilarity], result of:
          0.01153562 = score(doc=339,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.081104934 = fieldWeight in 339, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.015625 = fieldNorm(doc=339)
      0.25 = coord(2/8)
    
    Footnote
    Other selected specialized metadata element sets or schemas, such as Government Information Locator Service (GILS), are presented. Attention is brought to the different sets of elements and the need for linking up these elements across metadata schemes from a semantic point of view. It is no surprise, then, that after the presentation of additional specialized sets of metadata from the educational community and the arts sector, attention is turned to the discussion of crosswalks between metadata element sets, or the mapping of one metadata standard to another. Finally, the five appendices detailing elements found in Dublin Core, GILS, ARIADNE versions 3 and 3.1, and Categories for the Description of Works of Art are an excellent addition to this chapter's focus on metadata and communities of practice. Chapters 3-6 provide an up-to-date account of the use of metadata standards in libraries from the point of view of a community of practice. Some of the content standards included in these four chapters are AACR2, Dewey Decimal Classification (DDC), and Library of Congress Subject Classification. In addition, uses of MARC along with planned implementations of the archival community's encoding scheme, EAD, are covered in detail. In a way, content in these chapters can be considered as a refresher course on the history, current state, importance, and usefulness of the above-mentioned standards in libraries. Application of the standards is offered for various types of materials, such as monographic materials, continuing resources, and integrating library metadata into local catalogs and databases. A review of current digital library projects takes place in Chapter 7. While details about these projects tend to become out of date fast, the sections on issues and problems encountered in digital projects and successes and failures deserve any reader's close inspection. A suggested model is important enough to merit a specific mention below, in a short list format, as it encapsulates lessons learned from issues, problems, successes, and failures in digital projects. Before detailing the model, however, the various projects included in Chapter 7 should be mentioned. The projects are: Colorado Digitization Project, Cooperative Online Resource Catalog (an Office of Research project by OCLC, Inc.), California Digital Library, JSTOR, LC's National Digital Library Program and VARIATIONS.
    Chapter 8 discusses issues of archiving and preserving digital materials. The chapter reiterates, "What is the point of all of this if the resources identified and catalogued are not preserved?" (Gorman, 2003, p. 16). Discussion about preservation and related issues is organized in five sections that successively ask why, what, who, how, and how much of the plethora of digital materials should be archived and preserved. These are not easy questions because of media instability and technological obsolescence. Stakeholders in communities with diverse interests compete in terms of which community or representative of a community has an authoritative say in what and how much get archived and preserved. In discussing the above-mentioned questions, the authors once again provide valuable information and lessons from a number of initiatives in Europe, Australia, and other global initiatives. The Draft Charter on the Preservation of the Digital Heritage and the Guidelines for the Preservation of Digital Heritage, both published by UNESCO, are discussed and some of the preservation principles from the Guidelines are listed. The existing diversity in administrative arrangements for these new projects and resources notwithstanding, the impact is undeniable: on content produced for online reserves through work done in digital projects and through the use of metadata, on levels of reference services, and on the ensuing need for different models to train users and staff. In terms of education and training, formal coursework, continuing education, and informal and on-the-job training are just some of the available options. The intensity of resources required for cataloguing digital materials, questions over the quality of digital resources, and the threat the new digital environment poses to the survival of the traditional library are all issues cited by critics and others concerned about balancing the planning and resources allocated to traditional, print-based resources and to newer digital resources. A number of questions are asked as part of the book's conclusions in Chapter 10. Of these questions, one that touches on all of the rest and upon much of the book's content is: What does the future hold for metadata in libraries? Metadata standards are alive and well in many communities of practice, as Chapters 2-6 have demonstrated. The usefulness of metadata continues to be high and innovation in various elements should keep information professionals engaged for decades to come. There is no doubt that metadata have had a tremendous impact on how we organize information for access and in terms of who, how, when, and where contact is made with library services and collections online. Planning and commitment to a diversity of metadata to serve the plethora of needs in communities of practice are paramount for the continued success of many digital projects and for online preservation of our digital heritage.
  16. Vorndran, A.; Grund, S.: Metadata sharing : how to transfer metadata information among work cluster members (2021) 0.01
    0.008651716 = product of:
      0.069213726 = sum of:
        0.069213726 = weight(_text_:work in 721) [ClassicSimilarity], result of:
          0.069213726 = score(doc=721,freq=8.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.4866296 = fieldWeight in 721, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.046875 = fieldNorm(doc=721)
      0.125 = coord(1/8)
    
    Abstract
    The German National Library (DNB) is using a clustering technique to aggregate works from the database Culturegraph. Culturegraph collects bibliographic metadata records from all German Regional Library Networks, the Austrian Library Network, and the DNB. This stock of about 180 million records serves as the basis for work clustering: the attempt to assemble all manifestations of a work in one cluster. The results of this work clustering are not employed in the display of search results, as other similar approaches successfully do, but for transferring metadata elements among the cluster members. This paper describes the transfer of content-descriptive metadata elements such as controlled and uncontrolled index terms, classifications, and links to name records in the German Integrated Authority File (GND). In this way, standardization and cross-linking can be improved and the richness of metadata description can be enhanced.
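
A minimal sketch of the transfer step described in entry 16: pooling content-descriptive elements across the members of one work cluster and writing the union back to every member. The record structure, field names, and data are invented for illustration; the DNB/Culturegraph pipeline itself is not shown.

```python
# One work cluster: several manifestation records of the same work (illustrative data).
cluster = [
    {"id": "rec1", "subjects": {"Metadaten", "Bibliothek"}, "gnd_links": {"gnd/118540238"}},
    {"id": "rec2", "subjects": {"Metadata"},                "gnd_links": set()},
    {"id": "rec3", "subjects": set(),                       "gnd_links": {"gnd/118540238"}},
]

def share_within_cluster(members: list[dict]) -> None:
    """Transfer index terms and authority links among cluster members (union, in place)."""
    pooled_subjects = set().union(*(m["subjects"] for m in members))
    pooled_links = set().union(*(m["gnd_links"] for m in members))
    for m in members:
        m["subjects"] |= pooled_subjects
        m["gnd_links"] |= pooled_links

share_within_cluster(cluster)
print(cluster[1]["subjects"])   # now also carries the terms contributed by rec1
```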
  17. Rice, R.: Applying DC to institutional data repositories (2008) 0.01
    0.008392914 = product of:
      0.033571657 = sum of:
        0.02307124 = weight(_text_:work in 2664) [ClassicSimilarity], result of:
          0.02307124 = score(doc=2664,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.16220987 = fieldWeight in 2664, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03125 = fieldNorm(doc=2664)
        0.010500416 = product of:
          0.021000832 = sum of:
            0.021000832 = weight(_text_:22 in 2664) [ClassicSimilarity], result of:
              0.021000832 = score(doc=2664,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.15476047 = fieldWeight in 2664, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2664)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    DISC-UK DataShare (2007-2009), a project led by the University of Edinburgh and funded by JISC (Joint Information Systems Committee, UK), arises from an existing consortium of academic data support professionals working in the domain of social science datasets (Data Information Specialists Committee-UK). We are working together across four universities with colleagues engaged in managing open access repositories for e-prints. Our project supports 'early adopter' academics who wish to openly share datasets and presents a model for depositing 'orphaned datasets' that are not being deposited in subject-domain data archives/centres. Outputs from the project are intended to help demystify data as complex objects in repositories, and to assist other institutional repository managers in overcoming barriers to incorporating research data. By building on lessons learned from recent JISC-funded data repository projects such as SToRe and GRADE, the project will help realize the vision of the Digital Repositories Roadmap, e.g. the milestone under Data, "Institutions need to invest in research data repositories" (Heery and Powell, 2006). Application of appropriate metadata is an important area of development for the project. Datasets are no different from other digital materials in that they need to be described, not just for discovery but also for preservation and re-use. The GRADE project found that for geo-spatial datasets, Dublin Core metadata (with geo-spatial enhancements such as a bounding box for the 'coverage' property) was sufficient for discovery within a DSpace repository, though more in-depth metadata or documentation was required for re-use after downloading. The project partners are examining other metadata schemas such as the Data Documentation Initiative (DDI) versions 2 and 3, used primarily by social science data archives (Martinez, 2008). Crosswalks from the DDI to qualified Dublin Core are important for describing research datasets at the study level (as opposed to the variable level, which is largely out of scope for this project). DataShare is benefiting from the work of the DRIADE project (application profile development for evolutionary biology) (Carrier et al., 2007), eBank UK (which developed an application profile for crystallography data) and GAP (Geospatial Application Profile, in progress) in defining interoperable qualified Dublin Core metadata elements and their application to datasets for each partner repository. The solution devised at Edinburgh for DSpace will be covered in the poster.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
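
A hedged sketch of the study-level crosswalk idea mentioned in entry 17. The source keys are loosely modelled on DDI Codebook element names and should be checked against the actual DDI 2.x schema; the mapping and sample values are illustrative and are not the DataShare application profile.

```python
# Loosely DDI-flavoured study-level description (keys are simplified, not exact DDI paths).
ddi_study = {
    "titl":      "Example Survey 2008 (teaching dataset)",
    "AuthEnty":  "University of Edinburgh. Data Library",
    "abstract":  "Orphaned survey dataset deposited by an early-adopter researcher.",
    "geogCover": "Scotland",
}

# Crosswalk to qualified Dublin Core (illustrative, study level only).
CROSSWALK = {
    "titl":      "dc:title",
    "AuthEnty":  "dc:creator",
    "abstract":  "dcterms:abstract",
    "geogCover": "dcterms:spatial",
}

dc_record = {CROSSWALK[k]: v for k, v in ddi_study.items() if k in CROSSWALK}
for element, value in dc_record.items():
    print(f"{element}: {value}")
```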
  18. Gursoy, A.; Wickett, K.; Feinberg, M.: Understanding tag functions in a moderated, user-generated metadata ecosystem (2018) 0.01
    0.00806076 = product of:
      0.06448608 = sum of:
        0.06448608 = weight(_text_:work in 3946) [ClassicSimilarity], result of:
          0.06448608 = score(doc=3946,freq=10.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.45339036 = fieldWeight in 3946, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3946)
      0.125 = coord(1/8)
    
    Abstract
    Purpose
    The purpose of this paper is to investigate tag use in a metadata ecosystem that supports a fan work repository, in order to identify functions of tags and explore the system as a co-constructed communicative context.
    Design/methodology/approach
    Using modified techniques from grounded theory (Charmaz, 2007), this paper integrates humanistic and social science methods to identify kinds of tag use in a rich setting.
    Findings
    Three primary roles of tags emerge out of detailed study of the metadata ecosystem: tags can identify elements in the fan work, tags can reflect on how those elements are used or adapted in the fan work, and, finally, tags can express the fan author's sense of her role in the discursive context of the fan work repository. Attending to each of the tag roles shifts focus away from just what tags say to include how they say it.
    Practical implications
    Instead of building metadata systems designed solely for retrieval or description, this research suggests that it may be fruitful to build systems that recognize various metadata functions and allow for expressivity. This research also suggests that metadata previously considered unusable in systems may reflect the participants' sense of the system and their role within it.
    Originality/value
    In addition to accommodating a wider range of tag functions, this research implies consideration of metadata ecosystems, where different kinds of tags do different things and work together to create a multifaceted artifact.
  19. Hert, C.A.; Denn, S.O.; Gillman, D.W.; Oh, J.S.; Pattuelli, M.C.; Hernandez, N.: Investigating and modeling metadata use to support information architecture development in the statistical knowledge network (2007) 0.01
    0.007492605 = product of:
      0.05994084 = sum of:
        0.05994084 = weight(_text_:work in 422) [ClassicSimilarity], result of:
          0.05994084 = score(doc=422,freq=6.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.4214336 = fieldWeight in 422, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.046875 = fieldNorm(doc=422)
      0.125 = coord(1/8)
    
    Abstract
    Metadata and an appropriate metadata model are nontrivial components of information architecture conceptualization and implementation, particularly when disparate and dispersed systems are integrated. Metadata availability can enhance retrieval processes, improve information organization and navigation, and support management of digital objects. To support these activities efficiently, metadata need to be modeled appropriately for the tasks. The authors' work focuses on how to understand and model metadata requirements to support the work of end users of an integrative statistical knowledge network (SKN). They report on a series of user studies. These studies provide an understanding of metadata elements necessary for a variety of user-oriented tasks, related business rules associated with the use of these elements, and their relationship to other perspectives on metadata model development. This work demonstrates the importance of the user perspective in this type of design activity and provides a set of strategies by which the results of user studies can be systematically utilized to support that design.
  20. Kirschenbaum, M.: Documenting digital images : textual meta-data at the Blake Archive (1998) 0.01
    0.007137301 = product of:
      0.057098407 = sum of:
        0.057098407 = weight(_text_:work in 3287) [ClassicSimilarity], result of:
          0.057098407 = score(doc=3287,freq=4.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.40144894 = fieldWeight in 3287, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3287)
      0.125 = coord(1/8)
    
    Abstract
    Describes the work undertaken by the William Blake Archive at the University of Virginia to document the metadata tools for handling digital images of the illustrations accompanying Blake's work. Images are encoded in both JPEG and TIFF formats. Image Documentation (ID) records are slotted into that portion of the JPEG file reserved for textual metadata. Because the textual content of the ID record now becomes part of the image file itself, the documentary metadata travels with the image even if it is copied from one file to another. The metadata is invisible when viewing the image but becomes accessible to users via the 'info' button on the control panel of the Java applet.
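
A generic illustration of the mechanism behind entry 20: stashing text inside a JPEG so it travels with the image file. The sketch writes the text into a COM (comment) segment right after the SOI marker; the Blake Archive's actual ID-record layout within the JPEG metadata area is not reproduced here.

```python
def embed_comment(jpeg_bytes: bytes, text: str) -> bytes:
    """Insert a COM (0xFFFE) segment after the SOI marker so the text stays
    inside the image file itself (generic JPEG mechanism, not the Blake
    Archive's specific ID-record format)."""
    payload = text.encode("utf-8")
    if len(payload) + 2 > 0xFFFF:
        raise ValueError("comment too long for a single COM segment")
    if jpeg_bytes[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG: missing SOI marker")
    # Segment = marker + 2-byte big-endian length (length covers itself plus the payload).
    segment = b"\xff\xfe" + (len(payload) + 2).to_bytes(2, "big") + payload
    return jpeg_bytes[:2] + segment + jpeg_bytes[2:]

# Usage sketch (file names are hypothetical):
# with open("plate.jpg", "rb") as f: data = f.read()
# with open("plate_with_id.jpg", "wb") as f: f.write(embed_comment(data, "ID record: ..."))
```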

Authors

Years

Languages

  • e 138
  • d 10
  • sp 1

Types

  • a 136
  • el 15
  • m 8
  • s 8
  • b 2