Search (195 results, page 1 of 10)

  • theme_ss:"Metadaten"
  1. Kopácsi, S. et al.: Development of a classification server to support metadata harmonization in a long term preservation system (2016) 0.07
    0.072361186 = product of:
      0.14472237 = sum of:
        0.11292135 = weight(_text_:term in 3280) [ClassicSimilarity], result of:
          0.11292135 = score(doc=3280,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.5155283 = fieldWeight in 3280, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.078125 = fieldNorm(doc=3280)
        0.031801023 = product of:
          0.063602045 = sum of:
            0.063602045 = weight(_text_:22 in 3280) [ClassicSimilarity], result of:
              0.063602045 = score(doc=3280,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.38690117 = fieldWeight in 3280, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3280)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
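The nested score breakdowns in these entries are Lucene explain trees for the ClassicSimilarity (TF-IDF) model. As a minimal sketch, the first entry's numbers can be reproduced from ClassicSimilarity's standard formulas: tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), per-term score = queryWeight × fieldWeight, with the sum scaled by the coord factor.

```python
import math

# Lucene ClassicSimilarity building blocks, as reflected in the explain
# trees above: tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)).
def idf(doc_freq: int, max_docs: int) -> float:
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def tf(freq: float) -> float:
    return math.sqrt(freq)

# Values taken from the first explain tree ('term' in doc 3280):
QUERY_NORM = 0.04694356
FIELD_NORM = 0.078125

term_idf = idf(1130, 44218)                     # ~ 4.66603
query_weight = term_idf * QUERY_NORM            # ~ 0.21904005
field_weight = tf(2.0) * term_idf * FIELD_NORM  # tf * idf * fieldNorm ~ 0.5155283
term_score = query_weight * field_weight        # ~ 0.11292135

# The entry score adds the '22' term's contribution and applies coord(2/4):
entry_score = (term_score + 0.031801023) * 0.5  # ~ 0.072361186
print(term_score, entry_score)
```

The same arithmetic applies to every `weight(...)` node in the remaining entries; only `freq`, the per-term `idf`, and `fieldNorm` change from node to node.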
  2. Chilvers, A.: ¬The super-metadata framework for managing long-term access to digital data objects : a possible way forward with specific reference to the UK (2002) 0.06
    0.059403453 = product of:
      0.118806906 = sum of:
        0.005885557 = product of:
          0.023542227 = sum of:
            0.023542227 = weight(_text_:based in 4468) [ClassicSimilarity], result of:
              0.023542227 = score(doc=4468,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.16644597 = fieldWeight in 4468, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4468)
          0.25 = coord(1/4)
        0.11292135 = weight(_text_:term in 4468) [ClassicSimilarity], result of:
          0.11292135 = score(doc=4468,freq=8.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.5155283 = fieldWeight in 4468, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4468)
      0.5 = coord(2/4)
    
    Abstract
    This paper examines the reasons why existing management practices designed to cope with paper-based data objects appear to be inadequate for managing digital data objects (DDOs). The research described suggests the need for a reassessment of the way we view long-term access to DDOs. There is a need for a shift in emphasis which embraces the fluid nature of such objects and addresses the multifaceted issues involved in achieving such access. The findings of this research suggest that a conceptual framework needs to be developed which addresses a range of elements. The research achieved this by examining the issues facing stakeholders involved in this field; examining the need for and structure of a new generic conceptual framework, the super-metadata framework; identifying and discussing the issues central to the development of such a framework; and justifying its feasibility through the creation of an interactive cost model and stakeholder evaluation. The wider conceptual justification for such a framework is discussed, including an examination of the "public good" argument for the long-term retention of DDOs and the importance of selection in the management process. The paper concludes by considering the benefits to practitioners and the role they might play in testing the feasibility of such a framework. It also suggests possible avenues researchers may wish to consider in developing the management of this field further. (Note: This paper is derived from the author's Loughborough University PhD thesis, "Managing long-term access to digital data objects: a metadata approach", written while holding a research studentship funded by the Department of Information Science.)
  3. Gorman, M.: Metadata or cataloguing? : a false choice (1999) 0.06
    0.057888947 = product of:
      0.115777895 = sum of:
        0.09033708 = weight(_text_:term in 6095) [ClassicSimilarity], result of:
          0.09033708 = score(doc=6095,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.41242266 = fieldWeight in 6095, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0625 = fieldNorm(doc=6095)
        0.025440816 = product of:
          0.05088163 = sum of:
            0.05088163 = weight(_text_:22 in 6095) [ClassicSimilarity], result of:
              0.05088163 = score(doc=6095,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.30952093 = fieldWeight in 6095, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6095)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Libraries, their collections, and bibliographic control are essential components of the provision of access to recorded knowledge. Cataloging is a primary method of bibliographic control. Full or traditional cataloging is very expensive, but relying on keyword searching is inadequate. Alternatives for a solution to cataloging needs for electronic resources, including the use of metadata and the Dublin Core, are examined. Many questions exist regarding the long-term future of today's electronic documents. Recommendations are made for preserving recorded knowledge and information in electronic resources for future generations
    Source
    Journal of Internet cataloging. 2(1999) no.1, S.5-22
  4. Mehler, A.; Waltinger, U.: Automatic enrichment of metadata (2009) 0.05
    0.05182729 = product of:
      0.10365458 = sum of:
        0.013317495 = product of:
          0.05326998 = sum of:
            0.05326998 = weight(_text_:based in 4840) [ClassicSimilarity], result of:
              0.05326998 = score(doc=4840,freq=4.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.37662423 = fieldWeight in 4840, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4840)
          0.25 = coord(1/4)
        0.09033708 = weight(_text_:term in 4840) [ClassicSimilarity], result of:
          0.09033708 = score(doc=4840,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.41242266 = fieldWeight in 4840, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0625 = fieldNorm(doc=4840)
      0.5 = coord(2/4)
    
    Abstract
    In this talk we present a retrieval model based on social ontologies. More specifically, we utilize the Wikipedia category system in order to perform semantic searches. That is, textual input is used to build queries that retrieve documents which do not necessarily contain any query term but are semantically related to the input text by virtue of their content. We present a desktop application which provides this search facility in a web-based environment: the so-called eHumanities Desktop.
  5. Stvilia, B.; Gasser, L.: Value-based metadata quality assessment (2008) 0.05
    0.051374927 = product of:
      0.102749854 = sum of:
        0.013317495 = product of:
          0.05326998 = sum of:
            0.05326998 = weight(_text_:based in 252) [ClassicSimilarity], result of:
              0.05326998 = score(doc=252,freq=4.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.37662423 = fieldWeight in 252, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0625 = fieldNorm(doc=252)
          0.25 = coord(1/4)
        0.08943236 = product of:
          0.17886472 = sum of:
            0.17886472 = weight(_text_:assessment in 252) [ClassicSimilarity], result of:
              0.17886472 = score(doc=252,freq=4.0), product of:
                0.25917634 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.04694356 = queryNorm
                0.6901275 = fieldWeight in 252, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0625 = fieldNorm(doc=252)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This article proposes a method that allows a value-based assessment of metadata quality and construction of a baseline quality model. The method is illustrated on a large-scale, aggregated collection of simple Dublin Core metadata records. An analysis of the collection suggests that metadata providers and end users may have different value structures for the same metadata. To promote better use of the metadata collection, value models for metadata in the collection should be made transparent to end users and end users should be allowed to participate in content creation and quality control processes.
  6. Cwiok, J.: ¬The defining element : a discussion of the creator element within metadata schemas (2005) 0.04
    0.038393825 = product of:
      0.07678765 = sum of:
        0.04516854 = weight(_text_:term in 5732) [ClassicSimilarity], result of:
          0.04516854 = score(doc=5732,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.20621133 = fieldWeight in 5732, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.03125 = fieldNorm(doc=5732)
        0.031619113 = product of:
          0.063238226 = sum of:
            0.063238226 = weight(_text_:assessment in 5732) [ClassicSimilarity], result of:
              0.063238226 = score(doc=5732,freq=2.0), product of:
                0.25917634 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.04694356 = queryNorm
                0.2439969 = fieldWeight in 5732, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5732)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The speed with which change takes place is startling and has left the information community with little time to consider how the development of electronic resources and the metadata schemas created to describe them affect how we view a work and its components. In terms of the attribution of authorship in the context of electronic works, this is a salient point. How does one determine authorship of a complex electronic resource, which is the culmination of the work of a myriad of entities? How does one determine authorship when the content of the electronic resource may change at any moment without warning? What is the semantic content of the element that denotes authorship or responsibility for an electronic resource, and how does the term used determine the element's meaning? The conceptual difficulty in the definition of the Creator element is deciphering what exactly the metadata schema should be describing. We also need to establish what purpose the element is intended to serve. In essence, we are at a crossroads. It is clear that once a work is digitized it exists in a significantly different medium, but how do we provide access to it? It is necessary to critically assess the accuracy of digital surrogates and to note that webmasters have a significant amount of intellectual responsibility invested in the sites they create. The solution to the problem of the Creator element may lie in moving from the concepts of "authorship" and "origination" to a concept of intellectual responsibility. Perhaps the problematic nature of the Creator element allows us to move forward in our assessment and treatment of knowledge. One solution may be to standardize the definitions within various element sets. As the semantic web continues to grow and librarians strive to catalog electronic resources, the establishment of standard definitions for elements is becoming more relevant and important.
  7. Yang, T.-H.; Hsieh, Y.-L.; Liu, S.-H.; Chang, Y.-C.; Hsu, W.-L.: ¬A flexible template generation and matching method with applications for publication reference metadata extraction (2021) 0.03
    0.03481059 = product of:
      0.06962118 = sum of:
        0.013160506 = product of:
          0.052642025 = sum of:
            0.052642025 = weight(_text_:based in 63) [ClassicSimilarity], result of:
              0.052642025 = score(doc=63,freq=10.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.37218451 = fieldWeight in 63, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=63)
          0.25 = coord(1/4)
        0.056460675 = weight(_text_:term in 63) [ClassicSimilarity], result of:
          0.056460675 = score(doc=63,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.25776416 = fieldWeight in 63, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=63)
      0.5 = coord(2/4)
    
    Abstract
    Conventional rule-based approaches use exact template matching to capture linguistic information and necessarily need to enumerate all variations. We propose a novel flexible template generation and matching scheme called the principle-based approach (PBA) based on sequence alignment, and employ it for reference metadata extraction (RME) to demonstrate its effectiveness. The main contributions of this research are threefold. First, we propose an automatic template generation that can capture prominent patterns using the dominating set algorithm. Second, we devise an alignment-based template-matching technique that uses a logistic regression model, which makes it more general and flexible than pure rule-based approaches. Last, we apply PBA to RME on extensive cross-domain corpora and demonstrate its robustness and generality. Experiments reveal that the same set of templates produced by the PBA framework not only deliver consistent performance on various unseen domains, but also surpass hand-crafted knowledge (templates). We use four independent journal style test sets and one conference style test set in the experiments. When compared to renowned machine learning methods, such as conditional random fields (CRF), as well as recent deep learning methods (i.e., bi-directional long short-term memory with a CRF layer, Bi-LSTM-CRF), PBA has the best performance for all datasets.
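The principle-based approach above builds templates by sequence alignment. The paper's actual algorithm (dominating-set template generation plus logistic-regression matching) is not reproduced here; as a loose, hypothetical illustration, aligning two tokenized reference strings and collapsing the differing spans into wildcard slots yields a crude template:

```python
from difflib import SequenceMatcher

def crude_template(ref_a, ref_b):
    """Align two tokenized references; keep shared tokens and
    collapse each differing span into a wildcard slot."""
    a, b = ref_a.split(), ref_b.split()
    out = []
    for op, i1, i2, j1, j2 in SequenceMatcher(None, a, b).get_opcodes():
        if op == "equal":
            out.extend(a[i1:i2])
        else:
            out.append("<SLOT>")  # variable field, e.g. author or year
    return out

t = crude_template(
    "Smith , J. ( 1999 ) . Metadata or cataloguing ?",
    "Jones , A. ( 2002 ) . The super-metadata framework .",
)
print(t)
```

Shared punctuation survives as template anchors while author names, years, and titles fall into slots; the paper's approach generalizes this idea across many aligned references and scores candidate matches with a trained model rather than exact rules.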
  8. Margaritopoulos, T.; Margaritopoulos, M.; Mavridis, I.; Manitsaris, A.: ¬A conceptual framework for metadata quality assessment (2008) 0.03
    0.03325464 = product of:
      0.13301855 = sum of:
        0.13301855 = sum of:
          0.094857335 = weight(_text_:assessment in 2643) [ClassicSimilarity], result of:
            0.094857335 = score(doc=2643,freq=2.0), product of:
              0.25917634 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.04694356 = queryNorm
              0.36599535 = fieldWeight in 2643, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.046875 = fieldNorm(doc=2643)
          0.038161222 = weight(_text_:22 in 2643) [ClassicSimilarity], result of:
            0.038161222 = score(doc=2643,freq=2.0), product of:
              0.16438834 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04694356 = queryNorm
              0.23214069 = fieldWeight in 2643, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2643)
      0.25 = coord(1/4)
    
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  9. Metadata and semantics research : 8th Research Conference, MTSR 2014, Karlsruhe, Germany, November 27-29, 2014, Proceedings (2014) 0.03
    0.032109328 = product of:
      0.064218655 = sum of:
        0.008323434 = product of:
          0.033293735 = sum of:
            0.033293735 = weight(_text_:based in 2192) [ClassicSimilarity], result of:
              0.033293735 = score(doc=2192,freq=4.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.23539014 = fieldWeight in 2192, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2192)
          0.25 = coord(1/4)
        0.055895224 = product of:
          0.11179045 = sum of:
            0.11179045 = weight(_text_:assessment in 2192) [ClassicSimilarity], result of:
              0.11179045 = score(doc=2192,freq=4.0), product of:
                0.25917634 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.04694356 = queryNorm
                0.43132967 = fieldWeight in 2192, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2192)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This book constitutes the refereed proceedings of the 8th Metadata and Semantics Research Conference, MTSR 2014, held in Karlsruhe, Germany, in November 2014. The 23 full papers and 9 short papers presented were carefully reviewed and selected from 57 submissions. The papers are organized in several sessions and tracks. They cover the following topics: metadata and linked data: tools and models; (meta) data quality assessment and curation; semantic interoperability, ontology-based data access and representation; big data and digital libraries in health, science and technology; metadata and semantics for open repositories, research information systems and data infrastructure; metadata and semantics for cultural collections and applications; semantics for agriculture, food and environment.
    Content
    Metadata and linked data.- Tools and models.- (Meta)data quality assessment and curation.- Semantic interoperability, ontology-based data access and representation.- Big data and digital libraries in health, science and technology.- Metadata and semantics for open repositories, research information systems and data infrastructure.- Metadata and semantics for cultural collections and applications.- Semantics for agriculture, food and environment.
  10. Stiller, J.; Olensky, M.; Petras, V.: ¬A framework for the evaluation of automatic metadata enrichments (2014) 0.03
    0.031474918 = product of:
      0.12589967 = sum of:
        0.12589967 = weight(_text_:frequency in 1587) [ClassicSimilarity], result of:
          0.12589967 = score(doc=1587,freq=2.0), product of:
            0.27643865 = queryWeight, product of:
              5.888745 = idf(docFreq=332, maxDocs=44218)
              0.04694356 = queryNorm
            0.45543438 = fieldWeight in 1587, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.888745 = idf(docFreq=332, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1587)
      0.25 = coord(1/4)
    
    Abstract
    Automatic enrichment of collections connects data to vocabularies, which supports the contextualization of content and adds searchable text to metadata. The paper introduces a framework of four dimensions (frequency, coverage, relevance and error rate) that measure both the suitability of the enrichment for the object and the enrichments' contribution to search success. To verify the framework, it is applied to the evaluation of automatic enrichments in the digital library Europeana. The analysis of 100 result sets and their corresponding queries (1,121 documents total) shows the framework is a valuable tool for guiding enrichments and determining the value of enrichment efforts.
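The four dimensions can be sketched as simple ratios over per-object enrichment tallies. The counts below are invented for illustration, and the exact operationalizations in the paper may differ:

```python
# Hypothetical per-object tallies:
# object_id -> (num_enrichments, num_judged_relevant, num_erroneous)
tallies = {
    "doc1": (3, 2, 0),
    "doc2": (1, 0, 1),
    "doc3": (0, 0, 0),
    "doc4": (2, 2, 0),
}

n_objects = len(tallies)
total = sum(t[0] for t in tallies.values())  # all enrichments in the set

frequency = total / n_objects                                    # enrichments per object
coverage = sum(t[0] > 0 for t in tallies.values()) / n_objects   # objects reached
relevance = sum(t[1] for t in tallies.values()) / total          # judged-relevant share
error_rate = sum(t[2] for t in tallies.values()) / total         # faulty share
print(frequency, coverage, relevance, error_rate)
```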
  11. Cordeiro, M.I.: From library authority control to network authoritative metadata sources (2003) 0.03
    0.031173116 = product of:
      0.06234623 = sum of:
        0.005885557 = product of:
          0.023542227 = sum of:
            0.023542227 = weight(_text_:based in 3083) [ClassicSimilarity], result of:
              0.023542227 = score(doc=3083,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.16644597 = fieldWeight in 3083, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3083)
          0.25 = coord(1/4)
        0.056460675 = weight(_text_:term in 3083) [ClassicSimilarity], result of:
          0.056460675 = score(doc=3083,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.25776416 = fieldWeight in 3083, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3083)
      0.5 = coord(2/4)
    
    Abstract
    Authority control is a quite recent term in the long history of cataloguing, although the underlying principle is among the very early principles of bibliographic control. Bibliographic control is a Field in transformation by the rapid expansion of the WWW, which has brought new problems to infonnation discovery and retrieval, creating new challenges and requirements in information management. In a comprehensive approach, authority control is presented as one of the most promising library activities in this respect. The evolution of work methods and standards for the sharing of authority files is reviewed, showing the imbalance in developments and practical achievements between name and subject authority, in an international perspective. The need to improve the network availability and usability of authority information assets in more effective and holistic ways is underlyned; and a new philosophy and scope is proposed for library authority work, based an the primacy of the linking function of authority data, and by expanding the finding, relating and informing functions of authority records. Some of these aspects are being addressed in several projects dealing with knowledge organization systems, notably to cope with multilingual needs and to enable semantic interoperability among different systems. Library practice itself should evolve in the same direction, thereby providing practical experience to inform new or improved principles and standards for authority work, while contributing to enhance local information services and to promote their involvement in the WWW environment.
  12. Li, C.; Sugimoto, S.: Provenance description of metadata application profiles for long-term maintenance of metadata schemas (2018) 0.03
    0.031173116 = product of:
      0.06234623 = sum of:
        0.005885557 = product of:
          0.023542227 = sum of:
            0.023542227 = weight(_text_:based in 4048) [ClassicSimilarity], result of:
              0.023542227 = score(doc=4048,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.16644597 = fieldWeight in 4048, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4048)
          0.25 = coord(1/4)
        0.056460675 = weight(_text_:term in 4048) [ClassicSimilarity], result of:
          0.056460675 = score(doc=4048,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.25776416 = fieldWeight in 4048, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4048)
      0.5 = coord(2/4)
    
    Abstract
    Purpose
    Provenance information is crucial for the consistent maintenance of metadata schemas over time. The purpose of this paper is to propose a provenance model named DSP-PROV to keep track of structural changes of metadata schemas.
    Design/methodology/approach
    The DSP-PROV model is developed by applying the general provenance description standard PROV of the World Wide Web Consortium to the Dublin Core Application Profile. The Metadata Application Profile of the Digital Public Library of America is selected as a case study for applying the DSP-PROV model. Finally, the paper evaluates the proposed model by comparing formal provenance description in DSP-PROV with semi-formal change-log description in English.
    Findings
    Formal provenance description in the DSP-PROV model has advantages over semi-formal provenance description in English for keeping metadata schemas consistent over time.
    Research limitations/implications
    The DSP-PROV model is applicable to tracking the structural changes of a metadata schema over time. Provenance description of other features of a metadata schema, such as vocabulary and encoding syntax, is not covered.
    Originality/value
    This study proposes a simple model for provenance description of the structural features of metadata schemas, based on a few standards widely accepted on the Web, and shows the advantage of the proposed model over conventional semi-formal provenance description.
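The abstract gives no DSP-PROV details, but the general idea of a PROV-style record for a schema revision can be sketched roughly as follows (the keys and names are illustrative inventions, not the DSP-PROV vocabulary):

```python
from datetime import datetime, timezone

# Illustrative PROV-flavoured record: a revision Activity derives
# schema v2 from schema v1 (not the actual DSP-PROV terms).
prov_record = {
    "entity": {"schema:v2": {"prov:type": "ApplicationProfile"}},
    "activity": {"revise-001": {
        "prov:startedAtTime": datetime(2018, 4, 1, tzinfo=timezone.utc).isoformat(),
        "description": "Added a rights statement to the description template",
    }},
    "wasDerivedFrom": {"schema:v2": "schema:v1"},   # structural lineage
    "wasGeneratedBy": {"schema:v2": "revise-001"},  # which activity produced v2
}
print(prov_record["wasDerivedFrom"])
```

A chain of such records is what lets a later maintainer reconstruct how a profile reached its current structure, which is the consistency argument the paper makes for formal provenance.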
  13. Furner, J.: Definitions of "metadata" : a brief survey of international standards (2020) 0.02
    0.023954237 = product of:
      0.09581695 = sum of:
        0.09581695 = weight(_text_:term in 5912) [ClassicSimilarity], result of:
          0.09581695 = score(doc=5912,freq=4.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.4374403 = fieldWeight in 5912, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.046875 = fieldNorm(doc=5912)
      0.25 = coord(1/4)
    
    Abstract
    A search on the term "metadata" in the International Organization for Standardization's Online Browsing Platform (ISO OBP) reveals that there are 96 separate ISO standards that provide definitions of the term. Between them, these standards supply 46 different definitions: a lack of standardization that we might not have expected, given the context. In fact, if we make creative use of Simpson's index of concentration (originally devised as a measure of ecological diversity) to measure the degree of standardization of definition in this case, we arrive at a value of 0.05, on a scale of zero to one. It is suggested, however, that the situation is not as problematic as it might seem: that low cross-domain levels of standardization of definition should not be cause for concern.
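Simpson's index of concentration is the probability that two randomly drawn items fall into the same category, λ = Σ pᵢ². The per-definition counts behind the paper's 0.05 are not given here, so that value is not reproduced; a sketch with the boundary cases:

```python
from collections import Counter

def simpson_concentration(labels):
    """Simpson's index: probability that two randomly drawn items
    share a category (1.0 = fully standardized)."""
    counts = Counter(labels)
    n = sum(counts.values())
    return sum((c / n) ** 2 for c in counts.values())

# One definition shared by all 96 standards would give 1.0;
# an even spread over 46 distinct definitions gives the minimum, 1/46 ~ 0.022.
print(simpson_concentration(["def-1"] * 96))
print(simpson_concentration([f"def-{i}" for i in range(46)]))
```

The reported 0.05 sits near that minimum, which is the quantitative form of the paper's point that the 96 standards barely agree on what "metadata" means.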
  14. Philips, J.T.: Metadata - information about electronic records (1995) 0.02
    0.02258427 = product of:
      0.09033708 = sum of:
        0.09033708 = weight(_text_:term in 4556) [ClassicSimilarity], result of:
          0.09033708 = score(doc=4556,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.41242266 = fieldWeight in 4556, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0625 = fieldNorm(doc=4556)
      0.25 = coord(1/4)
    
    Abstract
    Metadata is a term used to describe the information required to document the characteristics of information contained within databases. Describes the elements that make up metadata. A number of software tools exist to help apply document management principles to electronic records, but they have, so far, been inadequately applied. Describes 2 initiatives currently under way to develop software to automate many records management functions. Understanding document management principles as applied to electronic records is vital to records managers.
  15. Dempsey, L.: Metadata (1997) 0.02
    
    Abstract
    The term 'metadata' is becoming commonly used to refer to a variety of types of data which describe other data. A familiar example is bibliographic data, which describes a book or a serial article. Suggests that a routine definition might be: 'metadata is data which describes attributes of a resource'. Gives some examples before looking at the Dublin Core, a simple response to the challenge of describing a wide range of network resources
  16. Dempsey, L.: Metadata (1997) 0.02
    
    Abstract
    The term 'metadata' is becoming commonly used to refer to a variety of types of data which describe other data. A familiar example is bibliographic data, which describes a book or a serial article. Suggests that a routine definition might be: 'Metadata is data which describes attributes of a resource'. Provides examples to expand on this before looking at the Dublin Core, a simple set of elements for describing a wide range of network resources
  17. Wool, G.: ¬A meditation on metadata (1998) 0.02
    
    Abstract
    Metadata, or 'data about data', have been created and used for centuries in the print environment, though the term has its origins in the world of electronic information management. Presents the close relationship between traditional library cataloguing and the documentation of electronic data files (known as 'metadata'), showing that cataloguing is changing under the influence of information technology, but also that metadata provision is essentially an extension of traditional cataloguing processes
  18. Skare, R.: Paratext (2020) 0.02
    
    Abstract
    This article presents Gérard Genette's concept of the paratext by defining the term and by describing its characteristics. The use of the concept in disciplines other than literary studies and for media other than printed books is discussed. The last section shows the relevance of the concept for library and information science in general and for knowledge organization, in which paratext in particular is connected to the concept "metadata."
  19. Hunter, J.: MetaNet - a metadata term thesaurus to enable semantic interoperability between metadata domains (2001) 0.02
    
    Abstract
    Metadata interoperability is a fundamental requirement for access to information within networked knowledge organization systems. The Harmony international digital library project [1] has developed a common underlying data model (the ABC model) to enable the scalable mapping of metadata descriptions across domains and media types. The ABC model [2] provides a set of basic building blocks for metadata modeling and recognizes the importance of 'events' to describe unambiguously metadata for objects with a complex history. To test and evaluate the interoperability capabilities of this model, we applied it to some real multimedia examples and analysed the results of mapping from the ABC model to various different metadata domains using XSLT [3]. This work revealed serious limitations in the ability of XSLT to support flexible dynamic semantic mapping. To overcome this, we developed MetaNet [4], a metadata term thesaurus which provides the additional semantic knowledge that is non-existent within declarative XML-encoded metadata descriptions. This paper describes MetaNet, its RDF Schema [5] representation and a hybrid mapping approach which combines the structural and syntactic mapping capabilities of XSLT with the semantic knowledge of MetaNet, to enable flexible and dynamic mapping among metadata standards.
  20. Rossiter, B.N.; Sillitoe, T.J.; Heather, M.A.: Database support for very large hypertexts (1990) 0.02
    
    Abstract
    Current hypertext systems have been widely and effectively used on relatively small data volumes. Explores the potential of database technology for aiding the implementation of hypertext systems holding very large amounts of complex data. Databases meet many requirements of the hypermedium: persistent data management, large volumes, data modelling, multi-level architecture with abstractions and views, metadata integrated with operational data, short-term transaction processing and high-level end-user languages for searching and updating data. Describes a system implementing the storage, retrieval and recall of trails through hypertext comprising textual complex objects (to illustrate the potential for the use of databases). Discusses weaknesses in current database systems for handling the complex modelling required

Languages

  • e 181
  • d 10
  • f 1
  • pt 1
  • sp 1

Types

  • a 174
  • el 21
  • m 11
  • s 9
  • b 2
  • x 2