Search (241 results, page 1 of 13)

  • theme_ss:"Metadaten"
  1. Kent, R.E.: Organizing conceptual knowledge online : metadata interoperability and faceted classification (1998) 0.07
    Score: 0.0700 = coord(2/4) × (data 0.0362 + processing 0.0593 + "22" 0.0444); per-term weight = queryWeight × fieldWeight (see the sketch after this entry)
    
    Abstract
    Conceptual Knowledge Markup Language (CKML), an application of XML, is a new standard being promoted for the specification of online conceptual knowledge (Kent and Shrivastava, 1998). CKML follows the philosophy of Conceptual Knowledge Processing (Wille, 1982), a principled approach to knowledge representation and data analysis, which advocates the development of methodologies and techniques to support people in their rational thinking, judgement and actions. CKML was developed and is being used in the WAVE networked information discovery and retrieval system (Kent and Neuss, 1994) as a standard for the specification of conceptual knowledge.
    Date
    30.12.2001 16:22:41
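
    Every score line in this list follows Lucene's ClassicSimilarity (tf-idf) pattern: each matching term contributes queryWeight (idf × queryNorm) times fieldWeight (tf × idf × fieldNorm), and the sum is scaled by a coordination factor for the fraction of query clauses that matched. Below is a minimal Python sketch of that arithmetic, reproducing the numbers for result 1; it illustrates the formula and is not the search engine's own code:

        import math

        # Lucene ClassicSimilarity, as shown in the score breakdowns above.
        def idf(doc_freq: int, max_docs: int) -> float:
            # idf = 1 + ln(maxDocs / (docFreq + 1))
            return 1.0 + math.log(max_docs / (doc_freq + 1))

        def term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
            term_idf = idf(doc_freq, max_docs)
            query_weight = term_idf * query_norm       # query-side weight
            tf = math.sqrt(freq)                       # tf = sqrt(termFreq)
            field_weight = tf * term_idf * field_norm  # document-side weight
            return query_weight * field_weight

        # The "data" term in result 1 (doc 57): freq=2, docFreq=5088, maxDocs=44218
        w_data = term_weight(2.0, 5088, 44218,
                             query_norm=0.046827413, field_norm=0.0546875)
        print(round(w_data, 6))  # ~0.036211

        # Document score = coord(matching clauses / total clauses) * sum of weights
        score = (2 / 4) * (0.036211025 + 0.05934933 + 0.044411276)
        print(round(score, 6))  # ~0.069986
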
  2. Carvalho, J.R. de; Cordeiro, M.I.; Lopes, A.; Vieira, M.: Meta-information about MARC : an XML framework for validation, explanation and help systems (2004) 0.07
    Score: 0.0700 = coord(2/4) × (data 0.0362 + processing 0.0593 + "22" 0.0444)
    
    Abstract
    This article proposes a schema for meta-information about MARC that can express at a fairly comprehensive level the syntactic and semantic aspects of MARC formats in XML, including not only rules but also all texts and examples that are conveyed by MARC documentation. It can be thought of as an XML version of the MARC or UNIMARC manuals, for both machine and human usage. The article explains how such a schema can be the central piece of a more complete framework, to be used in conjunction with "slim" record formats, providing a rich environment for the automated processing of bibliographic data.
    Source
    Library hi tech. 22(2004) no.2, S.131-137
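
    As a concrete illustration of result 2, the sketch below shows machine-actionable meta-information about a single MARC field, carrying validation rules, explanatory text and an example together so that software can both validate records and generate help screens. The structure and names are invented for illustration; they are not the authors' actual schema (which is expressed in XML):

        # Hypothetical meta-information about MARC field 245 (Title Statement).
        FIELD_245 = {
            "tag": "245",
            "name": "Title Statement",
            "repeatable": False,
            "indicators": {"1": ["0", "1"], "2": [str(d) for d in range(10)]},
            "subfields": {"a": "Title", "b": "Remainder of title"},
            "explanation": "Title proper and statement of responsibility.",
            "example": "245 10 $a Meta-information about MARC",
        }

        def validate(tag: str, indicator1: str, spec=FIELD_245) -> bool:
            """Check one field occurrence against the meta-information."""
            return tag == spec["tag"] and indicator1 in spec["indicators"]["1"]

        print(validate("245", "1"))  # True
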
  3. Rossiter, B.N.; Sillitoe, T.J.; Heather, M.A.: Database support for very large hypertexts (1990) 0.06
    Score: 0.0627 = coord(2/4) × (data 0.0958 + coord(1/2) × processing 0.0593)
    
    Abstract
    Current hypertext systems have been widely and effectively used on relatively small data volumes. Explores the potential of database technology for aiding the implementation of hypertext systems holding very large amounts of complex data. Databases meet many requirements of the hypermedium: persistent data management, large volumes, data modelling, multi-level architecture with abstractions and views, metadata integrated with operational data, short-term transaction processing and high-level end-user languages for searching and updating data. Describes a system implementing the storage, retrieval and recall of trails through hypertext comprising textual complex objects (to illustrate the potential for the use of databases). Discusses weaknesses in current database systems for handling the complex modelling required.
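
    A toy version of the trail storage that result 3 describes, using a relational schema for nodes, links and ordered trails; table and column names are invented, not the authors' design:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
        CREATE TABLE node(id INTEGER PRIMARY KEY, title TEXT, body TEXT);
        CREATE TABLE link(src INTEGER REFERENCES node, dst INTEGER REFERENCES node);
        CREATE TABLE trail(id INTEGER, step INTEGER, node INTEGER REFERENCES node,
                           PRIMARY KEY (id, step));
        """)
        con.executemany("INSERT INTO node VALUES (?, ?, ?)",
                        [(1, "Intro", "..."), (2, "Databases", "..."),
                         (3, "Hypertext", "...")])
        # Trail 1 visits Intro -> Hypertext -> Databases.
        con.executemany("INSERT INTO trail VALUES (?, ?, ?)",
                        [(1, 1, 1), (1, 2, 3), (1, 3, 2)])
        # Recall the trail in order:
        print(con.execute("SELECT node FROM trail WHERE id = 1 "
                          "ORDER BY step").fetchall())  # [(1,), (3,), (2,)]
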
  4. Kurth, M.; Ruddy, D.; Rupp, N.: Repurposing MARC metadata : using digital project experience to develop a metadata management design (2004) 0.06
    Score: 0.0600 = coord(2/4) × (data 0.0310 + processing 0.0509 + "22" 0.0381)
    
    Abstract
    Metadata and information technology staff in libraries that are building digital collections typically extract and manipulate MARC metadata sets to provide access to digital content via non-MARC schemes. Metadata processing in these libraries involves defining the relationships between metadata schemes, moving metadata between schemes, and coordinating the intellectual activity and physical resources required to create and manipulate metadata. Actively managing the non-MARC metadata resources used to build digital collections is something most of these libraries have only begun to do. This article proposes strategies for managing MARC metadata repurposing efforts as the first step in a coordinated approach to library metadata management. Guided by lessons learned from Cornell University library mapping and transformation activities, the authors apply the literature of data resource management to library metadata management and propose a model for managing MARC metadata repurposing processes through the implementation of a metadata management design.
    Source
    Library hi tech. 22(2004) no.2, S.144-152
  5. Jeffery, K.G.; Bailo, D.: EPOS: using metadata in geoscience (2014) 0.06
    Score: 0.0566 = coord(2/4) × (data 0.0878 + coord(1/2) × processing 0.0509)
    
    Abstract
    One of the key aspects of the approaching data-intensive science era is integration of data through interoperability of systems providing data products or visualisation and processing services. Far from being simple, interoperability requires robust and scalable e-infrastructures capable of supporting it. In this work we present the case of EPOS, a project for data integration in the field of Earth Sciences. We describe the design of its e-infrastructure and show its main characteristics. One of the main elements enabling the system to integrate data, data products and services is the metadata catalog based on the CERIF metadata model. Such a model, modified to fit into the general e-infrastructure design, is part of a three-layer metadata architecture. CERIF guarantees a robust handling of metadata, which is in this case the key to the interoperability and to one of the features of the EPOS system: the possibility of carrying out data-intensive science by orchestrating the distributed resources made available by EPOS data providers and stakeholders.
  6. Metadata and semantics research : 8th Research Conference, MTSR 2014, Karlsruhe, Germany, November 27-29, 2014, Proceedings (2014) 0.06
    Score: 0.0559 = coord(2/4) × (data 0.0818 + coord(1/2) × processing 0.0600)
    
    Abstract
    This book constitutes the refereed proceedings of the 8th Metadata and Semantics Research Conference, MTSR 2014, held in Karlsruhe, Germany, in November 2014. The 23 full papers and 9 short papers presented were carefully reviewed and selected from 57 submissions. The papers are organized in several sessions and tracks. They cover the following topics: metadata and linked data: tools and models; (meta) data quality assessment and curation; semantic interoperability, ontology-based data access and representation; big data and digital libraries in health, science and technology; metadata and semantics for open repositories, research information systems and data infrastructure; metadata and semantics for cultural collections and applications; semantics for agriculture, food and environment.
    Content
    Metadata and linked data.- Tools and models.- (Meta)data quality assessment and curation.- Semantic interoperability, ontology-based data access and representation.- Big data and digital libraries in health, science and technology.- Metadata and semantics for open repositories, research information systems and data infrastructure.- Metadata and semantics for cultural collections and applications.- Semantics for agriculture, food and environment.
    LCSH
    Text processing (Computer science)
    Subject
    Text processing (Computer science)
  7. Masanès, J.; Lupovici, C. (Bibliothèque Nationale de France): Preservation metadata : the NEDLIB's proposal (2001) 0.05
    Score: 0.0489 = coord(2/4) × (data 0.0538 + coord(1/2) × processing 0.0881)
    
    Abstract
    Preservation of digital documents for the long term requires above all solving the problem of technological obsolescence. Accessing digital documents in 20 or 100 years will be impossible if we, or our successors, can't process the bit stream underlying digital documents. We can be sure that the modality of data processing will be different in 20 or 100 years. It is then our task to collect key information about today's data processing to ensure future access to these documents. In this paper we present the NEDLIB's proposal for a preservation metadata set. This set gathers core metadata that are mandatory for preservation management purposes. We propose to define 8 metadata elements and 38 sub-elements following the OAIS taxonomy of information objects. A layered information analysis of the digital document is proposed in order to list all information involved in the data processing of the bit stream. These metadata elements are intended to be populated, as much as possible, in an automatic way to make it possible to handle large amounts of documents.
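
    As a hedged illustration of result 7, the fragment below sketches a preservation record whose categories follow the OAIS information model (representation information, fixity, provenance). The element names and values are invented; the actual NEDLIB proposal's 8 elements and 38 sub-elements are not reproduced here:

        record = {
            "identifier": "nedlib:0001",
            "representation_information": {   # what is needed to render the bits
                "file_format": "PDF 1.3",
                "rendering_software": "Acrobat Reader 4.0",
                "operating_system": "Windows NT 4.0",
            },
            "fixity": {"algorithm": "MD5",
                       "digest": "d41d8cd98f00b204e9800998ecf8427e"},
            "provenance": ["harvested 2001-05-14"],
        }
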
  8. Taniguchi, S.: Recording evidence in bibliographic records and descriptive metadata (2005) 0.05
    Score: 0.0475 = coord(2/4) × (data 0.0760 + coord(1/2) × "22" 0.0381)
    
    Abstract
    In this article, recording evidence for data values, in addition to the values themselves, in bibliographic records and descriptive metadata is proposed, with the aim of improving the expressiveness and reliability of those records and metadata. Recorded evidence indicates why and how data values are recorded for elements. Recording the history of changes in data values is also proposed, with the aim of reinforcing recorded evidence. First, evidence that can be recorded is categorized into classes: identifiers of rules or tasks, action descriptions of them, and input and output data of them. Dates of recording values and evidence are an additional class. Then, the relative usefulness of evidence classes and also levels (i.e., the record, data element, or data value level) to which an individual evidence class is applied, is examined. Second, examples that can be viewed as recorded evidence in existing bibliographic records and current cataloging rules are shown. Third, some examples of bibliographic records and descriptive metadata with notes of evidence are demonstrated. Fourth, ways of using recorded evidence are addressed.
    Date
    18. 6.2005 13:16:22
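
    A minimal sketch of result 8's idea: evidence (which rule, what action, which source) is recorded alongside each data value, and superseded values move into a history. The classes are illustrative, not the article's formal model:

        from dataclasses import dataclass, field
        from datetime import date

        @dataclass
        class Evidence:
            rule_id: str    # identifier of the rule or task applied
            action: str     # how the value was derived
            source: str     # input data the value was taken from
            recorded: date

        @dataclass
        class ElementValue:
            element: str
            value: str
            evidence: Evidence
            history: list = field(default_factory=list)  # prior (value, evidence)

        title = ElementValue(
            element="title proper",
            value="Recording evidence in bibliographic records",
            evidence=Evidence(rule_id="AACR2 1.1B1", action="transcribed",
                              source="title page", recorded=date(2005, 6, 18)),
        )
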
  9. Metadata and semantics research : 9th Research Conference, MTSR 2015, Manchester, UK, September 9-11, 2015, Proceedings (2015) 0.04
    Score: 0.0449 = coord(2/4) × (data 0.0538 + coord(1/2) × processing 0.0719)
    
    Content
    The papers are organized in several sessions and tracks: general track on ontology evolution, engineering, and frameworks, semantic Web and metadata extraction, modelling, interoperability and exploratory search, data analysis, reuse and visualization; track on digital libraries, information retrieval, linked and social data; track on metadata and semantics for open repositories, research information systems and data infrastructure; track on metadata and semantics for agriculture, food and environment; track on metadata and semantics for cultural collections and applications; track on European and national projects.
    LCSH
    Text processing (Computer science)
    Subject
    Text processing (Computer science)
  10. Rice, R.: Applying DC to institutional data repositories (2008) 0.04
    Score: 0.0422 = coord(2/4) × (data 0.0717 + coord(1/2) × "22" 0.0254)
    
    Abstract
    DISC-UK DataShare (2007-2009), a project led by the University of Edinburgh and funded by JISC (Joint Information Systems Committee, UK), arises from an existing consortium of academic data support professionals working in the domain of social science datasets (Data Information Specialists Committee-UK). We are working together across four universities with colleagues engaged in managing open access repositories for e-prints. Our project supports 'early adopter' academics who wish to openly share datasets and presents a model for depositing 'orphaned datasets' that are not being deposited in subject-domain data archives/centres. Outputs from the project are intended to help to demystify data as complex objects in repositories, and assist other institutional repository managers in overcoming barriers to incorporating research data. By building on lessons learned from recent JISC-funded data repository projects such as SToRe and GRADE, the project will help realize the vision of the Digital Repositories Roadmap, e.g. the milestone under Data, "Institutions need to invest in research data repositories" (Heery and Powell, 2006). Application of appropriate metadata is an important area of development for the project. Datasets are not different from other digital materials in that they need to be described, not just for discovery but also for preservation and re-use. The GRADE project found that for geo-spatial datasets, Dublin Core metadata (with geo-spatial enhancements such as a bounding box for the 'coverage' property) was sufficient for discovery within a DSpace repository, though more in-depth metadata or documentation was required for re-use after downloading. The project partners are examining other metadata schemas such as the Data Documentation Initiative (DDI) versions 2 and 3, used primarily by social science data archives (Martinez, 2008). Crosswalks from the DDI to qualified Dublin Core are important for describing research datasets at the study level (as opposed to the variable level, which is largely out of scope for this project). DataShare is benefiting from the work of the DRIADE project (application profile development for evolutionary biology) (Carrier et al., 2007), eBank UK (which developed an application profile for crystallography data) and GAP (Geospatial Application Profile, in progress) in defining interoperable Dublin Core qualified metadata elements and their application to datasets for each partner repository. The solution devised at Edinburgh for DSpace will be covered in the poster.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
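
    To illustrate result 10, here is a qualified Dublin Core description of a dataset with the geo-spatial 'coverage' enhancement (a bounding box) mentioned in the abstract, roughly in the DCMI Box style. The record values are invented and the serialization is not the project's actual profile:

        dataset_record = {
            "dc:title": "Household survey of energy use, Scotland 2006",
            "dc:creator": "University of Edinburgh",
            "dc:subject": ["energy consumption", "households"],
            "dc:type": "Dataset",
            "dc:format": "text/csv",
            # DCMI Box-style bounding box for the 'coverage' property:
            "dcterms:coverage": "northlimit=60.9; southlimit=54.6; "
                                "westlimit=-8.6; eastlimit=-0.7; name=Scotland",
        }
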
  11. Tani, A.; Candela, L.; Castelli, D.: Dealing with metadata quality : the legacy of digital library efforts (2013) 0.04
    Score: 0.0404 = coord(2/4) × (data 0.0512 + coord(1/2) × processing 0.0593)
    
    Abstract
    In this work, we elaborate on the meaning of metadata quality by surveying efforts and experiences matured in the digital library domain. In particular, an overview of the frameworks developed to characterize such a multi-faceted concept is presented. Moreover, the most common quality-related problems affecting metadata both during the creation and the aggregation phase are discussed together with the approaches, technologies and tools developed to mitigate them. This survey on digital library developments is expected to contribute to the ongoing discussion on data and metadata quality occurring in the emerging yet more general framework of data infrastructures.
    Source
    Information processing and management. 49(2013) no.6, S.1194-1205
  12. Wolfe, E.W.: A case study in automated metadata enhancement : Natural Language Processing in the humanities (2019) 0.04
    Score: 0.0391 = coord(2/4) × (data 0.0362 + coord(1/2) × processing 0.0839)
    
    Abstract
    The Black Book Interactive Project at the University of Kansas (KU) is developing an expanded corpus of novels by African American authors, with an emphasis on lesser known writers and a goal of expanding research in this field. Using a custom metadata schema with an emphasis on race-related elements, each novel is analyzed for a variety of elements such as literary style, targeted content analysis, historical context, and other areas. Librarians at KU have worked to develop a variety of computational text analysis processes designed to assist with specific aspects of this metadata collection, including text mining and natural language processing, automated subject extraction based on word sense disambiguation, harvesting data from Wikidata, and other actions.
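
    As a toy version of one step named in result 12, automated subject extraction, the sketch below uses bare word frequency over a stoplist; the project's actual pipeline (text mining, word sense disambiguation, Wikidata harvesting) is far more sophisticated:

        import re
        from collections import Counter

        STOP = {"the", "a", "of", "and", "in", "to", "is", "was", "her", "his"}

        def candidate_subjects(text: str, n: int = 5):
            words = re.findall(r"[a-z']+", text.lower())
            return Counter(w for w in words if w not in STOP).most_common(n)

        print(candidate_subjects("The migration north shaped the family, "
                                 "and the city reshaped the family again."))
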
  13. Caplan, P.; Guenther, R.: Metadata for Internet resources : the Dublin Core Metadata Elements Set and its mapping to USMARC (1996) 0.04
    Score: 0.0386 = coord(2/4) × (data 0.0414 + coord(1/2) × "22" 0.0718)
    
    Abstract
    This paper discusses the goals and outcome of the OCLC/NCSA Metadata Workshop held March 1-3, 1995 in Dublin, Ohio. The resulting proposed "Dublin Core" Metadata Elements Set is described briefly. An attempt is made to map the Dublin Core data elements to USMARC; problems and outstanding questions are noted.
    Date
    13. 1.2007 18:31:22
    Source
    Cataloging and classification quarterly. 22(1996) nos.3/4, S.43-58
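
    A fragment of the kind of Dublin Core to USMARC mapping result 13 discusses. The pairs follow commonly published crosswalk practice for a few easy elements and should be read as illustrative; the paper documents the mapping, and its problem cases, in full:

        DC_TO_USMARC = {
            "Title":       "245 00 $a",
            "Creator":     "720 ## $a",  # uncontrolled name
            "Subject":     "653 ## $a",  # uncontrolled index term
            "Description": "520 ## $a",
            "Publisher":   "260 ## $b",
            "Date":        "260 ## $c",
            "Identifier":  "856 40 $u",  # when the identifier is a URL
            "Language":    "546 ## $a",
        }

        def to_marc(element: str, value: str) -> str:
            return f"{DC_TO_USMARC[element]} {value}"

        print(to_marc("Title", "Metadata for Internet resources"))
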
  14. Willis, C.; Greenberg, J.; White, H.: Analysis and synthesis of metadata goals for scientific data (2012) 0.04
    Score: 0.0374 = coord(2/4) × (data 0.0621 + coord(1/2) × "22" 0.0254)
    
    Abstract
    The proliferation of discipline-specific metadata schemes contributes to artificial barriers that can impede interdisciplinary and transdisciplinary research. The authors considered this problem by examining the domains, objectives, and architectures of nine metadata schemes used to document scientific data in the physical, life, and social sciences. They used a mixed-methods content analysis and Greenberg's metadata objectives, principles, domains, and architectural layout (MODAL) framework, and derived 22 metadata-related goals from textual content describing each metadata scheme. Relationships are identified between the domains (e.g., scientific discipline and type of data) and the categories of scheme objectives. For each strong correlation (>0.6), a Fisher's exact test for nonparametric data was used to determine significance (p < .05). Significant relationships were found between the domains and objectives of the schemes. Schemes describing observational data are more likely to have "scheme harmonization" (compatibility and interoperability with related schemes) as an objective; schemes with the objective "abstraction" (a conceptual model exists separate from the technical implementation) also have the objective "sufficiency" (the scheme defines a minimal amount of information to meet the needs of the community); and schemes with the objective "data publication" do not have the objective "element refinement." The analysis indicates that many metadata-driven goals expressed by communities are independent of scientific discipline or the type of data, although they are constrained by historical community practices and workflows as well as the technological environment at the time of scheme creation. The analysis reveals 11 fundamental metadata goals for metadata documenting scientific data in support of sharing research data across disciplines and domains. The authors report these results and highlight the need for more metadata-related research, particularly in the context of recent funding agency policy changes.
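
    The statistical step in result 14 can be shown with an invented 2x2 table: does a scheme's domain (say, observational data) co-occur with an objective (say, scheme harmonization)? With only nine schemes the real contingency tables are similarly small, which is why Fisher's exact test is used rather than a chi-squared test:

        from scipy.stats import fisher_exact

        table = [
            [4, 1],  # observational data:  has objective / lacks objective
            [1, 3],  # other data types:    has objective / lacks objective
        ]
        odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
        print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
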
  15. Vellucci, S.L.: Metadata and authority control (2000) 0.04
    Score: 0.0367 = coord(2/4) × (data 0.0512 + coord(1/2) × "22" 0.0444)
    
    Abstract
    A variety of information communities have developed metadata schemes to meet the needs of their own users. The ability of libraries to incorporate and use multiple metadata schemes in current library systems will depend on the compatibility of imported data with existing catalog data. Authority control will play an important role in metadata interoperability. In this article, I discuss factors for successful authority control in current library catalogs, which include operation in a well-defined and bounded universe, application of principles and standard practices to access point creation, reference to authoritative lists, and bibliographic record creation by highly trained individuals. Metadata characteristics and environmental models are examined and the likelihood of successful authority control is explored for a variety of metadata environments.
    Date
    10. 9.2000 17:38:22
  16. Metadata and semantics research : 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings (2016) 0.04
    Score: 0.0367 = coord(2/4) × (data 0.0512 + coord(1/2) × "22" 0.0444)
    
    Abstract
    This book constitutes the refereed proceedings of the 10th Metadata and Semantics Research Conference, MTSR 2016, held in Göttingen, Germany, in November 2016. The 26 full papers and 6 short papers presented were carefully reviewed and selected from 67 submissions. The papers are organized in several sessions and tracks: Digital Libraries, Information Retrieval, Linked and Social Data, Metadata and Semantics for Open Repositories, Research Information Systems and Data Infrastructures, Metadata and Semantics for Agriculture, Food and Environment, Metadata and Semantics for Cultural Collections and Applications, European and National Projects.
  17. Hooland, S. van; Bontemps, Y.; Kaufman, S.: Answering the call for more accountability : applying data profiling to museum metadata (2008) 0.04
    Score: 0.0364 = coord(2/4) × (data 0.0538 + coord(1/2) × "22" 0.0381)
    
    Abstract
    Although the issue of metadata quality is recognized as an important topic within the metadata research community, the cultural heritage sector has been slow to develop methodologies, guidelines and tools for addressing this topic in practice. This paper concentrates on metadata quality specifically within the museum sector and describes the potential of data-profiling techniques for metadata quality evaluation. A case study illustrates the application of a general-purpose data-profiling tool on a large collection of metadata records from an ethnographic collection. After an analysis of the results of the case study, the paper reviews further steps in our research and presents the implementation of a metadata quality tool within open-source collection management software.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
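
    A minimal data-profiling pass of the kind result 17 applies to museum metadata: per-field completeness and value frequencies quickly expose empty fields and skewed or default values. The records here are invented:

        from collections import Counter

        records = [
            {"title": "Kente cloth", "object_type": "textile", "date": "ca. 1900"},
            {"title": "Mask", "object_type": "", "date": "unknown"},
            {"title": "Mask", "object_type": "wood", "date": ""},
        ]

        def profile(rows, field_name):
            values = [r.get(field_name, "") for r in rows]
            filled = [v for v in values if v.strip()]
            return {"completeness": len(filled) / len(rows),
                    "distinct": len(set(filled)),
                    "top_values": Counter(filled).most_common(3)}

        for f in ("title", "object_type", "date"):
            print(f, profile(records, f))
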
  18. Toth, M.B.; Emery, D.: Applying DCMI elements to digital images and text in the Archimedes Palimpsest Program (2008) 0.03
    Score: 0.0338 = coord(2/4) × (data 0.0517 + coord(1/2) × "22" 0.0317)
    
    Abstract
    The digitized version of the only extant copy of Archimedes' key mathematical and scientific works contains over 6,500 images and 130 pages of transcriptions. Metadata is essential for managing, integrating and accessing these digital resources in the Web 2.0 environment. The Dublin Core Metadata Element Set meets many of our needs. It offers the needed flexibility and applicability to a variety of data sets containing different texts and images in a dynamic technical environment. The program team has continued to refine its data dictionary and elements based on the Dublin Core standard and feedback from the Dublin Core community since the 2006 Dublin Core Conference. This presentation cites the application and utility of the DCMI Standards during the final phase of this decade-long program. Since the 2006 conference, the amount of data has grown tenfold with new imaging techniques. Use of the DCMI Standards for integration across digital images and transcriptions will allow the hosting and integration of this data set and other cultural works across service providers, libraries and cultural institutions.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  19. McCallum, S.H.: ¬An introduction to the Metadata Object Description Schema (MODS) (2004) 0.03
    Score: 0.0334 = coord(2/4) × (data 0.0414 + coord(1/2) × "22" 0.0508)
    
    Abstract
    This paper provides an introduction to the Metadata Object Description Schema (MODS), a MARC21-compatible XML schema for descriptive metadata. It explains the requirements that the schema targets and the special features that differentiate it from MARC, such as user-oriented tags, regrouped data elements, linking, recursion, and accommodations for electronic resources.
    Source
    Library hi tech. 22(2004) no.1, S.82-88
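
    A minimal MODS record, showing the user-oriented tags result 19 mentions (titleInfo/title rather than MARC tag numbers), built with the Python standard library; the field values are invented:

        import xml.etree.ElementTree as ET

        MODS_NS = "http://www.loc.gov/mods/v3"
        ET.register_namespace("", MODS_NS)

        mods = ET.Element(f"{{{MODS_NS}}}mods")
        title_info = ET.SubElement(mods, f"{{{MODS_NS}}}titleInfo")
        ET.SubElement(title_info, f"{{{MODS_NS}}}title").text = "Sample title"
        name = ET.SubElement(mods, f"{{{MODS_NS}}}name", type="personal")
        ET.SubElement(name, f"{{{MODS_NS}}}namePart").text = "McCallum, Sally H."
        ET.SubElement(mods, f"{{{MODS_NS}}}typeOfResource").text = "text"

        print(ET.tostring(mods, encoding="unicode"))
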
  20. White, H.: Examining scientific vocabulary : mapping controlled vocabularies with free text keywords (2013) 0.03
    Score: 0.0334 = coord(2/4) × (data 0.0414 + coord(1/2) × "22" 0.0508)
    
    Abstract
    Scientific repositories create a new environment for studying traditional information science issues. The interaction between indexing terms provided by users and controlled vocabularies continues to be an area of debate and study. This article reports and analyzes findings from a study that mapped the relationships between free-text keywords and controlled vocabulary terms used in the sciences. Based on this study's findings, recommendations are made about which vocabularies may be better to use in scientific data repositories.
    Date
    29. 5.2015 19:09:22
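
    A simple normalization-plus-synonym mapping from free-text keywords to controlled terms, illustrating the kind of relationship result 20 studies; the toy vocabulary and synonym table are invented:

        CONTROLLED = {"sea surface temperature", "precipitation", "soil moisture"}
        SYNONYMS = {"sst": "sea surface temperature", "rainfall": "precipitation"}

        def map_keyword(keyword: str):
            k = keyword.strip().lower()
            if k in CONTROLLED:
                return k, "exact"
            if k in SYNONYMS:
                return SYNONYMS[k], "synonym"
            return None, "unmapped"

        for kw in ["SST", "Rainfall", "soil moisture", "ocean colour"]:
            print(kw, "->", map_keyword(kw))
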

Languages

  • e 221
  • d 15
  • f 1
  • i 1
  • pt 1
  • sp 1

Types

  • a 210
  • el 30
  • m 13
  • s 11
  • b 2
  • n 2
  • x 2