Search (120 results, page 1 of 6)

  • theme_ss:"Metadaten"
  1. Niininen, S.; Nykyri, S.; Suominen, O.: The future of metadata : open, linked, and multilingual - the YSO case (2017) 0.04
    0.037839223 = product of:
      0.113517664 = sum of:
        0.04247789 = weight(_text_:documentation in 3707) [ClassicSimilarity], result of:
          0.04247789 = score(doc=3707,freq=2.0), product of:
            0.1765992 = queryWeight, product of:
              4.354108 = idf(docFreq=1544, maxDocs=44218)
              0.040559217 = queryNorm
            0.24053274 = fieldWeight in 3707, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.354108 = idf(docFreq=1544, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3707)
        0.07103977 = weight(_text_:great in 3707) [ClassicSimilarity], result of:
          0.07103977 = score(doc=3707,freq=2.0), product of:
            0.22838 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.040559217 = queryNorm
            0.31105953 = fieldWeight in 3707, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3707)
      0.33333334 = coord(2/6)
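The explain tree above follows Lucene's ClassicSimilarity (tf-idf) arithmetic, and its numbers can be reproduced directly. A minimal sketch, taking queryNorm as a given constant from the output; each term's score is queryWeight × fieldWeight, the terms are summed, and the sum is scaled by the coordination factor:

```python
import math

# Lucene ClassicSimilarity, as shown in the explain tree:
#   idf        = 1 + ln(maxDocs / (docFreq + 1))
#   tf         = sqrt(termFreq)
#   term score = queryWeight * fieldWeight
#              = (idf * queryNorm) * (tf * idf * fieldNorm)
QUERY_NORM = 0.040559217  # taken verbatim from the explain output

def idf(doc_freq: int, max_docs: int = 44218) -> float:
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(doc_freq: int, freq: float, field_norm: float) -> float:
    term_idf = idf(doc_freq)
    query_weight = term_idf * QUERY_NORM
    field_weight = math.sqrt(freq) * term_idf * field_norm
    return query_weight * field_weight

# Entry 1: "documentation" (docFreq=1544) and "great" (docFreq=430),
# fieldNorm=0.0390625, with coord(2/6) because 2 of 6 query clauses matched.
score = (term_score(1544, 2.0, 0.0390625)
         + term_score(430, 2.0, 0.0390625)) * (2 / 6)
```

Swapping in each entry's docFreq, termFreq, fieldNorm and coord values reproduces the other scores in this listing.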
    
    Abstract
     Purpose: The purpose of this paper is threefold: to focus on the process of multilingual concept scheme construction and the challenges involved; to address the concrete challenges faced in the construction process, especially those related to equivalence between terms and concepts; and to briefly outline the translation strategies developed during the process of concept scheme construction.
     Design/methodology/approach: The analysis is based on experience acquired during the establishment of the Finnish thesaurus and ontology service Finto as well as the trilingual General Finnish Ontology YSO, both of which are being maintained and further developed at the National Library of Finland.
     Findings: Although uniform resource identifiers can be considered language-independent, they do not render concept schemes and their construction free of language-related challenges. The fundamental issue underlying all the challenges faced is how to maintain consistency and predictability when the nature of language requires each concept to be treated individually. The key to meeting such challenges is to recognise the function of the vocabulary and the needs of its intended users.
     Social implications: Open science increases the transparency not only of research products, but also of metadata tools. Gaining a deeper understanding of the challenges involved in their construction is important for a great variety of users - e.g. indexers, vocabulary builders and information seekers. Today, multilingualism is an essential aspect of the information society at both the national and international levels.
     Originality/value: This paper draws on the practical challenges faced in concept scheme construction in a trilingual environment, with a focus on the "concept scheme" as a translation and mapping unit.
    Source
    Journal of documentation. 73(2017) no.3, S.451-465
  2. Tallerås, C.; Dahl, J.H.B.; Pharo, N.: User conceptualizations of derivative relationships in the bibliographic universe (2018) 0.04
    0.037839223 = product of:
      0.113517664 = sum of:
        0.04247789 = weight(_text_:documentation in 4247) [ClassicSimilarity], result of:
          0.04247789 = score(doc=4247,freq=2.0), product of:
            0.1765992 = queryWeight, product of:
              4.354108 = idf(docFreq=1544, maxDocs=44218)
              0.040559217 = queryNorm
            0.24053274 = fieldWeight in 4247, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.354108 = idf(docFreq=1544, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4247)
        0.07103977 = weight(_text_:great in 4247) [ClassicSimilarity], result of:
          0.07103977 = score(doc=4247,freq=2.0), product of:
            0.22838 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.040559217 = queryNorm
            0.31105953 = fieldWeight in 4247, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4247)
      0.33333334 = coord(2/6)
    
    Abstract
     Purpose: Considerable effort is devoted to developing new models for organizing bibliographic metadata. However, such models have been repeatedly criticized for their lack of proper user testing. The purpose of this paper is to present a study on how non-experts in bibliographic systems map the bibliographic universe and, in particular, how they conceptualize relationships between independent but strongly related entities.
     Design/methodology/approach: The study is based on an open concept-mapping task performed to externalize the conceptualizations of 98 novice students. The conceptualizations in the resulting concept maps are identified and analyzed statistically.
     Findings: The study shows that the participants' conceptualizations have great variety, differing in detail and granularity. These conceptualizations can be categorized into two main groups according to derivative relationships: those that apply a single-entity model directly relating document entities, and those (the majority) that apply a multi-entity model relating documents through a high-level collocating node. These high-level nodes seem to be most adequately interpreted either as superwork devices collocating documents belonging to the same bibliographic family or as devices collocating documents belonging to a shared fictional world.
     Originality/value: The findings can guide the work to develop bibliographic standards. Based on the diversity of the conceptualizations, the findings also emphasize the need for more user testing of both conceptual models and the bibliographic end-user systems implementing those models.
    Source
    Journal of documentation. 74(2018) no.4, S.894-916
  3. Pole, T.: Contextual classification in the Metadata Object Manager (M.O.M.) (1999) 0.03
    0.026487455 = product of:
      0.079462364 = sum of:
        0.029734522 = weight(_text_:documentation in 6672) [ClassicSimilarity], result of:
          0.029734522 = score(doc=6672,freq=2.0), product of:
            0.1765992 = queryWeight, product of:
              4.354108 = idf(docFreq=1544, maxDocs=44218)
              0.040559217 = queryNorm
            0.16837291 = fieldWeight in 6672, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.354108 = idf(docFreq=1544, maxDocs=44218)
              0.02734375 = fieldNorm(doc=6672)
        0.049727846 = weight(_text_:great in 6672) [ClassicSimilarity], result of:
          0.049727846 = score(doc=6672,freq=2.0), product of:
            0.22838 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.040559217 = queryNorm
            0.21774168 = fieldWeight in 6672, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.02734375 = fieldNorm(doc=6672)
      0.33333334 = coord(2/6)
    
    Abstract
     To classify is (according to Webster's) "to distribute into classes; to arrange according to a system; to arrange in sets according to some method founded on common properties or characters." A model of classification is a type or category or (excuse the recursive definition) a class of classification "system" as mentioned in Webster's definition. One employs a classification model to implement a specific classification system (e.g., we employ the hierarchical classification model to implement the Dewey Decimal System). An effective classification model must represent both the commonality (Webster's "common properties") and the differences among the items being classified. The commonality of each category or class defines a test to determine which items belong to the set that class represents. The relationships among the classes define the variability among the sets that the classification model can represent. Therefore, a classification model is more than an enumeration or other simple listing of the names of its classes. Our purpose in employing classification models is to build metadata systems that represent and manage knowledge, so that users of the systems we build can: quickly and accurately define (the commonality of) what knowledge they require, with great flexibility in how that need is described; be presented with existing information assets that best match the stated requirements; distinguish (the variability) among the candidates to determine their best choice(s), without actually having to examine the individual items themselves; and retrieve the knowledge they need. The MetaData model we present is Contextual Classification. It is a synthesis of several traditional metadata models, including controlled keyword indices, hierarchical classification, attribute-value systems, Faceted Classification, and Evolutionary Faceted Classification.
Research into building online library systems of software and software documentation (Frakes and Pole, 1992, and Pole, 1996) has shown the need for, and viability of, combining the strengths and minimizing the weaknesses of multiple metadata models in the development of information systems. The MetaData Object Manager (M.O.M.), a MetaData Warehouse (MDW) and editorial workflow system developed for the Thomson Financial Publishing Group, builds on this earlier research. From controlled keyword systems we borrow the idea of representing commonalities by formally defining subject areas or categories of information, the sets being represented by these categories' names. From hierarchical classification we borrow the concept of relating these categories and classes to each other to represent the variability in a collection of information sources. From attribute-value systems we borrow the concept that each information source can be described in different ways, each with respect to an attribute of the information being described. From Faceted Classification we borrow the concept of relating the classes themselves into sets of classes, which a faceted classification system would describe as facets of terms. In this paper we will define the Contextual Classification model, comparing it to the traditional metadata models from which it has evolved. Using the M.O.M. as an example, we will then discuss both the use of Contextual Classification in developing this system, and the organizational, performance and reliability
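A minimal sketch of how the borrowed ideas described in this abstract might fit together in code; the class names and structure here are illustrative assumptions, not the actual design of the M.O.M.:

```python
from dataclasses import dataclass, field

@dataclass
class Category:
    """A formally defined subject area (the controlled-keyword idea) that
    can sit under a parent (the hierarchical idea) and belong to a facet
    (the faceted-classification idea)."""
    name: str
    facet: str
    parent: "Category | None" = None

@dataclass
class InformationSource:
    """An asset described by attribute/value pairs (the attribute-value
    idea) and classified under one or more categories."""
    title: str
    attributes: dict = field(default_factory=dict)
    categories: list = field(default_factory=list)

software = Category("Software", facet="Material type")
docs = Category("Software documentation", facet="Material type", parent=software)
asset = InformationSource(
    "Editorial workflow guide",
    attributes={"format": "HTML", "audience": "editors"},
    categories=[docs],
)
```

Retrieval over such a store would then test the commonality (category membership) first and use the attribute/value pairs to distinguish among candidates.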
  4. Guenther, R.S.: Using the Metadata Object Description Schema (MODS) for resource description : guidelines and applications (2004) 0.03
    0.026234098 = product of:
      0.07870229 = sum of:
        0.059469044 = weight(_text_:documentation in 2837) [ClassicSimilarity], result of:
          0.059469044 = score(doc=2837,freq=2.0), product of:
            0.1765992 = queryWeight, product of:
              4.354108 = idf(docFreq=1544, maxDocs=44218)
              0.040559217 = queryNorm
            0.33674583 = fieldWeight in 2837, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.354108 = idf(docFreq=1544, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2837)
        0.01923325 = product of:
          0.0384665 = sum of:
            0.0384665 = weight(_text_:22 in 2837) [ClassicSimilarity], result of:
              0.0384665 = score(doc=2837,freq=2.0), product of:
                0.14203148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.040559217 = queryNorm
                0.2708308 = fieldWeight in 2837, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2837)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
     This paper describes the Metadata Object Description Schema (MODS), its accompanying documentation and some of its applications. It reviews the MODS user guidelines provided by the Library of Congress and how they enable a user of the schema to apply MODS consistently as a metadata scheme. Because the schema itself could not fully document appropriate usage, the guidelines provide element definitions, history, relationships with other elements, usage conventions, and examples. Short descriptions of some MODS applications are given, along with a more detailed discussion of its use in the Library of Congress's Minerva project for Web archiving.
    Source
    Library hi tech. 22(2004) no.1, S.89-98
  5. Carvalho, J.R. de; Cordeiro, M.I.; Lopes, A.; Vieira, M.: Meta-information about MARC : an XML framework for validation, explanation and help systems (2004) 0.03
    0.026234098 = product of:
      0.07870229 = sum of:
        0.059469044 = weight(_text_:documentation in 2848) [ClassicSimilarity], result of:
          0.059469044 = score(doc=2848,freq=2.0), product of:
            0.1765992 = queryWeight, product of:
              4.354108 = idf(docFreq=1544, maxDocs=44218)
              0.040559217 = queryNorm
            0.33674583 = fieldWeight in 2848, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.354108 = idf(docFreq=1544, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2848)
        0.01923325 = product of:
          0.0384665 = sum of:
            0.0384665 = weight(_text_:22 in 2848) [ClassicSimilarity], result of:
              0.0384665 = score(doc=2848,freq=2.0), product of:
                0.14203148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.040559217 = queryNorm
                0.2708308 = fieldWeight in 2848, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2848)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    This article proposes a schema for meta-information about MARC that can express at a fairly comprehensive level the syntactic and semantic aspects of MARC formats in XML, including not only rules but also all texts and examples that are conveyed by MARC documentation. It can be thought of as an XML version of the MARC or UNIMARC manuals, for both machine and human usage. The article explains how such a schema can be the central piece of a more complete framework, to be used in conjunction with "slim" record formats, providing a rich environment for the automated processing of bibliographic data.
    Source
    Library hi tech. 22(2004) no.2, S.131-137
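The meta-information idea in the article above can be illustrated with a small fragment; the element and attribute names below are hypothetical assumptions for the sketch, not the schema the article actually proposes, though MARC 21 field 245 (Title Statement) is real:

```python
import xml.etree.ElementTree as ET

# Hypothetical meta-information record for one MARC 21 field, carrying
# rules (repeatability, mandatory subfields) alongside documentation
# text and an example, as the article envisages.
field_el = ET.Element("field", {"tag": "245", "repeatable": "false"})
ET.SubElement(field_el, "name").text = "Title Statement"
ET.SubElement(field_el, "definition").text = (
    "Title and statement of responsibility of the resource.")
subfield = ET.SubElement(field_el, "subfield", {"code": "a", "mandatory": "true"})
ET.SubElement(subfield, "name").text = "Title"
ET.SubElement(field_el, "example").text = "245 10 $a Moby Dick / $c Herman Melville"

xml_text = ET.tostring(field_el, encoding="unicode")
```

A validator could walk such records to check incoming "slim" MARC data, while a help system renders the same definitions and examples for human catalogers.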
  6. Rice, R.: Applying DC to institutional data repositories (2008) 0.02
    0.019682894 = product of:
      0.05904868 = sum of:
        0.04805825 = weight(_text_:documentation in 2664) [ClassicSimilarity], result of:
          0.04805825 = score(doc=2664,freq=4.0), product of:
            0.1765992 = queryWeight, product of:
              4.354108 = idf(docFreq=1544, maxDocs=44218)
              0.040559217 = queryNorm
            0.27213174 = fieldWeight in 2664, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.354108 = idf(docFreq=1544, maxDocs=44218)
              0.03125 = fieldNorm(doc=2664)
        0.010990429 = product of:
          0.021980857 = sum of:
            0.021980857 = weight(_text_:22 in 2664) [ClassicSimilarity], result of:
              0.021980857 = score(doc=2664,freq=2.0), product of:
                0.14203148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.040559217 = queryNorm
                0.15476047 = fieldWeight in 2664, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2664)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
     DISC-UK DataShare (2007-2009), a project led by the University of Edinburgh and funded by JISC (Joint Information Systems Committee, UK), arises from an existing consortium of academic data support professionals working in the domain of social science datasets (Data Information Specialists Committee-UK). We are working together across four universities with colleagues engaged in managing open access repositories for e-prints. Our project supports 'early adopter' academics who wish to openly share datasets and presents a model for depositing 'orphaned datasets' that are not being deposited in subject-domain data archives/centres. Outputs from the project are intended to help demystify data as complex objects in repositories, and to assist other institutional repository managers in overcoming barriers to incorporating research data. By building on lessons learned from recent JISC-funded data repository projects such as SToRe and GRADE, the project will help realize the vision of the Digital Repositories Roadmap, e.g. the milestone under Data, "Institutions need to invest in research data repositories" (Heery and Powell, 2006). Application of appropriate metadata is an important area of development for the project. Datasets are not different from other digital materials in that they need to be described, not just for discovery but also for preservation and re-use. The GRADE project found that for geo-spatial datasets, Dublin Core metadata (with geo-spatial enhancements such as a bounding box for the 'coverage' property) was sufficient for discovery within a DSpace repository, though more in-depth metadata or documentation was required for re-use after downloading. The project partners are examining other metadata schemas such as the Data Documentation Initiative (DDI) versions 2 and 3, used primarily by social science data archives (Martinez, 2008).
Crosswalks from the DDI to qualified Dublin Core are important for describing research datasets at the study level (as opposed to the variable level, which is largely out of scope for this project). DataShare is benefiting from the work of the DRIADE project (application profile development for evolutionary biology) (Carrier et al., 2007), eBank UK (which developed an application profile for crystallography data) and GAP (Geospatial Application Profile, in progress) in defining interoperable qualified Dublin Core metadata elements and their application to datasets for each partner repository. The solution devised at Edinburgh for DSpace will be covered in the poster.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
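A toy sketch of the kind of study-level DDI-to-DC crosswalk the entry above describes. The DDI 2.x element names (titl, AuthEnty, timePrd, geogCover) and the DC terms are real, but this particular mapping and the sample record are illustrative assumptions, not the project's actual profile:

```python
# Hypothetical study-level DDI 2.x -> qualified Dublin Core crosswalk.
DDI_TO_DC = {
    "titl": "dc:title",
    "AuthEnty": "dc:creator",
    "abstract": "dcterms:abstract",
    "timePrd": "dcterms:temporal",
    "geogCover": "dcterms:spatial",
}

def crosswalk(ddi_study: dict) -> dict:
    """Map study-level DDI elements onto DC, dropping everything else
    (variable-level description is out of scope, as the project notes)."""
    return {DDI_TO_DC[k]: v for k, v in ddi_study.items() if k in DDI_TO_DC}

study = {
    "titl": "Example household survey, 2008",
    "AuthEnty": "Example Data Service",
    "varLabel": "Household income band",  # variable-level: dropped
}
dc_record = crosswalk(study)
```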
  7. Long, K.; Thompson, S.; Potvin, S.; Rivero, M.: The "wicked problem" of neutral description : toward a documentation approach to metadata standards (2017) 0.02
    0.0196197 = product of:
      0.1177182 = sum of:
        0.1177182 = weight(_text_:documentation in 5146) [ClassicSimilarity], result of:
          0.1177182 = score(doc=5146,freq=6.0), product of:
            0.1765992 = queryWeight, product of:
              4.354108 = idf(docFreq=1544, maxDocs=44218)
              0.040559217 = queryNorm
            0.66658396 = fieldWeight in 5146, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.354108 = idf(docFreq=1544, maxDocs=44218)
              0.0625 = fieldNorm(doc=5146)
      0.16666667 = coord(1/6)
    
    Abstract
    Increasingly, metadata standards have been recognized as constructed rather than neutral. In this article, we argue for the importance of a documentation approach to metadata standards creation as a codification of this growing recognition. By making design decisions explicit, the documentation approach dispels presumptions of neutrality and, drawing on the "wicked problems" theoretical framework, acknowledges the constructed nature of standards as "clumsy solutions."
  8. Weibel, S.; Miller, E.: Cataloging syntax and public policy meet in PICS (1997) 0.02
    0.018943941 = product of:
      0.11366364 = sum of:
        0.11366364 = weight(_text_:great in 1561) [ClassicSimilarity], result of:
          0.11366364 = score(doc=1561,freq=2.0), product of:
            0.22838 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.040559217 = queryNorm
            0.49769527 = fieldWeight in 1561, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.0625 = fieldNorm(doc=1561)
      0.16666667 = coord(1/6)
    
    Content
     PICS (Platform for Internet Content Selection), an initiative of the W3C, is a technology that supports the association of descriptive labels with Web resources. By providing a single common transport syntax for metadata, PICS will support the growth of metadata systems (including library cataloguing) that are interoperable and widely supported in Web information systems. Within the PICS framework, a great diversity of resource description models can be implemented, from simple rating schemes to complex data content standards.
  9. White, M.: ¬The value of taxonomies, thesauri and metadata in enterprise search (2016) 0.02
    0.016744237 = product of:
      0.10046542 = sum of:
        0.10046542 = weight(_text_:great in 2964) [ClassicSimilarity], result of:
          0.10046542 = score(doc=2964,freq=4.0), product of:
            0.22838 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.040559217 = queryNorm
            0.43990463 = fieldWeight in 2964, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2964)
      0.16666667 = coord(1/6)
    
    Content
     Contribution to a special issue: The Great Debate: "This House Believes that the Traditional Thesaurus has no Place in Modern Information Retrieval." [19 February 2015, 14:00-17:30, preceded by the ISKO UK AGM and followed by networking, wine and nibbles; cf.: http://www.iskouk.org/content/great-debate].
  10. Lam, V.-T.: Cataloging Internet resources : Why, what, how (2000) 0.02
    0.01657595 = product of:
      0.09945569 = sum of:
        0.09945569 = weight(_text_:great in 967) [ClassicSimilarity], result of:
          0.09945569 = score(doc=967,freq=2.0), product of:
            0.22838 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.040559217 = queryNorm
            0.43548337 = fieldWeight in 967, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=967)
      0.16666667 = coord(1/6)
    
    Abstract
     Internet resources have brought great excitement but also grave concerns to the library world, especially to the cataloging community. In spite of the various problematic aspects presented by Internet resources (poor organization, lack of stability, variable quality), catalogers have decided that they are worth cataloging, in particular those meeting library selection criteria. This paper tries to trace the decade-long history of the library community's efforts to provide an effective way to catalog Internet resources. Basically, its objective is to answer the following questions: Why catalog? What to catalog? And how to catalog? Some issues of cataloging electronic journals and developments of the Dublin Core Metadata system are also discussed.
  11. Sutton, S.A.: Conceptual design and deployment of a metadata framework for educational resources on the Internet (1999) 0.01
    0.014207955 = product of:
      0.08524773 = sum of:
        0.08524773 = weight(_text_:great in 4054) [ClassicSimilarity], result of:
          0.08524773 = score(doc=4054,freq=2.0), product of:
            0.22838 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.040559217 = queryNorm
            0.37327147 = fieldWeight in 4054, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.046875 = fieldNorm(doc=4054)
      0.16666667 = coord(1/6)
    
    Abstract
     The metadata framework described in this article stems from a growing concern of the U.S. Department of Education and its National Library of Education that teachers, students, and parents are encountering increasing difficulty in accessing educational resources on the Internet even as those resources are becoming more abundant. This concern is joined by the realization that as the Internet matures as a publishing environment, the successful management of resource repositories will hinge to a great extent on the intelligent use of metadata. We first explicate the conceptual foundations for the Gateway to Educational Materials (GEM) framework, including the adoption of the Dublin Core Element Set as its base referent and the extension of that set to meet the needs of the domain. We then discuss the complex of decisions that must be made regarding selection of the units of description and the structuring of an information space. The article concludes with a discussion of metadata generation, the association of metadata with the objects described, and a general description of the GEM system architecture.
  12. Borbinha, J.: Authority control in the world of metadata (2004) 0.01
    0.014207955 = product of:
      0.08524773 = sum of:
        0.08524773 = weight(_text_:great in 5666) [ClassicSimilarity], result of:
          0.08524773 = score(doc=5666,freq=2.0), product of:
            0.22838 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.040559217 = queryNorm
            0.37327147 = fieldWeight in 5666, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.046875 = fieldNorm(doc=5666)
      0.16666667 = coord(1/6)
    
    Abstract
     This paper discusses the concept of "metadata" in the scope of the "digital library," two terms recently used from a great diversity of perspectives. The intent is not to privilege any particular view, but rather to help provide a better understanding of these multiple perspectives. The paper starts with a discussion of the concept of the digital library, followed by an analysis of the concept of metadata. It continues with a discussion of the relationship of this concept with technology, services, and scenarios of application. The concluding remarks stress the three main arguments assumed for the relevance of the concept of metadata: the growing number of heterogeneous genres of information resources, the new emerging scenarios for interoperability, and issues related to the cost and complexity of current technology.
  13. Bearman, D.; Duff, W.: Grounding archival description in the functional requirements for evidence (1997) 0.01
    0.011327437 = product of:
      0.06796462 = sum of:
        0.06796462 = weight(_text_:documentation in 7908) [ClassicSimilarity], result of:
          0.06796462 = score(doc=7908,freq=2.0), product of:
            0.1765992 = queryWeight, product of:
              4.354108 = idf(docFreq=1544, maxDocs=44218)
              0.040559217 = queryNorm
            0.38485238 = fieldWeight in 7908, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.354108 = idf(docFreq=1544, maxDocs=44218)
              0.0625 = fieldNorm(doc=7908)
      0.16666667 = coord(1/6)
    
    Abstract
     Outlines the convergence of two approaches to archival description developed over 15 years and their application to the emerging issues in the creation, documentation, and management of electronic records. Relates the recently adopted General International Standard Archival Description to the University of Pittsburgh (Pennsylvania) specification of the metadata required for evidence.
  14. Wool, G.: A meditation on metadata (1998) 0.01
    0.011327437 = product of:
      0.06796462 = sum of:
        0.06796462 = weight(_text_:documentation in 2210) [ClassicSimilarity], result of:
          0.06796462 = score(doc=2210,freq=2.0), product of:
            0.1765992 = queryWeight, product of:
              4.354108 = idf(docFreq=1544, maxDocs=44218)
              0.040559217 = queryNorm
            0.38485238 = fieldWeight in 2210, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.354108 = idf(docFreq=1544, maxDocs=44218)
              0.0625 = fieldNorm(doc=2210)
      0.16666667 = coord(1/6)
    
    Abstract
    Metadata, or 'data about data', have been created and used for centuries in the print environment, though the term has its origins in the world of electronic information management. Presents the close relationship between traditional library cataloguing and the documentation of electronic data files (known as 'metadata'), showing that cataloguing is changing under the influence of information technology, but also that metadata provision is essentially an extension of traditional cataloguing processes
  15. Dempsey, L.; Heery, R.: Metadata: a current view of practice and issues (1998) 0.01
    0.011327437 = product of:
      0.06796462 = sum of:
        0.06796462 = weight(_text_:documentation in 2302) [ClassicSimilarity], result of:
          0.06796462 = score(doc=2302,freq=2.0), product of:
            0.1765992 = queryWeight, product of:
              4.354108 = idf(docFreq=1544, maxDocs=44218)
              0.040559217 = queryNorm
            0.38485238 = fieldWeight in 2302, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.354108 = idf(docFreq=1544, maxDocs=44218)
              0.0625 = fieldNorm(doc=2302)
      0.16666667 = coord(1/6)
    
    Source
    Journal of documentation. 54(1998) no.2, S.145-172
  16. Kirschenbaum, M.: Documenting digital images : textual meta-data at the Blake Archive (1998) 0.01
    0.009911507 = product of:
      0.059469044 = sum of:
        0.059469044 = weight(_text_:documentation in 3287) [ClassicSimilarity], result of:
          0.059469044 = score(doc=3287,freq=2.0), product of:
            0.1765992 = queryWeight, product of:
              4.354108 = idf(docFreq=1544, maxDocs=44218)
              0.040559217 = queryNorm
            0.33674583 = fieldWeight in 3287, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.354108 = idf(docFreq=1544, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3287)
      0.16666667 = coord(1/6)
    
    Abstract
     Describes the work undertaken by the William Blake Archive at the University of Virginia to document the metadata tools for handling digital images of the illustrations accompanying Blake's work. Images are encoded in both JPEG and TIFF formats. Image Documentation (ID) records are slotted into the portion of the JPEG file reserved for textual metadata. Because the textual content of the ID record becomes part of the image file itself, the documentary metadata travels with the image even if it is downloaded from one site to another. The metadata is invisible when viewing the image but becomes accessible to users via the 'info' button on the control panel of the Java applet.
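JPEG's comment segment (COM, marker 0xFFFE) is one portion of the file reserved for textual metadata; whether the Blake Archive used COM or an APPn segment is not stated here, so the following is a hedged sketch of embedding and recovering a textual ID record via COM markers:

```python
def insert_jpeg_comment(jpeg: bytes, text: str) -> bytes:
    """Insert a COM (0xFFFE) segment directly after the SOI marker."""
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG: missing SOI marker"
    payload = text.encode("utf-8")
    # The two length bytes count themselves plus the payload.
    segment = b"\xff\xfe" + (len(payload) + 2).to_bytes(2, "big") + payload
    return jpeg[:2] + segment + jpeg[2:]

def read_jpeg_comments(jpeg: bytes) -> list[str]:
    """Toy marker-segment walker collecting COM payloads. It does not
    handle entropy-coded scan data, so it is for illustration only."""
    comments, i = [], 2
    while i + 2 <= len(jpeg) and jpeg[i] == 0xFF:
        marker = jpeg[i + 1]
        if marker == 0xD9:  # EOI: end of image
            break
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        if marker == 0xFE:  # COM: comment segment
            comments.append(jpeg[i + 4:i + 2 + length].decode("utf-8"))
        i += 2 + length
    return comments

skeleton = b"\xff\xd8\xff\xd9"  # bare SOI + EOI "image" for demonstration
tagged = insert_jpeg_comment(skeleton, "ID record: Songs of Innocence, plate 3")
```

Because the comment rides inside the marker structure, image viewers ignore it while any tool that walks the segments can recover it, which matches the "metadata travels with the image" behaviour described above.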
  17. Dekkers, M.; Weibel, S.L.: State of the Dublin Core Metadata Initiative April 2003 (2003) 0.01
    Abstract
    The Dublin Core Metadata Initiative continues to grow in participation and recognition as the predominant resource discovery metadata standard on the Internet. With its approval as ISO 15836, DC is firmly established as a foundation block of modular, interoperable metadata for distributed resources. This report summarizes developments in DCMI over the past year, including the annual conference, progress of working groups, new developments in encoding methods, and advances in documentation and dissemination. New developments in broadening the community to commercial users of metadata are discussed, and plans for an international network of national affiliates are described.
  18. Hillmann, D.: Metadata quality : from evaluation to augmentation (2008) 0.01
    Abstract
    The conversation about metadata quality has developed slowly in libraries, hindered by unexamined assumptions about metadata carrying over from experience in the MARC environment. In the wider world, discussions about functionality must drive discussions about how quality might be determined and ensured. Because the quality-enforcing structures present in the MARC world (mature standards, common documentation, and bibliographic utilities) are lacking in the metadata world, metadata practitioners desiring to improve the quality of metadata used in their libraries must develop and proliferate their own processes of evaluation and transformation to support essential interoperability. In this article, the author endeavors to describe how those processes might be established and sustained to support metadata quality improvement.
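    The kind of home-grown evaluation process this abstract calls for can start very simply, for example by scoring records for completeness against a list of required elements. The element set and the review threshold below are illustrative assumptions, not a published quality profile.

    ```python
    # Minimal completeness check over Dublin Core-style records held as dicts.
    # REQUIRED and the 0.75 threshold are illustrative choices, not a standard.
    REQUIRED = ("title", "creator", "date", "identifier")

    def completeness(record: dict) -> float:
        """Fraction of required elements that are present and non-empty."""
        filled = sum(1 for el in REQUIRED if str(record.get(el, "")).strip())
        return filled / len(REQUIRED)

    def needs_review(records, threshold=0.75):
        """Return the records whose completeness falls below the threshold."""
        return [r for r in records if completeness(r) < threshold]
    ```

    A batch run of such a check is one concrete instance of the evaluation-then-transformation loop the article argues practitioners must build for themselves.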
  19. DC-2013: International Conference on Dublin Core and Metadata Applications : Online Proceedings (2013) 0.01
    Abstract
    The collocated conferences for DC-2013 and iPRES-2013 in Lisbon attracted 392 participants from over 37 countries. In addition to the Tuesday through Thursday conference days comprising peer-reviewed paper and special sessions, 223 participants attended pre-conference tutorials and 246 participated in post-conference workshops for the collocated events. The peer-reviewed papers and presentations are available on the conference website Presentation page (URLs above). In sum, it was a great conference. In addition to links to PDFs of papers, project reports and posters (and their associated presentations), the published proceedings include presentation PDFs for the following:
    KEYNOTES -- Gildas Illien: "Darling, we need to talk"
    TUTORIALS -- Ivan Herman: "Introduction to Linked Open Data (LOD)"; Steven Miller: "Introduction to Ontology Concepts and Terminology"; Kai Eckert: "Metadata Provenance"; Daniel Garijo: "The W3C Provenance Ontology"
    SPECIAL SESSIONS -- "Application Profiles as an Alternative to OWL Ontologies"; "Long-term Preservation and Governance of RDF Vocabularies (W3C Sponsored)"; "Data Enrichment and Transformation in the LOD Context: Poor & Popular vs Rich & Lonely -- Can't we achieve both?"; "Why Schema.org?"
  20. Hill, L.L.; Janée, G.; Dolin, R.; Frew, J.; Larsgaard, M.: Collection metadata solutions for digital library applications (1999) 0.01
    Abstract
    Within a digital library, collections may range from an ad hoc set of objects that serve a temporary purpose to established library collections intended to persist through time. The objects in these collections vary widely, from library and data center holdings to pointers to real-world objects, such as geographic places, and the various metadata schemes that describe them. The key to integrated use of such a variety of collections in a digital library is collection metadata that represents the inherent and contextual characteristics of a collection. The Alexandria Digital Library (ADL) Project has designed and implemented collection metadata for several purposes: in XML form, the collection metadata 'registers' the collection with the user interface client; in HTML form, it is used for user documentation; eventually, it will be used to describe the collection to network search agents; and it is used for internal collection management, including mapping the object metadata attributes to the common search parameters of the system
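    The XML 'registration' role this abstract describes can be sketched as a small collection-metadata record parsed with the standard library. The element and attribute names below are invented for illustration; they are not the actual ADL schema.

    ```python
    import xml.etree.ElementTree as ET

    # Hypothetical collection-metadata record; element names are illustrative,
    # not the Alexandria Digital Library's real vocabulary.
    RECORD = """\
    <collection id="adl-gazetteer">
      <title>Gazetteer of geographic places</title>
      <persistence>established</persistence>
      <searchParam objectAttribute="placeName" mapsTo="q.title"/>
      <searchParam objectAttribute="footprint" mapsTo="q.location"/>
    </collection>
    """

    def register(xml_text: str) -> dict:
        """Parse a record into the fields a UI client or search agent would need,
        including the object-attribute-to-search-parameter mapping."""
        root = ET.fromstring(xml_text)
        return {
            "id": root.get("id"),
            "title": root.findtext("title"),
            "persistence": root.findtext("persistence"),
            "mappings": {p.get("objectAttribute"): p.get("mapsTo")
                         for p in root.findall("searchParam")},
        }
    ```

    The `mappings` dict corresponds to the last purpose the abstract lists: translating a collection's own object metadata attributes into the system's common search parameters.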
