Search (327 results, page 16 of 17)

  • × language_ss:"e"
  • × theme_ss:"Metadaten"
  1. Roszkowski, M.; Lukas, C.: ¬A distributed architecture for resource discovery using metadata (1998) 0.00
    1.8604532E-4 = product of:
      0.0027906797 = sum of:
        0.0027906797 = product of:
          0.0055813594 = sum of:
            0.0055813594 = weight(_text_:information in 1256) [ClassicSimilarity], result of:
              0.0055813594 = score(doc=1256,freq=4.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.10971737 = fieldWeight in 1256, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1256)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
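    The relevance figures attached to each hit are Lucene ClassicSimilarity "explain" trees. As a minimal sketch, the first breakdown can be recomputed in Python: the constants are copied from the tree above, the tf and idf formulas are the standard ClassicSimilarity ones, and the two coord factors depend on the full query (not shown on this page), so they are taken from the tree as given.

```python
import math

# Recompute the ClassicSimilarity breakdown shown above for the term
# "information" in doc 1256. Constants come from the explain tree; the
# coord factors (1/2 and 1/15) depend on the full query, which this page
# does not show, so they are used as printed.
freq, doc_freq, max_docs = 4.0, 20772, 44218
query_norm = 0.028978055                          # queryNorm
field_norm = 0.03125                              # fieldNorm(doc=1256)

tf = math.sqrt(freq)                              # 2.0
idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # ~1.7554779

query_weight = idf * query_norm                   # ~0.050870337
field_weight = tf * idf * field_norm              # ~0.10971737
term_weight = query_weight * field_weight         # ~0.0055813594

score = term_weight * 0.5 * (1.0 / 15.0)          # coord(1/2) * coord(1/15)
print(f"{score:.7E}")                             # ~1.8604532E-04
```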
    
    Abstract
    This article describes an approach for linking geographically distributed collections of metadata so that they are searchable as a single collection. We describe the infrastructure, which uses standard Internet protocols such as the Lightweight Directory Access Protocol (LDAP) and the Common Indexing Protocol (CIP), to distribute queries, return results, and exchange index information. We discuss the advantages of using linked collections of authoritative metadata as an alternative to using a keyword indexing search-engine for resource discovery. We examine other architectures that use metadata for resource discovery, such as Dienst/NCSTRL, the AHDS HTTP/Z39.50 Gateway, and the ROADS initiative. Finally, we discuss research issues and future directions of the project. The Internet Scout Project, which is funded by the National Science Foundation and is located in the Computer Sciences Department at the University of Wisconsin-Madison, is charged with assisting the higher education community in resource discovery on the Internet. To that end, the Scout Report and subsequent subject-specific Scout Reports were developed to guide the U.S. higher education community to research-quality resources. The Scout Report Signpost utilizes the content from the Scout Reports as the basis of a metadata collection. Signpost consists of more than 2000 cataloged Internet sites using established standards such as Library of Congress subject headings and abbreviated call letters, and emerging standards such as the Dublin Core (DC). This searchable and browseable collection is free and freely accessible, as are all of the Internet Scout Project's services.
    As well developed as both the Scout Reports and Signpost are, they cannot capture the wealth of high-quality content that is available on the Internet. An obvious next step toward increasing the usefulness of our own collection and its value to our customer base is to partner with other high-quality content providers who have developed similar collections and to develop a single, virtual collection. Project Isaac (working title) is the Internet Scout Project's latest resource discovery effort. Project Isaac involves the development of a research testbed that allows experimentation with protocols and algorithms for creating, maintaining, indexing and searching distributed collections of metadata. Project Isaac's infrastructure uses standard Internet protocols, such as the Lightweight Directory Access Protocol (LDAP) and the Common Indexing Protocol (CIP) to distribute queries, return results, and exchange index or centroid information. The overall goal is to support a single-search interface to geographically distributed and independently maintained metadata collections.
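    The architectural idea described above, fanning a single query out to independently maintained metadata collections and merging the returned hits into one virtual result list, can be sketched briefly. The sketch below is not the Isaac implementation (which distributes queries over LDAP and exchanges index information via CIP); the collection search functions and record fields are hypothetical stand-ins.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for independently maintained metadata collections.
# In Project Isaac these would be LDAP/CIP services; here each collection
# is simply a callable returning (score, record) pairs for a query.
def search_signpost(query):
    return [(0.92, {"title": "Scout Report Signpost record", "source": "Signpost"})]

def search_partner(query):
    return [(0.81, {"title": "Partner collection record", "source": "Partner"})]

COLLECTIONS = [search_signpost, search_partner]

def distributed_search(query, top_k=10):
    """Distribute the query to every collection and merge results by score."""
    with ThreadPoolExecutor(max_workers=len(COLLECTIONS)) as pool:
        result_lists = pool.map(lambda search: search(query), COLLECTIONS)
    merged = [hit for hits in result_lists for hit in hits]
    merged.sort(key=lambda pair: pair[0], reverse=True)
    return merged[:top_k]

print(distributed_search("metadata"))
```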
  2. Miller, S.J.: Metadata for digital collections : a how-to-do-it manual (2011) 0.00
    1.8604532E-4 = product of:
      0.0027906797 = sum of:
        0.0027906797 = product of:
          0.0055813594 = sum of:
            0.0055813594 = weight(_text_:information in 4911) [ClassicSimilarity], result of:
              0.0055813594 = score(doc=4911,freq=4.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.10971737 = fieldWeight in 4911, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4911)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    LCSH
    Cataloging of electronic information resources / Standards
    Subject
    Cataloging of electronic information resources / Standards
  3. Howarth, L.C.: Designing a "Human Understandable" metalevel ontology for enhancing resource discovery in knowledge bases (2000) 0.00
    1.6444239E-4 = product of:
      0.0024666358 = sum of:
        0.0024666358 = product of:
          0.0049332716 = sum of:
            0.0049332716 = weight(_text_:information in 114) [ClassicSimilarity], result of:
              0.0049332716 = score(doc=114,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.09697737 = fieldWeight in 114, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=114)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    With the explosion of digitized resources accessible via networked information systems, and the corresponding proliferation of general purpose and domain-specific schemes, metadata have assumed a special prominence. While recent work emanating from the World Wide Web Consortium (W3C) has focused on the Resource Description Framework (RDF) to support the interoperability of metadata standards - thus converting metatags from diverse domains from merely "machine-readable" to "machine-understandable" - the next iteration, to "human-understandable," remains a challenge. This apparent gap provides a framework for three-phase research (Howarth, 1999) to develop a tool which will provide a "human-understandable" front-end search assist to any XML-compliant metadata scheme. Findings from phase one, the analyses and mapping of seven metadata schemes, identify the particular challenges of designing a common "namespace", populated with element tags which are appropriately descriptive, yet readily understood by a lay searcher, when there is little congruence within, and a high degree of variability across, the metadata schemes under study. Implications for the subsequent design and testing of both the proposed "metalevel ontology" (phase two) and the prototype search assist tool (phase three) are examined.
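    The phase-one mapping exercise, relating element tags from several schemes to a plain-language "metalevel" term a lay searcher can understand, can be pictured as a simple crosswalk. The labels and element mappings below are invented for illustration and are not Howarth's actual namespace.

```python
# Illustrative crosswalk from lay-searcher questions ("metalevel" terms) to
# element tags in three metadata schemes. Labels and mappings are invented
# for illustration only.
METALEVEL = {
    "Who made it?": {"dc": "creator", "marc": "100", "ead": "origination"},
    "What is it about?": {"dc": "subject", "marc": "650", "ead": "controlaccess"},
    "When was it made?": {"dc": "date", "marc": "260$c", "ead": "unitdate"},
}

def scheme_element(question, scheme):
    """Translate a plain-language question into a scheme-specific element tag."""
    return METALEVEL[question].get(scheme)

print(scheme_element("Who made it?", "marc"))   # -> '100'
```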
  4. Tennis, J.T.: Data collection for controlled vocabulary interoperability : Dublin core audience element (2003) 0.00
    1.6444239E-4 = product of:
      0.0024666358 = sum of:
        0.0024666358 = product of:
          0.0049332716 = sum of:
            0.0049332716 = weight(_text_:information in 1253) [ClassicSimilarity], result of:
              0.0049332716 = score(doc=1253,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.09697737 = fieldWeight in 1253, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1253)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Source
    Bulletin of the American Society for Information Science. 29(2003) no.2, S.20-23
  5. Chilvers, A.: ¬The super-metadata framework for managing long-term access to digital data objects : a possible way forward with specific reference to the UK (2002) 0.00
    1.6444239E-4 = product of:
      0.0024666358 = sum of:
        0.0024666358 = product of:
          0.0049332716 = sum of:
            0.0049332716 = weight(_text_:information in 4468) [ClassicSimilarity], result of:
              0.0049332716 = score(doc=4468,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.09697737 = fieldWeight in 4468, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4468)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    This paper examines the reasons why existing management practices designed to cope with paper-based data objects appear to be inadequate for managing digital data objects (DDOs). The research described suggests the need for a reassessment of the way we view long-term access to DDOs. There is a need for a shift in emphasis which embraces the fluid nature of such objects and addresses the multifaceted issues involved in achieving such access. It would appear from the findings of this research that a conceptual framework needs to be developed which addresses a range of elements. The research achieved this by examining the issues facing stakeholders involved in this field; examining the need for and structure of a new generic conceptual framework, the super-metadata framework; identifying and discussing the issues central to the development of such a framework; and justifying the feasibility through the creation of an interactive cost model and stakeholder evaluation. The wider conceptual justification for such a framework is discussed and this involves an examination of the "public good" argument for the long-term retention of DDOs and the importance of selection in the management process. The paper concludes by considering the benefits to practitioners and the role they might play in testing the feasibility of such a framework. The paper also suggests possible avenues researchers may wish to consider to develop further the management of this field. (Note: This paper is derived from the author's Loughborough University PhD thesis, "Managing long-term access to digital data objects: a metadata approach", written while holding a research studentship funded by the Department of Information Science.)
  6. Dawson, A.; Hamilton, V.: Optimising metadata to make high-value content more accessible to Google users (2006) 0.00
    1.6444239E-4 = product of:
      0.0024666358 = sum of:
        0.0024666358 = product of:
          0.0049332716 = sum of:
            0.0049332716 = weight(_text_:information in 5598) [ClassicSimilarity], result of:
              0.0049332716 = score(doc=5598,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.09697737 = fieldWeight in 5598, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5598)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    Purpose - This paper aims to show how information in digital collections that have been catalogued using high-quality metadata can be retrieved more easily by users of search engines such as Google.
    Design/methodology/approach - The research and proposals described arose from an investigation into the observed phenomenon that pages from the Glasgow Digital Library (gdl.cdlr.strath.ac.uk) were regularly appearing near the top of Google search results shortly after publication, without any deliberate effort to achieve this. The reasons for this phenomenon are now well understood and are described in the second part of the paper. The first part provides context with a review of the impact of Google and a summary of recent initiatives by commercial publishers to make their content more visible to search engines.
    Findings - The literature research provides firm evidence of a trend amongst publishers to ensure that their online content is indexed by Google, in recognition of its popularity with internet users. The practical research demonstrates how search engine accessibility can be compatible with use of established collection management principles and high-quality metadata.
    Originality/value - The concept of data shoogling is introduced, involving some simple techniques for metadata optimisation. Details of its practical application are given, to illustrate how those working in academic, cultural and public-sector organisations could make their digital collections more easily accessible via search engines, without compromising any existing standards and practices.
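    One optimisation in this spirit is simply to surface catalogue metadata where crawlers already look: in the title and meta elements of static HTML pages. The snippet below is a generic illustration under assumed field names, not the Glasgow Digital Library's actual publishing workflow.

```python
from html import escape

# Generic illustration: render one catalogue record as crawler-visible
# HTML head elements (page title, description, Dublin Core meta tags).
# The record values and field names are assumptions for illustration.
record = {
    "title": "Example digitised collection item",
    "creator": "Example Digital Library",
    "description": "A collection item described with high-quality metadata.",
}

head_lines = [
    f"<title>{escape(record['title'])}</title>",
    f'<meta name="description" content="{escape(record["description"])}">',
    f'<meta name="DC.title" content="{escape(record["title"])}">',
    f'<meta name="DC.creator" content="{escape(record["creator"])}">',
]
print("\n".join(head_lines))
```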
  7. Hawking, D.; Zobel, J.: Does topic metadata help with Web search? (2007) 0.00
    1.6444239E-4 = product of:
      0.0024666358 = sum of:
        0.0024666358 = product of:
          0.0049332716 = sum of:
            0.0049332716 = weight(_text_:information in 204) [ClassicSimilarity], result of:
              0.0049332716 = score(doc=204,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.09697737 = fieldWeight in 204, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=204)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.5, S.613-628
  8. Bearman, D.; Miller, E.; Rust, G.; Trant, J.; Weibel, S.: ¬A common model to support interoperable metadata : progress report on reconciling metadata requirements from the Dublin Core and INDECS/DOI communities (1999) 0.00
    1.6444239E-4 = product of:
      0.0024666358 = sum of:
        0.0024666358 = product of:
          0.0049332716 = sum of:
            0.0049332716 = weight(_text_:information in 1249) [ClassicSimilarity], result of:
              0.0049332716 = score(doc=1249,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.09697737 = fieldWeight in 1249, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1249)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    The Dublin Core metadata community and the INDECS/DOI community of authors, rights holders, and publishers are seeking common ground in the expression of metadata for information resources. Recent meetings at the 6th Dublin Core Workshop in Washington DC sketched out common models for semantics (informed by the requirements articulated in the IFLA Functional Requirements for the Bibliographic Record) and conventions for knowledge representation (based on the Resource Description Framework under development by the W3C). Further development of detailed requirements is planned by both communities in the coming months with the aim of fully representing the metadata needs of each. An open "Schema Harmonization" working group has been established to identify a common framework to support interoperability among these communities. The present document represents a starting point identifying historical developments and common requirements of these perspectives on metadata and charts a path for harmonizing their respective conceptual models. It is hoped that collaboration over the coming year will result in agreed semantic and syntactic conventions that will support a high degree of interoperability among these communities, ideally expressed in a single data model and using common, standard tools.
  9. Nichols, D.M.; Paynter, G.W.; Chan, C.-H.; Bainbridge, D.; McKay, D.; Twidale, M.B.; Blandford, A.: Experiences in deploying metadata analysis tools for institutional repositories (2009) 0.00
    1.6444239E-4 = product of:
      0.0024666358 = sum of:
        0.0024666358 = product of:
          0.0049332716 = sum of:
            0.0049332716 = weight(_text_:information in 2986) [ClassicSimilarity], result of:
              0.0049332716 = score(doc=2986,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.09697737 = fieldWeight in 2986, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2986)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    Current institutional repository software provides few tools to help metadata librarians understand and analyse their collections. In this paper, we compare and contrast metadata analysis tools that were developed simultaneously, but independently, at two New Zealand institutions during a period of national investment in research repositories: the Metadata Analysis Tool (MAT) at The University of Waikato, and the Kiwi Research Information Service (KRIS) at the National Library of New Zealand. The tools have many similarities: they are convenient, online, on-demand services that harvest metadata using OAI-PMH, they were developed in response to feedback from repository administrators, and they both help pinpoint specific metadata errors as well as generate summary statistics. They also have significant differences: one is a dedicated tool while the other is part of a wider access tool; one gives a holistic view of the metadata while the other looks for specific problems; one seeks patterns in the data values while the other checks that those values conform to metadata standards. Both tools work in a complementary manner to existing web-based administration tools. We have observed that discovery and correction of metadata errors can be quickly achieved by switching web browser views from the analysis tool to the repository interface, and back. We summarise the findings from both tools' deployment into a checklist of requirements for metadata analysis tools.
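    The shared behaviour of both tools, pinpointing specific metadata errors in individual records and rolling them up into summary statistics, can be sketched in a few lines. The Dublin Core field names and the two example rules below are assumptions for illustration, not the MAT or KRIS rule sets.

```python
import re
from collections import Counter

# Sketch of a record-level metadata check with a collection-level summary.
# Required fields and the date rule are illustrative, not MAT/KRIS rules.
REQUIRED = ("title", "creator", "date")
ISO_DATE = re.compile(r"^\d{4}(-\d{2}(-\d{2})?)?$")

def check_record(record):
    errors = [f"missing {field}" for field in REQUIRED if not record.get(field)]
    if record.get("date") and not ISO_DATE.match(record["date"]):
        errors.append("date not ISO 8601")
    return errors

records = [
    {"title": "Thesis A", "creator": "Smith, J.", "date": "2009-03"},
    {"title": "Thesis B", "date": "03/2009"},   # missing creator, bad date
]

summary = Counter()
for i, rec in enumerate(records):
    for err in check_record(rec):
        print(f"record {i}: {err}")   # pinpoint the specific error
        summary[err] += 1
print(dict(summary))                  # summary statistics across the set
```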
  10. Syn, S.Y.; Spring, M.B.: Finding subject terms for classificatory metadata from user-generated social tags (2013) 0.00
    1.6444239E-4 = product of:
      0.0024666358 = sum of:
        0.0024666358 = product of:
          0.0049332716 = sum of:
            0.0049332716 = weight(_text_:information in 745) [ClassicSimilarity], result of:
              0.0049332716 = score(doc=745,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.09697737 = fieldWeight in 745, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=745)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.5, S.964-980
  11. Rousidis, D.; Garoufallou, E.; Balatsoukas, P.; Sicilia, M.-A.: Evaluation of metadata in research data repositories : the case of the DC.Subject Element (2015) 0.00
    1.6444239E-4 = product of:
      0.0024666358 = sum of:
        0.0024666358 = product of:
          0.0049332716 = sum of:
            0.0049332716 = weight(_text_:information in 2392) [ClassicSimilarity], result of:
              0.0049332716 = score(doc=2392,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.09697737 = fieldWeight in 2392, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2392)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Series
    Communications in computer and information science; 544
  12. Strobel, S.; Marín-Arraiza, P.: Metadata for scientific audiovisual media : current practices and perspectives of the TIB / AV-portal (2015) 0.00
    1.6444239E-4 = product of:
      0.0024666358 = sum of:
        0.0024666358 = product of:
          0.0049332716 = sum of:
            0.0049332716 = weight(_text_:information in 3667) [ClassicSimilarity], result of:
              0.0049332716 = score(doc=3667,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.09697737 = fieldWeight in 3667, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3667)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Series
    Communications in computer and information science; 544
  13. Neumann, M.; Steinberg, J.; Schaer, P.: Web scraping for non-programmers : introducing OXPath for digital library metadata harvesting (2017) 0.00
    1.6444239E-4 = product of:
      0.0024666358 = sum of:
        0.0024666358 = product of:
          0.0049332716 = sum of:
            0.0049332716 = weight(_text_:information in 3895) [ClassicSimilarity], result of:
              0.0049332716 = score(doc=3895,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.09697737 = fieldWeight in 3895, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3895)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    Building up new collections for digital libraries is a demanding task. Available data sets have to be extracted, which is usually done with the help of software developers, as it involves custom data handlers or conversion scripts. In cases where the desired data is only available on the data provider's website, custom web scrapers are needed. This may be the case for small to medium-size publishers, research institutes or funding agencies. As data curation is a typical task that is done by people with a library and information science background, these people are usually proficient with XML technologies but are not full-stack programmers. Therefore, we would like to present a web scraping tool that does not require digital library curators to program custom web scrapers from scratch. We present the open-source tool OXPath, an extension of XPath, that allows the user to define data to be extracted from websites in a declarative way. By taking one of our own use cases as an example, we guide you in more detail through the process of creating an OXPath wrapper for metadata harvesting. We also point out some practical things to consider when creating a web scraper (with OXPath). On top of that, we also present a syntax highlighting plugin for the popular text editor Atom that we developed to further support OXPath users and to simplify the authoring process.
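    OXPath extends XPath with navigation and extraction markers, and its syntax is not reproduced here. As a rough analogue under stated assumptions (a made-up publisher page layout and the third-party lxml package rather than OXPath), plain XPath in Python conveys the same declarative idea of pointing at nodes and naming the fields to extract.

```python
from lxml import html   # third-party package: pip install lxml

# Rough analogue of declarative metadata extraction. The page layout is
# made up for illustration, and this uses plain XPath via lxml, not OXPath.
page = """
<html><body>
  <div class="pub"><h2>Sample report</h2><span class="year">2016</span></div>
  <div class="pub"><h2>Second report</h2><span class="year">2017</span></div>
</body></html>
"""

tree = html.fromstring(page)
records = [
    {"title": pub.xpath("./h2/text()")[0],
     "year": pub.xpath('./span[@class="year"]/text()')[0]}
    for pub in tree.xpath('//div[@class="pub"]')
]
print(records)
```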
  14. Çelebi, A.; Özgür, A.: Segmenting hashtags and analyzing their grammatical structure (2018) 0.00
    1.6444239E-4 = product of:
      0.0024666358 = sum of:
        0.0024666358 = product of:
          0.0049332716 = sum of:
            0.0049332716 = weight(_text_:information in 4221) [ClassicSimilarity], result of:
              0.0049332716 = score(doc=4221,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.09697737 = fieldWeight in 4221, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4221)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Source
    Journal of the Association for Information Science and Technology. 69(2018) no.5, S.675-686
  15. Hansson, K.; Dahlgren, A.: Open research data repositories : practices, norms, and metadata for sharing images (2022) 0.00
    1.6444239E-4 = product of:
      0.0024666358 = sum of:
        0.0024666358 = product of:
          0.0049332716 = sum of:
            0.0049332716 = weight(_text_:information in 472) [ClassicSimilarity], result of:
              0.0049332716 = score(doc=472,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.09697737 = fieldWeight in 472, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=472)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Source
    Journal of the Association for Information Science and Technology. 73(2022) no.2, S.303-316
  16. Lee, S.: Pidgin metadata framework as a mediator for metadata interoperability (2021) 0.00
    1.6444239E-4 = product of:
      0.0024666358 = sum of:
        0.0024666358 = product of:
          0.0049332716 = sum of:
            0.0049332716 = weight(_text_:information in 654) [ClassicSimilarity], result of:
              0.0049332716 = score(doc=654,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.09697737 = fieldWeight in 654, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=654)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    A pidgin metadata framework based on the concept of pidgin metadata is proposed to complement the limitations of existing approaches to metadata interoperability and to achieve more reliable metadata interoperability. The framework consists of three layers, with a hierarchical structure, and reflects the semantic and structural characteristics of various metadata. Layer 1 performs both an external function, serving as an anchor for semantic association between metadata elements, and an internal function, providing semantic categories that can encompass detailed elements. Layer 2 is an arbitrary layer composed of substantial elements from existing metadata and performs a function in which different metadata elements describing the same or similar aspects of information resources are associated with the semantic categories of Layer 1. Layer 3 implements the semantic relationships between Layer 1 and Layer 2 through the Resource Description Framework syntax. With this structure, the pidgin metadata framework can establish the criteria for semantic connection between different elements and fully reflect the complexity and heterogeneity among various metadata. Additionally, it is expected to provide a bibliographic environment that can achieve more reliable metadata interoperability than existing approaches by securing the communication between metadata.
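    Layer 3's job, stating in RDF that a concrete element from an existing scheme (Layer 2) belongs to a broad semantic category (Layer 1), can be sketched with rdflib. The namespace, category name, and linking property below are invented for illustration; the abstract does not give the framework's actual vocabulary.

```python
from rdflib import Graph, Namespace, RDF, RDFS, URIRef

# Illustrative sketch of the three-layer idea in RDF. The pmf: namespace,
# the category, and the linking property are invented, not the paper's terms.
PMF = Namespace("http://example.org/pidgin-metadata#")   # hypothetical
DCTERMS = Namespace("http://purl.org/dc/terms/")

g = Graph()
g.bind("pmf", PMF)

# Layer 1: a broad semantic category that acts as the anchor.
g.add((PMF.AgentCategory, RDF.type, RDFS.Class))

# Layer 2: concrete elements drawn from existing metadata schemes.
# Layer 3: RDF statements tying each element to its Layer 1 category.
for element in (DCTERMS.creator, URIRef("http://example.org/marc#field100")):
    g.add((element, PMF.belongsToCategory, PMF.AgentCategory))

print(g.serialize(format="turtle"))
```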
  17. Qin, C.; Liu, Y.; Ma, X.; Chen, J.; Liang, H.: Designing for serendipity in online knowledge communities : an investigation of tag presentation formats and openness to experience (2022) 0.00
    1.6444239E-4 = product of:
      0.0024666358 = sum of:
        0.0024666358 = product of:
          0.0049332716 = sum of:
            0.0049332716 = weight(_text_:information in 664) [ClassicSimilarity], result of:
              0.0049332716 = score(doc=664,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.09697737 = fieldWeight in 664, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=664)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Source
    Journal of the Association for Information Science and Technology. 73(2022) no.10, S.1401-1417
  18. Laparra, E.; Binford-Walsh, A.; Emerson, K.; Miller, M.L.; López-Hoffman, L.; Currim, F.; Bethard, S.: Addressing structural hurdles for metadata extraction from environmental impact statements (2023) 0.00
    1.6444239E-4 = product of:
      0.0024666358 = sum of:
        0.0024666358 = product of:
          0.0049332716 = sum of:
            0.0049332716 = weight(_text_:information in 1042) [ClassicSimilarity], result of:
              0.0049332716 = score(doc=1042,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.09697737 = fieldWeight in 1042, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1042)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.9, S.1124-1139
  19. Heery, R.; Wagner, H.: ¬A metadata registry for the Semantic Web (2002) 0.00
    1.6278966E-4 = product of:
      0.0024418447 = sum of:
        0.0024418447 = product of:
          0.0048836893 = sum of:
            0.0048836893 = weight(_text_:information in 1210) [ClassicSimilarity], result of:
              0.0048836893 = score(doc=1210,freq=4.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.0960027 = fieldWeight in 1210, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1210)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    * Agencies maintaining directories of data elements in a domain area in accordance with ISO/IEC 11179 (This standard specifies good practice for data element definition as well as the registration process. Example implementations are the National Health Information Knowledgebase hosted by the Australian Institute of Health and Welfare and the Environmental Data Registry hosted by the US Environmental Protection Agency.);
    * The xml.org directory of Extensible Markup Language (XML) document specifications facilitating re-use of Document Type Definitions (DTDs), hosted by the Organization for the Advancement of Structured Information Standards (OASIS);
    * The MetaForm database of Dublin Core usage and mappings maintained at the State and University Library in Goettingen;
    * The Semantic Web Agreement Group Dictionary, a database of terms for the Semantic Web that can be referred to by humans and software agents;
    * LEXML, a multi-lingual and multi-jurisdictional RDF Dictionary for the legal world;
    * The SCHEMAS registry maintained by the European Commission funded SCHEMAS project, which indexes several metadata element sets as well as a large number of activity reports describing metadata related activities and initiatives.
    Metadata registries essentially provide an index of terms. Given the distributed nature of the Web, there are a number of ways this can be accomplished. For example, the registry could link to terms and definitions in schemas published by implementers and stored locally by the schema maintainer. Alternatively, the registry might harvest various metadata schemas from their maintainers. Registries provide 'added value' to users by indexing schemas relevant to a particular 'domain' or 'community of use' and by simplifying the navigation of terms by enabling multiple schemas to be accessed from one view. An important benefit of this approach is an increase in the reuse of existing terms, rather than users having to reinvent them. Merging schemas to one view leads to harmonization between applications and helps avoid duplication of effort. Additionally, the establishment of registries to index terms actively being used in local implementations facilitates the metadata standards activity by providing implementation experience transferable to the standards-making process.
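    The harvesting approach mentioned above, pulling a published schema from its maintainer and indexing its terms for one registry view, can be sketched with rdflib. The assumption that the Dublin Core terms namespace resolves to parseable RDF is just that, an assumption; a locally stored schema file would be handled the same way.

```python
from rdflib import Graph, RDF, RDFS

# Sketch of the harvesting step: load a published RDF schema and build a
# simple term index for a registry view. The namespace URL is assumed to
# serve parseable RDF; parsing a locally stored copy works identically.
g = Graph()
g.parse("http://purl.org/dc/terms/", format="xml")

index = {}
for term in g.subjects(RDF.type, RDF.Property):
    index[str(term)] = str(g.value(term, RDFS.label))

for uri, label in sorted(index.items()):
    print(uri, "-", label)
```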
  20. Bazillion, R.J.; Caplan, P.: Metadata fundamentals for all librarians (2003) 0.00
    1.3155391E-4 = product of:
      0.0019733086 = sum of:
        0.0019733086 = product of:
          0.0039466172 = sum of:
            0.0039466172 = weight(_text_:information in 3197) [ClassicSimilarity], result of:
              0.0039466172 = score(doc=3197,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.0775819 = fieldWeight in 3197, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3197)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Footnote
    Rez.: JASIST 56(2005) no.13, S.1264 (W. Koehler: "Priscilla Caplan provides us with a sweeping but very welcome survey of the various approaches to metadata in practice or proposed in libraries and archives today. One of the key strengths of the book and paradoxically one of its key weaknesses is that the work is descriptive in nature. While relationships between one system and another may be noted, no general conclusions of a practical or theoretical nature are drawn about the relative merits of one metadata or metametadata scheme as against another. That said, let us remember that this is an American Library Association publication, published as a descriptive resource. Caplan does very well what she sets out to do. The work is divided into two parts: "Principles and Practice" and "Metadata Schemes," and is further subdivided into eighteen chapters. The book begins with short yet more than adequate chapters defining terms, vocabularies, and concepts. It discusses interoperability and the various levels of quality among systems. Perhaps Chapter 5, "Metadata and the Web," is the weakest chapter of the work. There is a brief discussion of how search engines work and some of the more recent initiatives (e.g., the Semantic Web) to develop better retrieval agents. The chapter is weak not in its description but in what it fails to discuss. The second section, "Metadata Schemes," which encompasses chapters six through eighteen, is particularly rich. Thirteen different metadata or metametadata schemas are described to provide the interested librarian with a better than adequate introduction to the purpose, application, and operability of each metadata scheme. These are: library cataloging (chiefly MARC), TEI, Dublin Core, Archival Description and EAD, Art and Architecture, GILS, Education, ONIX, Geospatial, Data Documentation Initiative, Administrative Metadata, Structural Metadata, and Rights Metadata. The last three chapters introduce concepts heretofore "foreign" to the realm of the catalog or metadata. "Descriptive metadata was . . . intended to help in finding, discovering, and identifying an information resource." (p. 151) Administrative metadata is an aid to ". . . the owners or caretakers of the resource." Structural metadata describe the relationships of data elements. Rights metadata describe (or, as Caplan points out, may describe, as the definition is still ambiguous) end user rights to use and reproduce material in digital format. Keeping in mind that the work is intended for the general practitioner librarian, the book has a particularly useful glossary and index. Caplan also provides useful suggestions for additional reading at the end of each chapter. I intend to adopt Metadata Fundamentals for All Librarians when next I teach a digital cataloging course. Caplan's book provides an excellent introduction to the basic concepts. It is, however, not a "cookbook" nor a guidebook into the complexities of the application of any metadata scheme.")

Types

  • a 287
  • el 35
  • m 21
  • s 14
  • n 4
  • b 2
  • x 1
