Search (386 results, page 1 of 20)

  • type_ss:"el"
  1. Payette, S.; Blanchi, C.; Lagoze, C.; Overly, E.A.: Interoperability for digital objects and repositories : the Cornell/CNRI experiments (1999) 0.10
    0.10396706 = sum of:
      0.06289634 = product of:
        0.18868901 = sum of:
          0.18868901 = weight(_text_:objects in 1248) [ClassicSimilarity], result of:
            0.18868901 = score(doc=1248,freq=12.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.5753733 = fieldWeight in 1248, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.03125 = fieldNorm(doc=1248)
        0.33333334 = coord(1/3)
      0.041070722 = product of:
        0.082141444 = sum of:
          0.082141444 = weight(_text_:work in 1248) [ClassicSimilarity], result of:
            0.082141444 = score(doc=1248,freq=10.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.3627123 = fieldWeight in 1248, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.03125 = fieldNorm(doc=1248)
        0.5 = coord(1/2)
    
    Abstract
    For several years the Digital Library Research Group at Cornell University and the Corporation for National Research Initiatives (CNRI) have been engaged in research focused on the design and development of infrastructures for open architecture, confederated digital libraries. The goal of this effort is to achieve interoperability and extensibility of digital library systems through the definition of key digital library services and their open interfaces, allowing flexible interaction of existing services and augmentation of the infrastructure with new services. Some aspects of this research have included the development and deployment of the Dienst software, the Handle System®, and the architecture of digital objects and repositories. In this paper, we describe the joint effort by Cornell and CNRI to prototype a rich and deployable architecture for interoperable digital objects and repositories. This effort has challenged us to move theories of interoperability closer to practice. The Cornell/CNRI collaboration builds on two existing projects focusing on the development of interoperable digital libraries. Details relating to the technology of these projects are described elsewhere. Both projects were strongly influenced by the fundamental abstractions of repositories and digital objects as articulated by Kahn and Wilensky in A Framework for Distributed Digital Object Services. Furthermore, both programs were influenced by the container architecture described in the Warwick Framework, and by the notions of distributed dynamic objects presented by Lagoze and Daniel in their Distributed Active Relationship work. With these common roots, one would expect that the CNRI and Cornell repositories would be at least theoretically interoperable. However, the actual test would be the extent to which our independently developed repositories were practically interoperable. This paper focuses on the definition of interoperability in the joint Cornell/CNRI work and the set of experiments conducted to formally test it. Our motivation for this work is the eventual deployment of formally tested reference implementations of the repository architecture for experimentation and development by fellow digital library researchers. In Section 2, we summarize the digital object and repository approach that was the focus of our interoperability experiments. In Section 3, we describe the set of experiments that progressively tested interoperability at increasing levels of functionality. In Section 4, we discuss general conclusions, and in Section 5, we give a preview of our future work, including our plans to evolve our experimentation to the point of defining a set of formal metrics for measuring interoperability for repositories and digital objects. This is still a work in progress that is expected to undergo additional refinements during its development.
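
    The ranking values above follow Lucene's ClassicSimilarity: tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and each clause's score (queryWeight * fieldWeight) is scaled by its coord factor. A minimal Python sketch, using only the constants printed in the explain tree for result 1:

        import math

        # ClassicSimilarity pieces, exactly as shown in the explain tree.
        QUERY_NORM = 0.061700378

        def clause_score(freq, idf, field_norm, coord):
            tf = math.sqrt(freq)                    # 3.4641016 for freq=12
            query_weight = idf * QUERY_NORM         # e.g. 0.3279419
            field_weight = tf * idf * field_norm    # e.g. 0.5753733
            return query_weight * field_weight * coord

        objects = clause_score(12.0, 5.315071, 0.03125, 1 / 3)   # _text_:objects
        work = clause_score(10.0, 3.6703904, 0.03125, 1 / 2)     # _text_:work
        print(round(objects + work, 8))                          # 0.10396706

    The printed sum matches the 0.10396706 listed for result 1; the other explain trees decompose the same way.
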
  2. Arms, W.Y.; Blanchi, C.; Overly, E.A.: An architecture for information in digital libraries (1997) 0.10
    0.095691055 = sum of:
      0.06354813 = product of:
        0.1906444 = sum of:
          0.1906444 = weight(_text_:objects in 1260) [ClassicSimilarity], result of:
            0.1906444 = score(doc=1260,freq=16.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.5813359 = fieldWeight in 1260, product of:
                4.0 = tf(freq=16.0), with freq of:
                  16.0 = termFreq=16.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1260)
        0.33333334 = coord(1/3)
      0.032142926 = product of:
        0.06428585 = sum of:
          0.06428585 = weight(_text_:work in 1260) [ClassicSimilarity], result of:
            0.06428585 = score(doc=1260,freq=8.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.28386727 = fieldWeight in 1260, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1260)
        0.5 = coord(1/2)
    
    Abstract
    Flexible organization of information is one of the key design challenges in any digital library. For the past year, we have been working with members of the National Digital Library Project (NDLP) at the Library of Congress to build an experimental system to organize and store library collections. This is a report on the work. In particular, we describe how a few technical building blocks are used to organize the material in collections, such as the NDLP's, and how these methods fit into a general distributed computing framework. The technical building blocks are part of a framework that evolved as part of the Computer Science Technical Reports Project (CSTR). This framework is described in the paper, "A Framework for Distributed Digital Object Services", by Robert Kahn and Robert Wilensky (1995). The main building blocks are: "digital objects", which are used to manage digital material in a networked environment; "handles", which identify digital objects and other network resources; and "repositories", in which digital objects are stored. These concepts are amplified in "Key Concepts in the Architecture of the Digital Library", by William Y. Arms (1995). In summer 1995, after earlier experimental development, work began on the implementation of a full digital library system based on this framework. In addition to Kahn/Wilensky and Arms, several working papers further elaborate on the design concepts. A paper by Carl Lagoze and David Ely, "Implementation Issues in an Open Architectural Framework for Digital Object Services", delves into some of the repository concepts. The initial repository implementation was based on a paper by Carl Lagoze, Robert McGrath, Ed Overly and Nancy Yeager, "A Design for Inter-Operable Secure Object Stores (ISOS)". Work on the handle system, which began in 1992, is described in a series of papers that can be found on the Handle Home Page. The National Digital Library Program (NDLP) at the Library of Congress is a large scale project to convert historic collections to digital form and make them widely available over the Internet. The program is described in two articles by Caroline R. Arms, "Historical Collections for the National Digital Library". The NDLP itself draws on experience gained through the earlier American Memory Program. Based on this work, we have built a pilot system that demonstrates how digital objects can be used to organize complex materials, such as those found in the NDLP. The pilot was demonstrated to members of the library in July 1996. The pilot system includes the handle system for identifying digital objects, a pilot repository to store them, and two user interfaces: one designed for librarians to manage digital objects in the repository, the other for library patrons to access the materials stored in the repository. Materials from the NDLP's Coolidge Consumerism compilation have been deposited into the pilot repository. They include a variety of photographs and texts, converted to digital form. The pilot demonstrates the use of handles for identifying such material, the use of meta-objects for managing sets of digital objects, and the choice of metadata. We are now implementing an enhanced prototype system for completion in early 1997.
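
    As an aside on the handle system: handles of this kind can be resolved programmatically through the public hdl.handle.net proxy, which exposes a REST interface returning JSON. A short sketch (the handle value is illustrative, not one from the pilot repository):

        import requests

        # Ask the Handle.Net proxy for the values registered under a handle
        # and print any URL entries (i.e., where the object currently lives).
        handle = "10.1045/july97-arms"   # example handle, for illustration
        resp = requests.get(f"https://hdl.handle.net/api/handles/{handle}", timeout=10)
        for value in resp.json().get("values", []):
            if value.get("type") == "URL":
                print(value["data"]["value"])
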
  3. O'Neill, E.T.: The FRBRization of Humphry Clinker : a case study in the application of IFLA's Functional Requirements for Bibliographic Records (FRBR) (2002) 0.09
    0.09343294 = sum of:
      0.05446983 = product of:
        0.16340949 = sum of:
          0.16340949 = weight(_text_:objects in 2433) [ClassicSimilarity], result of:
            0.16340949 = score(doc=2433,freq=4.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.49828792 = fieldWeight in 2433, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.046875 = fieldNorm(doc=2433)
        0.33333334 = coord(1/3)
      0.03896311 = product of:
        0.07792622 = sum of:
          0.07792622 = weight(_text_:work in 2433) [ClassicSimilarity], result of:
            0.07792622 = score(doc=2433,freq=4.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.3440991 = fieldWeight in 2433, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.046875 = fieldNorm(doc=2433)
        0.5 = coord(1/2)
    
    Abstract
    The goal of OCLC's FRBR projects is to examine issues associated with converting a set of bibliographic records to conform to FRBR requirements (a process referred to as "FRBRization"). Using an exemplary work as a case study, lead scientist Ed O'Neill sought to: examine the issues associated with creating an entity-relationship model for (i.e., "FRBRizing") a non-trivial work; better understand the relationship between bibliographic records and the bibliographic objects they represent; determine whether the information available in the bibliographic record is sufficient to reliably identify the FRBR entities; and develop a data set that could be used to evaluate FRBRization algorithms.
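
    For orientation, the FRBR Group 1 entities behind this modeling can be sketched as a hypothetical data structure (class and field names are illustrative, not O'Neill's):

        from dataclasses import dataclass, field
        from typing import List

        # FRBR Group 1: a Work is realized through Expressions, which are
        # embodied in Manifestations (Items, the physical copies, omitted here).
        @dataclass
        class Manifestation:
            record_id: str              # bibliographic record it derives from
            edition_statement: str

        @dataclass
        class Expression:
            language: str
            form: str                   # e.g. "text", "spoken word"
            manifestations: List[Manifestation] = field(default_factory=list)

        @dataclass
        class Work:
            title: str
            expressions: List[Expression] = field(default_factory=list)

        # FRBRization then means grouping existing records under these entities:
        clinker = Work("The Expedition of Humphry Clinker")
        text = Expression("eng", "text")
        text.manifestations.append(Manifestation("rec-0001", "London, 1771"))
        clinker.expressions.append(text)
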
  4. Understanding metadata (2004) 0.08
    0.084792845 = sum of:
      0.051354647 = product of:
        0.15406394 = sum of:
          0.15406394 = weight(_text_:objects in 2686) [ClassicSimilarity], result of:
            0.15406394 = score(doc=2686,freq=2.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.46979034 = fieldWeight in 2686, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0625 = fieldNorm(doc=2686)
        0.33333334 = coord(1/3)
      0.0334382 = product of:
        0.0668764 = sum of:
          0.0668764 = weight(_text_:22 in 2686) [ClassicSimilarity], result of:
            0.0668764 = score(doc=2686,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.30952093 = fieldWeight in 2686, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=2686)
        0.5 = coord(1/2)
    
    Abstract
    Metadata (structured information about an object or collection of objects) is increasingly important to libraries, archives, and museums. And although librarians are familiar with a number of issues that apply to creating and using metadata (e.g., authority control, controlled vocabularies, etc.), the world of metadata is nonetheless different from library cataloging, with its own set of challenges. Therefore, whether you are new to these concepts or quite experienced with classic cataloging, this short (20-page) introductory paper on metadata can be helpful.
    Date
    10. 9.2004 10:22:40
  5. METS: an overview & tutorial : Metadata Encoding & Transmission Standard (METS) (2001) 0.08
    0.08202091 = sum of:
      0.05446983 = product of:
        0.16340949 = sum of:
          0.16340949 = weight(_text_:objects in 1323) [ClassicSimilarity], result of:
            0.16340949 = score(doc=1323,freq=4.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.49828792 = fieldWeight in 1323, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.046875 = fieldNorm(doc=1323)
        0.33333334 = coord(1/3)
      0.02755108 = product of:
        0.05510216 = sum of:
          0.05510216 = weight(_text_:work in 1323) [ClassicSimilarity], result of:
            0.05510216 = score(doc=1323,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.2433148 = fieldWeight in 1323, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.046875 = fieldNorm(doc=1323)
        0.5 = coord(1/2)
    
    Abstract
    Maintaining a library of digital objects necessarily requires maintaining metadata about those objects. The metadata necessary for successful management and use of digital objects is both more extensive than and different from the metadata used for managing collections of printed works and other physical materials. While a library may record descriptive metadata regarding a book in its collection, the book will not dissolve into a series of unconnected pages if the library fails to record structural metadata regarding the book's organization, nor will scholars be unable to evaluate the book's worth if the library fails to note that the book was produced using a Ryobi offset press. The same cannot be said for a digital version of the same book. Without structural metadata, the page image or text files comprising the digital work are of little use, and without technical metadata regarding the digitization process, scholars may be unsure of how accurate a reflection of the original the digital version provides. For internal management purposes, a library must have access to appropriate technical metadata in order to periodically refresh and migrate the data, ensuring the durability of valuable resources.
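
    To make the role of structural metadata concrete, here is a minimal sketch of a METS-like structural map that keeps a digitized book's page images in order (element selection heavily simplified; the METS schema defines the real requirements):

        import xml.etree.ElementTree as ET

        METS_NS = "http://www.loc.gov/METS/"
        ET.register_namespace("mets", METS_NS)

        mets = ET.Element(f"{{{METS_NS}}}mets")
        file_grp = ET.SubElement(
            ET.SubElement(mets, f"{{{METS_NS}}}fileSec"),
            f"{{{METS_NS}}}fileGrp", USE="master")
        book = ET.SubElement(
            ET.SubElement(mets, f"{{{METS_NS}}}structMap", TYPE="physical"),
            f"{{{METS_NS}}}div", TYPE="book")

        for n in (1, 2, 3):
            # One file entry per page image, and a structMap entry that fixes
            # its position in the book (the "unconnected pages" problem).
            ET.SubElement(file_grp, f"{{{METS_NS}}}file", ID=f"IMG{n}")
            page = ET.SubElement(book, f"{{{METS_NS}}}div", TYPE="page", ORDER=str(n))
            ET.SubElement(page, f"{{{METS_NS}}}fptr", FILEID=f"IMG{n}")

        print(ET.tostring(mets, encoding="unicode"))
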
  6. Dhillon, P.; Singh, M.: ¬An extended ontology model for trust evaluation using advanced hybrid ontology (2023) 0.08
    0.08202091 = sum of:
      0.05446983 = product of:
        0.16340949 = sum of:
          0.16340949 = weight(_text_:objects in 981) [ClassicSimilarity], result of:
            0.16340949 = score(doc=981,freq=4.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.49828792 = fieldWeight in 981, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.046875 = fieldNorm(doc=981)
        0.33333334 = coord(1/3)
      0.02755108 = product of:
        0.05510216 = sum of:
          0.05510216 = weight(_text_:work in 981) [ClassicSimilarity], result of:
            0.05510216 = score(doc=981,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.2433148 = fieldWeight in 981, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.046875 = fieldNorm(doc=981)
        0.5 = coord(1/2)
    
    Abstract
    In the blooming area of Internet technology, the concept of the Internet of Things (IoT) holds a distinct position, interconnecting a large number of smart objects. The presented work evaluates trust and reliability in the context of the social IoT (SIoT). The proposed framework is divided into two blocks, namely the Verification Block (VB) and the Evaluation Block (EB). VB defines various ontology-based relationships computed for the objects that reflect the security and trustworthiness of an accessed service, while EB is used for feedback analysis and proves to be a valuable step that computes and governs the success rate of the service. A support vector machine (SVM) is applied to categorise the trust-based evaluation. The security aspect of the proposed approach is comparatively evaluated for DDoS and malware attacks in terms of success rate, trustworthiness and execution time. The proposed secure ontology-based framework provides better performance compared with existing architectures.
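
    Only the use of an SVM is stated in the abstract; as a rough illustration of that classification step, with invented features and data:

        from sklearn.svm import SVC

        # Toy features per interaction: [relationship score, feedback score,
        # response time in ms]. Labels: 1 = trustworthy service, 0 = not.
        X = [[0.9, 0.8, 120],
             [0.2, 0.1, 900],
             [0.8, 0.7, 150],
             [0.1, 0.3, 1100]]
        y = [1, 0, 1, 0]

        clf = SVC(kernel="rbf").fit(X, y)
        print(clf.predict([[0.85, 0.75, 140]]))   # close to the trusted examples
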
  7. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.08
    0.0816638 = product of:
      0.1633276 = sum of:
        0.1633276 = product of:
          0.4899828 = sum of:
            0.4899828 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.4899828 = score(doc=1826,freq=2.0), product of:
                0.5230965 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.061700378 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  8. Wallis, R.; Isaac, A.; Charles, V.; Manguinhas, H.: Recommendations for the application of Schema.org to aggregated cultural heritage metadata to increase relevance and visibility to search engines : the case of Europeana (2017) 0.08
    0.07855227 = sum of:
      0.055593036 = product of:
        0.1667791 = sum of:
          0.1667791 = weight(_text_:objects in 3372) [ClassicSimilarity], result of:
            0.1667791 = score(doc=3372,freq=6.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.508563 = fieldWeight in 3372, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3372)
        0.33333334 = coord(1/3)
      0.022959232 = product of:
        0.045918465 = sum of:
          0.045918465 = weight(_text_:work in 3372) [ClassicSimilarity], result of:
            0.045918465 = score(doc=3372,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.20276234 = fieldWeight in 3372, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3372)
        0.5 = coord(1/2)
    
    Abstract
    Europeana provides access to more than 54 million cultural heritage objects through its portal Europeana Collections. It is crucial for Europeana to be recognized by search engines as a trusted authoritative repository of cultural heritage objects. Indeed, even though its portal is the main entry point, most Europeana users come to it via search engines. Europeana Collections is fuelled by metadata describing cultural objects, represented in the Europeana Data Model (EDM). This paper presents the research and consequent recommendations for publishing Europeana metadata using the Schema.org vocabulary and best practices. Schema.org metadata embedded in HTML is consumed by search engines to power rich services (such as the Google Knowledge Graph). Schema.org is an open and widely adopted initiative (used by over 12 million domains), backed by Google, Bing, Yahoo!, and Yandex, for sharing metadata across the web. It underpins the emergence of new web techniques, such as so-called Semantic SEO. Our research addressed the representation of the embedded metadata as part of the Europeana HTML pages and sitemaps so that the re-use of this data can be optimized. The practical objective of our work is to produce a Schema.org representation of Europeana resources described in EDM that is as rich as possible and tailored to Europeana's realities and user needs, as well as to the search engines and their users.
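
    The kind of output such recommendations target is Schema.org markup embedded in each object page; a hedged sketch (the property choice is illustrative, not the paper's actual EDM mapping):

        import json

        # JSON-LD in the Schema.org vocabulary, of the sort placed in a page
        # inside <script type="application/ld+json"> for search engines.
        cho = {
            "@context": "https://schema.org",
            "@type": "VisualArtwork",
            "name": "Girl with a Pearl Earring",
            "creator": {"@type": "Person", "name": "Johannes Vermeer"},
            "provider": {"@type": "Organization", "name": "Europeana"},
        }
        print(json.dumps(cho, indent=2))
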
  9. Priss, U.: Faceted knowledge representation (1999) 0.07
    0.07419373 = sum of:
      0.044935312 = product of:
        0.13480593 = sum of:
          0.13480593 = weight(_text_:objects in 2654) [ClassicSimilarity], result of:
            0.13480593 = score(doc=2654,freq=2.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.41106653 = fieldWeight in 2654, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2654)
        0.33333334 = coord(1/3)
      0.029258423 = product of:
        0.058516845 = sum of:
          0.058516845 = weight(_text_:22 in 2654) [ClassicSimilarity], result of:
            0.058516845 = score(doc=2654,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.2708308 = fieldWeight in 2654, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2654)
        0.5 = coord(1/2)
    
    Abstract
    Faceted Knowledge Representation provides a formalism for implementing knowledge systems. The basic notions of faceted knowledge representation are "unit", "relation", "facet" and "interpretation". Units are atomic elements and can be abstract elements or refer to external objects in an application. Relations are sequences or matrices of 0 and 1's (binary matrices). Facets are relational structures that combine units and relations. Each facet represents an aspect or viewpoint of a knowledge system. Interpretations are mappings that can be used to translate between different representations. This paper introduces the basic notions of faceted knowledge representation. The formalism is applied here to an abstract modeling of a faceted thesaurus as used in information retrieval.
    Date
    22. 1.2016 17:30:31
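
    A small sketch of these notions with invented content: units as atomic elements, a relation as a binary matrix over them, and a facet pairing the two to capture one viewpoint:

        units = ["red", "round", "apple", "cherry"]

        # applies_to[i][j] = 1 iff unit i applies to unit j (binary matrix).
        applies_to = [
            [0, 0, 1, 1],   # red    -> apple, cherry
            [0, 0, 1, 1],   # round  -> apple, cherry
            [0, 0, 0, 0],
            [0, 0, 0, 0],
        ]

        facet = {"units": units, "relation": applies_to}   # one viewpoint

        # Everything "red" applies to under this facet:
        print([units[j] for j in range(len(units)) if facet["relation"][0][j]])
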
  10. Hitchcock, S.; Bergmark, D.; Brody, T.; Gutteridge, C.; Carr, L.; Hall, W.; Lagoze, C.; Harnad, S.: Open citation linking : the way forward (2002) 0.07
    0.07186321 = sum of:
      0.032096654 = product of:
        0.09628996 = sum of:
          0.09628996 = weight(_text_:objects in 1207) [ClassicSimilarity], result of:
            0.09628996 = score(doc=1207,freq=2.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.29361898 = fieldWeight in 1207, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1207)
        0.33333334 = coord(1/3)
      0.039766558 = product of:
        0.079533115 = sum of:
          0.079533115 = weight(_text_:work in 1207) [ClassicSimilarity], result of:
            0.079533115 = score(doc=1207,freq=6.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.35119468 = fieldWeight in 1207, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1207)
        0.5 = coord(1/2)
    
    Abstract
    The speed of scientific communication - the rate of ideas affecting other researchers' ideas - is increasing dramatically. The factor driving this is free, unrestricted access to research papers. Measurements of user activity in mature eprint archives of research papers such as arXiv have shown, for the first time, the degree to which such services support an evolving network of texts commenting on, citing, classifying, abstracting, listing and revising other texts. The Open Citation project has built tools to measure this activity, to build new archives, and has been closely involved with the development of the infrastructure to support open access on which these new services depend. This is the story of the project, intertwined with the concurrent emergence of the Open Archives Initiative (OAI). The paper describes the broad scope of the project's work, showing how it has progressed from early demonstrators of reference linking to produce Citebase, a Web-based citation and impact-ranked search service, and how it has supported the development of the EPrints.org software for building OAI-compliant archives. The work has been underpinned by analysis and experiments on the semantics of documents (digital objects) to determine the features required for formally perfect linking - instantiated as an application programming interface (API) for reference linking - that will enable other applications to build on this work in broader digital library information environments.
  11. Dobratz, S.; Neuroth, H.: nestor: Network of Expertise in long-term STOrage of digital Resources : a digital preservation initiative for Germany (2004) 0.07
    0.07043342 = sum of:
      0.050951865 = product of:
        0.15285559 = sum of:
          0.15285559 = weight(_text_:objects in 1195) [ClassicSimilarity], result of:
            0.15285559 = score(doc=1195,freq=14.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.4661057 = fieldWeight in 1195, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1195)
        0.33333334 = coord(1/3)
      0.019481555 = product of:
        0.03896311 = sum of:
          0.03896311 = weight(_text_:work in 1195) [ClassicSimilarity], result of:
            0.03896311 = score(doc=1195,freq=4.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.17204955 = fieldWeight in 1195, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1195)
        0.5 = coord(1/2)
    
    Abstract
    Sponsored by the German Ministry of Education and Research with funding of 800,000 EUR, the German Network of Expertise in long-term storage of digital resources (nestor) began in June 2003 as a cooperative effort of six partners representing different players within the field of long-term preservation. The partners include:
    * The German National Library (Die Deutsche Bibliothek) as the lead institution for the project
    * The State and University Library of Lower Saxony Göttingen (Staats- und Universitätsbibliothek Göttingen)
    * The Computer and Media Service and the University Library of Humboldt-University Berlin (Humboldt-Universität zu Berlin)
    * The Bavarian State Library in Munich (Bayerische Staatsbibliothek)
    * The Institute for Museum Information in Berlin (Institut für Museumskunde)
    * The General Directorate of the Bavarian State Archives (GDAB)
    As in other countries, long-term preservation of digital resources has become an important issue in Germany in recent years. Nevertheless, reaching agreement with institutions throughout the country to cooperate on a long-term preservation effort has taken a great deal of work. Although considerable attention had been paid to the preservation of physical media like CD-ROMs, technologies for the long-term preservation of digital publications like e-books, digital dissertations, websites, etc., are still lacking. Given the importance of the task within the federal structure of Germany, where each federal state is responsible for its own science and culture activities, it is obvious that a successful solution must be a cooperative one. Since 2000, there have been discussions about strategies and techniques for long-term archiving of digital information, particularly within the distributed structure of Germany's library and archival institutions. A key part of all the previous activities was a focus on using existing standards and analyzing the context in which those standards would be applied. One such activity, the Digital Library Forum Planning Project, was carried out on behalf of the German Ministry of Education and Research in 2002; it developed and described in detail the vision of a digital library in 2010 that can meet the changing and increasing needs of users, including the infrastructure required, how the digital library would work technically, what it would contain, and how it would be organized. The outcome was a strategic plan for selected specialist areas in which, amongst other topics, a future call for action for long-term preservation was defined, described and explained against the background of practical experience.
    As a follow-up, in 2002 the nestor long-term archiving working group provided an initial spark towards planning and organising coordinated activities concerning the long-term preservation and long-term availability of digital documents in Germany. This resulted in a workshop, held 29-30 October 2002, where major tasks were discussed. Influenced by the demands and progress of the nestor network, the participants agreed to start work on application-oriented projects and to address the following topics:
    * Overlapping problems
      o Collection and preservation of digital objects (selection criteria, preservation policy)
      o Definition of criteria for trusted repositories
      o Creation of models of cooperation, etc.
    * Digital object production process
      o Analysis of potential conflicts between production and long-term preservation
      o Documentation of existing document models and recommendations for standard models to be used for long-term preservation
      o Identification systems for digital objects, etc.
    * Transfer of digital objects
      o Object data and metadata
      o Transfer protocols and interoperability
      o Handling of different document types, e.g. dynamic publications, etc.
    * Long-term preservation of digital objects
      o Design and prototype implementation of depot systems for digital objects (OAIS was chosen as the best functional model.)
      o Authenticity
      o Functional requirements on user interfaces of a depot system
      o Identification systems for digital objects, etc.
    At the end of the workshop, participants decided to establish a permanent distributed infrastructure for long-term preservation and long-term accessibility of digital resources in Germany, comparable, e.g., to the Digital Preservation Coalition in the UK. The initial phase, nestor, is now being set up through the above-mentioned 3-year funding project.
  12. Isaac, A.: Aligning thesauri for an integrated access to Cultural Heritage Resources (2007) 0.07
    0.06631068 = sum of:
      0.050239217 = product of:
        0.15071765 = sum of:
          0.15071765 = weight(_text_:objects in 553) [ClassicSimilarity], result of:
            0.15071765 = score(doc=553,freq=10.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.4595864 = fieldWeight in 553, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.02734375 = fieldNorm(doc=553)
        0.33333334 = coord(1/3)
      0.016071463 = product of:
        0.032142926 = sum of:
          0.032142926 = weight(_text_:work in 553) [ClassicSimilarity], result of:
            0.032142926 = score(doc=553,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.14193363 = fieldWeight in 553, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.02734375 = fieldNorm(doc=553)
        0.5 = coord(1/2)
    
    Abstract
    Currently, a number of efforts are being carried out to integrate collections from different institutions containing heterogeneous material. Examples of such projects are The European Library [1] and the Memory of the Netherlands [2]. A crucial point for the success of these projects is the ability to provide unified access on top of the different collections, e.g. using one single vocabulary for querying or browsing the objects they contain. This is made difficult by the fact that the objects from different collections are often described using different vocabularies - thesauri, classification schemes - and are therefore not interoperable at the semantic level. To solve this problem, one can turn to semantic links - mappings - between the elements of the different vocabularies. If one knows that a concept C from a vocabulary V is semantically equivalent to a concept D from vocabulary W, then an appropriate search engine can return, for a query using C, all the objects that were indexed against D. We thus gain access to other collections using a single vocabulary. This is, however, an ideal situation, and hard alignment work is required to reach it. Several projects in the past have tried to implement such a solution, like MACS [3] and Renardus [4]. They have demonstrated very interesting results, but also highlighted the difficulty of manually aligning all the different vocabularies involved in practical cases, which sometimes contain hundreds of thousands of concepts. To alleviate this problem, a number of tools have been proposed that provide candidate mappings between two input vocabularies, making alignment a (semi-)automatic task. Recently, the Semantic Web community has produced many such alignment tools. Several techniques are found, depending on the material they exploit: labels of concepts, structure of vocabularies, collection objects and external knowledge sources. In our presentation, we will present a concrete heterogeneity case where alignment techniques have been applied to build a (pilot) browser, developed in the context of the STITCH project [5]. This browser enables unified access to two collections of illuminated manuscripts, using the description vocabulary used in the first collection, Mandragore [6], or the one used by the second, Iconclass [7]. In our talk, we will also make the case for using unified representations of the vocabularies' semantic and lexical information. Besides easing the use of the alignment tools that take these vocabularies as input, turning to a standard representation format helps in designing more generic applications, like the browser we demonstrate. We give pointers to SKOS [8], an open and web-enabled format currently developed by the Semantic Web community.
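
    A toy sketch of the retrieval idea above (the vocabulary names are real, the concept identifiers and index invented): a query concept C is expanded with its declared equivalent D before matching:

        # Declared equivalences between the two description vocabularies.
        mappings = {"mandragore:ange": {"iconclass:11G"}}

        # Objects and the concepts they are indexed against.
        index = {"manuscript-A": {"mandragore:ange"},
                 "manuscript-B": {"iconclass:11G"}}

        def search(concept):
            wanted = {concept} | mappings.get(concept, set())
            return sorted(doc for doc, terms in index.items() if terms & wanted)

        print(search("mandragore:ange"))   # finds both manuscripts
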
  13. Popper, K.R.: Three worlds : the Tanner lecture on human values. Deliverd at the University of Michigan, April 7, 1978 (1978) 0.07
    0.06533104 = product of:
      0.13066208 = sum of:
        0.13066208 = product of:
          0.39198622 = sum of:
            0.39198622 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.39198622 = score(doc=230,freq=2.0), product of:
                0.5230965 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.061700378 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  14. Definition of the CIDOC Conceptual Reference Model (2003) 0.06
    0.064041756 = product of:
      0.12808351 = sum of:
        0.12808351 = sum of:
          0.07792622 = weight(_text_:work in 1652) [ClassicSimilarity], result of:
            0.07792622 = score(doc=1652,freq=4.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.3440991 = fieldWeight in 1652, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.046875 = fieldNorm(doc=1652)
          0.050157297 = weight(_text_:22 in 1652) [ClassicSimilarity], result of:
            0.050157297 = score(doc=1652,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.23214069 = fieldWeight in 1652, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1652)
      0.5 = coord(1/2)
    
    Abstract
    This document is the formal definition of the CIDOC Conceptual Reference Model ("CRM"), a formal ontology intended to facilitate the integration, mediation and interchange of heterogeneous cultural heritage information. The CRM is the culmination of more than a decade of standards development work by the International Committee for Documentation (CIDOC) of the International Council of Museums (ICOM). Work on the CRM itself began in 1996 under the auspices of the ICOM-CIDOC Documentation Standards Working Group. Since 2000, development of the CRM has been officially delegated by ICOM-CIDOC to the CIDOC CRM Special Interest Group, which collaborates with the ISO working group ISO/TC46/SC4/WG9 to bring the CRM to the form and status of an International Standard.
    Date
    6. 8.2010 14:22:28
  15. Harnett, K.: Machine learning confronts the elephant in the room : a visual prank exposes an Achilles' heel of computer vision systems: Unlike humans, they can't do a double take (2018) 0.06
    0.062288627 = sum of:
      0.03631322 = product of:
        0.108939655 = sum of:
          0.108939655 = weight(_text_:objects in 4449) [ClassicSimilarity], result of:
            0.108939655 = score(doc=4449,freq=4.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.33219194 = fieldWeight in 4449, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.03125 = fieldNorm(doc=4449)
        0.33333334 = coord(1/3)
      0.025975406 = product of:
        0.051950812 = sum of:
          0.051950812 = weight(_text_:work in 4449) [ClassicSimilarity], result of:
            0.051950812 = score(doc=4449,freq=4.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.2293994 = fieldWeight in 4449, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.03125 = fieldNorm(doc=4449)
        0.5 = coord(1/2)
    
    Abstract
    In a new study, computer scientists found that artificial intelligence systems fail a vision test a child could accomplish with ease. "It's a clever and important study that reminds us that 'deep learning' isn't really that deep," said Gary Marcus, a neuroscientist at New York University who was not affiliated with the work. The result takes place in the field of computer vision, where artificial intelligence systems attempt to detect and categorize objects. They might try to find all the pedestrians in a street scene, or just distinguish a bird from a bicycle (which is a notoriously difficult task). The stakes are high: As computers take over critical tasks like automated surveillance and autonomous driving, we'll want their visual processing to be at least as good as the human eyes they're replacing. It won't be easy. The new work accentuates the sophistication of human vision - and the challenge of building systems that mimic it. In the study, the researchers presented a computer vision system with a living room scene. The system processed it well. It correctly identified a chair, a person, books on a shelf. Then the researchers introduced an anomalous object into the scene - an image of an elephant. The elephant's mere presence caused the system to forget itself: Suddenly it started calling a chair a couch and the elephant a chair, while turning completely blind to other objects it had previously seen. Researchers are still trying to understand exactly why computer vision systems get tripped up so easily, but they have a good guess. It has to do with an ability humans have that AI lacks: the ability to understand when a scene is confusing and thus go back for a second glance.
  16. Automatic classification research at OCLC (2002) 0.06
    0.06140135 = product of:
      0.1228027 = sum of:
        0.1228027 = sum of:
          0.06428585 = weight(_text_:work in 1563) [ClassicSimilarity], result of:
            0.06428585 = score(doc=1563,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.28386727 = fieldWeight in 1563, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1563)
          0.058516845 = weight(_text_:22 in 1563) [ClassicSimilarity], result of:
            0.058516845 = score(doc=1563,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.2708308 = fieldWeight in 1563, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1563)
      0.5 = coord(1/2)
    
    Abstract
    OCLC enlists the cooperation of the world's libraries to make the written record of humankind's cultural heritage more accessible through electronic media. Part of this goal can be accomplished through the application of the principles of knowledge organization. We believe that cultural artifacts are effectively lost unless they are indexed, cataloged and classified. Accordingly, OCLC has developed products, sponsored research projects, and encouraged participation in international standards communities whose outcome has been improved library classification schemes, cataloging productivity tools, and new proposals for the creation and maintenance of metadata. Though cataloging and classification require expert intellectual effort, we recognize that at least some of the work must be automated if we hope to keep pace with cultural change.
    Date
    5. 5.2003 9:22:09
  17. BIBFRAME Relationships (2014) 0.06
    0.056238405 = product of:
      0.11247681 = sum of:
        0.11247681 = product of:
          0.22495362 = sum of:
            0.22495362 = weight(_text_:work in 8920) [ClassicSimilarity], result of:
              0.22495362 = score(doc=8920,freq=12.0), product of:
                0.22646447 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.061700378 = queryNorm
                0.9933286 = fieldWeight in 8920, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.078125 = fieldNorm(doc=8920)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A BIBFRAME Relationship is a relationship between a BIBFRAME Work or Instance and another BIBFRAME Work or Instance. Thus there are four types of relationships: Work to Work - Work to Instance - Instance to Work - Instance to Instance
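
    Stated as a (hypothetical) data structure, the four combinations enumerate directly:

        from enum import Enum

        class Node(Enum):          # sketch only; not the BIBFRAME vocabulary itself
            WORK = "Work"
            INSTANCE = "Instance"

        # The four BIBFRAME relationship types named above:
        for source in Node:
            for target in Node:
                print(f"{source.value} to {target.value}")
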
  18. Hunter, J.: MetaNet - a metadata term thesaurus to enable semantic interoperability between metadata domains (2001) 0.06
    0.055055887 = sum of:
      0.032096654 = product of:
        0.09628996 = sum of:
          0.09628996 = weight(_text_:objects in 6471) [ClassicSimilarity], result of:
            0.09628996 = score(doc=6471,freq=2.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.29361898 = fieldWeight in 6471, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0390625 = fieldNorm(doc=6471)
        0.33333334 = coord(1/3)
      0.022959232 = product of:
        0.045918465 = sum of:
          0.045918465 = weight(_text_:work in 6471) [ClassicSimilarity], result of:
            0.045918465 = score(doc=6471,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.20276234 = fieldWeight in 6471, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.0390625 = fieldNorm(doc=6471)
        0.5 = coord(1/2)
    
    Abstract
    Metadata interoperability is a fundamental requirement for access to information within networked knowledge organization systems. The Harmony international digital library project [1] has developed a common underlying data model (the ABC model) to enable the scalable mapping of metadata descriptions across domains and media types. The ABC model [2] provides a set of basic building blocks for metadata modeling and recognizes the importance of 'events' to describe unambiguously metadata for objects with a complex history. To test and evaluate the interoperability capabilities of this model, we applied it to some real multimedia examples and analysed the results of mapping from the ABC model to various different metadata domains using XSLT [3]. This work revealed serious limitations in the ability of XSLT to support flexible dynamic semantic mapping. To overcome this, we developed MetaNet [4], a metadata term thesaurus which provides the additional semantic knowledge that is non-existent within declarative XML-encoded metadata descriptions. This paper describes MetaNet, its RDF Schema [5] representation and a hybrid mapping approach which combines the structural and syntactic mapping capabilities of XSLT with the semantic knowledge of MetaNet, to enable flexible and dynamic mapping among metadata standards.
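
    Schematically, the role of the term thesaurus in the hybrid mapping can be pictured as follows (a plain Python transform stands in for XSLT here; all terms are invented):

        # A tiny "term thesaurus": domain-specific element names grouped under
        # a common broader term, supplying semantics that XSLT alone lacks.
        metanet = {"creator": {"author", "composer", "photographer"}}

        def normalize(term):
            for broader, narrower in metanet.items():
                if term == broader or term in narrower:
                    return broader
            return term

        record = {"photographer": "E. Overly", "title": "Repository diagram"}
        print({normalize(k): v for k, v in record.items()})
        # {'creator': 'E. Overly', 'title': 'Repository diagram'}
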
  19. Goldberga, A.: Synergy towards shared standards for ALM : Latvian scenario (2008) 0.05
    0.052629728 = product of:
      0.105259456 = sum of:
        0.105259456 = sum of:
          0.05510216 = weight(_text_:work in 2322) [ClassicSimilarity], result of:
            0.05510216 = score(doc=2322,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.2433148 = fieldWeight in 2322, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.046875 = fieldNorm(doc=2322)
          0.050157297 = weight(_text_:22 in 2322) [ClassicSimilarity], result of:
            0.050157297 = score(doc=2322,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.23214069 = fieldWeight in 2322, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2322)
      0.5 = coord(1/2)
    
    Abstract
    The report reflects the Latvian scenario of co-operation in the standardization of memory institutions. Differences and problems, as well as benefits and possible solutions, tasks and activities of the Standardization Technical Committee for Archives, Libraries and Museums Work (MABSTK) are analysed. A map of standards as a vision for ALM collaboration in standardization and the "Digitizer's Handbook" (translated into English), prepared by the Competence Centre for Digitization of the National Library of Latvia (NLL), are presented. A shortcut to building the National Digital Library Letonica and its digital architecture (with a pilot project about the Latvian composer Jazeps Vitols and the digital collection of ex-president of Latvia Vaira Vike-Freiberga) reflects the practical co-operation between different players.
    Date
    26.12.2011 13:33:22
  20. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.05
    0.052629728 = product of:
      0.105259456 = sum of:
        0.105259456 = sum of:
          0.05510216 = weight(_text_:work in 4649) [ClassicSimilarity], result of:
            0.05510216 = score(doc=4649,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.2433148 = fieldWeight in 4649, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.046875 = fieldNorm(doc=4649)
          0.050157297 = weight(_text_:22 in 4649) [ClassicSimilarity], result of:
            0.050157297 = score(doc=4649,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.23214069 = fieldWeight in 4649, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4649)
      0.5 = coord(1/2)
    
    Abstract
    More and more cultural heritage institutions publish their collections, vocabularies and metadata on the Web. The resulting Web of linked cultural data opens up exciting new possibilities for searching and browsing through these cultural heritage collections. We report on ongoing work in which we investigate the estimation of relevance in this Web of Culture. We study existing measures of semantic distance and how they apply to two use cases. The use cases relate to the structured, multilingual and multimodal nature of the Culture Web. We distinguish between measures using the Web, such as Google distance and PMI, and measures using the Linked Data Web, i.e. the semantic structure of metadata vocabularies. We perform a small study in which we compare these semantic distance measures to human judgements of relevance. Although it is too early to draw any definitive conclusions, the study provides new insights into the applicability of semantic distance measures to the Web of Culture, and clear starting points for further research.
    Date
    26.12.2011 13:40:22
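
    One of the Web-based measures mentioned, the Normalized Google Distance, is compact enough to state directly; a sketch with invented hit counts (f(x) and f(y) are page counts for each term, f(x,y) for their co-occurrence, N the index size):

        import math

        def ngd(fx, fy, fxy, n):
            """Normalized Google Distance from raw hit counts."""
            lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
            return (max(lx, ly) - lxy) / (math.log(n) - min(lx, ly))

        # Invented counts, for illustration only:
        print(round(ngd(fx=9e6, fy=5e6, fxy=2.5e6, n=2.5e10), 3))
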

Languages

  • e 285
  • d 88
  • a 2
  • el 2
  • f 2
  • nl 1

Types

  • a 195
  • i 11
  • s 11
  • r 9
  • m 6
  • b 3
  • x 3
  • n 2
  • p 2
