Search (423 results, page 1 of 22)

  • Filter: type_ss:"el"
  • Filter: type_ss:"a"
  1. Lee, W.-C.: Conflicts of semantic warrants in cataloging practices (2017) 0.11
    0.10819927 = product of:
      0.1442657 = sum of:
        0.008582841 = weight(_text_:information in 3871) [ClassicSimilarity], result of:
          0.008582841 = score(doc=3871,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.09697737 = fieldWeight in 3871, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3871)
        0.1106488 = weight(_text_:standards in 3871) [ClassicSimilarity], result of:
          0.1106488 = score(doc=3871,freq=8.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.49242854 = fieldWeight in 3871, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3871)
        0.025034059 = product of:
          0.050068118 = sum of:
            0.050068118 = weight(_text_:organization in 3871) [ClassicSimilarity], result of:
              0.050068118 = score(doc=3871,freq=4.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.27854347 = fieldWeight in 3871, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3871)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
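     A note on the score breakdowns: these explain trees follow Lucene's ClassicSimilarity formula, in which each term contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = sqrt(termFreq) * idf * fieldNorm; the per-term scores are then summed and scaled by the coord() factor. As a sketch (assuming only that the values combine as the trees display them), the "standards" contribution and the headline score of result 1 can be recomputed in Python:

       import math

       def term_score(freq, idf, query_norm, field_norm):
           """One term's contribution, as printed by a ClassicSimilarity
           explain tree: queryWeight * fieldWeight."""
           query_weight = idf * query_norm                    # idf * queryNorm
           field_weight = math.sqrt(freq) * idf * field_norm  # tf * idf * fieldNorm
           return query_weight * field_weight

       # Values copied from the "standards" node of result 1 (doc 3871):
       print(round(term_score(8.0, 4.4569545, 0.050415643, 0.0390625), 7))
       # -> 0.1106488, matching the tree

       # The headline score is the sum of the term scores times coord(3/4):
       print(round(0.75 * (0.008582841 + 0.1106488 + 0.025034059), 7))
       # -> 0.1081993, i.e. the 0.10819927 shown above, to printed precision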
    
    Abstract
     This study presents preliminary themes surfaced from an ongoing ethnographic study. The research question is: how and where do cultures influence the cataloging practices of using U.S. standards to catalog Chinese materials? The author applies warrant as a lens for evaluating knowledge representation systems, and extends the application from examining classificatory decisions to cataloging decisions. Semantic warrant as a conceptual tool allows us to recognize and name the various rationales behind cataloging decisions, and grants us both explanatory power and the language to "visualize" and reflect on the conflicting priorities in cataloging practices. Through participant observation, the author recorded the cataloging practices of two Chinese catalogers working on the same cataloging project. One of the catalogers is U.S.-trained; the other is a professor of Library and Information Science from China who is also a subject expert and a cataloger of Chinese special collections. The study shows how the catalogers describe Chinese special collections using many U.S. cataloging and classification standards, but from different approaches. The author presents particular cases derived from the fieldwork, with an emphasis on the many layers presented by cultures, principles, standards, and practices of different scope, each of which may represent conflicting warrants. From this it becomes clear that conflicts of warrants influence cataloging practice. The conflicting warrants can be read as an expression of the tension between different semantic warrants, and between the globalization and localization of cataloging standards.
    Content
     Paper presented at: NASKO 2017: Visualizing Knowledge Organization: Bringing Focus to Abstract Realities. The sixth North American Symposium on Knowledge Organization (NASKO 2017), June 15-16, 2017, in Champaign, IL, USA.
  2. Teets, M.; Murray, P.: Metasearch authentication and access management (2006) 0.11
    0.1053664 = product of:
      0.14048854 = sum of:
        0.01213797 = weight(_text_:information in 1154) [ClassicSimilarity], result of:
          0.01213797 = score(doc=1154,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.13714671 = fieldWeight in 1154, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1154)
        0.1106488 = weight(_text_:standards in 1154) [ClassicSimilarity], result of:
          0.1106488 = score(doc=1154,freq=8.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.49242854 = fieldWeight in 1154, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1154)
        0.017701752 = product of:
          0.035403505 = sum of:
            0.035403505 = weight(_text_:organization in 1154) [ClassicSimilarity], result of:
              0.035403505 = score(doc=1154,freq=2.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.19695997 = fieldWeight in 1154, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1154)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
     Metasearch - also called parallel search, federated search, broadcast search, and cross-database search - has become commonplace in the information community's vocabulary. All speak to a common theme of searching and retrieving from multiple databases, sources, platforms, protocols, and vendors at the point of the user's request. Metasearch services rely on a variety of approaches including open standards (such as NISO's Z39.50 and SRU/SRW), proprietary programming interfaces, and "screen scraping." However, the absence of widely supported standards, best practices, and tools makes the metasearch environment less efficient for the metasearch provider, the content provider, and ultimately the end user. To spur the development of widely supported standards and best practices, the National Information Standards Organization (NISO) sponsored a Metasearch Initiative in 2003 to enable: metasearch service providers to offer more effective and responsive services; content providers to deliver enhanced content and protect their intellectual property; and libraries to deliver a simple search (a.k.a. "Google") that covers the breadth of their vetted commercial and free resources. The Access Management Task Group was one of three groups chartered by NISO as part of the Metasearch Initiative. The focus of the group was on gathering requirements for metasearch authentication and access needs, inventorying existing processes, developing a series of formal use cases describing the access needs, recommending best practices given today's processes, and recommending and pursuing changes to current solutions to better support metasearch applications. In September 2005, the group issued its final report and recommendation. This article summarizes the group's work and final recommendation.
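     The open standards named above can be exercised directly; as an illustrative sketch, an SRU searchRetrieve request is just an HTTP GET with standardized parameters (the endpoint URL below is a hypothetical placeholder, and the CQL query is invented):

       import requests

       # Hypothetical SRU endpoint; real SRU services expose the same parameters.
       SRU_ENDPOINT = "https://example.org/sru"

       params = {
           "operation": "searchRetrieve",       # SRU operation name
           "version": "1.2",                    # SRU protocol version
           "query": 'dc.title = "metasearch"',  # query expressed in CQL
           "maximumRecords": 10,
       }
       response = requests.get(SRU_ENDPOINT, params=params, timeout=30)
       response.raise_for_status()
       # The service answers with an XML searchRetrieveResponse document.
       print(response.text[:500])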
  3. Escolano Rodrìguez, E.: RDA e ISBD : history of a relationship (2016) 0.08
    0.07701033 = product of:
      0.15402067 = sum of:
        0.13277857 = weight(_text_:standards in 2951) [ClassicSimilarity], result of:
          0.13277857 = score(doc=2951,freq=8.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.59091425 = fieldWeight in 2951, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.046875 = fieldNorm(doc=2951)
        0.021242103 = product of:
          0.042484205 = sum of:
            0.042484205 = weight(_text_:organization in 2951) [ClassicSimilarity], result of:
              0.042484205 = score(doc=2951,freq=2.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.23635197 = fieldWeight in 2951, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2951)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
     This article attempts to clarify the nature of the relationship between the RDA and ISBD standards, in order to understand their differences and links and to remove some misinterpretations about this relationship. To this end, it analyzes aspects that account for the differences, such as the type of standard, point of view, scope, origin, and the policies of the group or organization in charge of each standard's creation and development. These differences have posed no obstacle to a correct relationship between the standards, with the help of Linked Data technology. The article also gives an account of the mapping and alignment work done between the standards in order to contribute properly to the Semantic Web. This knowledge is fundamental if current catalogers are to use the standards judiciously, knowledgeably and responsibly.
  4. Dobreski, B.: Authority and universalism : conventional values in descriptive catalog codes (2017) 0.08
    0.07589395 = product of:
      0.1517879 = sum of:
        0.117099695 = weight(_text_:standards in 3876) [ClassicSimilarity], result of:
          0.117099695 = score(doc=3876,freq=14.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.5211374 = fieldWeight in 3876, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.03125 = fieldNorm(doc=3876)
        0.03468821 = product of:
          0.06937642 = sum of:
            0.06937642 = weight(_text_:organization in 3876) [ClassicSimilarity], result of:
              0.06937642 = score(doc=3876,freq=12.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.38596115 = fieldWeight in 3876, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3876)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
     Every standard embodies a particular set of values. Some aspects are privileged while others are masked. Values embedded within knowledge organization standards have special import in that they are further perpetuated by the data they are used to generate. Within libraries, descriptive catalog codes serve as prominent knowledge organization standards, guiding the creation of resource representations. Though the historical and functional aspects of these standards have received significant attention, less focus has been placed on the values associated with such codes. In this study, a critical, historical analysis of ten Anglo-American descriptive catalog codes and the surrounding discourse was conducted as an initial step towards uncovering key values associated with this lineage of standards. Two values in particular were found to be highly significant: authority and universalism. Authority is closely tied to notions of power and control, particularly over practice or belief. Increasing control over resources, identities, and viewpoints are all manifestations of the value of authority within descriptive codes. Universalism has guided the widening coverage of descriptive codes with regard to settings and materials, such as the extension of bibliographic standards to non-book resources. Together, authority and universalism represent conventional values focused on facilitating orderly social exchanges. A comparative lack of emphasis on values concerning human welfare and empowerment may be unsurprising, but it raises questions concerning the role of human values in knowledge organization standards. Further attention to the values associated with descriptive codes and other knowledge organization standards is important as libraries and other institutions seek to share their resource representation data more widely.
    Content
     Paper presented at: NASKO 2017: Visualizing Knowledge Organization: Bringing Focus to Abstract Realities. The sixth North American Symposium on Knowledge Organization (NASKO 2017), June 15-16, 2017, in Champaign, IL, USA.
  5. Arndt, O.: Erosion der bürgerlichen Freiheiten (2020) 0.07
    0.07240096 = product of:
      0.14480191 = sum of:
        0.1106488 = weight(_text_:standards in 82) [ClassicSimilarity], result of:
          0.1106488 = score(doc=82,freq=2.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.49242854 = fieldWeight in 82, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.078125 = fieldNorm(doc=82)
        0.03415312 = product of:
          0.06830624 = sum of:
            0.06830624 = weight(_text_:22 in 82) [ClassicSimilarity], result of:
              0.06830624 = score(doc=82,freq=2.0), product of:
                0.17654699 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050415643 = queryNorm
                0.38690117 = fieldWeight in 82, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=82)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
     Examines the extent to which the AI required for smart cities, together with standards such as 5G, leads in principle to a comprehensive militarization of everyday life.
    Date
    22. 6.2020 19:16:24
  6. Heery, R.; Wagner, H.: ¬A metadata registry for the Semantic Web (2002) 0.07
    0.06597382 = product of:
      0.087965086 = sum of:
        0.00849658 = weight(_text_:information in 1210) [ClassicSimilarity], result of:
          0.00849658 = score(doc=1210,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.0960027 = fieldWeight in 1210, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1210)
        0.06707728 = weight(_text_:standards in 1210) [ClassicSimilarity], result of:
          0.06707728 = score(doc=1210,freq=6.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.29851896 = fieldWeight in 1210, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1210)
        0.012391226 = product of:
          0.024782453 = sum of:
            0.024782453 = weight(_text_:organization in 1210) [ClassicSimilarity], result of:
              0.024782453 = score(doc=1210,freq=2.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.13787198 = fieldWeight in 1210, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1210)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
     * Agencies maintaining directories of data elements in a domain area in accordance with ISO/IEC 11179. (This standard specifies good practice for data element definition as well as the registration process. Example implementations are the National Health Information Knowledgebase hosted by the Australian Institute of Health and Welfare and the Environmental Data Registry hosted by the US Environmental Protection Agency.)
     * The xml.org directory of Extensible Markup Language (XML) document specifications, facilitating re-use of Document Type Definitions (DTDs), hosted by the Organization for the Advancement of Structured Information Standards (OASIS).
     * The MetaForm database of Dublin Core usage and mappings, maintained at the State and University Library in Goettingen.
     * The Semantic Web Agreement Group Dictionary, a database of terms for the Semantic Web that can be referred to by humans and software agents.
     * LEXML, a multi-lingual and multi-jurisdictional RDF Dictionary for the legal world.
     * The SCHEMAS registry maintained by the European Commission funded SCHEMAS project, which indexes several metadata element sets as well as a large number of activity reports describing metadata-related activities and initiatives.
     Metadata registries essentially provide an index of terms. Given the distributed nature of the Web, there are a number of ways this can be accomplished. For example, the registry could link to terms and definitions in schemas published by implementers and stored locally by the schema maintainer. Alternatively, the registry might harvest various metadata schemas from their maintainers. Registries provide 'added value' to users by indexing schemas relevant to a particular 'domain' or 'community of use' and by simplifying the navigation of terms by enabling multiple schemas to be accessed from one view. An important benefit of this approach is an increase in the reuse of existing terms, rather than users having to reinvent them. Merging schemas into one view leads to harmonization between applications and helps avoid duplication of effort. Additionally, the establishment of registries to index terms actively being used in local implementations facilitates the metadata standards activity by providing implementation experience transferable to the standards-making process.
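     As a sketch of the 'added value' described above - indexing terms from several harvested schemas into one view so that existing terms get reused - a hypothetical in-memory registry might look like this (all schema contents below are invented for illustration):

       from collections import defaultdict

       # Invented sample of harvested schemas: schema name -> {term: definition}.
       harvested = {
           "dublin-core": {"creator": "An entity primarily responsible for making the resource"},
           "lexml":       {"creator": "Party that authored the legal document"},
           "metaform":    {"title":   "A name given to the resource"},
       }

       # Merge the schemas into a single index: term -> [(schema, definition), ...].
       registry = defaultdict(list)
       for schema, terms in harvested.items():
           for term, definition in terms.items():
               registry[term].append((schema, definition))

       # One view over many schemas: a user who finds an existing "creator"
       # term can reuse it instead of reinventing it.
       for schema, definition in registry["creator"]:
           print(f"{schema}: {definition}")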
  7. Hunter, J.: MetaNet - a metadata term thesaurus to enable semantic interoperability between metadata domains (2001) 0.06
    0.06387309 = product of:
      0.08516412 = sum of:
        0.01213797 = weight(_text_:information in 6471) [ClassicSimilarity], result of:
          0.01213797 = score(doc=6471,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.13714671 = fieldWeight in 6471, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6471)
        0.0553244 = weight(_text_:standards in 6471) [ClassicSimilarity], result of:
          0.0553244 = score(doc=6471,freq=2.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.24621427 = fieldWeight in 6471, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6471)
        0.017701752 = product of:
          0.035403505 = sum of:
            0.035403505 = weight(_text_:organization in 6471) [ClassicSimilarity], result of:
              0.035403505 = score(doc=6471,freq=2.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.19695997 = fieldWeight in 6471, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6471)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
     Metadata interoperability is a fundamental requirement for access to information within networked knowledge organization systems. The Harmony international digital library project [1] has developed a common underlying data model (the ABC model) to enable the scalable mapping of metadata descriptions across domains and media types. The ABC model [2] provides a set of basic building blocks for metadata modeling and recognizes the importance of 'events' for unambiguously describing metadata about objects with a complex history. To test and evaluate the interoperability capabilities of this model, we applied it to some real multimedia examples and analysed the results of mapping from the ABC model to various different metadata domains using XSLT [3]. This work revealed serious limitations in the ability of XSLT to support flexible dynamic semantic mapping. To overcome this, we developed MetaNet [4], a metadata term thesaurus which provides the additional semantic knowledge that is missing from declarative XML-encoded metadata descriptions. This paper describes MetaNet, its RDF Schema [5] representation, and a hybrid mapping approach that combines the structural and syntactic mapping capabilities of XSLT with the semantic knowledge of MetaNet to enable flexible and dynamic mapping among metadata standards.
    Source
    Journal of digital information. 1(2001) no.8, art.# 42
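     A toy sketch of the hybrid mapping idea may help: a term thesaurus supplies the semantic equivalences, and a structural pass applies them when transforming a record between schemas. The element names and thesaurus entries below are invented, and MetaNet itself is an RDF Schema vocabulary rather than a lookup table:

       # Invented mini-thesaurus standing in for MetaNet's semantic layer:
       # it maps one schema's terms to another schema's preferred terms.
       TERM_THESAURUS = {
           "dc:creator": "mpeg7:Creator",
           "dc:date":    "mpeg7:CreationDate",
       }

       def map_record(record: dict) -> dict:
           """Structural pass (rename keys) guided by semantic knowledge
           (the thesaurus), rather than hard-coding every pair of schemas."""
           return {TERM_THESAURUS.get(key, key): value for key, value in record.items()}

       source = {"dc:creator": "J. Hunter", "dc:date": "2001"}
       print(map_record(source))
       # -> {'mpeg7:Creator': 'J. Hunter', 'mpeg7:CreationDate': '2001'}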
  8. Schoenbeck, O.; Schröter, M.; Werr, N.: Framework Informationskompetenz in der Hochschulbildung (2021) 0.06
    0.06326494 = product of:
      0.12652989 = sum of:
        0.01699316 = weight(_text_:information in 298) [ClassicSimilarity], result of:
          0.01699316 = score(doc=298,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.1920054 = fieldWeight in 298, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=298)
        0.10953673 = weight(_text_:standards in 298) [ClassicSimilarity], result of:
          0.10953673 = score(doc=298,freq=4.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.4874794 = fieldWeight in 298, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.0546875 = fieldNorm(doc=298)
      0.5 = coord(2/4)
    
    Abstract
     This article centers on the Framework for Information Literacy for Higher Education published in 2016 by the Association of College & Research Libraries (ACRL), whose core ideas and development are sketched against precursors such as the Information Literacy Competency Standards for Higher Education published by the ACRL in 2000. The reception history of these standards in the German-speaking world is traced against the background of the history of their (partial) translation, and from this the article derives the potential that the now complete German translation of the Framework offers for promoting information literacy in a contemporary way. The manifold challenges of such a translation are reflected upon exemplarily through glimpses into the translators' workshop.
  9. Jackson, R.: Information Literacy and its relationship to cognitive development and reflective judgment (2008) 0.06
    0.057992067 = product of:
      0.115984134 = sum of:
        0.027465092 = weight(_text_:information in 111) [ClassicSimilarity], result of:
          0.027465092 = score(doc=111,freq=8.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.3103276 = fieldWeight in 111, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=111)
        0.088519044 = weight(_text_:standards in 111) [ClassicSimilarity], result of:
          0.088519044 = score(doc=111,freq=2.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.39394283 = fieldWeight in 111, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.0625 = fieldNorm(doc=111)
      0.5 = coord(2/4)
    
    Abstract
     This chapter maps the Association of College and Research Libraries' Information Competency Standards for Higher Education to the cognitive development levels described by William G. Perry and by Patricia King and Karen Kitchener, to suggest which competencies are appropriate for each level of cognitive development.
    Series
    Special issue: Information Literacy: One key to education
    Theme
    Information
  10. Schoenbeck, O.; Schröter, M.; Werr, N.: Making of oder Lost in translation? : Das Framework for Information Literacy for Higher Education - Herausforderungen bei der Übersetzung ins Deutsche und der bibliothekarischen Anwendung (2021) 0.06
    0.055863865 = product of:
      0.11172773 = sum of:
        0.017839102 = weight(_text_:information in 297) [ClassicSimilarity], result of:
          0.017839102 = score(doc=297,freq=6.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.20156369 = fieldWeight in 297, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=297)
        0.093888626 = weight(_text_:standards in 297) [ClassicSimilarity], result of:
          0.093888626 = score(doc=297,freq=4.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.41783947 = fieldWeight in 297, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.046875 = fieldNorm(doc=297)
      0.5 = coord(2/4)
    
    Abstract
     This article centers on the Framework for Information Literacy for Higher Education published in 2016 by the Association of College & Research Libraries (ACRL), whose core ideas and development are sketched against precursors such as the Information Literacy Competency Standards for Higher Education published by the ACRL in 2000. The reception history of these standards in the German-speaking world is traced against the background of the history of their (partial) translation, and from this the article derives the potential that the now complete German translation of the Framework offers for promoting information literacy in a contemporary way. The manifold challenges of such a translation are reflected upon exemplarily through glimpses into the translators' workshop.
  11. Putkey, T.: Using SKOS to express faceted classification on the Semantic Web (2011) 0.05
    0.051098473 = product of:
      0.0681313 = sum of:
        0.009710376 = weight(_text_:information in 311) [ClassicSimilarity], result of:
          0.009710376 = score(doc=311,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.10971737 = fieldWeight in 311, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=311)
        0.044259522 = weight(_text_:standards in 311) [ClassicSimilarity], result of:
          0.044259522 = score(doc=311,freq=2.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.19697142 = fieldWeight in 311, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.03125 = fieldNorm(doc=311)
        0.014161401 = product of:
          0.028322803 = sum of:
            0.028322803 = weight(_text_:organization in 311) [ClassicSimilarity], result of:
              0.028322803 = score(doc=311,freq=2.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.15756798 = fieldWeight in 311, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.03125 = fieldNorm(doc=311)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
     This paper looks at the Simple Knowledge Organization System (SKOS) to investigate how a faceted classification can be expressed in RDF and shared on the Semantic Web.
     Statement of the problem: Faceted classification outlines facets as well as subfacets and facet values, and establishes hierarchical and associative relationships. RDF is used to describe how a specific URI relates to a facet value. Not only does RDF decompose "information into pieces," but by incorporating facet values RDF also gives the URI the hierarchical and associative relationships expressed in the faceted classification. Combining faceted classification and RDF creates more knowledge than either would alone. An application understands the subject-predicate-object relationship in RDF and can display hierarchical and associative relationships based on the object (facet) value. This paper continues to investigate whether the above idea is indeed useful, used, and applicable. If so, how can a faceted classification be expressed in RDF, and what would this expression look like?
     Literature review: This paper used the same articles as "A Survey of Faceted Classification: History, Uses, Drawbacks and the Semantic Web" (Putkey, 2010). In that paper, appropriate resources were discovered by searching various databases for "faceted classification" and "faceted search," either in the descriptor or title fields. Citations were followed to find further articles, and the Internet was searched for the same terms. To retrieve the documents about RDF, searches combined "faceted classification" and "RDF," looking for these words in either the descriptor or title fields.
     Methodology: Based on information from research papers, further research was done on SKOS, on examples of SKOS and shared faceted classifications on the Semantic Web, and on how to express SKOS in RDF/XML. Once confident with these ideas, the author took a faceted taxonomy created in a Vocabulary Design class and encoded it using SKOS. Instead of writing RDF by hand in a text editor, a thesaurus tool was used to create the taxonomy according to SKOS standards and then export it in RDF/XML format. These processes and tools are then analyzed.
     Results: The initial statement of the problem was simply an extension of the survey paper done earlier in this class. To continue the research, more investigation was done into SKOS - a standard for expressing thesauri, taxonomies and faceted classifications so they can be shared on the Semantic Web.
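     A minimal sketch of the encoding step the paper describes, using the rdflib library to express one facet and one facet value in SKOS and serialize the result as RDF (the facet, labels, and URIs are invented for illustration):

       from rdflib import Graph, Literal, Namespace
       from rdflib.namespace import RDF, SKOS

       EX = Namespace("http://example.org/facets/")
       g = Graph()
       g.bind("skos", SKOS)
       g.bind("ex", EX)

       # One facet modeled as a concept scheme, one facet value as a concept.
       g.add((EX.Medium, RDF.type, SKOS.ConceptScheme))
       g.add((EX.Paperback, RDF.type, SKOS.Concept))
       g.add((EX.Paperback, SKOS.prefLabel, Literal("Paperback", lang="en")))
       g.add((EX.Paperback, SKOS.inScheme, EX.Medium))
       # Hierarchical and associative relationships from the faceted scheme:
       g.add((EX.Paperback, SKOS.broader, EX.Book))
       g.add((EX.Paperback, SKOS.related, EX.Softcover))

       # Serialize for sharing on the Semantic Web.
       print(g.serialize(format="turtle"))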
  12. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.05
    0.04765854 = product of:
      0.09531708 = sum of:
        0.07824052 = weight(_text_:standards in 4553) [ClassicSimilarity], result of:
          0.07824052 = score(doc=4553,freq=4.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.34819958 = fieldWeight in 4553, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4553)
        0.01707656 = product of:
          0.03415312 = sum of:
            0.03415312 = weight(_text_:22 in 4553) [ClassicSimilarity], result of:
              0.03415312 = score(doc=4553,freq=2.0), product of:
                0.17654699 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050415643 = queryNorm
                0.19345059 = fieldWeight in 4553, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4553)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
     Semantic Web knowledge representation standards, and in particular RDF and OWL, often come endowed with a formal semantics which is considered to be of fundamental importance for the field. Reasoning, i.e., the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on logical deductive methods and algorithms which can be proven to be sound, complete, and terminating, i.e. correct in a very strong sense. For various reasons, though - in particular the scalability issues arising from the ever-increasing amounts of Semantic Web data available and the inability of deductive algorithms to deal with noise in the data - it has been argued that alternative means of reasoning should be investigated which promise high scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard regarding correctness, against which alternative methods need to be tested. In this paper, we show that it is possible to train a deep learning system on RDF knowledge graphs such that it is able to perform reasoning over new RDF knowledge graphs, with high precision and recall compared to the deductive gold standard.
    Date
    16.11.2018 14:22:01
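     As a toy illustration of the idea - a learned system approximating deductive entailment and being judged against the deductive gold standard - the sketch below trains a simple classifier on triples labeled by a transitivity rule and tests it on held-out triples. Everything here is invented and far simpler than the paper's deep learning architecture:

       import itertools, random
       from sklearn.linear_model import LogisticRegression
       from sklearn.model_selection import train_test_split

       random.seed(0)
       N = 30
       entities = range(N)

       # A small random "broader" hierarchy over N entities.
       edges = {(i, j) for i, j in itertools.combinations(entities, 2)
                if random.random() < 0.1}

       # Deductive gold standard: the transitive closure of the edges.
       closure = set(edges)
       grew = True
       while grew:
           grew = False
           for (a, b), (c, d) in itertools.product(list(closure), repeat=2):
               if b == c and (a, d) not in closure:
                   closure.add((a, d))
                   grew = True

       # Encode each candidate triple (s, broader, o) as a one-hot pair.
       def featurize(s, o):
           v = [0.0] * (2 * N)
           v[s], v[N + o] = 1.0, 1.0
           return v

       pairs = [(s, o) for s in entities for o in entities if s != o]
       X = [featurize(s, o) for s, o in pairs]
       y = [int(p in closure) for p in pairs]

       # Train on part of the graph, test on held-out candidate triples.
       X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
       clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
       print("accuracy vs deductive gold standard:", clf.score(X_te, y_te))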
  13. Lynch, C.A.: ¬The Z39.50 information retrieval standard : part I: a strategic view of its past, present and future (1997) 0.05
    0.04580467 = product of:
      0.09160934 = sum of:
        0.01029941 = weight(_text_:information in 1262) [ClassicSimilarity], result of:
          0.01029941 = score(doc=1262,freq=8.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.116372846 = fieldWeight in 1262, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1262)
        0.08130993 = weight(_text_:standards in 1262) [ClassicSimilarity], result of:
          0.08130993 = score(doc=1262,freq=12.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.3618596 = fieldWeight in 1262, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1262)
      0.5 = coord(2/4)
    
    Abstract
     The Z39.50 standard for information retrieval is important from a number of perspectives. While still not widely known within the computer networking community, it is a mature standard that represents the culmination of two decades of thinking and debate about how information retrieval functions can be modeled, standardized, and implemented in a distributed systems environment. And - importantly - it has been tested through substantial deployment experience. Z39.50 is one of the few examples we have to date of a protocol that actually goes beyond codifying mechanism and moves into the area of standardizing shared semantic knowledge. The extent to which this should be a goal of the protocol has been an ongoing source of controversy and tension within the developer community, and differing views on this issue can be seen both in the standard itself and in the way that it is used in practice. Given the growing emphasis on issues such as "semantic interoperability" as part of the research agenda for digital libraries (see Clifford A. Lynch and Hector Garcia-Molina, Interoperability, Scaling, and the Digital Libraries Research Agenda, Report on the May 18-19, 1995 IITA Libraries Workshop, <http://www-diglib.stanford.edu/diglib/pub/reports/iita-dlw/main.html>), the insights gained by the Z39.50 community into the complex interactions among various definitions of semantics and interoperability are particularly relevant. The development process for the Z39.50 standard is also of interest in its own right. Its history, dating back to the 1970s, spans a period that saw the eclipse of formal standards-making agencies by groups such as the Internet Engineering Task Force (IETF) and informal standards development consortia. Moreover, in order to achieve meaningful implementation, Z39.50 had to move beyond its origins in the OSI debacle of the 1980s. Z39.50 has also been, to some extent, a victim of its own success - or at least its promise. Recent versions of the standard are highly extensible, and the consensus process of standards development has made it hospitable to an ever-growing set of new communities and requirements. As this process of extension has proceeded, it has become ever less clear what the appropriate scope and boundaries of the protocol should be, and what expectations one should have of practical interoperability among implementations of the standard. Z39.50 thus offers an excellent case study of the problems involved in managing the evolution of a standard over time. It may well offer useful lessons for the future of other standards such as HTTP and HTML, which seem to be facing some of the same issues.
     This paper, which will appear in two parts starting with this issue of D-Lib, looks at several strategic issues surrounding Z39.50. After a relatively brief overview of the function and history of the protocol, I will examine some of the competing visions of the protocol's role, with emphasis on issues of interoperability and the incorporation of semantics. The second installment of the paper will look at questions related to the management of the standard and the standards development process, with emphasis on the scope of the protocol and how that relates back again to interoperability questions. The paper concludes with a discussion of the adoption and deployment of the standard, its relationship to other standards, and some speculations on future directions for the protocol. This paper is not intended to be a tutorial on the details of how current or past versions of Z39.50 work. These technical details are covered not only in the standard itself (which can admittedly be rather difficult reading) but also in an array of tutorial and review papers (see <http://lcweb.loc.gov/z3950/agency> for bibliographies and pointers to on-line information on Z39.50). Instead, the paper's focus is on how and why Z39.50 developed the way it did, and the conceptual debates that have influenced its evolution and use. While a detailed technical knowledge of the operation of Z39.50 is certainly helpful, it should not be necessary in order to follow most of the material here. Some disclaimers are in order. I have been actively involved in the development of Z39.50 since the early 1980s and have been a participant - and on occasion, even an instigator - of some of the activities described here. This paper is an attempt to make a critical assessment of the current state of Z39.50 and a review of its development with the full benefit of hindsight. It recounts a number of debates that occurred within the developer community over the past years. In many of these, I advocated specific positions or approaches, sometimes successfully and sometimes unsuccessfully. What is presented here is one person's perspective - mine - which is sometimes at odds with the current consensus within the developer community; I've tried to represent opposing views fairly, and to differentiate my opinions from fact or consensus. However, others will undoubtedly disagree with many of the comments here.
  14. Borgman, C.L.: Multi-media, multi-cultural, and multi-lingual digital libraries : or how do we exchange data In 400 languages? (1997) 0.04
    0.043930154 = product of:
      0.08786031 = sum of:
        0.010406143 = weight(_text_:information in 1263) [ClassicSimilarity], result of:
          0.010406143 = score(doc=1263,freq=6.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.11757882 = fieldWeight in 1263, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1263)
        0.077454165 = weight(_text_:standards in 1263) [ClassicSimilarity], result of:
          0.077454165 = score(doc=1263,freq=8.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.34469998 = fieldWeight in 1263, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1263)
      0.5 = coord(2/4)
    
    Abstract
     The Internet would not be very useful if communication were limited to textual exchanges between speakers of English located in the United States. Rather, its value lies in its ability to enable people from multiple nations, speaking multiple languages, to employ multiple media in interacting with each other. While computer networks broke through national boundaries long ago, they remain much more effective for textual communication than for exchanges of sound, images, or mixed media - and more effective for communication in English than for exchanges in most other languages, much less interactions involving multiple languages. Supporting searching and display in multiple languages is an increasingly important issue for all digital libraries accessible on the Internet. Even if a digital library contains materials in only one language, the content needs to be searchable and displayable on computers in countries speaking other languages. We need to exchange data between digital libraries, whether in a single language or in multiple languages. Data exchanges may be large batch updates or interactive hyperlinks. In any of these cases, character sets must be represented in a consistent manner if exchanges are to succeed.
     Issues of interoperability, portability, and data exchange related to multi-lingual character sets have received surprisingly little attention in the digital library community or in discussions of standards for information infrastructure, except in Europe. The landmark collection of papers on Standards Policy for Information Infrastructure, for example, contains no discussion of multi-lingual issues except for a passing reference to the Unicode standard.
     The goal of this short essay is to draw attention to the multi-lingual issues involved in designing digital libraries accessible on the Internet. Many of the multi-lingual design issues parallel those of multi-media digital libraries, a topic more familiar to most readers of D-Lib Magazine. This essay draws examples from multi-media DLs to illustrate some of the urgent design challenges in creating a globally distributed network serving people who speak many languages other than English. First we introduce some general issues of medium, culture, and language, then discuss the design challenges in the transition from local to global systems, and lastly address technical matters.
     The technical issues involve the choice of character sets to represent languages, similar to the choices made in representing images or sound. However, the scale of the language problem is far greater. Standards for multi-media representation are being adopted fairly rapidly, in parallel with the availability of multi-media content in electronic form. By contrast, we have hundreds (and sometimes thousands) of years' worth of textual materials in hundreds of languages, created long before data encoding standards existed. Textual content from past and present is being encoded in language- and application-specific representations that are difficult to exchange without losing data - if they exchange at all.
     We illustrate the multi-language DL challenge with examples drawn from the research library community, which typically handles collections of materials in 400 or so languages. These are problems faced not only by developers of digital libraries, but by those who develop and manage any communication technology that crosses national or linguistic boundaries.
    Theme
    Information Gateway
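     A small illustration of the character set problem: the same multi-script text survives a Unicode (UTF-8) round trip losslessly but is silently degraded by a legacy single-region encoding unless both sides agree on representation in advance (the sample strings are invented):

       text = "Bibliothèque / 図書館 / Βιβλιοθήκη"  # French, Japanese, Greek

       # UTF-8 covers all of these scripts, so the round trip is lossless.
       assert text.encode("utf-8").decode("utf-8") == text

       # A legacy Latin-1 code page cannot represent the non-Latin scripts;
       # without a shared encoding, those characters are silently lost.
       degraded = text.encode("latin-1", errors="replace").decode("latin-1")
       print(degraded)  # 'Bibliothèque' survives; the rest becomes '?'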
  15. Miller, K.; Matthews, B.: Having the right connections : the LIMBER project (2001) 0.04
    0.043494053 = product of:
      0.08698811 = sum of:
        0.02059882 = weight(_text_:information in 5933) [ClassicSimilarity], result of:
          0.02059882 = score(doc=5933,freq=8.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.23274569 = fieldWeight in 5933, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5933)
        0.066389285 = weight(_text_:standards in 5933) [ClassicSimilarity], result of:
          0.066389285 = score(doc=5933,freq=2.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.29545712 = fieldWeight in 5933, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.046875 = fieldNorm(doc=5933)
      0.5 = coord(2/4)
    
    Abstract
     As with any journey, you have to make the right connections if you want to reach your desired destination. The goal in the LIMBER project is to facilitate cross-European data analysis independent of domain, resource, language and vocabulary. The paper describes the expertise, associations, standards and architecture underlying the project deliverables designed to achieve the project's ambitious aims. LIMBER (Language Independent Metadata Browsing of European Resources) is a project funded under the EU (European Union) IST (Information Society Technologies) programme that seeks to address the problems of linguistic and discipline boundaries, which, within a more integrated European environment, are becoming increasingly important. Decision-makers, researchers and journalists need to be provided with a broader, comparative picture of society across the continent, with the social science information often required to be correlated with information from domains such as environmental science, geography and health. This cross-discipline interoperability will be provided via a uniform metadata description. In addition, the provision of multilingual user interfaces and the controlled vocabulary of a multi-lingual thesaurus will make these datasets globally accessible in a range of end-user natural languages.
    Source
    Journal of digital information. 1(2001) no.8
  16. Paskin, N.: DOI: a 2003 progress report (2003) 0.04
    0.042975374 = product of:
      0.08595075 = sum of:
        0.00849658 = weight(_text_:information in 1203) [ClassicSimilarity], result of:
          0.00849658 = score(doc=1203,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.0960027 = fieldWeight in 1203, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1203)
        0.077454165 = weight(_text_:standards in 1203) [ClassicSimilarity], result of:
          0.077454165 = score(doc=1203,freq=8.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.34469998 = fieldWeight in 1203, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1203)
      0.5 = coord(2/4)
    
    Abstract
     The International DOI Foundation (IDF) recently published the third edition of its DOI Handbook, which sets the scene for DOI's expansion into much wider applications. Edition 3 is not simply an updated user guide. A great deal has happened in the underlying technologies and in the practical deployment and development of DOIs (Digital Object Identifiers) since the last edition was published a year ago. Much of the program of technical work foreseen at the inception of DOIs has now been completed. The initial simple implementation of DOI as a persistent name linked to redirection continues to grow, with approaching ten million DOIs assigned from several hundred organisations through a number of Registration Agencies in the USA, Europe, and Australasia, supporting large-scale business uses. Implementations of more sophisticated applications (offering associated services) have been developing well but on a smaller scale: a framework for building these has been completed as part of the latest release and promises to stimulate a new wave of growth. From its original starting point in text publishing, there has been gradual embrace by a number of communities: these include national libraries (a consortium of national libraries recently joined the IDF); government documentation (with the appointment of TSO The Stationery Office in the UK as a DOI agency and the announced intention of the EC Office of Publications to use DOIs); and non-English-language markets (France, Germany, Spain, Italy, Korea). However, implementations in non-text sectors have been far slower to develop, though several are now under discussion. The DOI community can point to several significant achievements over the past few years:
     * A practical, successful, open implementation of naming objects, treating content as information objects, not simply packets of bits;
     * The IDF's role in co-sponsoring, championing, and now implementing the <indecs> framework as a semantic tool for structured metadata - an essential step for treating content as information in Semantic-Web-like applications;
     * A template for building advanced applications, connecting resolution and metadata technologies, and offering hooks to web services and similar applications;
     * The development of a policy framework that allows multiple communities autonomy;
     * The practical implementation of DOIs with emerging related standards such as the OpenURL framework in contextual linking.
     A number of issues remain to be solved. In the main these are no longer technical in nature, but more concerned with perception and outreach to other communities. They include: correctly positioning the DOI in the standards community as a practical implementation (based on standards, but more than standards); offering the benefits of DOI to other communities working in related identifier development whilst allowing them to remain largely autonomous; demonstrating how DOIs can complement, rather than compete with, other activities; and ensuring that a sustainable long-term infrastructure for any application (commercial and non-commercial alike) is in place. Persistent, actionable identifiers with a fully managed, sustainable infrastructure are not appropriate for every activity, but they are suitable for many; and where they are used, the key to providing a successful and widely adopted system is encouraging economy of scale (and so, where possible, convergence with other related efforts), flexibility of use, and a low barrier to use. DOI is well on the way to providing this, but is not yet guaranteed success without the further effort that is now being applied.
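     The core mechanism - a persistent name resolved by redirection - can be observed directly against the doi.org resolver. A minimal sketch, assuming 10.1000/182 (commonly cited as the DOI of the DOI Handbook itself) as the example identifier; any valid DOI behaves the same way:

       import requests

       doi = "10.1000/182"  # assumed example: commonly cited as the DOI Handbook's DOI
       resp = requests.head(f"https://doi.org/{doi}", allow_redirects=False, timeout=30)

       # The resolver answers with an HTTP redirect; the Location header holds
       # the URL currently registered for this persistent identifier.
       print(resp.status_code)             # typically 302
       print(resp.headers.get("Location"))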
  17. Waard, A. de; Fluit, C.; Harmelen, F. van: Drug Ontology Project for Elsevier (DOPE) (2007) 0.04
    0.042682633 = product of:
      0.085365266 = sum of:
        0.02277285 = weight(_text_:information in 758) [ClassicSimilarity], result of:
          0.02277285 = score(doc=758,freq=22.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.25731003 = fieldWeight in 758, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=758)
        0.06259242 = weight(_text_:standards in 758) [ClassicSimilarity], result of:
          0.06259242 = score(doc=758,freq=4.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.27855965 = fieldWeight in 758, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.03125 = fieldNorm(doc=758)
      0.5 = coord(2/4)
    
    Abstract
     Innovative research institutes rely on the availability of complete and accurate information about new research and development, and it is the business of information providers such as Elsevier to provide the required information in a cost-effective way. It is very likely that the Semantic Web will make an important contribution to this effort, since it facilitates access to an unprecedented quantity of data. However, with the unremitting growth of scientific information, integrating access to all this information remains a significant problem, not least because of the heterogeneity of the information sources involved - sources which may use different syntactic standards (syntactic heterogeneity), organize information in very different ways (structural heterogeneity) and even use different terminologies to refer to the same information (semantic heterogeneity). The ability to address these different kinds of heterogeneity is the key to integrated access. Thesauri have already proven to be a core technology for effective information access, as they provide controlled vocabularies for indexing information and thereby help to overcome some of the problems of free-text search by relating and grouping relevant terms in a specific domain. However, there is currently no open architecture which supports the use of these thesauri for querying other data sources. For example, when we move from the centralized and controlled use of EMTREE within EMBASE.com to a distributed setting, it becomes crucial to improve access to the thesaurus by means of a standardized representation using open data standards that allow for semantic qualifications. In general, mental models and keywords for accessing data diverge between subject areas and communities, and so many different ontologies have been developed. An ideal architecture must therefore support the disclosure of distributed and heterogeneous data sources through different ontologies. The aim of the DOPE project (Drug Ontology Project for Elsevier) is to investigate the possibility of providing access to multiple information sources in the area of life science through a single interface.
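     A toy sketch of how a thesaurus mitigates the free-text search problem by relating and grouping terms: expand a query term with its grouped terms before matching. The mini-thesaurus below is invented; EMTREE itself is far larger and hierarchical:

       # Invented mini-thesaurus: preferred term -> grouped related terms.
       THESAURUS = {
           "analgesic": ["painkiller", "paracetamol", "ibuprofen"],
       }

       def expand(term):
           """Return the query term together with its grouped thesaurus terms."""
           return {term, *THESAURUS.get(term, [])}

       documents = [
           "ibuprofen dosing in adults",
           "history of aspirin marketing",
       ]
       terms = expand("analgesic")
       hits = [d for d in documents if terms & set(d.split())]
       print(hits)  # matches the ibuprofen document although "analgesic" never appears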
  18. Roszkowski, M.; Lukas, C.: ¬A distributed architecture for resource discovery using metadata (1998) 0.04
    0.036151398 = product of:
      0.072302796 = sum of:
        0.009710376 = weight(_text_:information in 1256) [ClassicSimilarity], result of:
          0.009710376 = score(doc=1256,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.10971737 = fieldWeight in 1256, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=1256)
        0.06259242 = weight(_text_:standards in 1256) [ClassicSimilarity], result of:
          0.06259242 = score(doc=1256,freq=4.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.27855965 = fieldWeight in 1256, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.03125 = fieldNorm(doc=1256)
      0.5 = coord(2/4)
    
    Abstract
    This article describes an approach for linking geographically distributed collections of metadata so that they are searchable as a single collection. We describe the infrastructure, which uses standard Internet protocols such as the Lightweight Directory Access Protocol (LDAP) and the Common Indexing Protocol (CIP) to distribute queries, return results, and exchange index information. We discuss the advantages of using linked collections of authoritative metadata as an alternative to using a keyword-indexing search engine for resource discovery. We examine other architectures that use metadata for resource discovery, such as Dienst/NCSTRL, the AHDS HTTP/Z39.50 Gateway, and the ROADS initiative. Finally, we discuss research issues and future directions of the project. The Internet Scout Project, which is funded by the National Science Foundation and is located in the Computer Sciences Department at the University of Wisconsin-Madison, is charged with assisting the higher education community in resource discovery on the Internet. To that end, the Scout Report and subsequent subject-specific Scout Reports were developed to guide the U.S. higher education community to research-quality resources. The Scout Report Signpost utilizes the content from the Scout Reports as the basis of a metadata collection. Signpost consists of more than 2000 cataloged Internet sites described using established standards such as Library of Congress Subject Headings and abbreviated call letters, and emerging standards such as the Dublin Core (DC). This searchable and browseable collection is free and freely accessible, as are all of the Internet Scout Project's services.
    As well developed as both the Scout Reports and Signpost are, they cannot capture the wealth of high-quality content that is available on the Internet. An obvious next step toward increasing the usefulness of our own collection and its value to our customer base is to partner with other high-quality content providers who have developed similar collections and to develop a single, virtual collection. Project Isaac (working title) is the Internet Scout Project's latest resource discovery effort. Project Isaac involves the development of a research testbed that allows experimentation with protocols and algorithms for creating, maintaining, indexing and searching distributed collections of metadata. Project Isaac's infrastructure uses standard Internet protocols, such as the Lightweight Directory Access Protocol (LDAP) and the Common Indexing Protocol (CIP), to distribute queries, return results, and exchange index or centroid information. The overall goal is to support a single-search interface to geographically distributed and independently maintained metadata collections.
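    A minimal sketch of what querying one node of such a distributed collection might look like, assuming the ldap3 Python library; the host, base DN, and the Dublin Core-style attribute names are invented for illustration, not taken from the Isaac schema:

      # Search a single metadata server over LDAP; a broker would fan the
      # same query out to the other collections, with CIP exchanging index
      # summaries so only servers likely to hold matches are contacted.
      from ldap3 import Server, Connection, SUBTREE

      server = Server("ldap.scout.example.org")   # hypothetical host
      conn = Connection(server, auto_bind=True)   # anonymous bind

      conn.search(
          search_base="ou=signpost,dc=example,dc=org",  # hypothetical base DN
          search_filter="(dc-subject=astronomy)",       # hypothetical attribute
          search_scope=SUBTREE,
          attributes=["dc-title", "dc-identifier"],
      )
      for entry in conn.entries:
          print(entry["dc-title"], entry["dc-identifier"])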
  19. Bartolo, L.M.; Lowe, C.S.; Sadoway, D.R.; Powell, A.C.; Glotzer, S.C.: NSDL MatDL : exploring digital library roles (2005) 0.04
    0.03509516 = product of:
      0.07019032 = sum of:
        0.014865918 = weight(_text_:information in 1181) [ClassicSimilarity], result of:
          0.014865918 = score(doc=1181,freq=6.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.16796975 = fieldWeight in 1181, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1181)
        0.0553244 = weight(_text_:standards in 1181) [ClassicSimilarity], result of:
          0.0553244 = score(doc=1181,freq=2.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.24621427 = fieldWeight in 1181, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1181)
      0.5 = coord(2/4)
    
    Abstract
    A primary goal of the NSDL Materials Digital Library (MatDL) is to bring materials science research and education closer together. MatDL is exploring the various roles digital libraries can serve in the materials science community, including: 1) supporting a virtual lab, 2) developing markup language applications, and 3) building tools for metadata capture. MatDL is being integrated into an MIT virtual laboratory experience. In early self-assessment surveys, students expressed positive opinions of the potential value of MatDL in supporting a virtual lab and in accomplishing additional educational objectives. A separate survey suggested that the effectiveness of a virtual lab may approach that of a physical lab on some laboratory learning objectives. MatDL is collaboratively developing a materials property grapher (KSU and MIT) and a submission tool (KSU and U-M). MatML is an extensible markup language for exchanging materials information, developed by materials data experts in industry, government, standards organizations, and professional societies. The web-based MatML grapher allows students to compare selected materials properties across approximately 80 MatML-tagged materials. The MatML grapher adds value in this educational context by allowing students to utilize real property data to make optimal material selection decisions. The submission tool has been integrated into the regular workflow of U-M students and researchers generating nanostructure images. It prompts users for domain-specific information, automatically generating and attaching keywords and editable descriptions.
    Theme
    Information Gateway
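    A sketch of the kind of processing behind such a grapher: extract one named property per material from MatML-tagged data so that values can be compared across materials. The fragment below is simplified and invented to show the shape of the task, not canonical MatML.

      # Pull a named property out of each material in a MatML-like document.
      import xml.etree.ElementTree as ET

      MATML = """
      <MatML_Doc>
        <Material>
          <BulkDetails><Name>Alumina</Name></BulkDetails>
          <PropertyData property="density"><Data format="float">3.95</Data></PropertyData>
        </Material>
        <Material>
          <BulkDetails><Name>Copper</Name></BulkDetails>
          <PropertyData property="density"><Data format="float">8.96</Data></PropertyData>
        </Material>
      </MatML_Doc>
      """

      root = ET.fromstring(MATML)
      for material in root.iter("Material"):
          name = material.findtext("BulkDetails/Name")
          for prop in material.iter("PropertyData"):
              if prop.get("property") == "density":
                  print(name, float(prop.findtext("Data")), "g/cm^3")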
  20. Baker, T.; Dekkers, M.: Identifying metadata elements with URIs : The CORES resolution (2003) 0.03
    0.034729347 = product of:
      0.06945869 = sum of:
        0.006866273 = weight(_text_:information in 1199) [ClassicSimilarity], result of:
          0.006866273 = score(doc=1199,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.0775819 = fieldWeight in 1199, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=1199)
        0.06259242 = weight(_text_:standards in 1199) [ClassicSimilarity], result of:
          0.06259242 = score(doc=1199,freq=4.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.27855965 = fieldWeight in 1199, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.03125 = fieldNorm(doc=1199)
      0.5 = coord(2/4)
    
    Abstract
    On 18 November 2002, at a meeting organised by the CORES Project (Information Society Technologies Programme, European Union), several organisations regarded as maintenance authorities for metadata elements achieved consensus on a resolution to assign Uniform Resource Identifiers (URIs) to metadata elements as a useful first step towards the development of mapping infrastructures and interoperability services. The signatories of the CORES Resolution agreed to promote this consensus in their communities and beyond and to implement an action plan within the following six months. Six months having passed, the maintainers of GILS, ONIX, MARC 21, CERIF, DOI, IEEE/LOM, and Dublin Core report here on their implementations of the resolution and highlight issues of relevance to establishing good-practice conventions for declaring, identifying, and maintaining metadata elements more generally. In June 2003, the resolution was also endorsed by the maintainers of UNIMARC. The "Resolution on Metadata Element Identifiers", or CORES Resolution, is an agreement among the maintenance organisations for several major metadata standards - GILS, ONIX, MARC 21, UNIMARC, CERIF, DOI®, IEEE/LOM, and Dublin Core - to identify their metadata elements using Uniform Resource Identifiers (URIs). The Uniform Resource Identifier, defined in IETF RFC 2396 as "a compact string of characters for identifying an abstract or physical resource", has been promoted for use as a universal form of identification by the World Wide Web Consortium. The CORES Resolution, formulated at a meeting organised by the European project CORES in November 2002, included a commitment to publicise the consensus statement to a wider audience of metadata standards initiatives and to implement key points of the agreement within the following six months - specifically, to define URI assignment mechanisms, assign URIs to elements, and formulate policies for the persistence of those URIs. This article marks the passage of six months by reporting on progress made in implementing this common action plan. After presenting the text of the CORES Resolution and its three "clarifications", the article summarises the position of each signatory organisation towards assigning URIs to its metadata elements, noting any practical or strategic problems that may have emerged. These progress reports were based on input from Thomas Baker, José Borbinha, Eliot Christian, Erik Duval, Keith Jeffery, Rebecca Guenther, and Norman Paskin. The article closes with a few general observations about these first steps towards the clarification of shared conventions for the identification of metadata elements and, one can hope, towards the ultimate goal of improving interoperability among a diversity of metadata communities.
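    The practical payoff of the resolution is that an element can be named unambiguously across standards. A small illustration, assuming Python's rdflib, which ships the Dublin Core namespaces; the URIs printed are the published ones:

      # Dublin Core element URIs as shipped in rdflib's namespace module.
      from rdflib.namespace import DC, DCTERMS

      print(DC.title)          # http://purl.org/dc/elements/1.1/title
      print(DC.creator)        # http://purl.org/dc/elements/1.1/creator
      print(DCTERMS.modified)  # http://purl.org/dc/terms/modified

    Two applications exchanging records can agree on http://purl.org/dc/elements/1.1/title as an element's identity regardless of the label or tag each uses internally, which is precisely the interoperability step the resolution aims at.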

Languages

  • e 293
  • d 120
  • i 4
  • f 2
  • a 1
  • es 1
  • sp 1