Search (325 results, page 1 of 17)

  • theme_ss:"Metadaten"
  1. Minas, M.; Shklar, L.: Visualizing information repositories on the World-Wide Web (1996) 0.05
    0.047677718 = product of:
      0.12714058 = sum of:
        0.045056276 = weight(_text_:wide in 6267) [ClassicSimilarity], result of:
          0.045056276 = score(doc=6267,freq=2.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.342674 = fieldWeight in 6267, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6267)
        0.042337947 = weight(_text_:web in 6267) [ClassicSimilarity], result of:
          0.042337947 = score(doc=6267,freq=6.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.43716836 = fieldWeight in 6267, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6267)
        0.039746363 = weight(_text_:data in 6267) [ClassicSimilarity], result of:
          0.039746363 = score(doc=6267,freq=6.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.42357713 = fieldWeight in 6267, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6267)
      0.375 = coord(3/8)
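     The indented figures above (and under each hit below) are Lucene "explain" output for a TF-IDF ranking under ClassicSimilarity. As a reading aid, the short Python sketch below recomputes the displayed score for this first result from the values shown; it assumes Lucene's standard ClassicSimilarity formula (tf = sqrt(freq), queryWeight = idf x queryNorm, fieldWeight = tf x idf x fieldNorm, score = coord x sum of per-term products) and is only a paraphrase of the numbers above, not code from the search system.

     from math import sqrt

     # Recompute the ClassicSimilarity figures shown for result 1 (doc 6267).
     # idf, queryNorm, fieldNorm and term frequencies are copied from the
     # explain output above; the formula is the assumed Lucene standard.
     QUERY_NORM = 0.029675366
     FIELD_NORM = 0.0546875        # fieldNorm(doc=6267)
     COORD = 3 / 8                 # coord(3/8): 3 of 8 query terms matched

     terms = {                     # term: (idf, term frequency in the field)
         "wide": (4.4307585, 2.0),
         "web":  (3.2635105, 6.0),
         "data": (3.1620505, 6.0),
     }

     total = 0.0
     for term, (idf, freq) in terms.items():
         query_weight = idf * QUERY_NORM               # e.g. 0.13148437 for "wide"
         field_weight = sqrt(freq) * idf * FIELD_NORM  # e.g. 0.342674 for "wide"
         total += query_weight * field_weight          # e.g. 0.045056276 for "wide"

     print(round(COORD * total, 9))                    # ~0.047677718, shown as 0.05

     The same pattern applies to every scoring block in this list; only the matched terms, their frequencies and the fieldNorm values change from hit to hit.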
    
    Abstract
     The main objective of the proposed high-level 'Visual Repository Definition Language' is to enable advanced Web presentation of large amounts of existing heterogeneous information. Statements of the language serve to describe the desired structure of information repositories, which are composed of metadata entities encapsulating the original data. Such an approach helps to avoid the usual relocation and restructuring of data that occurs when providing Web access to it. The language has been designed to be useful even for inexperienced programmers. Its applicability is demonstrated by a real example, creating a repository of judicial opinions from publicly available raw data.
  2. Liechti, O.; Sifer, M.J.; Ichikawa, T.: Structured graph format : XML metadata for describing Web site structure (1998) 0.04
    0.04462623 = product of:
      0.11900328 = sum of:
        0.045056276 = weight(_text_:wide in 3597) [ClassicSimilarity], result of:
          0.045056276 = score(doc=3597,freq=2.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.342674 = fieldWeight in 3597, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3597)
        0.0598749 = weight(_text_:web in 3597) [ClassicSimilarity], result of:
          0.0598749 = score(doc=3597,freq=12.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.6182494 = fieldWeight in 3597, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3597)
        0.014072108 = product of:
          0.028144216 = sum of:
            0.028144216 = weight(_text_:22 in 3597) [ClassicSimilarity], result of:
              0.028144216 = score(doc=3597,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.2708308 = fieldWeight in 3597, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3597)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
     To improve searching, filtering and processing of information on the Web, a common effort is made in the direction of metadata, defined as machine-understandable information about Web resources or other things. In particular, the eXtensible Markup Language (XML) aims at providing a common syntax for emerging metadata formats. Proposes the Structured Graph Format (SGF), an XML-compliant markup language based on structured graphs, for capturing Web sites' structure. Presents SGMapper, a client-side tool which aims to facilitate navigation in large Web sites by generating highly interactive site maps using SGF metadata.
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue devoted to the Proceedings of the 7th International World Wide Web Conference, held 14-18 April 1998, Brisbane, Australia
  3. Marchiori, M.: ¬The limits of Web metadata, and beyond (1998) 0.04
    0.04266991 = product of:
      0.11378643 = sum of:
        0.045056276 = weight(_text_:wide in 3383) [ClassicSimilarity], result of:
          0.045056276 = score(doc=3383,freq=2.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.342674 = fieldWeight in 3383, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3383)
        0.05465805 = weight(_text_:web in 3383) [ClassicSimilarity], result of:
          0.05465805 = score(doc=3383,freq=10.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.5643819 = fieldWeight in 3383, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3383)
        0.014072108 = product of:
          0.028144216 = sum of:
            0.028144216 = weight(_text_:22 in 3383) [ClassicSimilarity], result of:
              0.028144216 = score(doc=3383,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.2708308 = fieldWeight in 3383, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3383)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
     Highlights 2 major problems of WWW metadata: that it will take some time before a reasonable number of people start using metadata to provide a better Web classification, and that no one can guarantee that a majority of Web objects will ever be properly classified via metadata. Addresses the problem of how to cope with the intrinsic limits of Web metadata, proposes a method to solve these problems, and shows evidence of its effectiveness. Examines the important question of what critical mass of metadata is required in the WWW for it to be really useful.
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue devoted to the Proceedings of the 7th International World Wide Web Conference, held 14-18 April 1998, Brisbane, Australia
  4. Greenberg, J.: Metadata and the World Wide Web (2002) 0.04
    0.040400896 = product of:
      0.10773572 = sum of:
        0.045513712 = weight(_text_:wide in 4264) [ClassicSimilarity], result of:
          0.045513712 = score(doc=4264,freq=4.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.34615302 = fieldWeight in 4264, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4264)
        0.039041467 = weight(_text_:web in 4264) [ClassicSimilarity], result of:
          0.039041467 = score(doc=4264,freq=10.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.40312994 = fieldWeight in 4264, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4264)
        0.023180548 = weight(_text_:data in 4264) [ClassicSimilarity], result of:
          0.023180548 = score(doc=4264,freq=4.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.24703519 = fieldWeight in 4264, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4264)
      0.375 = coord(3/8)
    
    Abstract
     Metadata is of paramount importance for persons, organizations, and endeavors of every dimension that are increasingly turning to the World Wide Web (hereafter referred to as the Web) as a chief conduit for accessing and disseminating information. This is evidenced by the development and implementation of metadata schemas supporting projects ranging from restricted corporate intranets, data warehouses, and consumer-oriented electronic commerce enterprises to freely accessible digital libraries, educational initiatives, virtual museums, and other public Web sites. Today's metadata activities are unprecedented because they extend beyond the traditional library environment in an effort to deal with the Web's exponential growth. This article considers metadata in today's Web environment. The article defines metadata, examines the relationship between metadata and cataloging, provides definitions for key metadata vocabulary terms, and explores the topic of metadata generation. Metadata is an extensive and expanding subject that is prevalent in many environments. For practical reasons, this article has elected to concentrate on the information resource domain, which is defined by electronic textual documents, graphical images, archival materials, museum artifacts, and other objects found in both digital and physical information centers (e.g., libraries, museums, record centers, and archives). To show the extent and larger application of metadata, several examples are also drawn from the data warehouse, electronic commerce, open source, and medical communities.
  5. Wolfekuhler, M.R.; Punch, W.F.: Finding salient features for personal Web pages categories (1997) 0.04
    0.038049873 = product of:
      0.10146633 = sum of:
        0.045056276 = weight(_text_:wide in 2673) [ClassicSimilarity], result of:
          0.045056276 = score(doc=2673,freq=2.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.342674 = fieldWeight in 2673, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2673)
        0.042337947 = weight(_text_:web in 2673) [ClassicSimilarity], result of:
          0.042337947 = score(doc=2673,freq=6.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.43716836 = fieldWeight in 2673, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2673)
        0.014072108 = product of:
          0.028144216 = sum of:
            0.028144216 = weight(_text_:22 in 2673) [ClassicSimilarity], result of:
              0.028144216 = score(doc=2673,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.2708308 = fieldWeight in 2673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2673)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
     Examines techniques that discover features in sets of pre-categorized documents, such that similar documents can be found on the WWW. Examines techniques which will classify training examples with high accuracy, then explains why this is not necessarily useful. Describes a method for extracting word clusters from the raw document features. Results show that the clustering technique is successful in discovering word groups in personal Web pages which can be used to find similar information on the WWW.
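     The clustering step sketched in this abstract can be pictured with a generic stand-in. The following Python sketch is not the authors' technique: it simply represents each word by the documents it occurs in (via a TF-IDF term-document matrix) and groups words with k-means; the toy corpus and the assumption that scikit-learn is available are both illustrative.

     # Generic illustration (not the paper's method): represent each word by the
     # documents it occurs in and group words with k-means. Assumes scikit-learn.
     from collections import defaultdict
     from sklearn.cluster import KMeans
     from sklearn.feature_extraction.text import TfidfVectorizer

     docs = [
         "metadata standards for web resources",
         "cataloging library resources with metadata",
         "personal web pages and browsing features",
         "search engines index personal web pages",
     ]

     vectorizer = TfidfVectorizer()
     doc_term = vectorizer.fit_transform(docs)   # documents x terms matrix
     term_vectors = doc_term.T.toarray()         # one row per term: its document profile

     labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(term_vectors)

     clusters = defaultdict(list)
     for term, label in zip(vectorizer.get_feature_names_out(), labels):
         clusters[label].append(term)
     for label, words in sorted(clusters.items()):
         print(label, words)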
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue of papers from the 6th International World Wide Web conference, held 7-11 Apr 1997, Santa Clara, California
  6. Waugh, A.: Specifying metadata standards for metadata tool configuration (1998) 0.04
    0.035816662 = product of:
      0.095511094 = sum of:
        0.051492885 = weight(_text_:wide in 3596) [ClassicSimilarity], result of:
          0.051492885 = score(doc=3596,freq=2.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.3916274 = fieldWeight in 3596, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0625 = fieldNorm(doc=3596)
        0.0279358 = weight(_text_:web in 3596) [ClassicSimilarity], result of:
          0.0279358 = score(doc=3596,freq=2.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.2884563 = fieldWeight in 3596, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=3596)
        0.01608241 = product of:
          0.03216482 = sum of:
            0.03216482 = weight(_text_:22 in 3596) [ClassicSimilarity], result of:
              0.03216482 = score(doc=3596,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.30952093 = fieldWeight in 3596, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3596)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue devoted to the Proceedings of the 7th International World Wide Web Conference, held 14-18 April 1998, Brisbane, Australia
  7. Managing metadata in web-scale discovery systems (2016) 0.03
    0.034530215 = product of:
      0.09208058 = sum of:
        0.025746442 = weight(_text_:wide in 3336) [ClassicSimilarity], result of:
          0.025746442 = score(doc=3336,freq=2.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.1958137 = fieldWeight in 3336, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=3336)
        0.03421423 = weight(_text_:web in 3336) [ClassicSimilarity], result of:
          0.03421423 = score(doc=3336,freq=12.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.35328537 = fieldWeight in 3336, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=3336)
        0.032119907 = weight(_text_:data in 3336) [ClassicSimilarity], result of:
          0.032119907 = score(doc=3336,freq=12.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.342302 = fieldWeight in 3336, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=3336)
      0.375 = coord(3/8)
    
    Abstract
    This book shows you how to harness the power of linked data and web-scale discovery systems to manage and link widely varied content across your library collection. Libraries are increasingly using web-scale discovery systems to help clients find a wide assortment of library materials, including books, journal articles, special collections, archival collections, videos, music and open access collections. Depending on the library material catalogued, the discovery system might need to negotiate different metadata standards, such as AACR, RDA, RAD, FOAF, VRA Core, METS, MODS, RDF and more. In Managing Metadata in Web-Scale Discovery Systems, editor Louise Spiteri and a range of international experts show you how to: * maximize the effectiveness of web-scale discovery systems * provide a smooth and seamless discovery experience to your users * help users conduct searches that yield relevant results * manage the sheer volume of items to which you can provide access, so your users can actually find what they need * maintain shared records that reflect the needs, languages, and identities of culturally and ethnically varied communities * manage metadata both within, across, and outside, library discovery tools by converting your library metadata to linked open data that all systems can access * manage user generated metadata from external services such as Goodreads and LibraryThing * mine user generated metadata to better serve your users in areas such as collection development or readers' advisory. The book will be essential reading for cataloguers, technical services and systems librarians and library and information science students studying modules on metadata, cataloguing, systems design, data management, and digital libraries. The book will also be of interest to those managing metadata in archives, museums and other cultural heritage institutions.
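     One recurring step in the book's programme, converting library metadata to linked open data, can be pictured in a few lines. The sketch below is a hypothetical illustration only (the record, its URI scheme and the choice of Dublin Core terms are invented for the example, not taken from the book): it serializes a simple catalogue record as N-Triples.

     # Hypothetical sketch of a "library metadata to linked open data" step:
     # serialize a simple catalogue record as N-Triples using Dublin Core terms.
     # The record, its URI scheme and field selection are invented for the example.
     DCTERMS = "http://purl.org/dc/terms/"

     record = {
         "uri": "http://example.org/record/123",
         "title": "Managing metadata in web-scale discovery systems",
         "creator": "Spiteri, Louise",
         "issued": "2016",
     }

     def to_ntriples(rec):
         subject = f"<{rec['uri']}>"
         lines = []
         for field in ("title", "creator", "issued"):   # map 1:1 onto dcterms properties
             lines.append(f'{subject} <{DCTERMS}{field}> "{rec[field]}" .')
         return "\n".join(lines)

     print(to_ntriples(record))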
    Content
    1. Introduction: the landscape of web-scale discovery - Louise Spiteri 2. Sharing metadata across discovery systems - Marshall Breeding, Angela Kroeger and Heather Moulaison Sandy 3. Managing linked open data across discovery systems - Ali Shiri and Danoosh Davoodi 4. Redefining library resources in discovery systems - Christine DeZelar-Tiedman 5. Managing volume in discovery systems - Aaron Tay 6. Managing outsourced metadata in discovery systems - Laurel Tarulli 7. Managing user-generated metadata in discovery systems - Louise Spiteri
    LCSH
    Linked data
    Subject
    Linked data
  8. Cantara, L.: METS: the metadata encoding and transmission standard (2005) 0.03
    0.032969773 = product of:
      0.0879194 = sum of:
        0.038619664 = weight(_text_:wide in 5727) [ClassicSimilarity], result of:
          0.038619664 = score(doc=5727,freq=2.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.29372054 = fieldWeight in 5727, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=5727)
        0.029630389 = weight(_text_:web in 5727) [ClassicSimilarity], result of:
          0.029630389 = score(doc=5727,freq=4.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.3059541 = fieldWeight in 5727, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=5727)
        0.019669347 = weight(_text_:data in 5727) [ClassicSimilarity], result of:
          0.019669347 = score(doc=5727,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.2096163 = fieldWeight in 5727, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=5727)
      0.375 = coord(3/8)
    
    Abstract
    The Metadata Encoding and Transmission Standard (METS) is a data communication standard for encoding descriptive, administrative, and structural metadata regarding objects within a digital library, expressed using the XML Schema Language of the World Wide Web Consortium. An initiative of the Digital Library Federation, METS is under development by an international editorial board and is maintained in the Network Development and MARC Standards Office of the Library of Congress. Designed in conformance with the Open Archival Information System (OAIS) Reference Model, a METS document encapsulates digital objects and metadata as Information Packages for transmitting and/or exchanging digital objects to and from digital repositories, disseminating digital objects via the Web, and archiving digital objects for long-term preservation and access. This paper presents an introduction to the METS standard and through illustrated examples, demonstrates how to build a METS document.
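     The paper's walk-through of building a METS document can be approximated with a short script. The sketch below is a rough, non-validated illustration (identifiers, the Dublin Core title and the file URL are invented); it assembles a minimal METS-like tree with a descriptive metadata section, a file section and a structural map, using element names from the published METS schema.

     # Rough sketch of a minimal METS-like document: descriptive metadata section,
     # file section and structural map. Element names follow the METS schema, but
     # this is illustrative and not validated; identifiers and the URL are invented.
     import xml.etree.ElementTree as ET

     METS = "http://www.loc.gov/METS/"
     XLINK = "http://www.w3.org/1999/xlink"
     DC = "http://purl.org/dc/elements/1.1/"
     ET.register_namespace("mets", METS)
     ET.register_namespace("xlink", XLINK)
     ET.register_namespace("dc", DC)

     mets = ET.Element(f"{{{METS}}}mets", {"OBJID": "example-object-001"})

     dmd = ET.SubElement(mets, f"{{{METS}}}dmdSec", {"ID": "DMD1"})
     wrap = ET.SubElement(dmd, f"{{{METS}}}mdWrap", {"MDTYPE": "DC"})
     xml_data = ET.SubElement(wrap, f"{{{METS}}}xmlData")
     title = ET.SubElement(xml_data, f"{{{DC}}}title")
     title.text = "Sample digitized object"

     file_sec = ET.SubElement(mets, f"{{{METS}}}fileSec")
     grp = ET.SubElement(file_sec, f"{{{METS}}}fileGrp", {"USE": "master"})
     f = ET.SubElement(grp, f"{{{METS}}}file", {"ID": "FILE1", "MIMETYPE": "image/tiff"})
     ET.SubElement(f, f"{{{METS}}}FLocat",
                   {"LOCTYPE": "URL", f"{{{XLINK}}}href": "http://example.org/img/0001.tif"})

     struct = ET.SubElement(mets, f"{{{METS}}}structMap", {"TYPE": "physical"})
     div = ET.SubElement(struct, f"{{{METS}}}div", {"LABEL": "Page 1"})
     ET.SubElement(div, f"{{{METS}}}fptr", {"FILEID": "FILE1"})

     print(ET.tostring(mets, encoding="unicode"))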
  9. Franklin, R.A.: Re-inventing subject access for the semantic web (2003) 0.03
    0.032686703 = product of:
      0.087164536 = sum of:
        0.05543339 = weight(_text_:web in 2556) [ClassicSimilarity], result of:
          0.05543339 = score(doc=2556,freq=14.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.57238775 = fieldWeight in 2556, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2556)
        0.019669347 = weight(_text_:data in 2556) [ClassicSimilarity], result of:
          0.019669347 = score(doc=2556,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.2096163 = fieldWeight in 2556, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2556)
        0.012061807 = product of:
          0.024123615 = sum of:
            0.024123615 = weight(_text_:22 in 2556) [ClassicSimilarity], result of:
              0.024123615 = score(doc=2556,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.23214069 = fieldWeight in 2556, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2556)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    First generation scholarly research on the Web lacked a firm system of authority control. Second generation Web research is beginning to model subject access with library science principles of bibliographic control and cataloguing. Harnessing the Web and organising the intellectual content with standards and controlled vocabulary provides precise search and retrieval capability, increasing relevance and efficient use of technology. Dublin Core metadata standards permit a full evaluation and cataloguing of Web resources appropriate to highly specific research needs and discovery. Current research points to a type of structure based on a system of faceted classification. This system allows the semantic and syntactic relationships to be defined. Controlled vocabulary, such as the Library of Congress Subject Headings, can be assigned, not in a hierarchical structure, but rather as descriptive facets of relating concepts. Web design features such as this are adding value to discovery and filtering out data that lack authority. The system design allows for scalability and extensibility, two technical features that are integral to future development of the digital library and resource discovery.
    Date
    30.12.2008 18:22:46
    Theme
    Semantic Web
  10. Heery, R.: Information gateways : collaboration and content (2000) 0.03
    0.03133958 = product of:
      0.08357221 = sum of:
        0.045056276 = weight(_text_:wide in 4866) [ClassicSimilarity], result of:
          0.045056276 = score(doc=4866,freq=2.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.342674 = fieldWeight in 4866, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4866)
        0.024443826 = weight(_text_:web in 4866) [ClassicSimilarity], result of:
          0.024443826 = score(doc=4866,freq=2.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.25239927 = fieldWeight in 4866, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4866)
        0.014072108 = product of:
          0.028144216 = sum of:
            0.028144216 = weight(_text_:22 in 4866) [ClassicSimilarity], result of:
              0.028144216 = score(doc=4866,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.2708308 = fieldWeight in 4866, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4866)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
     Information subject gateways provide targeted discovery services for their users, giving access to Web resources selected according to quality and subject coverage criteria. Information gateways recognise that they must collaborate on a wide range of issues relating to content to ensure continued success. This report is informed by discussion of content activities at the 1999 Imesh Workshop. The author considers the implications for subject-based gateways of co-operation regarding coverage policy, creation of metadata, and provision of searching and browsing across services. Other possibilities for co-operation include working more closely with information providers, and disclosure of information in joint metadata registries.
    Date
    22. 6.2002 19:38:54
  11. Baker, T.: ¬A grammar of Dublin Core (2000) 0.03
    0.030434225 = product of:
      0.06086845 = sum of:
        0.025746442 = weight(_text_:wide in 1236) [ClassicSimilarity], result of:
          0.025746442 = score(doc=1236,freq=2.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.1958137 = fieldWeight in 1236, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=1236)
        0.0139679 = weight(_text_:web in 1236) [ClassicSimilarity], result of:
          0.0139679 = score(doc=1236,freq=2.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.14422815 = fieldWeight in 1236, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=1236)
        0.013112898 = weight(_text_:data in 1236) [ClassicSimilarity], result of:
          0.013112898 = score(doc=1236,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.1397442 = fieldWeight in 1236, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=1236)
        0.008041205 = product of:
          0.01608241 = sum of:
            0.01608241 = weight(_text_:22 in 1236) [ClassicSimilarity], result of:
              0.01608241 = score(doc=1236,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.15476047 = fieldWeight in 1236, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1236)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    Dublin Core is often presented as a modern form of catalog card -- a set of elements (and now qualifiers) that describe resources in a complete package. Sometimes it is proposed as an exchange format for sharing records among multiple collections. The founding principle that "every element is optional and repeatable" reinforces the notion that a Dublin Core description is to be taken as a whole. This paper, in contrast, is based on a much different premise: Dublin Core is a language. More precisely, it is a small language for making a particular class of statements about resources. Like natural languages, it has a vocabulary of word-like terms, the two classes of which -- elements and qualifiers -- function within statements like nouns and adjectives; and it has a syntax for arranging elements and qualifiers into statements according to a simple pattern. Whenever tourists order a meal or ask directions in an unfamiliar language, considerate native speakers will spontaneously limit themselves to basic words and simple sentence patterns along the lines of "I am so-and-so" or "This is such-and-such". Linguists call this pidginization. In such situations, a small phrase book or translated menu can be most helpful. By analogy, today's Web has been called an Internet Commons where users and information providers from a wide range of scientific, commercial, and social domains present their information in a variety of incompatible data models and description languages. In this context, Dublin Core presents itself as a metadata pidgin for digital tourists who must find their way in this linguistically diverse landscape. Its vocabulary is small enough to learn quickly, and its basic pattern is easily grasped. It is well-suited to serve as an auxiliary language for digital libraries. This grammar starts by defining terms. It then follows a 200-year-old tradition of English grammar teaching by focusing on the structure of single statements. It concludes by looking at the growing dictionary of Dublin Core vocabulary terms -- its registry, and at how statements can be used to build the metadata equivalent of paragraphs and compositions -- the application profile.
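     Baker's "grammar" reading of Dublin Core, in which statements are built from elements, optional qualifiers and values and are constrained by an application profile, can be mocked up in a few lines. The sketch below is only an illustrative model under that reading; the profile and the sample record are invented, and it implements no DCMI specification.

     # Illustrative model of Dublin Core "statements": element + optional qualifier
     # + value, checked against a tiny, invented application profile.
     from dataclasses import dataclass
     from typing import Optional

     @dataclass
     class Statement:
         element: str                      # noun-like term, e.g. "date", "subject"
         value: str
         qualifier: Optional[str] = None   # adjective-like refinement, e.g. "created"

     # A minimal application profile: allowed elements and their allowed qualifiers.
     PROFILE = {
         "title":   {None},
         "creator": {None},
         "date":    {None, "created", "modified"},
         "subject": {None, "LCSH"},
     }

     def conforms(stmt: Statement) -> bool:
         return stmt.element in PROFILE and stmt.qualifier in PROFILE[stmt.element]

     description = [
         Statement("title", "A grammar of Dublin Core"),
         Statement("creator", "Baker, Thomas"),
         Statement("date", "2000", qualifier="created"),
         Statement("subject", "Metadata", qualifier="LCSH"),
     ]

     print(all(conforms(s) for s in description))   # True for this invented profile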
    Date
    26.12.2011 14:01:22
  12. Metadata and semantics research : 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings (2016) 0.03
    0.026613263 = product of:
      0.0709687 = sum of:
        0.024443826 = weight(_text_:web in 3283) [ClassicSimilarity], result of:
          0.024443826 = score(doc=3283,freq=2.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.25239927 = fieldWeight in 3283, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3283)
        0.03245277 = weight(_text_:data in 3283) [ClassicSimilarity], result of:
          0.03245277 = score(doc=3283,freq=4.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.34584928 = fieldWeight in 3283, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3283)
        0.014072108 = product of:
          0.028144216 = sum of:
            0.028144216 = weight(_text_:22 in 3283) [ClassicSimilarity], result of:
              0.028144216 = score(doc=3283,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.2708308 = fieldWeight in 3283, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3283)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    This book constitutes the refereed proceedings of the 10th Metadata and Semantics Research Conference, MTSR 2016, held in Göttingen, Germany, in November 2016. The 26 full papers and 6 short papers presented were carefully reviewed and selected from 67 submissions. The papers are organized in several sessions and tracks: Digital Libraries, Information Retrieval, Linked and Social Data, Metadata and Semantics for Open Repositories, Research Information Systems and Data Infrastructures, Metadata and Semantics for Agriculture, Food and Environment, Metadata and Semantics for Cultural Collections and Applications, European and National Projects.
    Theme
    Semantic Web
  13. Dempsey, L.: Metadata (1997) 0.03
    0.02598612 = product of:
      0.10394448 = sum of:
        0.051492885 = weight(_text_:wide in 46) [ClassicSimilarity], result of:
          0.051492885 = score(doc=46,freq=2.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.3916274 = fieldWeight in 46, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0625 = fieldNorm(doc=46)
        0.052451592 = weight(_text_:data in 46) [ClassicSimilarity], result of:
          0.052451592 = score(doc=46,freq=8.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.5589768 = fieldWeight in 46, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=46)
      0.25 = coord(2/8)
    
    Abstract
     The term 'metadata' is becoming commonly used to refer to a variety of types of data which describe other data. A familiar example is bibliographic data, which describes a book or a serial article. Suggests that a routine definition might be: 'metadata is data which describes attributes of a resource'. Gives some examples before looking at the Dublin Core, a simple response to the challenge of describing a wide range of network resources.
  14. Dempsey, L.: Metadata (1997) 0.03
    0.02598612 = product of:
      0.10394448 = sum of:
        0.051492885 = weight(_text_:wide in 107) [ClassicSimilarity], result of:
          0.051492885 = score(doc=107,freq=2.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.3916274 = fieldWeight in 107, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0625 = fieldNorm(doc=107)
        0.052451592 = weight(_text_:data in 107) [ClassicSimilarity], result of:
          0.052451592 = score(doc=107,freq=8.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.5589768 = fieldWeight in 107, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=107)
      0.25 = coord(2/8)
    
    Abstract
     The term 'metadata' is becoming commonly used to refer to a variety of types of data which describe other data. A familiar example is bibliographic data, which describes a book or a serial article. Suggests that a routine definition might be: 'Metadata is data which describes attributes of a resource'. Provides examples to expand on this before looking at the Dublin Core, a simple set of elements for describing a wide range of network resources.
  15. Chopey, M.: Planning and implementing a metadata-driven digital repository (2005) 0.03
    0.025189433 = product of:
      0.10075773 = sum of:
        0.07282194 = weight(_text_:wide in 5729) [ClassicSimilarity], result of:
          0.07282194 = score(doc=5729,freq=4.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.5538448 = fieldWeight in 5729, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0625 = fieldNorm(doc=5729)
        0.0279358 = weight(_text_:web in 5729) [ClassicSimilarity], result of:
          0.0279358 = score(doc=5729,freq=2.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.2884563 = fieldWeight in 5729, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=5729)
      0.25 = coord(2/8)
    
    Abstract
    Metadata is used to organize and control a wide range of different types of information object collections, most of which are accessed via the World Wide Web. This chapter presents a brief introduction to the purpose of metadata and how it has developed, and an overview of the steps to be taken and the functional expertise required in planning for and implementing the creation, storage, and use of metadata for resource discovery in a local repository of information objects.
  16. Haslhofer, B.: ¬A Web-based mapping technique for establishing metadata interoperability (2008) 0.02
    0.024714718 = product of:
      0.06590591 = sum of:
        0.022756856 = weight(_text_:wide in 3173) [ClassicSimilarity], result of:
          0.022756856 = score(doc=3173,freq=4.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.17307651 = fieldWeight in 3173, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3173)
        0.028953929 = weight(_text_:web in 3173) [ClassicSimilarity], result of:
          0.028953929 = score(doc=3173,freq=22.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.29896918 = fieldWeight in 3173, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3173)
        0.014195128 = weight(_text_:data in 3173) [ClassicSimilarity], result of:
          0.014195128 = score(doc=3173,freq=6.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.15127754 = fieldWeight in 3173, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3173)
      0.375 = coord(3/8)
    
    Abstract
     The integration of metadata from distinct, heterogeneous data sources requires metadata interoperability, which is a qualitative property of metadata information objects that is not given by default. The technique of metadata mapping allows domain experts to establish metadata interoperability in a certain integration scenario. Mapping solutions, as a technical manifestation of this technique, are already available for the intensively studied domain of database system interoperability, but they rarely exist for the Web. If we consider the amount of steadily increasing structured metadata and corresponding metadata schemes on the Web, we can observe a clear need for a mapping solution that can operate in a Web-based environment. To achieve that, we first need to build its technical core, which is a mapping model that provides the language primitives to define mapping relationships. Existing Semantic Web languages such as RDFS and OWL define some basic mapping elements (e.g., owl:equivalentProperty, owl:sameAs), but do not address the full spectrum of semantic and structural heterogeneities that can occur among distinct, incompatible metadata information objects. Furthermore, it is still unclear how to process defined mapping relationships during run-time in order to deliver metadata to the client in a uniform way. As the main contribution of this thesis, we present an abstract mapping model, which reflects the mapping problem on a generic level and provides the means for reconciling incompatible metadata. Instance transformation functions and URIs take a central role in that model. The former cover a broad spectrum of possible structural and semantic heterogeneities, while the latter bind the complete mapping model to the architecture of the World Wide Web. On the concrete, language-specific level we present a binding of the abstract mapping model for the RDF Vocabulary Description Language (RDFS), which allows us to create mapping specifications among incompatible metadata schemes expressed in RDFS. The mapping model is embedded in a cyclic process that categorises the requirements a mapping solution should fulfil into four subsequent phases: mapping discovery, mapping representation, mapping execution, and mapping maintenance. In this thesis, we mainly focus on mapping representation and on the transformation of mapping specifications into executable SPARQL queries. For mapping discovery support, the model provides an interface for plugging in schema and ontology matching algorithms. For mapping maintenance we introduce the concept of a simple, but effective mapping registry. Based on the mapping model, we propose a Web-based mediator-wrapper architecture that allows domain experts to set up mediation endpoints that provide a uniform SPARQL query interface to a set of distributed metadata sources. The involved data sources are encapsulated by wrapper components that expose the contained metadata and the schema definitions on the Web and provide a SPARQL query interface to these metadata. In this thesis, we present the OAI2LOD Server, a wrapper component for integrating metadata that are accessible via the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH).
     In a case study, we demonstrate how mappings can be created in a Web environment and how our mediator-wrapper architecture can easily be configured in order to integrate metadata from various heterogeneous data sources without the need to install any mapping solution or metadata integration solution in a local system environment.
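     The step from mapping specifications to executable SPARQL queries described in this abstract can be pictured with a toy generator. The sketch below is not the thesis's mapping model: it assumes a simple one-to-one property mapping (the source property URI is invented; the target is a Dublin Core term) and emits a SPARQL CONSTRUCT query string. In a full mediator-wrapper setup such a query would be answered by a wrapper component like the OAI2LOD Server mentioned above.

     # Toy illustration: turn a one-to-one property mapping into a SPARQL CONSTRUCT
     # query. Not the mapping model from the thesis; the source property URI is invented.
     def mapping_to_construct(source_prop: str, target_prop: str) -> str:
         return (
             "CONSTRUCT { ?resource <%s> ?value . }\n"
             "WHERE     { ?resource <%s> ?value . }"
             % (target_prop, source_prop)
         )

     query = mapping_to_construct(
         source_prop="http://example.org/schemaA/heading",   # invented source scheme
         target_prop="http://purl.org/dc/terms/title",
     )
     print(query)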
    Content
     The integration of metadata from different, heterogeneous data sources requires metadata interoperability, a property that is not given by default. Metadata mapping techniques allow domain experts to establish metadata interoperability in a particular integration context, and mapping solutions are meant to provide the necessary support. While such solutions already exist for the established field of interoperable databases, this is not the case for Web environments. Given the ever-growing volume of structured metadata and metadata schemas on the Web, a need for Web-based mapping solutions is emerging. The core of such a solution is a mapping model that defines the language constructs needed to specify mappings. Existing Semantic Web languages such as RDFS or OWL do offer basic mapping elements (e.g., owl:equivalentProperty, owl:sameAs), but they do not address the full spectrum of semantic and structural heterogeneities that can occur between different, incompatible metadata objects. In addition, technical approaches for turning previously defined mappings into executable queries are lacking. As the central scientific contribution of this dissertation, an abstract mapping model is presented that reflects the mapping problem at a generic level and offers approaches for reconciling incompatible schemas. Instance transformation functions and URIs play a central role in this model: the former bridge a broad spectrum of possible semantic and structural heterogeneities, while the latter tie the mapping model into the architecture of the World Wide Web. On a concrete, language-specific level, the binding of the abstract model to the RDF Vocabulary Description Language (RDFS) is presented, which makes mappings between different metadata schemas expressed in RDFS possible. The mapping model is embedded in a cyclic mapping process that categorises the requirements for mapping solutions into four consecutive phases: mapping discovery, mapping representation, mapping execution and mapping maintenance. This dissertation focuses mainly on the representation phase and on the transformation of mapping specifications into executable SPARQL queries. To support the discovery phase, the mapping model offers an interface for integrating schema or ontology matching algorithms. For the maintenance phase, a simple but fit-for-purpose mapping registry concept is presented. Based on the mapping model, a Web-based mediator-wrapper architecture is introduced that gives domain experts the ability to define SPARQL mediation interfaces. The data sources to be integrated must be encapsulated by wrapper components that expose the contained metadata on the Web and provide SPARQL access to them. As an exemplary wrapper component, the OAI2LOD Server is presented, which can be used to integrate data sources that expose their metadata via the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH).
     In a case study, we show how mappings can be created in Web environments and how, after a few simple configuration steps, our mediator-wrapper architecture can integrate metadata from different, heterogeneous data sources without the need to install any mapping or metadata integration solution in a local system environment.
  17. Niederée, C.: Metadaten als Bausteine des Semantic Web (2003) 0.02
    0.023513263 = product of:
      0.09405305 = sum of:
        0.038619664 = weight(_text_:wide in 1761) [ClassicSimilarity], result of:
          0.038619664 = score(doc=1761,freq=2.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.29372054 = fieldWeight in 1761, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=1761)
        0.05543339 = weight(_text_:web in 1761) [ClassicSimilarity], result of:
          0.05543339 = score(doc=1761,freq=14.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.57238775 = fieldWeight in 1761, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1761)
      0.25 = coord(2/8)
    
    Abstract
     The "Semantic Web" is one of the most important current trends in the further development of the World Wide Web. The ambitious vision of this next generation of the WWW is to achieve a new quality in the provision of content and services through the semantic enrichment of information, and to open up entirely new application possibilities for the Web. Important goals of Semantic Web development are improved support for cooperation between humans and computers and intelligent assistance in carrying out tasks in cooperative, distributed information environments. The key to achieving these goals is the enrichment of data on the Web with metadata that embeds this data in a semantic context. This context information is interpreted by software applications and used for information filtering, query refinement, and the provision of intelligent assistants. A major challenge here is the appropriate modelling and description of the context, which must permit automatic, global interpretation without recourse to a universally valid semantic description schema. Agreeing on such a universal schema is not possible in a framework as extensive, heterogeneous and autonomous as the WWW.
    Theme
    Semantic Web
  18. Toth, M.B.; Emery, D.: Applying DCMI elements to digital images and text in the Archimedes Palimpsest Program (2008) 0.02
    0.02261011 = product of:
      0.060293626 = sum of:
        0.017459875 = weight(_text_:web in 2651) [ClassicSimilarity], result of:
          0.017459875 = score(doc=2651,freq=2.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.18028519 = fieldWeight in 2651, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2651)
        0.032782245 = weight(_text_:data in 2651) [ClassicSimilarity], result of:
          0.032782245 = score(doc=2651,freq=8.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.34936053 = fieldWeight in 2651, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2651)
        0.010051507 = product of:
          0.020103013 = sum of:
            0.020103013 = weight(_text_:22 in 2651) [ClassicSimilarity], result of:
              0.020103013 = score(doc=2651,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.19345059 = fieldWeight in 2651, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2651)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    The digitized version of the only extant copy of Archimedes' key mathematical and scientific works contains over 6,500 images and 130 pages of transcriptions. Metadata is essential for managing, integrating and accessing these digital resources in the Web 2.0 environment. The Dublin Core Metadata Element Set meets many of our needs. It offers the needed flexibility and applicability to a variety of data sets containing different texts and images in a dynamic technical environment. The program team has continued to refine its data dictionary and elements based on the Dublin Core standard and feedback from the Dublin Core community since the 2006 Dublin Core Conference. This presentation cites the application and utility of the DCMI Standards during the final phase of this decade-long program. Since the 2006 conference, the amount of data has grown tenfold with new imaging techniques. Use of the DCMI Standards for integration across digital images and transcriptions will allow the hosting and integration of this data set and other cultural works across service providers, libraries and cultural institutions.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  19. Coleman, A.S.: From cataloging to metadata : Dublin Core records for the library catalog (2005) 0.02
    0.021848556 = product of:
      0.08739422 = sum of:
        0.045056276 = weight(_text_:wide in 5722) [ClassicSimilarity], result of:
          0.045056276 = score(doc=5722,freq=2.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.342674 = fieldWeight in 5722, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5722)
        0.042337947 = weight(_text_:web in 5722) [ClassicSimilarity], result of:
          0.042337947 = score(doc=5722,freq=6.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.43716836 = fieldWeight in 5722, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5722)
      0.25 = coord(2/8)
    
    Abstract
    The Dublin Core is an international standard for describing and cataloging all kinds of information resources: books, articles, videos, and World Wide Web (web) resources. Sixteen Dublin Core (DC) elements and the steps for cataloging web resources using these elements and minimal controlled values are discussed, general guidelines for metadata creation are highlighted, a worksheet is provided to create the DC metadata records for the library catalog, and sample resource descriptions in DC are included.
  20. Belém, F.M.; Almeida, J.M.; Gonçalves, M.A.: ¬A survey on tag recommendation methods : a review (2017) 0.02
    0.021721518 = product of:
      0.057924047 = sum of:
        0.024691992 = weight(_text_:web in 3524) [ClassicSimilarity], result of:
          0.024691992 = score(doc=3524,freq=4.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.25496176 = fieldWeight in 3524, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3524)
        0.023180548 = weight(_text_:data in 3524) [ClassicSimilarity], result of:
          0.023180548 = score(doc=3524,freq=4.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.24703519 = fieldWeight in 3524, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3524)
        0.010051507 = product of:
          0.020103013 = sum of:
            0.020103013 = weight(_text_:22 in 3524) [ClassicSimilarity], result of:
              0.020103013 = score(doc=3524,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.19345059 = fieldWeight in 3524, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3524)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
     Tags (keywords freely assigned by users to describe web content) have become highly popular in Web 2.0 applications because of the strong incentives for users to create and describe their own content and the ease of doing so. This increase in tag popularity has led to a vast literature on tag recommendation methods. These methods aim at assisting users in the tagging process, possibly increasing the quality of the generated tags and, consequently, improving the quality of the information retrieval (IR) services that rely on tags as data sources. Despite the numerous and diversified previous studies on tag recommendation, to our knowledge, no previous work has summarized and organized them into a single survey article. In this article, we propose a taxonomy for tag recommendation methods, classifying them according to the target of the recommendations, their objectives, exploited data sources, and underlying techniques. Moreover, we provide a critical overview of these methods, pointing out their advantages and disadvantages. Finally, we describe the main open challenges related to the field, such as tag ambiguity, cold start, and evaluation issues.
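     As a concrete flavour of the kind of method the survey organizes, the sketch below implements a naive co-occurrence recommender: given tags already attached to an object, it suggests tags that frequently co-occurred with them in a small training set. Both the approach and the training data are invented for illustration; this is a generic baseline, not one of the surveyed methods.

     # Naive co-occurrence tag recommender, as a generic illustration of the field
     # the survey covers (not a method taken from it). Training data is invented.
     from collections import Counter
     from itertools import combinations

     training = [                       # tag sets previously assigned to web objects
         {"python", "programming", "tutorial"},
         {"python", "data", "pandas"},
         {"metadata", "cataloging", "library"},
         {"metadata", "dublin-core", "library"},
     ]

     cooc = Counter()
     for tags in training:
         for a, b in combinations(sorted(tags), 2):
             cooc[(a, b)] += 1
             cooc[(b, a)] += 1

     def recommend(seed_tags, k=3):
         scores = Counter()
         for seed in seed_tags:
             for (a, b), count in cooc.items():
                 if a == seed and b not in seed_tags:
                     scores[b] += count
         return [tag for tag, _ in scores.most_common(k)]

     print(recommend({"metadata"}))     # e.g. ['library', 'cataloging', 'dublin-core']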
    Date
    16.11.2017 13:30:22

Types

  • a 279
  • el 47
  • m 18
  • s 16
  • x 3
  • b 2
  • n 2
