Search (76 results, page 1 of 4)

  • Filter: type_ss:"el"
  • Filter: theme_ss:"Metadaten"
  1. Tonkin, E.; Baptista, A.A.; Hooland, S. van; Resmini, A.; Méndez, E.; Neville, L.: Kinds of Tags : a collaborative research study on tag usage and structure (2007) 0.03
    Abstract
    KoT (Kinds of Tags) is an ongoing collaborative research effort with many participants worldwide, including the University of Minho, UKOLN, the University of Bologna, the Université Libre de Bruxelles and the Universidad Carlos III de Madrid. It focuses on the analysis of tags in common use in the practice of social tagging, with the aim of discovering how easily tags can be 'normalised' for interoperability with standard metadata environments such as the DC Metadata Terms.
  2. Golub, K.; Moon, J.; Nielsen, M.L.; Tudhope, D.: EnTag: Enhanced Tagging for Discovery (2008) 0.03
    Abstract
    Purpose: Investigate the combination of controlled and folksonomy approaches to support resource discovery in repositories and digital collections. Aim: Investigate whether use of an established controlled vocabulary can help improve social tagging for better resource discovery. Objectives: (1) Investigate indexing aspects when using only social tagging versus social tagging with suggestions from a controlled vocabulary; (2) Investigate the above in two different contexts: tagging by readers and tagging by authors; (3) Investigate the influence on retrieval of social tagging alone versus social tagging with a controlled vocabulary. - See: http://www.ukoln.ac.uk/projects/enhanced-tagging/.
    Source
    http://dc2008.de/wp-content/uploads/2008/09/nkos.pdf
  3. Understanding metadata (2004) 0.02
    Abstract
    Metadata (structured information about an object or collection of objects) is increasingly important to libraries, archives, and museums. And although librarians are familiar with a number of issues that apply to creating and using metadata (e.g., authority control, controlled vocabularies), the world of metadata is nonetheless different from library cataloging, with its own set of challenges. Therefore, whether you are new to these concepts or quite experienced with classic cataloging, this short (20-page) introductory paper on metadata can be helpful.
    Date
    10. 9.2004 10:22:40
  4. Roy, W.; Gray, C.: Preparing existing metadata for repository batch import : a recipe for a fickle food (2018) 0.02
    Abstract
    In 2016, the University of Waterloo began offering a mediated copyright review and deposit service to support the growth of our institutional repository UWSpace. This resulted in the need to batch import large lists of published works into the institutional repository quickly and accurately. A range of methods has been proposed for harvesting publication metadata en masse, but many technological solutions can easily become detached from a workflow that is both reproducible for support staff and applicable to a range of situations. Many repositories offer the capacity for batch upload via CSV, so our method provides a template Python script that leverages the Habanero library to populate CSV files with existing metadata retrieved from the CrossRef API. In our case, we have combined this with useful metadata contained in a TSV file downloaded from Web of Science in order to enrich our metadata as well. The appeal of this 'low-maintenance' method is that it provides more robust options for gathering metadata semi-automatically, and requires only that the user can access Web of Science and run the Python program, while still remaining flexible enough for local customizations.
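    The pipeline the abstract describes can be sketched in a few lines. This is a minimal, hypothetical illustration of the CrossRef-to-CSV step using the Habanero library; the DOI, output file name, and column choices are assumptions, not the authors' actual script.

    # Minimal sketch: populate a CSV with CrossRef metadata via habanero.
    # DOI list, file name, and columns are illustrative assumptions.
    import csv
    from habanero import Crossref

    dois = ["10.1234/example-doi"]  # hypothetical input list

    cr = Crossref()
    with open("batch_import.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["doi", "title", "authors", "year"])
        writer.writeheader()
        for doi in dois:
            msg = cr.works(ids=doi)["message"]  # one work record per DOI
            writer.writerow({
                "doi": doi,
                "title": (msg.get("title") or [""])[0],
                "authors": "; ".join(
                    f"{a.get('family', '')}, {a.get('given', '')}"
                    for a in msg.get("author", [])
                ),
                "year": msg.get("issued", {}).get("date-parts", [[None]])[0][0],
            })

    A Web of Science TSV, as mentioned above, could then be joined against this CSV to enrich the records before batch import.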
    Date
    10.11.2018 16:27:22
    Type
    a
  5. Baker, T.: ¬A grammar of Dublin Core (2000) 0.02
    Abstract
    Dublin Core is often presented as a modern form of catalog card -- a set of elements (and now qualifiers) that describe resources in a complete package. Sometimes it is proposed as an exchange format for sharing records among multiple collections. The founding principle that "every element is optional and repeatable" reinforces the notion that a Dublin Core description is to be taken as a whole. This paper, in contrast, is based on a much different premise: Dublin Core is a language. More precisely, it is a small language for making a particular class of statements about resources. Like natural languages, it has a vocabulary of word-like terms, the two classes of which -- elements and qualifiers -- function within statements like nouns and adjectives; and it has a syntax for arranging elements and qualifiers into statements according to a simple pattern. Whenever tourists order a meal or ask directions in an unfamiliar language, considerate native speakers will spontaneously limit themselves to basic words and simple sentence patterns along the lines of "I am so-and-so" or "This is such-and-such". Linguists call this pidginization. In such situations, a small phrase book or translated menu can be most helpful. By analogy, today's Web has been called an Internet Commons where users and information providers from a wide range of scientific, commercial, and social domains present their information in a variety of incompatible data models and description languages. In this context, Dublin Core presents itself as a metadata pidgin for digital tourists who must find their way in this linguistically diverse landscape. Its vocabulary is small enough to learn quickly, and its basic pattern is easily grasped. It is well-suited to serve as an auxiliary language for digital libraries. This grammar starts by defining terms. It then follows a 200-year-old tradition of English grammar teaching by focusing on the structure of single statements. It concludes by looking at the growing dictionary of Dublin Core vocabulary terms -- its registry, and at how statements can be used to build the metadata equivalent of paragraphs and compositions -- the application profile.
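    Baker's statement model maps naturally onto RDF triples: each statement pairs a resource with an element and a value. A minimal sketch using rdflib and the Dublin Core element namespace; the resource URI and literal values are invented for illustration.

    # Sketch: a Dublin Core description expressed as individual statements,
    # in the spirit of Baker's grammar. Resource URI and values are invented.
    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import DC

    doc = URIRef("http://example.org/docs/grammar")  # hypothetical resource
    g = Graph()
    g.add((doc, DC.title, Literal("A grammar of Dublin Core")))  # one statement
    g.add((doc, DC.creator, Literal("Baker, T.")))               # another statement
    g.add((doc, DC.date, Literal("2000")))
    g.add((doc, DC.language, Literal("en")))

    print(g.serialize(format="turtle"))

    Because every element is optional and repeatable, the description is simply the set of such statements about the resource.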
    Date
    26.12.2011 14:01:22
    Type
    a
  6. Sewing, S.: Bestandserhaltung und Archivierung : Koordinierung auf der Basis eines gemeinsamen Metadatenformates in den deutschen und österreichischen Bibliotheksverbünden (2021) 0.02
    Date
    22. 5.2021 12:43:05
    Location
    A
    Type
    a
  7. Broughton, V.: Automatic metadata generation : Digital resource description without human intervention (2007) 0.01
    Date
    22. 9.2007 15:41:14
  8. Weibel, S.L.: Border crossings : reflections on a decade of metadata consensus building (2005) 0.00
    Abstract
    In June of this year, I performed my final official duties as part of the Dublin Core Metadata Initiative management team. It is a happy irony to affix a seal on that service in this journal, as both D-Lib Magazine and the Dublin Core celebrate their tenth anniversaries. This essay is a personal reflection on some of the achievements and lessons of that decade. The OCLC-NCSA Metadata Workshop took place in March of 1995, and as we tried to understand what it meant and who would care, D-Lib Magazine came into being and offered a natural venue for sharing our work. I recall a certain skepticism when Bill Arms said "We want D-Lib to be the first place people look for the latest developments in digital library research." These were the early days in the evolution of electronic publishing, and the goal was ambitious. By any measure, a decade of high-quality electronic publishing is an auspicious accomplishment, and D-Lib (and its host, CNRI) deserve congratulations for having achieved their goal. I am grateful to have been a contributor. That first DC workshop led to further workshops, a community, a variety of standards in several countries, an ISO standard, a conference series, and an international consortium. Looking back on this evolution is both satisfying and wistful. While I am pleased that the achievements are substantial, the unmet challenges also provide a rich till in which to cultivate insights on the development of digital infrastructure.
    Type
    a
  9. Apps, A.; MacIntyre, R.; Heery, R.; Patel, M.; Salokhe, G.: Zetoc : a Dublin Core Based Current Awareness Service (2002) 0.00
    Type
    a
  10. Godby, C.J.; Young, J.A.; Childress, E.: ¬A repository of metadata crosswalks (2004) 0.00
    Abstract
    This paper proposes a model for metadata crosswalks that associates three pieces of information: the crosswalk, the source metadata standard, and the target metadata standard, each of which may have a machine-readable encoding and human-readable description. The crosswalks are encoded as METS records that are made available to a repository for processing by search engines, OAI harvesters, and custom-designed Web services. The METS object brings together all of the information required to access and interpret crosswalks and represents a significant improvement over previously available formats. But it raises questions about how best to describe these complex objects and exposes gaps that must eventually be filled in by the digital library community.
    Type
    a
  11. Wolfe, E.W.: ¬A case study in automated metadata enhancement : Natural Language Processing in the humanities (2019) 0.00
    Abstract
    The Black Book Interactive Project at the University of Kansas (KU) is developing an expanded corpus of novels by African American authors, with an emphasis on lesser known writers and a goal of expanding research in this field. Using a custom metadata schema with an emphasis on race-related elements, each novel is analyzed for a variety of elements such as literary style, targeted content analysis, historical context, and other areas. Librarians at KU have worked to develop a variety of computational text analysis processes designed to assist with specific aspects of this metadata collection, including text mining and natural language processing, automated subject extraction based on word sense disambiguation, harvesting data from Wikidata, and other actions.
    Type
    a
  12. Kunze, J.: ¬A Metadata Kernel for Electronic Permanence (2002) 0.00
    Type
    a
  13. METS: an overview & tutorial : Metadata Encoding & Transmission Standard (METS) (2001) 0.00
    Abstract
    Maintaining a library of digital objects of necessity requires maintaining metadata about those objects. The metadata necessary for successful management and use of digital objects is both more extensive than and different from the metadata used for managing collections of printed works and other physical materials. While a library may record descriptive metadata regarding a book in its collection, the book will not dissolve into a series of unconnected pages if the library fails to record structural metadata regarding the book's organization, nor will scholars be unable to evaluate the book's worth if the library fails to note that the book was produced using a Ryobi offset press. The same cannot be said for a digital version of the same book. Without structural metadata, the page image or text files comprising the digital work are of little use, and without technical metadata regarding the digitization process, scholars may be unsure of how accurate a reflection of the original the digital version provides. For internal management purposes, a library must have access to appropriate technical metadata in order to periodically refresh and migrate the data, ensuring the durability of valuable resources.
  14. Weibel, S.: ¬A proposed convention for embedding metadata in HTML <June 2, 1996> (1996) 0.00
  15. Mehler, A.; Waltinger, U.: Automatic enrichment of metadata (2009) 0.00
    Abstract
    In this talk we present a retrieval model based on social ontologies. More specifically, we utilize the Wikipedia category system in order to perform semantic searches. That is, textual input is used to build queries with which documents are retrieved that do not necessarily contain any query term but are semantically related to the input text by virtue of their content. We present a desktop which utilizes this search facility in a web-based environment - the so-called eHumanities Desktop.
  16. Edmunds, J.: Roadmap to nowhere : BIBFLOW, BIBFRAME, and linked data for libraries (2017) 0.00
    Abstract
    On December 12, 2016, Carl Stahmer and MacKenzie Smith presented at the CNI Members Fall Meeting about the BIBFLOW project, self-described on Twitter as "a two-year project of the UC Davis University Library and Zepheira investigating the future of library technical services." In her opening remarks, Ms. Smith, University Librarian at UC Davis, stated that one of the goals of the project was to devise a roadmap "to get from where we are today, which is kind of the 1970s with a little lipstick on it, to 2020, which is where we're going to be very soon." The notion that where libraries are today is somehow behind the times is one of the commonly heard rationales behind a move to linked data. Stated more precisely:
    • Libraries devote considerable time and resources to producing high-quality bibliographic metadata
    • This metadata is stored in unconnected silos
    • This metadata is in a format (MARC) that is incompatible with technologies of the emerging Semantic Web
    • The visibility of library metadata is diminished as a result of the two points above
    Are these assertions true? If yes, is linked data the solution?
    Type
    a
  17. Weibel, S.L.; Koch, T.: ¬The Dublin Core Metadata Initiative : mission, current activities, and future directions (2000) 0.00
    Abstract
    Metadata is a keystone component for a broad spectrum of applications that are emerging on the Web to help stitch together content and services and make them more visible to users. The Dublin Core Metadata Initiative (DCMI) has led the development of structured metadata to support resource discovery. This international community has, over a period of 6 years and 8 workshops, brought forth:
    • A core standard that enhances cross-disciplinary discovery and has been translated into 25 languages to date;
    • A conceptual framework that supports the modular development of auxiliary metadata components;
    • An open consensus building process that has brought to fruition Australian, European and North American standards with promise as a global standard for resource discovery;
    • An open community of hundreds of practitioners and theorists who have found a common ground of principles, procedures, core semantics, and a framework to support interoperable metadata.
    The 8th Dublin Core Metadata Workshop capped an active year of progress that included standardization of the 15-element core foundation and approval of an initial array of Dublin Core Qualifiers. While there is important work to be done to promote stability and increased adoption of the Dublin Core, the time has come to look beyond the core elements towards a broader metadata agenda. This report describes the new mission statement of the Dublin Core Metadata Initiative (DCMI) that supports the agenda, recapitulates the important milestones of the year 2000, outlines activities of the 8th DCMI workshop in Ottawa, and summarizes the 2001 workplan.
    Type
    a
  18. Cranefield, S.: Networked knowledge representation and exchange using UML and RDF (2001) 0.00
    Abstract
    This paper proposes the use of the Unified Modeling Language (UML) as a language for modelling ontologies for Web resources and the knowledge contained within them. To provide a mechanism for serialising and processing object diagrams representing knowledge, a pair of XSLT stylesheets has been developed to map from XML Metadata Interchange (XMI) encodings of class diagrams to corresponding RDF schemas and to Java classes representing the concepts in the ontologies. The Java code includes methods for marshalling and unmarshalling object-oriented information between in-memory data structures and RDF serialisations of that information. This provides a convenient mechanism for Java applications to share knowledge on the Web.
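    The transformation step itself is ordinary XSLT processing, so the mechanics can be sketched briefly; the file names below are placeholders, and the stylesheets themselves are the paper's artifacts, not reproduced here.

    # Sketch: apply an XMI-to-RDF-schema stylesheet of the kind the
    # paper describes, using lxml's XSLT support. File names are placeholders.
    from lxml import etree

    xmi_doc = etree.parse("model.xmi")                   # XMI encoding of a class diagram
    transform = etree.XSLT(etree.parse("xmi2rdfs.xsl"))  # hypothetical stylesheet
    rdf_schema = transform(xmi_doc)

    with open("ontology.rdfs", "wb") as out:
        out.write(etree.tostring(rdf_schema, pretty_print=True))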
    Type
    a
  19. Blanchi, C.; Petrone, J.: Distributed interoperable metadata registry (2001) 0.00
    Abstract
    Interoperability between digital libraries depends on effective sharing of metadata. Successful sharing of metadata requires common standards for metadata exchange. Previous efforts have focused on either defining a single metadata standard, such as Dublin Core, or building digital library middleware, such as Z39.50 or Stanford's Digital Library Interoperability Protocol. In this article, we propose a distributed architecture for managing metadata and metadata schema. Instead of normalizing all metadata and schema to a single format, we have focused on building a middleware framework that tolerates heterogeneity. By providing facilities for typing and dynamic conversion of metadata, our system permits continual introduction of new forms of metadata with minimal impact on compatibility.
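    The "typing and dynamic conversion" facility the abstract describes can be illustrated with a toy registry. This is a hypothetical sketch of the idea, not the authors' system; the schema names and sample record are invented.

    # Toy sketch: a registry that tolerates heterogeneity by registering
    # converters between metadata schemas instead of normalizing everything
    # to one format. Schema names and the sample record are invented.
    converters = {}  # (source_schema, target_schema) -> conversion function

    def register(source, target):
        def wrap(fn):
            converters[(source, target)] = fn
            return fn
        return wrap

    @register("dc", "marc-like")
    def dc_to_marc_like(record):
        # map a couple of Dublin Core fields to MARC-like tags
        return {"245": record.get("title"), "100": record.get("creator")}

    def convert(record, source, target):
        if source == target:
            return record
        try:
            return converters[(source, target)](record)
        except KeyError:
            raise ValueError(f"no converter registered from {source} to {target}")

    print(convert({"title": "Registry demo", "creator": "Example, A."}, "dc", "marc-like"))

    New metadata forms can then be introduced by registering another converter, with minimal impact on existing compatibility.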
    Type
    a
  20. Farney, T.: Using Google Tag Manager to share code : Designing shareable tags (2019) 0.00
    Abstract
    Sharing code between libraries is not a new phenomenon, and neither is Google Tag Manager (GTM). GTM launched in 2012 as a JavaScript and HTML manager intended to ease the implementation of different analytics trackers and marketing scripts on a website. However, its tag system can also be used to load other code onto a website. Exporting and importing tags is a simple process that facilitates code sharing without requiring a high degree of coding experience. The entire process involves creating the script tag in GTM, exporting the GTM content into a sharable export file for someone else to import into their library's GTM container, and finally publishing that imported file to push the code to the website it was designed for. This case study provides an example of designing and sharing a GTM container loaded with advanced Google Analytics configurations, such as event tracking and custom dimensions, for other libraries using the Summon discovery service. It also discusses processes for designing GTM tags for export and best practices for importing and testing GTM content created by other libraries, and concludes by evaluating the pros and cons of encouraging GTM use.
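    Because a GTM export is plain JSON, a recipient can inspect a shared container before importing it. A minimal sketch, assuming the standard export layout (a top-level "containerVersion" object holding "tag", "trigger", and "variable" arrays); the file name is hypothetical.

    # Sketch: list what a shared GTM container export would add before importing.
    # Assumes the standard export layout described in the lead-in.
    import json

    with open("gtm_container_export.json", encoding="utf-8") as f:  # hypothetical file
        export = json.load(f)

    version = export.get("containerVersion", {})
    for kind in ("tag", "trigger", "variable"):
        items = version.get(kind, [])
        print(f"{kind}s ({len(items)}):")
        for item in items:
            print(f"  - {item.get('name')} [{item.get('type')}]")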
    Type
    a

Languages

  • e 66
  • d 10

Types

  • a 56
  • n 2
  • s 1