Search (35 results, page 1 of 2)

  • × theme_ss:"Metadaten"
  • × type_ss:"el"
  1. Baker, T.: A grammar of Dublin Core (2000) 0.04
    0.035738762 = product of:
      0.071477525 = sum of:
        0.03853567 = weight(_text_:wide in 1236) [ClassicSimilarity], result of:
          0.03853567 = score(doc=1236,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.1958137 = fieldWeight in 1236, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=1236)
        0.020906283 = weight(_text_:web in 1236) [ClassicSimilarity], result of:
          0.020906283 = score(doc=1236,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.14422815 = fieldWeight in 1236, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=1236)
        0.012035574 = product of:
          0.024071148 = sum of:
            0.024071148 = weight(_text_:22 in 1236) [ClassicSimilarity], result of:
              0.024071148 = score(doc=1236,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.15476047 = fieldWeight in 1236, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1236)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
    
    Abstract
    Dublin Core is often presented as a modern form of catalog card -- a set of elements (and now qualifiers) that describe resources in a complete package. Sometimes it is proposed as an exchange format for sharing records among multiple collections. The founding principle that "every element is optional and repeatable" reinforces the notion that a Dublin Core description is to be taken as a whole. This paper, in contrast, is based on a much different premise: Dublin Core is a language. More precisely, it is a small language for making a particular class of statements about resources. Like natural languages, it has a vocabulary of word-like terms, the two classes of which -- elements and qualifiers -- function within statements like nouns and adjectives; and it has a syntax for arranging elements and qualifiers into statements according to a simple pattern. Whenever tourists order a meal or ask directions in an unfamiliar language, considerate native speakers will spontaneously limit themselves to basic words and simple sentence patterns along the lines of "I am so-and-so" or "This is such-and-such". Linguists call this pidginization. In such situations, a small phrase book or translated menu can be most helpful. By analogy, today's Web has been called an Internet Commons where users and information providers from a wide range of scientific, commercial, and social domains present their information in a variety of incompatible data models and description languages. In this context, Dublin Core presents itself as a metadata pidgin for digital tourists who must find their way in this linguistically diverse landscape. Its vocabulary is small enough to learn quickly, and its basic pattern is easily grasped. It is well-suited to serve as an auxiliary language for digital libraries. This grammar starts by defining terms. It then follows a 200-year-old tradition of English grammar teaching by focusing on the structure of single statements. It concludes by looking at the growing dictionary of Dublin Core vocabulary terms -- its registry, and at how statements can be used to build the metadata equivalent of paragraphs and compositions -- the application profile.
    Date
    26.12.2011 14:01:22
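
The score breakdowns shown for each hit follow Lucene's ClassicSimilarity (TF-IDF) formula: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and the summed clause scores are multiplied by the coord factor (matching clauses / total clauses). A minimal Python sketch reproducing the "wide" contribution of result 1 from the numbers above:

```python
import math

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """One term's contribution under Lucene ClassicSimilarity (TF-IDF)."""
    idf = 1 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq=..., maxDocs=...)
    tf = math.sqrt(freq)                           # tf(freq) = sqrt(termFreq)
    query_weight = idf * query_norm                # queryWeight
    field_weight = tf * idf * field_norm           # fieldWeight
    return query_weight * field_weight

# Values from the weight(_text_:wide in 1236) clause above:
w = term_score(freq=2.0, doc_freq=1430, max_docs=44218,
               query_norm=0.044416238, field_norm=0.03125)
print(w)  # ~0.0385357, matching 0.03853567 in the explain output

# The document score sums such clause scores and applies coord(3/6) = 0.5:
# (0.03853567 + 0.020906283 + 0.012035574) * 0.5 = 0.035738762
```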
  2. Craven, T.: Changes in metatag descriptions over time (2001) 0.03
    0.034674477 = product of:
      0.10402343 = sum of:
        0.067437425 = weight(_text_:wide in 6601) [ClassicSimilarity], result of:
          0.067437425 = score(doc=6601,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.342674 = fieldWeight in 6601, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6601)
        0.036585998 = weight(_text_:web in 6601) [ClassicSimilarity], result of:
          0.036585998 = score(doc=6601,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.25239927 = fieldWeight in 6601, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6601)
      0.33333334 = coord(2/6)
    
    Abstract
    Four sets of Web pages previously visited in the summer of 2000 were revisited one year later. Of 707 pages containing metatag descriptions in 2000, 586 retained descriptions in 2001, and, of 1,230 pages lacking descriptions in 2000, 101 had descriptions in 2001. Home pages appeared to both lose and change descriptions more than other pages, with about 19% of descriptions changed in the two sets where home pages predominated versus about 12% in the other two sets. About two-thirds of changes involved minor revisions, and changes fell into a wide variety of categories. Some implications for software to assist in description revision are discussed
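
Follow-up studies of this kind are straightforward to script: fetch each page, read its description metatag, and diff against the value stored from the earlier visit. A minimal sketch, assuming the requests and beautifulsoup4 packages (URL and stored description are placeholders):

```python
import requests
from bs4 import BeautifulSoup

def get_meta_description(url: str) -> str | None:
    """Fetch a page and return the content of its description metatag, if any."""
    page = requests.get(url, timeout=10).text
    tag = BeautifulSoup(page, "html.parser").find("meta", attrs={"name": "description"})
    return tag.get("content") if tag else None

# Diff against the description recorded on an earlier visit (hypothetical value).
stored = "Example Corp home page"
current = get_meta_description("https://example.com/")
if current != stored:
    print(f"description changed: {stored!r} -> {current!r}")
```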
  3. Metadata practices on the cutting edge (2004) 0.03
    0.034674477 = product of:
      0.10402343 = sum of:
        0.067437425 = weight(_text_:wide in 2335) [ClassicSimilarity], result of:
          0.067437425 = score(doc=2335,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.342674 = fieldWeight in 2335, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2335)
        0.036585998 = weight(_text_:web in 2335) [ClassicSimilarity], result of:
          0.036585998 = score(doc=2335,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.25239927 = fieldWeight in 2335, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2335)
      0.33333334 = coord(2/6)
    
    Abstract
    The PowerPoint presentations from this one-day workshop on emerging metadata practices are available at this web site. Topics include metadata quality, interoperability, linking metadata, metadata for image collections, RSS, MODS, METS, and MPEG-21. Contributors include representatives from OCLC, CrossRef, the Library of Congress, universities and the private sector. Given the wide range of presentations, if you're interested in metadata you can likely find something of interest here, but no single topic is explored in much depth, and you are sometimes left wondering what the speaker said about a particular slide if there are no accompanying notes.
  4. Broughton, V.: Automatic metadata generation : Digital resource description without human intervention (2007) 0.03
    0.032941855 = product of:
      0.09882557 = sum of:
        0.062718846 = weight(_text_:web in 6048) [ClassicSimilarity], result of:
          0.062718846 = score(doc=6048,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.43268442 = fieldWeight in 6048, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.09375 = fieldNorm(doc=6048)
        0.03610672 = product of:
          0.07221344 = sum of:
            0.07221344 = weight(_text_:22 in 6048) [ClassicSimilarity], result of:
              0.07221344 = score(doc=6048,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.46428138 = fieldWeight in 6048, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6048)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Date
    22. 9.2007 15:41:14
    Theme
    Semantic Web
  5. Baker, T.; Rühle, S.: Übersetzung des Dublin Core Metadata Initiative Abstract Model (DCAM) (2009) 0.02
    0.02476748 = product of:
      0.07430244 = sum of:
        0.04816959 = weight(_text_:wide in 3230) [ClassicSimilarity], result of:
          0.04816959 = score(doc=3230,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.24476713 = fieldWeight in 3230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3230)
        0.026132854 = weight(_text_:web in 3230) [ClassicSimilarity], result of:
          0.026132854 = score(doc=3230,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.18028519 = fieldWeight in 3230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3230)
      0.33333334 = coord(2/6)
    
    Abstract
    This document describes the abstract model for Dublin Core metadata. Its primary goal is to name the elements and structures that are used in Dublin Core metadata. The document defines the elements used and describes how they are combined to form information structures. It presents an information model that is independent of any particular encoding syntax. Such an information model allows us to better understand the descriptions we want to encode, and it eases the development of better mappings and cross-syntax data conversions. This document is aimed primarily at developers of software applications that support Dublin Core metadata, at people developing new syntactic encoding guidelines for Dublin Core metadata, and at people developing metadata profiles based on DCMI or other compatible vocabularies. The DCMI Abstract Model builds on the work of the World Wide Web Consortium (W3C) on the Resource Description Framework (RDF). The use of concepts from RDF is summarized in section 5 below. The DCMI Abstract Model is presented here using UML class diagrams. For readers unfamiliar with such UML class diagrams, a brief guide: lines ending in a block arrow are read as 'is' or 'is a' (e.g. "value is a resource"); lines starting with a diamond are read as 'has' or 'has a' (e.g. "statement has a property URI"). Other relationships are labeled accordingly. The italicized words and phrases in this document are defined in section 7 ("Terminology"). We thank Dan Brickley, Rachel Heery, Alistair Miles, Sarah Pulis, the members of the DCMI Usage Board, and the members of the DCMI Architecture Community for their feedback on earlier versions of this document.
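
The property-value statements that the abstract model describes map directly onto RDF triples. A minimal sketch with the rdflib package (the resource URI is a placeholder):

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS

g = Graph()
doc = URIRef("http://example.org/doc/grammar")  # placeholder URI for the described resource

# Two DCAM-style statements: each pairs a property URI with a value.
g.add((doc, DCTERMS.title, Literal("A grammar of Dublin Core", lang="en")))
g.add((doc, DCTERMS.creator, Literal("Thomas Baker")))

print(g.serialize(format="turtle"))
```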
  6. Baker, T.; Dekkers, M.: Identifying metadata elements with URIs : The CORES resolution (2003) 0.02
    0.019813985 = product of:
      0.059441954 = sum of:
        0.03853567 = weight(_text_:wide in 1199) [ClassicSimilarity], result of:
          0.03853567 = score(doc=1199,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.1958137 = fieldWeight in 1199, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=1199)
        0.020906283 = weight(_text_:web in 1199) [ClassicSimilarity], result of:
          0.020906283 = score(doc=1199,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.14422815 = fieldWeight in 1199, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=1199)
      0.33333334 = coord(2/6)
    
    Abstract
    On 18 November 2002, at a meeting organised by the CORES Project (Information Society Technologies Programme, European Union), several organisations regarded as maintenance authorities for metadata elements achieved consensus on a resolution to assign Uniform Resource Identifiers (URIs) to metadata elements as a useful first step towards the development of mapping infrastructures and interoperability services. The signatories of the CORES Resolution agreed to promote this consensus in their communities and beyond and to implement an action plan in the following six months. Six months having passed, the maintainers of GILS, ONIX, MARC 21, CERIF, DOI, IEEE/LOM, and Dublin Core report on their implementations of the resolution and highlight issues of relevance to establishing good-practice conventions for declaring, identifying, and maintaining metadata elements more generally. In June 2003, the resolution was also endorsed by the maintainers of UNIMARC. The "Resolution on Metadata Element Identifiers", or CORES Resolution, is an agreement among the maintenance organisations for several major metadata standards - GILS, ONIX, MARC 21, UNIMARC, CERIF, DOI®, IEEE/LOM, and Dublin Core - to identify their metadata elements using Uniform Resource Identifiers (URIs). The Uniform Resource Identifier, defined in the IETF RFC 2396 as "a compact string of characters for identifying an abstract or physical resource", has been promoted for use as a universal form of identification by the World Wide Web Consortium. The CORES Resolution, formulated at a meeting organised by the European project CORES in November 2002, included a commitment to publicise the consensus statement to a wider audience of metadata standards initiatives and to implement key points of the agreement within the following six months - specifically, to define URI assignment mechanisms, assign URIs to elements, and formulate policies for the persistence of those URIs. This article marks the passage of six months by reporting on progress made in implementing this common action plan. After presenting the text of the CORES Resolution and its three "clarifications", the article summarises the position of each signatory organisation towards assigning URIs to its metadata elements, noting any practical or strategic problems that may have emerged. These progress reports were based on input from Thomas Baker, José Borbinha, Eliot Christian, Erik Duval, Keith Jeffery, Rebecca Guenther, and Norman Paskin. The article closes with a few general observations about these first steps towards the clarification of shared conventions for the identification of metadata elements and perhaps, one can hope, towards the ultimate goal of improving interoperability among a diversity of metadata communities.
  7. Duval, E.; Hodgins, W.; Sutton, S.; Weibel, S.L.: Metadata principles and practicalities (2002) 0.02
    0.019813985 = product of:
      0.059441954 = sum of:
        0.03853567 = weight(_text_:wide in 1208) [ClassicSimilarity], result of:
          0.03853567 = score(doc=1208,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.1958137 = fieldWeight in 1208, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=1208)
        0.020906283 = weight(_text_:web in 1208) [ClassicSimilarity], result of:
          0.020906283 = score(doc=1208,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.14422815 = fieldWeight in 1208, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=1208)
      0.33333334 = coord(2/6)
    
    Abstract
    For those of us still struggling with basic concepts regarding metadata in this brave new world in which cataloging means much more than MARC, an article like this is welcome indeed. In this 30,000-foot overview of the metadata landscape, broad issues such as modularity, namespaces, extensibility, refinement, and multilingualism are discussed. In addition, "practicalities" like application profiles, syntax and semantics, metadata registries, and automated generation of metadata are explained. Although this piece is not exhaustive of high-level metadata issues, it is nonetheless a useful description of some of the most important issues surrounding metadata creation and use. The rapid changes in the means of information access occasioned by the emergence of the World Wide Web have spawned an upheaval in the means of describing and managing information resources. Metadata is a primary tool in this work, and an important link in the value chain of knowledge economies. Yet there is much confusion about how metadata should be integrated into information systems. How is it to be created or extended? Who will manage it? How can it be used and exchanged? Whence comes its authority? Can different metadata standards be used together in a given environment? These and related questions motivate this paper. The authors hope to make explicit the strong foundations of agreement shared by two prominent metadata initiatives: the Dublin Core Metadata Initiative (DCMI) and the Institute of Electrical and Electronics Engineers (IEEE) Learning Object Metadata (LOM) Working Group. This agreement emerged from a joint metadata taskforce meeting in Ottawa in August 2001. By elucidating shared principles and practicalities of metadata, we hope to raise the level of understanding among our respective (and shared) constituents, so that all stakeholders can move forward more decisively to address their respective problems. The ideas in this paper are divided into two categories. Principles are those concepts judged to be common to all domains of metadata and which might inform the design of any metadata schema or application. Practicalities are the rules of thumb, constraints, and infrastructure issues that emerge from bringing theory into practice in the form of useful and sustainable systems.
  8. Roy, W.; Gray, C.: Preparing existing metadata for repository batch import : a recipe for a fickle food (2018) 0.02
    0.017333968 = product of:
      0.0520019 = sum of:
        0.036957435 = weight(_text_:web in 4550) [ClassicSimilarity], result of:
          0.036957435 = score(doc=4550,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.25496176 = fieldWeight in 4550, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4550)
        0.0150444675 = product of:
          0.030088935 = sum of:
            0.030088935 = weight(_text_:22 in 4550) [ClassicSimilarity], result of:
              0.030088935 = score(doc=4550,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.19345059 = fieldWeight in 4550, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4550)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    In 2016, the University of Waterloo began offering a mediated copyright review and deposit service to support the growth of our institutional repository UWSpace. This resulted in the need to batch import large lists of published works into the institutional repository quickly and accurately. A range of methods have been proposed for harvesting publications metadata en masse, but many technological solutions can easily become detached from a workflow that is both reproducible for support staff and applicable to a range of situations. Many repositories offer the capacity for batch upload via CSV, so our method provides a template Python script that leverages the Habanero library for populating CSV files with existing metadata retrieved from the CrossRef API. In our case, we have combined this with useful metadata contained in a TSV file downloaded from Web of Science in order to enrich our metadata as well. The appeal of this 'low-maintenance' method is that it provides more robust options for gathering metadata semi-automatically, and only requires the user's ability to access Web of Science and the Python program, while still remaining flexible enough for local customizations.
    Date
    10.11.2018 16:27:22
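
A rough sketch of the CSV-population step the authors describe, assuming the habanero CrossRef client; the DOI list, file name, and column choices are illustrative, not the authors' actual template:

```python
import csv
from habanero import Crossref

cr = Crossref()
dois = ["10.1000/demo.1", "10.1000/demo.2"]  # hypothetical input list of published works

with open("batch_import.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["doi", "title", "year", "container"])
    for doi in dois:
        msg = cr.works(ids=doi)["message"]  # CrossRef metadata record for one DOI
        writer.writerow([
            doi,
            (msg.get("title") or [""])[0],
            (msg.get("issued", {}).get("date-parts") or [[""]])[0][0],
            (msg.get("container-title") or [""])[0],
        ])
```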
  9. Dillon, M.: Metadata for Web resources : how metadata works on the Web (2000) 0.01
    0.0147829745 = product of:
      0.08869784 = sum of:
        0.08869784 = weight(_text_:web in 6798) [ClassicSimilarity], result of:
          0.08869784 = score(doc=6798,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.6119082 = fieldWeight in 6798, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.09375 = fieldNorm(doc=6798)
      0.16666667 = coord(1/6)
    
  10. Final Report to the ALCTS CCS SAC Subcommittee on Metadata and Subject Analysis (2001) 0.01
    0.010994123 = product of:
      0.065964736 = sum of:
        0.065964736 = product of:
          0.13192947 = sum of:
            0.13192947 = weight(_text_:programs in 5016) [ClassicSimilarity], result of:
              0.13192947 = score(doc=5016,freq=2.0), product of:
                0.25748047 = queryWeight, product of:
                  5.79699 = idf(docFreq=364, maxDocs=44218)
                  0.044416238 = queryNorm
                0.5123863 = fieldWeight in 5016, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.79699 = idf(docFreq=364, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5016)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    The charge for the SAC Subcommittee on Metadata and Subject Analysis states: Identify and study the major issues surrounding the use of metadata in the subject analysis and classification of digital resources. Provide discussion forums and programs relevant to these issues. Discussion forums should begin by Annual 1998. The continued need for the subcommittee should be reexamined by SAC no later than 2001.
  11. McCallum, S.M.: Extending MARC for bibliographic control in the Web environment : Challenges and alternatives (2000) 0.01
    0.010453141 = product of:
      0.062718846 = sum of:
        0.062718846 = weight(_text_:web in 6803) [ClassicSimilarity], result of:
          0.062718846 = score(doc=6803,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.43268442 = fieldWeight in 6803, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.09375 = fieldNorm(doc=6803)
      0.16666667 = coord(1/6)
    
  12. Mehler, A.; Waltinger, U.: Automatic enrichment of metadata (2009) 0.01
    0.009855317 = product of:
      0.059131898 = sum of:
        0.059131898 = weight(_text_:web in 4840) [ClassicSimilarity], result of:
          0.059131898 = score(doc=4840,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.4079388 = fieldWeight in 4840, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=4840)
      0.16666667 = coord(1/6)
    
    Abstract
    In this talk we present a retrieval model based on social ontologies. More specifically, we utilize the Wikipedia category system in order to perform semantic searches. That is, textual input is used to build queries that retrieve documents which do not necessarily contain any query term but are semantically related to the input text by virtue of their content. We present a desktop that provides this search facility in a web-based environment: the so-called eHumanities Desktop.
    Theme
    Semantic Web
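
The category-based search idea can be approximated against the public MediaWiki API: fetch the categories of two pages and treat the overlap as a crude semantic-relatedness signal. A sketch assuming the requests package (not the authors' actual implementation):

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

def categories(title: str) -> set[str]:
    """Return the Wikipedia categories of an article as a crude semantic profile."""
    params = {"action": "query", "prop": "categories", "titles": title,
              "cllimit": "max", "format": "json"}
    pages = requests.get(API, params=params, timeout=10).json()["query"]["pages"]
    page = next(iter(pages.values()))
    return {c["title"] for c in page.get("categories", [])}

# Documents sharing many categories are taken to be semantically related.
print(sorted(categories("Metadata") & categories("Dublin Core")))
```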
  13. Miller, S.: Introduction to ontology concepts and terminology : DC-2013 Tutorial, September 2, 2013. (2013) 0.01
    0.009855317 = product of:
      0.059131898 = sum of:
        0.059131898 = weight(_text_:web in 1075) [ClassicSimilarity], result of:
          0.059131898 = score(doc=1075,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.4079388 = fieldWeight in 1075, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=1075)
      0.16666667 = coord(1/6)
    
    Content
    Tutorial topics and outline 1. Tutorial Background Overview The Semantic Web, Linked Data, and the Resource Description Framework 2. Ontology Basics and RDFS Tutorial Semantic modeling, domain ontologies, and RDF Vocabulary Description Language (RDFS) concepts and terminology Examples: domain ontologies, models, and schemas Exercises 3. OWL Overview Tutorial Web Ontology Language (OWL): selected concepts and terminology Exercises
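
A minimal example of the RDFS concepts such a tutorial covers: a small domain ontology with a class hierarchy and one typed property, built here with the rdflib package (class and property names are illustrative):

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/onto#")  # illustrative namespace
g = Graph()
g.bind("ex", EX)

# A tiny domain ontology: a class hierarchy plus a property with domain and range.
g.add((EX.Document, RDF.type, RDFS.Class))
g.add((EX.Report, RDF.type, RDFS.Class))
g.add((EX.Report, RDFS.subClassOf, EX.Document))
g.add((EX.hasAuthor, RDF.type, RDF.Property))
g.add((EX.hasAuthor, RDFS.domain, EX.Document))
g.add((EX.hasAuthor, RDFS.range, RDFS.Literal))

print(g.serialize(format="turtle"))
```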
  14. Neumann, M.; Steinberg, J.; Schaer, P.: Web scraping for non-programmers : introducing OXPath for digital library metadata harvesting (2017) 0.01
    0.00973914 = product of:
      0.05843484 = sum of:
        0.05843484 = weight(_text_:web in 3895) [ClassicSimilarity], result of:
          0.05843484 = score(doc=3895,freq=10.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.40312994 = fieldWeight in 3895, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3895)
      0.16666667 = coord(1/6)
    
    Abstract
    Building up new collections for digital libraries is a demanding task. Available data sets have to be extracted, which is usually done with the help of software developers, as it involves custom data handlers or conversion scripts. In cases where the desired data is only available on the data provider's website, custom web scrapers are needed. This may be the case for small to medium-size publishers, research institutes, or funding agencies. As data curation is a typical task done by people with a library and information science background, these people are usually proficient with XML technologies but are not full-stack programmers. Therefore we would like to present a web scraping tool that does not demand that digital library curators program custom web scrapers from scratch. We present the open-source tool OXPath, an extension of XPath, that allows the user to define the data to be extracted from websites in a declarative way. Taking one of our own use cases as an example, we guide you in more detail through the process of creating an OXPath wrapper for metadata harvesting. We also point out some practical things to consider when creating a web scraper (with OXPath). On top of that, we also present a syntax highlighting plugin for the popular text editor Atom that we developed to further support OXPath users and to simplify the authoring process.
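
OXPath itself extends XPath with actions and extraction markers; as a plain-Python analogue of the declarative idea (field names mapped to XPath expressions, not actual OXPath syntax), assuming the lxml package and an invented page structure:

```python
from lxml import html

# Field -> XPath mapping: the declarative core of a scraping wrapper.
FIELDS = {
    "title":  ".//h2/a/text()",
    "author": ".//span[@class='author']/text()",
    "year":   ".//span[@class='year']/text()",
}

def harvest(page_source: str) -> list[dict]:
    """Extract one metadata record per result node from a publication listing page."""
    tree = html.fromstring(page_source)
    return [
        {field: (node.xpath(xp) or [""])[0].strip() for field, xp in FIELDS.items()}
        for node in tree.xpath("//div[@class='result']")
    ]
```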
  15. Husevag, A.-S.R.: Named entities in indexing : a case study of TV subtitles and metadata records (2016) 0.01
    0.009717524 = product of:
      0.058305144 = sum of:
        0.058305144 = product of:
          0.11661029 = sum of:
            0.11661029 = weight(_text_:programs in 3105) [ClassicSimilarity], result of:
              0.11661029 = score(doc=3105,freq=4.0), product of:
                0.25748047 = queryWeight, product of:
                  5.79699 = idf(docFreq=364, maxDocs=44218)
                  0.044416238 = queryNorm
                0.45288983 = fieldWeight in 3105, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.79699 = idf(docFreq=364, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3105)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    This paper explores the possible role of named entities in an automatic indexing process based on text in subtitles. This is done by analyzing entity types, name density, and name frequencies in subtitles and metadata records from different TV programs. The name density in metadata records is much higher than the name density in subtitles, and named entities with high frequencies in the subtitles are more likely to be mentioned in the metadata records. Personal names, geographical names, and names of organizations were the most prominent entity types in both the news subtitles and the news metadata, while persons, works, and locations were the most prominent in culture programs.
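
Name density of this kind can be estimated with an off-the-shelf NER model. A sketch assuming spaCy and its small English model; the label set and the per-100-tokens measure are rough stand-ins for the paper's own definitions:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm
NAME_LABELS = {"PERSON", "GPE", "LOC", "ORG"}  # persons, places, organizations

def name_density(text: str) -> float:
    """Named-entity tokens per 100 tokens, a rough analogue of the paper's measure."""
    doc = nlp(text)
    name_tokens = sum(len(ent) for ent in doc.ents if ent.label_ in NAME_LABELS)
    return 100 * name_tokens / max(len(doc), 1)

print(name_density("Angela Merkel met the OECD delegation in Oslo."))
```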
  16. Heery, R.; Wagner, H.: ¬A metadata registry for the Semantic Web (2002) 0.01
    0.0091464985 = product of:
      0.05487899 = sum of:
        0.05487899 = weight(_text_:web in 1210) [ClassicSimilarity], result of:
          0.05487899 = score(doc=1210,freq=18.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.37859887 = fieldWeight in 1210, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1210)
      0.16666667 = coord(1/6)
    
    Abstract
    The Semantic Web activity is a W3C project whose goal is to enable a 'cooperative' Web where machines and humans can exchange electronic content that has clear-cut, unambiguous meaning. This vision is based on the automated sharing of metadata terms across Web applications. The declaration of schemas in metadata registries advance this vision by providing a common approach for the discovery, understanding, and exchange of semantics. However, many of the issues regarding registries are not clear, and ideas vary regarding their scope and purpose. Additionally, registry issues are often difficult to describe and comprehend without a working example. This article will explore the role of metadata registries and will describe three prototypes, written by the Dublin Core Metadata Initiative. The article will outline how the prototypes are being used to demonstrate and evaluate application scope, functional requirements, and technology solutions for metadata registries. Metadata schema registries are, in effect, databases of schemas that can trace an historical line back to shared data dictionaries and the registration process encouraged by the ISO/IEC 11179 community. New impetus for the development of registries has come with the development activities surrounding creation of the Semantic Web. The motivation for establishing registries arises from domain and standardization communities, and from the knowledge management community. Examples of current registry activity include:
    * Agencies maintaining directories of data elements in a domain area in accordance with ISO/IEC 11179. (This standard specifies good practice for data element definition as well as the registration process. Example implementations are the National Health Information Knowledgebase hosted by the Australian Institute of Health and Welfare and the Environmental Data Registry hosted by the US Environmental Protection Agency.)
    * The xml.org directory of Extensible Markup Language (XML) document specifications facilitating re-use of Document Type Definitions (DTDs), hosted by the Organization for the Advancement of Structured Information Standards (OASIS)
    * The MetaForm database of Dublin Core usage and mappings maintained at the State and University Library in Goettingen
    * The Semantic Web Agreement Group Dictionary, a database of terms for the Semantic Web that can be referred to by humans and software agents
    * LEXML, a multi-lingual and multi-jurisdictional RDF Dictionary for the legal world
    * The SCHEMAS registry maintained by the European Commission funded SCHEMAS project, which indexes several metadata element sets as well as a large number of activity reports describing metadata related activities and initiatives
    Metadata registries essentially provide an index of terms. Given the distributed nature of the Web, there are a number of ways this can be accomplished. For example, the registry could link to terms and definitions in schemas published by implementers and stored locally by the schema maintainer. Alternatively, the registry might harvest various metadata schemas from their maintainers. Registries provide 'added value' to users by indexing schemas relevant to a particular 'domain' or 'community of use' and by simplifying the navigation of terms by enabling multiple schemas to be accessed from one view. An important benefit of this approach is an increase in the reuse of existing terms, rather than users having to reinvent them. Merging schemas to one view leads to harmonization between applications and helps avoid duplication of effort. Additionally, the establishment of registries to index terms actively being used in local implementations facilitates the metadata standards activity by providing implementation experience transferable to the standards-making process.
    Theme
    Semantic Web
  17. What is Schema.org? (2011) 0.01
    0.009052687 = product of:
      0.054316122 = sum of:
        0.054316122 = weight(_text_:web in 4437) [ClassicSimilarity], result of:
          0.054316122 = score(doc=4437,freq=6.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.37471575 = fieldWeight in 4437, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4437)
      0.16666667 = coord(1/6)
    
    Abstract
    This site provides a collection of schemas, i.e., HTML tags, that webmasters can use to mark up their pages in ways recognized by major search providers. Search engines including Bing, Google, and Yahoo! rely on this markup to improve the display of search results, making it easier for people to find the right web pages. Many sites are generated from structured data, which is often stored in databases. When this data is formatted into HTML, it becomes very difficult to recover the original structured data. Many applications, especially search engines, can benefit greatly from direct access to this structured data. On-page markup enables search engines to understand the information on web pages and provide richer search results, making it easier for users to find relevant information on the web. Markup can also enable new tools and applications that make use of the structure. A shared markup vocabulary makes it easier for webmasters to decide on a markup schema and get the maximum benefit for their efforts. So, in the spirit of sitemaps.org, Bing, Google, and Yahoo! have come together to provide a shared collection of schemas that webmasters can use.
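
A small illustration of such markup and the structured data it encodes, assuming the extruct package for parsing (the Book snippet is invented for the example):

```python
import extruct

page = """
<div itemscope itemtype="https://schema.org/Book">
  <span itemprop="name">A grammar of Dublin Core</span>
  <span itemprop="author">Thomas Baker</span>
</div>
"""

# Recover the structured data that search engines read from the on-page markup.
data = extruct.extract(page, syntaxes=["microdata"])
print(data["microdata"])
# roughly: [{'type': 'https://schema.org/Book',
#            'properties': {'name': 'A grammar of Dublin Core',
#                           'author': 'Thomas Baker'}}]
```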
  18. Greenberg, J.; Pattuelli, M.; Parsia, B.; Robertson, W.: Author-generated Dublin Core Metadata for Web Resources : A Baseline Study in an Organization (2002) 0.01
    0.008710952 = product of:
      0.052265707 = sum of:
        0.052265707 = weight(_text_:web in 1281) [ClassicSimilarity], result of:
          0.052265707 = score(doc=1281,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.36057037 = fieldWeight in 1281, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.078125 = fieldNorm(doc=1281)
      0.16666667 = coord(1/6)
    
  19. Cranefield, S.: Networked knowledge representation and exchange using UML and RDF (2001) 0.01
    0.008623403 = product of:
      0.05174041 = sum of:
        0.05174041 = weight(_text_:web in 5896) [ClassicSimilarity], result of:
          0.05174041 = score(doc=5896,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.35694647 = fieldWeight in 5896, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5896)
      0.16666667 = coord(1/6)
    
    Abstract
    This paper proposes the use of the Unified Modeling Language (UML) as a language for modelling ontologies for Web resources and the knowledge contained within them. To provide a mechanism for serialising and processing object diagrams representing knowledge, a pair of XSLT stylesheets has been developed to map from XML Metadata Interchange (XMI) encodings of class diagrams to corresponding RDF schemas and to Java classes representing the concepts in the ontologies. The Java code includes methods for marshalling and unmarshalling object-oriented information between in-memory data structures and RDF serialisations of that information. This provides a convenient mechanism for Java applications to share knowledge on the Web.
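
The stylesheet-driven mapping step is standard XSLT processing; a minimal sketch with the lxml package, where the stylesheet and XMI file names are placeholders:

```python
from lxml import etree

# Apply a stylesheet that maps XMI class diagrams to an RDF schema.
transform = etree.XSLT(etree.parse("xmi2rdfs.xsl"))  # hypothetical stylesheet name
model = etree.parse("ontology.xmi")                  # XMI export of a UML model
rdf_schema = transform(model)
print(str(rdf_schema))  # serialized RDFS output
```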
  20. Frodl, C.; Gros, A.; Rühle, S.: Übersetzung des Singapore Framework für Dublin-Core-Anwendungsprofile (2009) 0.01
    0.008623403 = product of:
      0.05174041 = sum of:
        0.05174041 = weight(_text_:web in 3229) [ClassicSimilarity], result of:
          0.05174041 = score(doc=3229,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.35694647 = fieldWeight in 3229, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3229)
      0.16666667 = coord(1/6)
    
    Abstract
    The Singapore Framework for Dublin Core Application Profiles sets out the conditions for making metadata applications as interoperable as possible and for documenting them in a way that allows reuse. It defines the components that are required and helpful for documenting an application profile, and it describes how these documentation standards relate to standard domain models and to Semantic Web standards. The Singapore Framework is the basis for assessing application profiles with regard to completeness of documentation and conformance with the principles of Web architecture. This document offers a brief overview of the Singapore Framework. Further documents serving as guidance for producing the required documentation are planned.