Search (1584 results, page 1 of 80)

  • Filter: year_i:[2010 TO 2020}
  1. Ilik, V.; Storlien, J.; Olivarez, J.: Metadata makeover (2014) 0.14
    0.14154318 = product of:
      0.28308636 = sum of:
        0.19869895 = weight(_text_:markup in 2606) [ClassicSimilarity], result of:
          0.19869895 = score(doc=2606,freq=4.0), product of:
            0.27638784 = queryWeight, product of:
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.042049456 = queryNorm
            0.7189135 = fieldWeight in 2606, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2606)
        0.08438742 = product of:
          0.12658113 = sum of:
            0.08670129 = weight(_text_:language in 2606) [ClassicSimilarity], result of:
              0.08670129 = score(doc=2606,freq=6.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.5255505 = fieldWeight in 2606, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2606)
            0.039879844 = weight(_text_:22 in 2606) [ClassicSimilarity], result of:
              0.039879844 = score(doc=2606,freq=2.0), product of:
                0.14725003 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042049456 = queryNorm
                0.2708308 = fieldWeight in 2606, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2606)
          0.6666667 = coord(2/3)
      0.5 = coord(2/4)
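    The tree above is ClassicSimilarity's TF-IDF arithmetic spelled out: each matching term contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = sqrt(tf) * idf * fieldNorm, and coord(m/n) scales a sum by the fraction of query terms matched. A minimal Python sketch, using only the factors listed above, reproduces the top-level score:

      import math

      def term_score(raw_tf, idf, field_norm, query_norm):
          # queryWeight = idf * queryNorm; fieldWeight = sqrt(tf) * idf * fieldNorm
          return (idf * query_norm) * (math.sqrt(raw_tf) * idf * field_norm)

      QUERY_NORM, FIELD_NORM = 0.042049456, 0.0546875

      markup   = term_score(4.0, 6.572923,  FIELD_NORM, QUERY_NORM)  # 0.19869895
      language = term_score(6.0, 3.9232929, FIELD_NORM, QUERY_NORM)  # 0.08670129
      term_22  = term_score(2.0, 3.5018296, FIELD_NORM, QUERY_NORM)  # 0.039879844

      inner = (language + term_22) * (2 / 3)  # coord(2/3)
      print((markup + inner) * (2 / 4))       # coord(2/4) -> ~0.14154318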
    
    Abstract
    Catalogers have become fluent in information technologies such as web design, HyperText Markup Language (HTML), Cascading Style Sheets (CSS), eXtensible Markup Language (XML), and programming languages. The knowledge gained from learning these technologies can be used to experiment with methods of transforming one metadata schema into another using various software solutions. This paper will discuss the use of eXtensible Stylesheet Language Transformations (XSLT) for repurposing, editing, and reformatting metadata. Catalogers have the requisite skills for working with any metadata schema, and if they are excluded from metadata work, libraries are wasting a valuable human resource.
    Date
    10. 9.2000 17:38:22
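    The XSLT workflow this abstract describes can be sketched in a few lines of Python with lxml; the source record and stylesheet below are invented for illustration and are not the authors' own transforms:

      from lxml import etree

      record = etree.XML("<record><title>Example</title></record>")
      xslt = etree.XML("""
      <xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
          xmlns:dc="http://purl.org/dc/elements/1.1/">
        <xsl:template match="/record">
          <dc:title><xsl:value-of select="title"/></dc:title>
        </xsl:template>
      </xsl:stylesheet>
      """)
      transform = etree.XSLT(xslt)
      print(str(transform(record)))  # emits a Dublin Core <dc:title> element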
  2. Iorio, A.D.; Peroni, S.; Poggi, F.; Vitali, F.: Dealing with structural patterns of XML documents (2014) 0.09
    0.08591112 = product of:
      0.17182223 = sum of:
        0.12042975 = weight(_text_:markup in 1345) [ClassicSimilarity], result of:
          0.12042975 = score(doc=1345,freq=2.0), product of:
            0.27638784 = queryWeight, product of:
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.042049456 = queryNorm
            0.43572736 = fieldWeight in 1345, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.046875 = fieldNorm(doc=1345)
        0.05139249 = product of:
          0.07708873 = sum of:
            0.042906005 = weight(_text_:language in 1345) [ClassicSimilarity], result of:
              0.042906005 = score(doc=1345,freq=2.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.26008 = fieldWeight in 1345, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1345)
            0.034182724 = weight(_text_:22 in 1345) [ClassicSimilarity], result of:
              0.034182724 = score(doc=1345,freq=2.0), product of:
                0.14725003 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042049456 = queryNorm
                0.23214069 = fieldWeight in 1345, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1345)
          0.6666667 = coord(2/3)
      0.5 = coord(2/4)
    
    Abstract
    Evaluating collections of XML documents without paying attention to the schema they were written in may give interesting insights into the expected characteristics of a markup language, as well as any regularities that span vocabularies and languages and are more fundamental and frequent than plain content models. In this paper we explore the idea of structural patterns in XML vocabularies by examining the characteristics of elements as they are used, rather than as they are defined. We introduce from the ground up a formal theory of 8 plus 3 structural patterns for XML elements and verify their identifiability in a number of different XML vocabularies. The results allowed the creation of visualization and content-extraction tools that are completely independent of the schema and require no previous knowledge of the semantics and organization of the XML vocabulary of the documents.
    Date
    22. 8.2014 17:08:49
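    In the spirit of the instance-based analysis this abstract describes (though far cruder than the paper's 8-plus-3 pattern theory), element usage can be classified schema-independently in a few lines of Python:

      import xml.etree.ElementTree as ET
      from collections import defaultdict

      def classify(elem):
          # Usage-based classes: does the element carry text, children, both, or neither?
          has_text = bool((elem.text or "").strip()) or any(
              (child.tail or "").strip() for child in elem)
          if has_text and len(elem):  return "mixed"
          if len(elem):               return "structured"
          if has_text:                return "text-only"
          return "empty"

      usage = defaultdict(set)
      root = ET.fromstring("<p>some <em>inline</em> markup<br/></p>")
      for elem in root.iter():
          usage[elem.tag].add(classify(elem))
      print(dict(usage))  # {'p': {'mixed'}, 'em': {'text-only'}, 'br': {'empty'}}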
  3. What is Schema.org? (2011) 0.07
    0.07374786 = product of:
      0.29499143 = sum of:
        0.29499143 = weight(_text_:markup in 4437) [ClassicSimilarity], result of:
          0.29499143 = score(doc=4437,freq=12.0), product of:
            0.27638784 = queryWeight, product of:
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.042049456 = queryNorm
            1.0673097 = fieldWeight in 4437, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.046875 = fieldNorm(doc=4437)
      0.25 = coord(1/4)
    
    Abstract
    This site provides a collection of schemas, i.e., HTML tags, that webmasters can use to mark up their pages in ways recognized by major search providers. Search engines including Bing, Google and Yahoo! rely on this markup to improve the display of search results, making it easier for people to find the right web pages. Many sites are generated from structured data, which is often stored in databases. When this data is formatted into HTML, it becomes very difficult to recover the original structured data. Many applications, especially search engines, can benefit greatly from direct access to this structured data. On-page markup enables search engines to understand the information on web pages and provide richer search results, making it easier for users to find relevant information on the web. Markup can also enable new tools and applications that make use of the structure. A shared markup vocabulary makes it easier for webmasters to decide on a markup schema and get the maximum benefit for their efforts. So, in the spirit of sitemaps.org, Bing, Google and Yahoo! have come together to provide a shared collection of schemas that webmasters can use.
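    JSON-LD is one of the serializations the search providers now accept for this vocabulary; a minimal sketch of a schema.org description (the record values are invented), of the kind a page would carry in a script block of type application/ld+json:

      import json

      snippet = {
          "@context": "https://schema.org",
          "@type": "Book",
          "name": "An Example Title",
          "author": {"@type": "Person", "name": "Jane Author"},
          "datePublished": "2011",
      }
      print(json.dumps(snippet, indent=2))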
  4. Mayo, D.; Bowers, K.: The devil's shoehorn : a case study of EAD to ArchivesSpace migration at a large university (2017) 0.07
    0.07167879 = product of:
      0.14335757 = sum of:
        0.10035812 = weight(_text_:markup in 3373) [ClassicSimilarity], result of:
          0.10035812 = score(doc=3373,freq=2.0), product of:
            0.27638784 = queryWeight, product of:
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.042049456 = queryNorm
            0.36310613 = fieldWeight in 3373, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3373)
        0.042999458 = product of:
          0.064499184 = sum of:
            0.03575501 = weight(_text_:language in 3373) [ClassicSimilarity], result of:
              0.03575501 = score(doc=3373,freq=2.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.21673335 = fieldWeight in 3373, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3373)
            0.028744178 = weight(_text_:29 in 3373) [ClassicSimilarity], result of:
              0.028744178 = score(doc=3373,freq=2.0), product of:
                0.14791684 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042049456 = queryNorm
                0.19432661 = fieldWeight in 3373, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3373)
          0.6666667 = coord(2/3)
      0.5 = coord(2/4)
    
    Abstract
    A band of archivists and IT professionals at Harvard took on a project to convert nearly two million descriptions of archival collection components from marked-up text into the ArchivesSpace archival metadata management system. Starting in the mid-1990s, Harvard was an alpha implementer of EAD, an SGML (later XML) text markup language for electronic inventories, indexes, and finding aids that archivists use to wend their way through the sometimes quirky filing systems that bureaucracies establish for their records, or the utter chaos in which some individuals keep their personal archives. These pathfinder documents, designed to cope with messy reality, can themselves be difficult to classify. Portions of them are rigorously structured, while other parts are narrative. Early documents predate the establishment of the standard; many feature idiosyncratic encoding that had been through several machine conversions, while others were freshly encoded and fairly consistent. In this paper, we cover the practical and technical challenges involved in preparing a large (900 MiB) corpus of XML for ingest into an open-source archival information system (ArchivesSpace). This case study gives an overview of the project; discusses problem discovery and problem solving; addresses the technical challenges, analysis, solutions, and decisions; and provides information on the tools produced and lessons learned. The authors of this piece are Kate Bowers, Collections Services Archivist for Metadata, Systems, and Standards at the Harvard University Archives, and Dave Mayo, a Digital Library Software Engineer for Harvard's Library and Technology Services. Kate was heavily involved in both metadata analysis and later problem solving, while Dave was the sole full-time developer assigned to the migration project.
    Date
    31. 1.2017 13:29:56
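    As a flavor of the kind of mechanical cleanup such a migration involves, here is a toy normalization pass in Python; unittitle is a real EAD element, but the file name and the fix itself are illustrative only, not part of the Harvard pipeline:

      import xml.etree.ElementTree as ET

      tree = ET.parse("finding_aid.xml")
      for el in tree.iter("unittitle"):
          if el.text:
              # Collapse the stray whitespace that hand-encoded or
              # machine-converted EAD often accumulates.
              el.text = " ".join(el.text.split())
      tree.write("finding_aid_clean.xml", encoding="utf-8")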
  5. Salminen, A.; Jauhiainen, E.; Nurmeksela, R.: A life cycle model of XML documents (2014) 0.07
    0.06736588 = product of:
      0.13473175 = sum of:
        0.12042975 = weight(_text_:markup in 1553) [ClassicSimilarity], result of:
          0.12042975 = score(doc=1553,freq=2.0), product of:
            0.27638784 = queryWeight, product of:
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.042049456 = queryNorm
            0.43572736 = fieldWeight in 1553, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.046875 = fieldNorm(doc=1553)
        0.014302002 = product of:
          0.042906005 = sum of:
            0.042906005 = weight(_text_:language in 1553) [ClassicSimilarity], result of:
              0.042906005 = score(doc=1553,freq=2.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.26008 = fieldWeight in 1553, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1553)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    Electronic documents produced in business processes are valuable information resources for organizations. In many cases they have to be accessible long after the life of the business processes or information systems in connection with which they were created. To improve the management and preservation of documents, organizations are deploying Extensible Markup Language (XML) as a standardized format for documents. The goal of this paper is to increase understanding of XML document management and provide a framework to enable the analysis and description of the management of XML documents throughout their life. We followed the design science approach. We introduce a document life cycle model consisting of five phases, and for each phase we describe the typical activities related to the management of XML documents. Furthermore, we identify the typical actors, systems, and types of content items associated with the activities of the phases. We demonstrate the use of the model in two case studies: one concerning the State Budget Proposal of the Finnish government and the other concerning a faculty council meeting agenda at a university.
  6. Moilanen, K.; Niemi, T.; Näppilä, T.; Kuru, M.: A visual XML dataspace approach for satisfying ad hoc information needs (2015) 0.07
    0.06736588 = product of:
      0.13473175 = sum of:
        0.12042975 = weight(_text_:markup in 2269) [ClassicSimilarity], result of:
          0.12042975 = score(doc=2269,freq=2.0), product of:
            0.27638784 = queryWeight, product of:
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.042049456 = queryNorm
            0.43572736 = fieldWeight in 2269, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.046875 = fieldNorm(doc=2269)
        0.014302002 = product of:
          0.042906005 = sum of:
            0.042906005 = weight(_text_:language in 2269) [ClassicSimilarity], result of:
              0.042906005 = score(doc=2269,freq=2.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.26008 = fieldWeight in 2269, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2269)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    Dataspace systems constitute a recent data management approach that supports better cooperation among autonomous and heterogeneous data sources with which the user is initially unfamiliar. A central idea is to gradually increase the user's knowledge about the contents, structures, and semantics of the data sources in the dataspace. Without this knowledge, the user is not able to make sophisticated queries. The dataspace systems proposed so far are usually application specific. In contrast, our idea in this paper is to develop an application-independent extensible markup language (XML) dataspace system with versatile facilities. Unlike the other proposed dataspace systems, we show that it is possible to build an interface based on conventional visual tools with which the user can satisfy his or her sophisticated information needs. In our system, the user needs to master neither programming techniques nor the XML syntax, which provides a good starting point for its declarative use.
  7. Clark, J.A.; Young, S.W.H.: Building a better book in the browser : using Semantic Web technologies and HTML5 (2015) 0.07
    0.06596371 = product of:
      0.13192742 = sum of:
        0.12042975 = weight(_text_:markup in 2116) [ClassicSimilarity], result of:
          0.12042975 = score(doc=2116,freq=2.0), product of:
            0.27638784 = queryWeight, product of:
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.042049456 = queryNorm
            0.43572736 = fieldWeight in 2116, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.046875 = fieldNorm(doc=2116)
        0.011497671 = product of:
          0.03449301 = sum of:
            0.03449301 = weight(_text_:29 in 2116) [ClassicSimilarity], result of:
              0.03449301 = score(doc=2116,freq=2.0), product of:
                0.14791684 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042049456 = queryNorm
                0.23319192 = fieldWeight in 2116, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2116)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    The library as place and service continues to be shaped by the legacy of the book. The book itself has evolved in recent years, with various technologies vying to become the next dominant book form. In this article, we discuss the design and development of our prototype software from Montana State University (MSU) Library for presenting books inside of web browsers. The article outlines the contextual background and technological potential for publishing traditional book content through the web using open standards. Our prototype demonstrates the application of HTML5, structured data with RDFa and Schema.org markup, linked data components using JSON-LD, and an API-driven data model. We examine how this open web model impacts discovery, reading analytics, eBook production, and machine-readability for libraries considering how to unite software development and publishing.
    Source
    Code4Lib journal. Issue 29(2015), [http://journal.code4lib.org/issues/issue29]
  8. Sakr, S.; Wylot, M.; Mutharaju, R.; Le-Phuoc, D.; Fundulaki, I.: Linked data : storing, querying, and reasoning (2018) 0.06
    0.06351315 = product of:
      0.1270263 = sum of:
        0.11354225 = weight(_text_:markup in 5329) [ClassicSimilarity], result of:
          0.11354225 = score(doc=5329,freq=4.0), product of:
            0.27638784 = queryWeight, product of:
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.042049456 = queryNorm
            0.4108077 = fieldWeight in 5329, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.03125 = fieldNorm(doc=5329)
        0.013484059 = product of:
          0.040452175 = sum of:
            0.040452175 = weight(_text_:language in 5329) [ClassicSimilarity], result of:
              0.040452175 = score(doc=5329,freq=4.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.2452058 = fieldWeight in 5329, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5329)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    LCSH
    RDF (Document markup language)
    Subject
    RDF (Document markup language)
  9. Cui, H.: CharaParser for fine-grained semantic annotation of organism morphological descriptions (2012) 0.06
    0.05613823 = product of:
      0.11227646 = sum of:
        0.10035812 = weight(_text_:markup in 45) [ClassicSimilarity], result of:
          0.10035812 = score(doc=45,freq=2.0), product of:
            0.27638784 = queryWeight, product of:
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.042049456 = queryNorm
            0.36310613 = fieldWeight in 45, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.0390625 = fieldNorm(doc=45)
        0.011918336 = product of:
          0.03575501 = sum of:
            0.03575501 = weight(_text_:language in 45) [ClassicSimilarity], result of:
              0.03575501 = score(doc=45,freq=2.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.21673335 = fieldWeight in 45, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=45)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    Biodiversity information organization is looking beyond the traditional document-level metadata approach and has started to look into factual content in textual documents to support more intelligent and semantic-based access. This article reports the development and evaluation of CharaParser, a software application for semantic annotation of morphological descriptions. CharaParser annotates semistructured morphological descriptions in such a detailed manner that all stated morphological characters of an organ are marked up in Extensible Markup Language format. Using an unsupervised machine learning algorithm and a general-purpose syntactic parser as its key annotation tools, CharaParser requires minimal additional knowledge engineering work and seems to perform well across different description collections and/or taxon groups. The system has been formally evaluated on over 1,000 sentences randomly selected from Volume 19 of Flora of North America and Part H of the Treatise on Invertebrate Paleontology. CharaParser reaches or exceeds 90% in sentence-wise recall and precision, exceeding other similar systems reported in the literature. It also significantly outperforms a heuristic rule-based system we developed earlier. We also observe early evidence that enriching the lexicon of a syntactic parser with domain terms alone may be sufficient to adapt the parser for the biodiversity domain, which may have significant implications.
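    A toy illustration of the kind of fine-grained markup described here, built with Python's ElementTree; the element and attribute names are invented for the sketch, not CharaParser's actual output schema:

      import xml.etree.ElementTree as ET

      desc = ET.Element("description")
      char = ET.SubElement(desc, "character",
                           organ="leaf", name="shape", value="ovate")
      char.text = "leaves ovate"
      print(ET.tostring(desc, encoding="unicode"))
      # <description><character organ="leaf" name="shape" value="ovate">leaves ovate</character></description>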
  10. Li, Z.: A domain specific search engine with explicit document relations (2013) 0.06
    0.05613823 = product of:
      0.11227646 = sum of:
        0.10035812 = weight(_text_:markup in 1210) [ClassicSimilarity], result of:
          0.10035812 = score(doc=1210,freq=2.0), product of:
            0.27638784 = queryWeight, product of:
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.042049456 = queryNorm
            0.36310613 = fieldWeight in 1210, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1210)
        0.011918336 = product of:
          0.03575501 = sum of:
            0.03575501 = weight(_text_:language in 1210) [ClassicSimilarity], result of:
              0.03575501 = score(doc=1210,freq=2.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.21673335 = fieldWeight in 1210, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1210)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    The current web consists of documents that are highly heterogeneous and hard for machines to understand. The Semantic Web is a progressive movement of the World Wide Web, aiming at converting the current web of unstructured documents into a web of data. In the Semantic Web, web documents are annotated with metadata using standardized ontology languages. These annotated documents are directly processable by machines, which greatly improves their usability and usefulness. At Ericsson, similar problems occur. Massive numbers of documents with well-defined structures are being created. Though these documents contain domain-specific knowledge and can have rich relations, they are currently managed by a traditional search engine, which ignores the rich domain-specific information and presents little of it to users. Motivated by the Semantic Web, we aim to find standard ways to process these documents, extract rich domain-specific information, and annotate these data to documents with formal markup languages. We propose this project to develop a domain-specific search engine for processing different documents and building explicit relations for them. This research project consists of three main focuses: examining different domain-specific documents and finding ways to extract their metadata; integrating a text search engine with an ontology server; and exploring novel ways to build relations for documents. We implement this system and demonstrate its functions. As a prototype, the system provides the required features and will be extended in the future.
  11. Gracy, K.F.; Zeng, M.L.; Skirvin, L.: Exploring methods to improve access to music resources by aligning library data with Linked Data : a report of methodologies and preliminary findings (2013) 0.04
    0.043941326 = product of:
      0.08788265 = sum of:
        0.080286495 = weight(_text_:markup in 1096) [ClassicSimilarity], result of:
          0.080286495 = score(doc=1096,freq=2.0), product of:
            0.27638784 = queryWeight, product of:
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.042049456 = queryNorm
            0.2904849 = fieldWeight in 1096, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.03125 = fieldNorm(doc=1096)
        0.0075961608 = product of:
          0.022788482 = sum of:
            0.022788482 = weight(_text_:22 in 1096) [ClassicSimilarity], result of:
              0.022788482 = score(doc=1096,freq=2.0), product of:
                0.14725003 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042049456 = queryNorm
                0.15476047 = fieldWeight in 1096, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1096)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    As part of a research project aiming to connect library data to the unfamiliar data sets available in the Linked Data (LD) community's CKAN Data Hub (thedatahub.org), this study collected, analyzed, and mapped properties used in describing and accessing music recordings, scores, and music-related information used by selected music LD data sets, library catalogs, and various digital collections created by libraries and other cultural institutions. This article reviews current efforts to connect music data through the Semantic Web, with an emphasis on the Music Ontology (MO) and ontology alignment approaches; it also presents a framework for understanding the life cycle of a musical work, focusing on the central activities of composition, performance, and use. The project studied the metadata structures and properties of 11 music-related LD data sets and mapped them to the descriptions commonly used in library cataloging records for sound recordings and musical scores (including MARC records and their extended schema.org markup), and to records from 20 collections of digitized music recordings and scores (featuring a variety of metadata structures). The analysis resulted in a set of crosswalks and a unified crosswalk that aligns these properties. The paper reports on the detailed methodologies used and discusses research findings and issues. Topics of particular concern include (a) the challenges of mapping between the overgeneralized descriptions found in library data and the specialized, music-oriented properties present in the LD data sets; (b) the hidden information and access points in library data; and (c) the potential benefits of enriching library data through the mapping of properties found in library catalogs to similar properties used by LD data sets.
    Date
    28.10.2013 17:22:17
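    A crosswalk of the kind this study produces can be represented as a simple mapping; the pairings below are illustrative assumptions for the sketch, not the article's published alignment:

      # Maps MARC fields to candidate Linked Data properties.
      CROSSWALK = {
          "245$a (title)":     ["dc:title", "schema:name"],
          "100$a (creator)":   ["dc:creator", "foaf:maker"],
          "511$a (performer)": ["mo:performer"],
      }

      def align(marc_field):
          return CROSSWALK.get(marc_field, [])

      print(align("245$a (title)"))  # ['dc:title', 'schema:name']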
  12. Iorio, A. di; Peroni, S.; Vitali, F.: A Semantic Web approach to everyday overlapping markup (2011) 0.04
    0.043456346 = product of:
      0.17382538 = sum of:
        0.17382538 = weight(_text_:markup in 4749) [ClassicSimilarity], result of:
          0.17382538 = score(doc=4749,freq=6.0), product of:
            0.27638784 = queryWeight, product of:
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.042049456 = queryNorm
            0.62891835 = fieldWeight in 4749, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4749)
      0.25 = coord(1/4)
    
    Abstract
    Overlapping structures in XML are not symptoms of a misunderstanding of the intrinsic characteristics of a text document nor evidence of extreme scholarly requirements far beyond those needed by the most common XML-based applications. On the contrary, overlaps have started to appear in a large number of incredibly popular applications hidden under the guise of syntactical tricks to the basic hierarchy of the XML data format. Unfortunately, syntactical tricks have the drawback that the affected structures require complicated workarounds to support even the simplest query or usage. In this article, we present Extremely Annotational Resource Description Framework (RDF) Markup (EARMARK), an approach to overlapping markup that simplifies and streamlines the management of multiple hierarchies on the same content, and provides an approach to sophisticated queries and usages over such structures without the need of ad-hoc applications, simply by using Semantic Web tools and languages. We compare how relevant tasks (e.g., the identification of the contribution of an author in a word processor document) are of some substantial complexity when using the original data format and become more or less trivial when using EARMARK. We finally evaluate positively the memory and disk requirements of EARMARK documents in comparison to Open Office and Microsoft Word XML-based formats.
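    The overlap problem is easy to see with stand-off ranges; the plain-Python sketch below shows only the underlying idea (two ranges that cross, which a single XML hierarchy cannot nest), not the EARMARK vocabulary itself:

      text = "Alice replied she was certain"
      annotations = [
          ("sentence", 0, 21),  # "Alice replied she was"
          ("speech",   6, 29),  # "replied she was certain"
      ]

      def crossing(a, b):
          # True when the ranges overlap but neither contains the other.
          (_, s1, e1), (_, s2, e2) = a, b
          overlap = s1 < e2 and s2 < e1
          nested = (s1 <= s2 and e2 <= e1) or (s2 <= s1 and e1 <= e2)
          return overlap and not nested

      print(crossing(*annotations))  # True: these need two hierarchies in XML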
  13. Pfeiffer, S.: Entwicklung einer Ontologie für die wissensbasierte Erschließung des ISDC-Repository und die Visualisierung kontextrelevanter semantischer Zusammenhänge (2010) 0.04
    0.041024618 = product of:
      0.082049236 = sum of:
        0.07025068 = weight(_text_:markup in 4658) [ClassicSimilarity], result of:
          0.07025068 = score(doc=4658,freq=2.0), product of:
            0.27638784 = queryWeight, product of:
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.042049456 = queryNorm
            0.2541743 = fieldWeight in 4658, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4658)
        0.011798551 = product of:
          0.035395652 = sum of:
            0.035395652 = weight(_text_:language in 4658) [ClassicSimilarity], result of:
              0.035395652 = score(doc=4658,freq=4.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.21455508 = fieldWeight in 4658, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4658)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    Today, information of every kind is accessible to a broad public via the World Wide Web (WWW). It remains difficult, however, to prepare existing documents so that their content can be interpreted by machines. The Semantic Web, a further development of the WWW, aims to change this by offering web content in machine-understandable formats, which enables automated processes for query optimization and for interlinking knowledge bases. The Web Ontology Language (OWL) is one language in which knowledge can be described and stored (see chapter 4, OWL). The software product Protégé supports the OWL standard, which is why most of the modeling work was carried out in Protégé. At present, a user searching for information on the Internet is in most cases supported only by the keyword indexing of document content performed by search engine operators; that is, documents can be searched only for a particular word or phrase. The resulting list of hits must then be reviewed and ranked by relevance by the user, which can be a very time-consuming and labor-intensive process. This is precisely where the Semantic Web can contribute substantially to preparing information for the user, since the returned results have already been semantically checked and linked. Irrelevant information sources are thus excluded from the output from the start, which speeds up finding the sought documents and information in a given knowledge domain.
    Various approaches are being pursued to improve the interlinking of data, information, and knowledge on the WWW. Besides the Semantic Web in its various forms, there are other ideas and concepts that support the linking of knowledge. Forums, social networks, and wikis are one way of exchanging knowledge. In wikis, knowledge is bundled into articles in order to make it available to a broad audience; information offered there should nevertheless be viewed critically, since in most cases the authors of the articles bear no responsibility for the published content. The Web of Linked Data offers another way of interlinking knowledge: structured data on the WWW are connected to other data sources by references, so that in the course of a search the user is pointed to thematically related, linked data sources. The geoscientific metadata stored at the GFZ, among other places in the Information System and Data Center (ISDC), together with their contents and mutual relationships, are to be modeled in this thesis as an ontology using the language constructs of OWL. This ontology is intended to decisively improve the representation and retrieval of ISDC-specific domain knowledge through the semantic interlinking of persistent ISDC metadata. The modeling options presented in this thesis, first with the Extensible Markup Language (XML) and later with OWL, map the existing metadata holdings onto a semantic level (see Figure 2). Through the defined use of the semantics available in OWL, machines can derive added value from the metadata and make it available to the user. Geoscientific information, data, and knowledge can be placed in semantic context and represented intelligibly. Supporting information, such as images of the instruments, platforms, or persons recorded in the ISDC, can also easily be incorporated into the ontology. Queries about geoscientific phenomena can be posed and answered without expert knowledge of the relevant concepts and relationships. Information retrieval and presentation gain in quality and exploit the existing resources to their full extent.
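    A minimal sketch of such OWL modeling with Python's rdflib; the namespace and names are invented for illustration, not the thesis's actual ISDC ontology:

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import OWL, RDF, RDFS

      EX = Namespace("http://example.org/isdc#")
      g = Graph()
      g.add((EX.Instrument, RDF.type, OWL.Class))   # declare a class
      g.add((EX.sensor1, RDF.type, EX.Instrument))  # an instance of it
      g.add((EX.sensor1, RDFS.label, Literal("Example accelerometer")))
      print(g.serialize(format="turtle"))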
  14. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.04
    0.040543843 = product of:
      0.081087686 = sum of:
        0.066785686 = product of:
          0.20035705 = sum of:
            0.20035705 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.20035705 = score(doc=400,freq=2.0), product of:
                0.35649577 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.042049456 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
        0.014302002 = product of:
          0.042906005 = sum of:
            0.042906005 = weight(_text_:language in 400) [ClassicSimilarity], result of:
              0.042906005 = score(doc=400,freq=2.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.26008 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
    Source
    Graph-Based Methods for Natural Language Processing - proceedings of the Thirteenth Workshop (TextGraphs-13): November 4, 2019, Hong Kong : EMNLP-IJCNLP 2019. Ed.: Dmitry Ustalov
  15. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.03
    0.033392843 = product of:
      0.13357137 = sum of:
        0.13357137 = product of:
          0.4007141 = sum of:
            0.4007141 = weight(_text_:3a in 973) [ClassicSimilarity], result of:
              0.4007141 = score(doc=973,freq=2.0), product of:
                0.35649577 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.042049456 = queryNorm
                1.1240361 = fieldWeight in 973, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.09375 = fieldNorm(doc=973)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Content
    Cf.: http://creativechoice.org/doc/HansJonas.pdf.
  16. Hjoerland, B.: Theories of knowledge organization - theories of knowledge (2017) 0.03
    0.032544672 = product of:
      0.13017869 = sum of:
        0.13017869 = sum of:
          0.05005701 = weight(_text_:language in 3494) [ClassicSimilarity], result of:
            0.05005701 = score(doc=3494,freq=2.0), product of:
              0.16497234 = queryWeight, product of:
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.042049456 = queryNorm
              0.30342668 = fieldWeight in 3494, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3494)
          0.040241845 = weight(_text_:29 in 3494) [ClassicSimilarity], result of:
            0.040241845 = score(doc=3494,freq=2.0), product of:
              0.14791684 = queryWeight, product of:
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.042049456 = queryNorm
              0.27205724 = fieldWeight in 3494, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3494)
          0.039879844 = weight(_text_:22 in 3494) [ClassicSimilarity], result of:
            0.039879844 = score(doc=3494,freq=2.0), product of:
              0.14725003 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.042049456 = queryNorm
              0.2708308 = fieldWeight in 3494, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3494)
      0.25 = coord(1/4)
    
    Pages
    S.22-36
    Source
    Theorie, Semantik und Organisation von Wissen: Proceedings der 13. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und dem 13. Internationalen Symposium der Informationswissenschaft der Higher Education Association for Information Science (HI) Potsdam (19.-20.03.2013): 'Theory, Information and Organization of Knowledge' / Proceedings der 14. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und Natural Language & Information Systems (NLDB) Passau (16.06.2015): 'Lexical Resources for Knowledge Organization' / Proceedings des Workshops der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) auf der SEMANTICS Leipzig (1.09.2014): 'Knowledge Organization and Semantic Web' / Proceedings des Workshops der Polnischen und Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) Cottbus (29.-30.09.2011): 'Economics of Knowledge Production and Organization'. Hrsg. von W. Babik, H.P. Ohly u. K. Weber
  17. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.03
    0.029003926 = product of:
      0.05800785 = sum of:
        0.04452379 = product of:
          0.13357137 = sum of:
            0.13357137 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
              0.13357137 = score(doc=5820,freq=2.0), product of:
                0.35649577 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.042049456 = queryNorm
                0.3746787 = fieldWeight in 5820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.33333334 = coord(1/3)
        0.013484059 = product of:
          0.040452175 = sum of:
            0.040452175 = weight(_text_:language in 5820) [ClassicSimilarity], result of:
              0.040452175 = score(doc=5820,freq=4.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.2452058 = fieldWeight in 5820, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
    Imprint
    Pittsburgh, PA : Carnegie Mellon University, School of Computer Science, Language Technologies Institute
  18. OCLC erweitert WorldCat.org um Linked Data (2012) 0.03
    0.028385563 = product of:
      0.11354225 = sum of:
        0.11354225 = weight(_text_:markup in 340) [ClassicSimilarity], result of:
          0.11354225 = score(doc=340,freq=4.0), product of:
            0.27638784 = queryWeight, product of:
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.042049456 = queryNorm
            0.4108077 = fieldWeight in 340, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.572923 = idf(docFreq=167, maxDocs=44218)
              0.03125 = fieldNorm(doc=340)
      0.25 = coord(1/4)
    
    Content
    "OCLC is taking the first step toward enriching WorldCat with Linked Data by adding Schema.org markup to every entry in WorldCat.org, making WorldCat.org the largest collection of linked bibliographic data on the web. With Schema.org markup on all books, journals, and other bibliographic resources findable in WorldCat.org, the complete publicly accessible WorldCat data become usable by web crawlers such as Google and Bing, which can exploit these metadata in search indexes and other applications. Commercial developers working with web-based services have long been looking for ways to tap the potential of Linked Data. The Schema.org initiative, founded in 2011 by Google, Bing, and Yahoo!, provides a core vocabulary for markup that makes it easier for search engines and other web crawlers to use the underlying data. OCLC is working with the Schema.org community to develop and implement an extension of this vocabulary for WorldCat data. Schema.org and library-specific extensions build a valuable bridge between the library world and the so-called consumer web. Schema.org is collaborating with a number of other industries to provide similar extensions for other specific use cases. The opportunities that Linked Data offers the international library community align with OCLC's core strategy of supporting libraries on their way to Web scale. Enriching WorldCat entries with Linked Data increases their value, above all for search engines, developers, and web services outside the library world, and will make it easier for search engines to connect non-library organizations with library data.
    WorldCat has been built up over the past four decades by thousands of participating libraries and is the world's largest online catalog of library holdings. OCLC will continue to work with the library world and the wider developer community to advance Linked Data projects. OCLC sees Schema.org as a timely and trailblazing development toward the adoption of Linked Data technology that will yield tangible benefits for libraries. As further evidence of its role as a provider of linked library data, OCLC recently made the complete DDC 23 set available as Linked Data, comprising more than 23,000 notations and class captions in English. OCLC is committed to the stability and enhanced functionality of bibliographic data as Linked Data. The markup is expected to evolve over the coming months as the community settles on a standard; the current release should therefore be regarded as experimental, and changes are to be expected. OCLC provides the Linked Data release of WorldCat.org under the Open Data Commons Attribution License."
  19. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.03
    0.02782737 = product of:
      0.11130948 = sum of:
        0.11130948 = product of:
          0.33392844 = sum of:
            0.33392844 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.33392844 = score(doc=1826,freq=2.0), product of:
                0.35649577 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.042049456 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  20. Green, R.: Relational aspects of subject authority control : the contributions of classificatory structure (2015) 0.03
    0.02694875 = product of:
      0.107795 = sum of:
        0.107795 = sum of:
          0.050565217 = weight(_text_:language in 2282) [ClassicSimilarity], result of:
            0.050565217 = score(doc=2282,freq=4.0), product of:
              0.16497234 = queryWeight, product of:
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.042049456 = queryNorm
              0.30650726 = fieldWeight in 2282, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2282)
          0.028744178 = weight(_text_:29 in 2282) [ClassicSimilarity], result of:
            0.028744178 = score(doc=2282,freq=2.0), product of:
              0.14791684 = queryWeight, product of:
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.042049456 = queryNorm
              0.19432661 = fieldWeight in 2282, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2282)
          0.028485604 = weight(_text_:22 in 2282) [ClassicSimilarity], result of:
            0.028485604 = score(doc=2282,freq=2.0), product of:
              0.14725003 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.042049456 = queryNorm
              0.19345059 = fieldWeight in 2282, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2282)
      0.25 = coord(1/4)
    
    Abstract
    The structure of a classification system contributes in a variety of ways to representing semantic relationships between its topics in the context of subject authority control. We explore this claim using the Dewey Decimal Classification (DDC) system as a case study. The DDC links its classes into a notational hierarchy, supplemented by a network of relationships between topics, expressed in class descriptions and in the Relative Index (RI). Topics/subjects are expressed both by the natural language text of the caption and notes (including Manual notes) in a class description and by the controlled vocabulary of the RI's alphabetic index, which shows where topics are treated in the classificatory structure. The expression of relationships between topics depends on paradigmatic and syntagmatic relationships between natural language terms in captions, notes, and RI terms; on the meaning of specific note types; and on references recorded between RI terms. The specific means used in the DDC for capturing hierarchical (including disciplinary), equivalence and associative relationships are surveyed.
    Date
    8.11.2015 21:27:22
    Source
    Classification and authority control: expanding resource discovery: proceedings of the International UDC Seminar 2015, 29-30 October 2015, Lisbon, Portugal. Eds.: Slavic, A. and M.I. Cordeiro

Types

  • a 1381
  • el 154
  • m 114
  • s 38
  • x 24
  • r 11
  • b 5
  • i 2
  • n 1
  • p 1
  • v 1
  • z 1
