Search (161 results, page 1 of 9)

  • Filter: theme_ss:"Semantische Interoperabilität"
  1. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.31
    Content
     Cf.: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
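     The relevance figure printed after each title (0.31 for this entry) is a Lucene ClassicSimilarity (TF-IDF) score. As a rough sketch, each query term t that matches document d contributes a queryWeight times a fieldWeight, and the sum is scaled by a coordination factor for the share of query clauses that matched:

       \[
         \mathrm{score}(q,d) \;=\; \mathrm{coord}(q,d)\cdot
         \sum_{t \in q}
         \underbrace{\mathrm{idf}(t)\,\mathrm{queryNorm}(q)}_{\text{queryWeight}}
         \cdot
         \underbrace{\sqrt{\mathrm{tf}(t,d)}\;\mathrm{idf}(t)\,\mathrm{fieldNorm}(t,d)}_{\text{fieldWeight}},
         \qquad
         \mathrm{idf}(t) \;=\; 1+\ln\frac{N}{\mathrm{docFreq}(t)+1}
       \]

     For this first result, three matching clauses each contribute about 0.21, a nested clause contributes a third of that, and the coordination factor 4/9 scales the sum of roughly 0.69 down to the 0.31 shown; the exact clause structure varies from query to query.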
  2. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.29
    Abstract
     The thesis presents the construction of a thematically ordered thesaurus based on the subject headings of the Integrated Authority File (GND), using the DDC notations contained in it. The top level of the thesaurus is formed by the DDC subject groups of the German National Library (DNB). The thesaurus is built in a rule-based way, applying Linked Data principles in a SPARQL processor. It serves the automated extraction of metadata from scholarly publications by means of a computational-linguistic extractor that processes digital full texts: the extractor identifies keywords by matching character strings against the labels in the thesaurus, ranks the hits by their relevance in the text, and returns the assigned subject groups in ranked order. The underlying assumption is that the sought subject group appears among the top ranks. The performance of the approach is validated in a three-stage procedure. First, a gold standard is compiled from documents retrievable in the DNB online catalogue, based on their metadata and on the findings of a brief inspection. The documents are distributed across 14 of the subject groups, with a batch of 50 documents each. All documents are processed with the extractor and the categorization results are documented. Finally, the resulting retrieval performance is evaluated both for a hard (binary) categorization and for a ranked return of the subject groups.
    Content
     Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the accompanying presentation at: https://www.google.com/url?sa=i&rct=j&q=&esrc=s&source=web&cd=&ved=0CAIQw7AJahcKEwjwoZzzytz_AhUAAAAAHQAAAAAQAg&url=https%3A%2F%2Fwiki.dnb.de%2Fdownload%2Fattachments%2F252121510%2FDA3%2520Workshop-Gabler.pdf%3Fversion%3D1%26modificationDate%3D1671093170000%26api%3Dv2&psig=AOvVaw0szwENK1or3HevgvIDOfjx&ust=1687719410889597&opi=89978449.
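     The rule-based construction described in the abstract above links GND subject headings to the DDC notations recorded for them via SPARQL. A minimal sketch of such a lookup in Python with rdflib; the file name and the GND ontology property names are illustrative assumptions, not taken from the thesis:

       import rdflib

       # Load a (hypothetical) Turtle extract of GND subject headings.
       g = rdflib.Graph()
       g.parse("gnd-subjects.ttl", format="turtle")

       # Select each heading together with an attached DDC notation.
       # Property names are illustrative; the GND ontology defines several
       # relatedDdc* properties with different degrees of determinacy.
       query = """
       PREFIX gndo: <https://d-nb.info/standards/elementset/gnd#>
       SELECT ?heading ?label ?ddc WHERE {
           ?heading gndo:preferredNameForTheSubjectHeading ?label ;
                    gndo:relatedDdcWithDegreeOfDeterminacy3 ?ddc .
       }
       """
       for heading, label, ddc in g.query(query):
           print(label, ddc)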
  3. Metadata and semantics research : 8th Research Conference, MTSR 2014, Karlsruhe, Germany, November 27-29, 2014, Proceedings (2014) 0.07
    Abstract
    This book constitutes the refereed proceedings of the 8th Metadata and Semantics Research Conference, MTSR 2014, held in Karlsruhe, Germany, in November 2014. The 23 full papers and 9 short papers presented were carefully reviewed and selected from 57 submissions. The papers are organized in several sessions and tracks. They cover the following topics: metadata and linked data: tools and models; (meta) data quality assessment and curation; semantic interoperability, ontology-based data access and representation; big data and digital libraries in health, science and technology; metadata and semantics for open repositories, research information systems and data infrastructure; metadata and semantics for cultural collections and applications; semantics for agriculture, food and environment.
    Content
    Metadata and linked data.- Tools and models.- (Meta)data quality assessment and curation.- Semantic interoperability, ontology-based data access and representation.- Big data and digital libraries in health, science and technology.- Metadata and semantics for open repositories, research information systems and data infrastructure.- Metadata and semantics for cultural collections and applications.- Semantics for agriculture, food and environment.
    LCSH
    Text processing (Computer science)
    Subject
    Text processing (Computer science)
  4. Metadata and semantics research : 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings (2016) 0.07
    Abstract
    This book constitutes the refereed proceedings of the 10th Metadata and Semantics Research Conference, MTSR 2016, held in Göttingen, Germany, in November 2016. The 26 full papers and 6 short papers presented were carefully reviewed and selected from 67 submissions. The papers are organized in several sessions and tracks: Digital Libraries, Information Retrieval, Linked and Social Data, Metadata and Semantics for Open Repositories, Research Information Systems and Data Infrastructures, Metadata and Semantics for Agriculture, Food and Environment, Metadata and Semantics for Cultural Collections and Applications, European and National Projects.
  5. Levergood, B.; Farrenkopf, S.; Frasnelli, E.: The specification of the language of the field and interoperability : cross-language access to catalogues and online libraries (CACAO) (2008) 0.05
    Abstract
    The CACAO Project (Cross-language Access to Catalogues and Online Libraries) has been designed to implement natural language processing and cross-language information retrieval techniques to provide cross-language access to information in libraries, a critical issue in the linguistically diverse European Union. This project report addresses two metadata-related challenges for the library community in this context: "false friends" (identical words having different meanings in different languages) and term ambiguity. The possible solutions involve enriching the metadata with attributes specifying language or the source authority file, or associating potential search terms to classes in a classification system. The European Library will evaluate an early implementation of this work in late 2008.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
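     One of the remedies proposed in the abstract above is enriching metadata with a language attribute so that "false friends" (identical strings with different meanings in different languages) are not conflated at search time. A minimal sketch of that idea in Python; the sample terms and record structure are invented for illustration and are not CACAO's actual data model:

       # Language-tagged index entries: the same string means different
       # things in German and English ("Gift" = poison vs. "gift" = present).
       TERMS = [
           {"value": "Gift", "lang": "de"},
           {"value": "gift", "lang": "en"},
       ]

       def candidates(query, query_lang, index=TERMS):
           """Return only entries whose language tag matches the query language."""
           return [t for t in index
                   if t["value"].lower() == query.lower() and t["lang"] == query_lang]

       print(candidates("Gift", "de"))  # -> [{'value': 'Gift', 'lang': 'de'}]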
  6. Celli, F. et al.: Enabling multilingual search through controlled vocabularies : the AGRIS approach (2016) 0.04
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  7. Miller, E.; Schloss, B.; Lassila, O.; Swick, R.R.: Resource Description Framework (RDF) : model and syntax (1997) 0.03
    Abstract
    RDF - the Resource Description Framework - is a foundation for processing metadata; it provides interoperability between applications that exchange machine-understandable information on the Web. RDF emphasizes facilities to enable automated processing of Web resources. RDF metadata can be used in a variety of application areas; for example: in resource discovery to provide better search engine capabilities; in cataloging for describing the content and content relationships available at a particular Web site, page, or digital library; by intelligent software agents to facilitate knowledge sharing and exchange; in content rating; in describing collections of pages that represent a single logical "document"; for describing intellectual property rights of Web pages, and in many others. RDF with digital signatures will be key to building the "Web of Trust" for electronic commerce, collaboration, and other applications. Metadata is "data about data" or specifically in the context of RDF "data describing web resources." The distinction between "data" and "metadata" is not an absolute one; it is a distinction created primarily by a particular application. Many times the same resource will be interpreted in both ways simultaneously. RDF encourages this view by using XML as the encoding syntax for the metadata. The resources being described by RDF are, in general, anything that can be named via a URI. The broad goal of RDF is to define a mechanism for describing resources that makes no assumptions about a particular application domain, nor defines the semantics of any application domain. The definition of the mechanism should be domain neutral, yet the mechanism should be suitable for describing information about any domain. This document introduces a model for representing RDF metadata and one syntax for expressing and transporting this metadata in a manner that maximizes the interoperability of independently developed web servers and clients. The syntax described in this document is best considered as a "serialization syntax" for the underlying RDF representation model. The serialization syntax is XML, XML being the W3C's work-in-progress to define a richer Web syntax for a variety of applications. RDF and XML are complementary; there will be alternate ways to represent the same RDF data model, some more suitable for direct human authoring. Future work may lead to including such alternatives in this document.
    Content
     RDF Data Model: At the core of RDF is a model for representing named properties and their values. These properties serve both to represent attributes of resources (and in this sense correspond to usual attribute-value-pairs) and to represent relationships between resources. The RDF data model is a syntax-independent way of representing RDF statements. RDF statements that are syntactically very different could mean the same thing. This concept of equivalence in meaning is very important when performing queries, aggregation and a number of other tasks at which RDF is aimed. The equivalence is defined in a clean machine understandable way. Two pieces of RDF are equivalent if and only if their corresponding data model representations are the same. Table of contents: 1. Introduction; 2. RDF Data Model; 3. RDF Grammar; 4. Signed RDF; 5. Examples; 6. Appendix A: Brief Explanation of XML Namespaces.
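     The data model view above implies that two RDF documents are equivalent exactly when they yield the same graph, however different their syntax. A small illustration in Python with rdflib, comparing two syntactically different RDF/XML serializations of the same (made-up) statement:

       from rdflib import Graph
       from rdflib.compare import isomorphic

       # Nested property element ...
       xml_a = """<?xml version="1.0"?>
       <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                xmlns:dc="http://purl.org/dc/elements/1.1/">
         <rdf:Description rdf:about="http://example.org/doc">
           <dc:creator>Ora Lassila</dc:creator>
         </rdf:Description>
       </rdf:RDF>"""

       # ... versus the abbreviated property-attribute form.
       xml_b = """<?xml version="1.0"?>
       <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                xmlns:dc="http://purl.org/dc/elements/1.1/">
         <rdf:Description rdf:about="http://example.org/doc"
                          dc:creator="Ora Lassila"/>
       </rdf:RDF>"""

       g_a = Graph().parse(data=xml_a, format="xml")
       g_b = Graph().parse(data=xml_b, format="xml")
       print(isomorphic(g_a, g_b))  # True: same data model, different syntax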
  8. Folsom, S.M.: Using the Program for Cooperative Cataloging's past and present to project a Linked Data future (2020) 0.03
    Abstract
     Drawing on the PCC's history with linked data and related work, this article identifies and gives context to pressing areas the PCC will need to focus on moving forward. These areas include defining plausible data targets, tractable implementation models and data flows, engaging in related tool development, and participating in the broader linked data community.
    Footnote
     Contribution to a special issue: 'Program for Cooperative Cataloging (PCC): 25 Years Strong and Growing!'.
    Source
     Cataloging and classification quarterly. 58(2020) no.3/4, pp.464-471
  9. Gracy, K.F.; Zeng, M.L.; Skirvin, L.: Exploring methods to improve access to Music resources by aligning library Data with Linked Data : a report of methodologies and preliminary findings (2013) 0.03
    Abstract
    As a part of a research project aiming to connect library data to the unfamiliar data sets available in the Linked Data (LD) community's CKAN Data Hub (thedatahub.org), this project collected, analyzed, and mapped properties used in describing and accessing music recordings, scores, and music-related information used by selected music LD data sets, library catalogs, and various digital collections created by libraries and other cultural institutions. This article reviews current efforts to connect music data through the Semantic Web, with an emphasis on the Music Ontology (MO) and ontology alignment approaches; it also presents a framework for understanding the life cycle of a musical work, focusing on the central activities of composition, performance, and use. The project studied metadata structures and properties of 11 music-related LD data sets and mapped them to the descriptions commonly used in the library cataloging records for sound recordings and musical scores (including MARC records and their extended schema.org markup), and records from 20 collections of digitized music recordings and scores (featuring a variety of metadata structures). The analysis resulted in a set of crosswalks and a unified crosswalk that aligns these properties. The paper reports on detailed methodologies used and discusses research findings and issues. Topics of particular concern include (a) the challenges of mapping between the overgeneralized descriptions found in library data and the specialized, music-oriented properties present in the LD data sets; (b) the hidden information and access points in library data; and (c) the potential benefits of enriching library data through the mapping of properties found in library catalogs to similar properties used by LD data sets.
    Date
    28.10.2013 17:22:17
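     The crosswalks described in the abstract above align properties used in library records with properties used by Linked Data sets such as the Music Ontology and schema.org. A toy sketch of such an alignment in Python; the field identifiers and target property names are illustrative placeholders, not the study's actual mappings:

       # Hypothetical crosswalk from MARC-like fields to LD vocabulary terms.
       CROSSWALK = {
           "marc:245a": ["schema:name"],
           "marc:100a": ["schema:composer"],
           "marc:028a": ["schema:catalogNumber"],
       }

       def align(record):
           """Map a flat, MARC-like record onto the target vocabulary terms."""
           aligned = {}
           for field, value in record.items():
               for target in CROSSWALK.get(field, []):
                   aligned[target] = value
           return aligned

       print(align({"marc:245a": "Eine kleine Nachtmusik",
                    "marc:100a": "Mozart, Wolfgang Amadeus"}))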
  10. Naun, C.C.: Expanding the use of Linked Data value vocabularies in PCC cataloging (2020) 0.03
    Abstract
     In 2015, the PCC Task Group on URIs in MARC was tasked to identify and address the deployment of linked data identifiers in the current MARC format. By way of a pilot test, a survey, MARC discussion papers, proposals, etc., the Task Group initiated and introduced changes to MARC encoding. The Task Group succeeded in laying the groundwork for the transition of library data from MARC to a linked data, RDF environment.
    Footnote
     Contribution to a special issue: 'Program for Cooperative Cataloging (PCC): 25 Years Strong and Growing!'.
    Source
     Cataloging and classification quarterly. 58(2020) no.3/4, pp.449-457
  11. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2014) 0.03
    Abstract
    This article reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The article discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the Dewey Decimal Classification [DDC] (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
    Source
     Cataloging and classification quarterly. 52(2014) no.1, pp.90-101
  12. Schreur, P.E.: ¬The use of Linked Data and artificial intelligence as key elements in the transformation of technical services (2020) 0.03
    Abstract
    Library Technical Services have benefited from numerous stimuli. Although initially looked at with suspicion, transitions such as the move from catalog cards to the MARC formats have proven enormously helpful to libraries and their patrons. Linked data and Artificial Intelligence (AI) hold the same promise. Through the conversion of metadata surrogates (cataloging) to linked open data, libraries can represent their resources on the Semantic Web. But in order to provide some form of controlled access to unstructured data, libraries must reach beyond traditional cataloging to new tools such as AI to provide consistent access to a growing world of full-text resources.
    Source
     Cataloging and classification quarterly. 58(2020) no.5, pp.473-485
  13. Bandholtz, T.; Schulte-Coerne, T.; Glaser, R.; Fock, J.; Keller, T.: iQvoc - open source SKOS(XL) maintenance and publishing tool (2010) 0.03
    Abstract
     iQvoc is a new open source SKOS-XL vocabulary management tool developed by the Federal Environment Agency, Germany, and innoQ Deutschland GmbH. Its immediate purpose is maintaining and publishing reference vocabularies in the upcoming Linked Data cloud of environmental information, but it may easily be adapted to host any SKOS-XL compliant vocabulary. iQvoc is implemented as a Ruby on Rails application running on top of JRuby, the Java implementation of the Ruby programming language. To improve the user experience when editing content, iQvoc makes heavy use of the JavaScript library jQuery.
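     iQvoc itself is a Ruby on Rails application, but the SKOS(XL) data it maintains is ordinary RDF. A minimal sketch of the kind of concept record such a tool manages, written here in Python with rdflib rather than the tool's own Ruby stack; the vocabulary namespace and labels are invented:

       from rdflib import Graph, Literal, Namespace
       from rdflib.namespace import RDF, SKOS

       EX = Namespace("http://example.org/environment-vocab/")

       g = Graph()
       g.bind("skos", SKOS)
       # A concept with multilingual preferred labels and a broader concept.
       g.add((EX.soil, RDF.type, SKOS.Concept))
       g.add((EX.soil, SKOS.prefLabel, Literal("Boden", lang="de")))
       g.add((EX.soil, SKOS.prefLabel, Literal("soil", lang="en")))
       g.add((EX.soil, SKOS.broader, EX.environmentalMedia))

       print(g.serialize(format="turtle"))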
  14. Zumer, M.; Zeng, M.L.; Salaba, A.: FRSAD: conceptual modeling of aboutness (2012) 0.03
    Abstract
    This book offers the first comprehensive exploration of the development and use of the International Federation of Library Association's newly released model for subject authority data, covering everything from the rationale for creating the model to practical steps for implementing it.
    Footnote
     Reviewed in: Cataloging and classification quarterly 52(2014) no.3, pp.343-346 (T. Brenndorfer)
    RSWK
    Datenmodell / Functional Requirements for Subject Authority Data
    Functional Requirements for Subject Authority Data / Inhaltserschließung
    Series
    Third millennium cataloging
    Subject
    Datenmodell / Functional Requirements for Subject Authority Data
    Functional Requirements for Subject Authority Data / Inhaltserschließung
  15. Dobrev, P.; Kalaydjiev, O.; Angelova, G.: From conceptual structures to semantic interoperability of content (2007) 0.03
    Abstract
     Smart applications behave intelligently because they understand, at least partially, the context in which they operate. To do this, they need not only a formal domain model but also formal descriptions of the data they process and of their own operational behaviour. Interoperability of smart applications is based on formalised definitions of all their data and processes. This paper studies the semantic interoperability of data in the case of eLearning and describes an experiment and its assessment. New content is imported into a knowledge-based learning environment without real updates of the original domain model, which is encoded as a knowledge base of conceptual graphs. A component called a mediator enables the import by assigning dummy metadata annotations to the imported items. However, some functionality of the original system is lost when processing the imported content, because proper metadata annotation cannot be associated fully automatically. The paper therefore presents an interoperability scenario in which appropriate content items are viewed from the perspective of the original world and can be (partially) reused there.
    Source
    Conceptual structures: knowledge architectures for smart applications: 15th International Conference on Conceptual Structures, ICCS 2007, Sheffield, UK, July 22 - 27, 2007 ; proceedings. Eds.: U. Priss u.a
  16. Manguinhas, H.; Charles, V.; Isaac, A.; Miles, T.; Lima, A.; Neroulidis, A.; Ginouves, V.; Atsidis, D.; Hildebrand, M.; Brinkerink, M.; Gordea, S.: Linking subject labels in cultural heritage metadata to MIMO vocabulary using CultuurLink (2016) 0.03
    Abstract
    The Europeana Sounds project aims to increase the amount of cultural audio content in Europeana. It also strongly focuses on enriching the metadata records that are aggregated by Europeana. To provide metadata to Europeana, Data Providers are asked to convert their records from the format and model they use internally to a specific profile of the Europeana Data Model (EDM) for sound resources. These metadata include subjects, which typically use a vocabulary internal to each partner.
    Source
    Proceedings of the 15th European Networked Knowledge Organization Systems Workshop (NKOS 2016) co-located with the 20th International Conference on Theory and Practice of Digital Libraries 2016 (TPDL 2016), Hannover, Germany, September 9, 2016. Edi. by Philipp Mayr et al. [http://ceur-ws.org/Vol-1676/=urn:nbn:de:0074-1676-5]
  17. Mayr, P.; Petras, V.: Building a Terminology Network for Search : the KoMoHe project (2008) 0.03
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  18. Jahns, Y.: 20 years SWD : German subject authority data prepared for the future (2011) 0.03
    Abstract
     The German subject headings authority file (SWD) provides a terminologically controlled vocabulary covering all fields of knowledge. The subject headings are determined by the German Rules for the Subject Catalogue. The authority file is produced and updated daily by participating libraries from around Germany, Austria and Switzerland. Over the last twenty years, it has grown into an online-accessible database with about 550,000 headings. They are linked to other thesauri, to French and English equivalents, and to notations of the Dewey Decimal Classification. Thus, it allows multilingual access and searching in dispersed, heterogeneously indexed catalogues. The vocabulary is used not only for cataloguing library materials, but also for web resources and objects in archives and museums.
  19. Piscitelli, F.A.: Library linked data models : library data in the Semantic Web (2019) 0.02
    Abstract
    This exploratory study examined Linked Data (LD) schemas/ontologies and data models proposed or in use by libraries around the world using MAchine Readable Cataloging (MARC) as a basis for comparison of the scope and extensibility of these potential new standards. The researchers selected 14 libraries from national libraries, academic libraries, government libraries, public libraries, multi-national libraries, and cultural heritage centers currently developing Library Linked Data (LLD) schemas. The choices of models, schemas, and elements used in each library's LD can create interoperability issues for LD services because of substantial differences between schemas and data models evolving via local decisions. The researchers observed that a wide variety of vocabularies and ontologies were used for LLD including common web schemas such as Dublin Core (DC)/DCTerms, Schema.org and Resource Description Framework (RDF), as well as deprecated schemas such as MarcOnt and rdagroup1elements. A sharp divide existed as well between LLD schemas using variations of the Functional Requirements for Bibliographic Records (FRBR) data model and those with different data models or even with no listed data model. Libraries worldwide are not using the same elements or even the same ontologies, schemas and data models to describe the same materials using the same general concepts.
    Source
     Cataloging and classification quarterly. 57(2019) no.5, pp.261-277
  20. Victorino, M.; Terto de Holanda, M.; Ishikawa, E.; Costa Oliveira, E.; Chhetri, S.: Transforming open data to linked open data using ontologies for information organization in big data environments of the Brazilian Government : the Brazilian database Government Open Linked Data - DBgoldbr (2018) 0.02
    Abstract
     The Brazilian Government has made a massive volume of structured, semi-structured and non-structured public data available on the web to ensure that the administration is as transparent as possible. Subsequently, providing applications with enough capability to handle this "big data environment", so that vital and decisive information is readily accessible, has become a tremendous challenge. In this environment, data processing is done via new approaches in the area of information and computer science, involving technologies and processes for collecting, representing, storing and disseminating information. Along these lines, this paper presents a conceptual model, the technical architecture and the prototype implementation of a tool, denominated DBgoldbr, designed to classify government public information with the help of ontologies, by transforming open data into linked open data. To achieve this objective, we used "soft system methodology" to identify problems, to collect users' needs and to design solutions according to the objectives of specific groups. The DBgoldbr tool was designed to facilitate the search for open data made available by many Brazilian government institutions, so that this data can be reused to support the evaluation and monitoring of social programs, in order to support the design and management of public policies.
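     The transformation described in the abstract above maps rows of tabular open government data to RDF resources with the help of an ontology, producing linked open data. A minimal sketch of that step in Python with rdflib; the column names, namespace and properties are invented for illustration and are not the DBgoldbr vocabulary:

       import csv, io
       from rdflib import Graph, Literal, Namespace, URIRef

       # A tiny, invented open-data extract.
       CSV_TEXT = "program_id,name,budget\n42,School Meals,1500000\n"

       EX = Namespace("http://example.org/br-gov/")
       g = Graph()

       # One RDF resource per row, one triple per column.
       for row in csv.DictReader(io.StringIO(CSV_TEXT)):
           subject = URIRef(EX["program/" + row["program_id"]])
           g.add((subject, EX.name, Literal(row["name"])))
           g.add((subject, EX.budget, Literal(int(row["budget"]))))

       # Serialize the resulting linked open data as Turtle.
       print(g.serialize(format="turtle"))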

Languages

  • e 139
  • d 20

Types

  • a 106
  • el 54
  • m 14
  • s 7
  • x 6
  • n 2
  • p 2
  • r 2
