Search (194 results, page 1 of 10)

  • Filter: theme_ss:"Semantische Interoperabilität"
  1. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.62
    
    Abstract
    Although service-oriented architectures go a long way toward providing interoperability in distributed, heterogeneous environments, managing semantic differences in such environments remains a challenge. We give an overview of the issue of semantic interoperability (integration), provide a semantic characterization of services, and discuss the role of ontologies. Then we analyze four basic models of semantic interoperability that differ with respect to their mapping between service descriptions and ontologies and with respect to where the evaluation of the integration logic is performed. We also provide some guidelines for selecting one of the possible interoperability models.
    Content
    Cf.: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
  2. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.49
    
    Abstract
    The thesis presents the construction of a thematically ordered thesaurus based on the subject headings of the Gemeinsame Normdatei (GND), using the DDC notations they contain. The top level of this thesaurus is formed by the DDC subject groups of the Deutsche Nationalbibliothek. The thesaurus is built in a rule-based fashion, applying Linked Data principles in a SPARQL processor. It serves the automated extraction of metadata from scholarly publications by means of a computational-linguistic extractor that processes digital full texts: the extractor identifies subject headings by matching character strings against the labels in the thesaurus, ranks the hits by relevance within the text, and returns the assigned subject groups in ranked order. The underlying assumption is that the sought subject group appears among the top ranks. The performance of the method is validated in a three-stage procedure. First, based on metadata and the findings of a brief inspection, a gold standard is compiled from documents retrievable in the online catalogue of the DNB; the documents are distributed over 14 of the subject groups, with a lot size of 50 documents each. All documents are then processed with the extractor and the categorization results are documented. Finally, the resulting retrieval performance is assessed both for a hard (binary) categorization and for a ranked return of the subject groups.
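The matching and ranking step described in the abstract can be sketched roughly as follows. This is a minimal illustration only, not Gabler's actual extractor (which is computational-linguistic rather than plain substring matching); the toy thesaurus labels and DDC group names are invented for the example:

```python
from collections import Counter

def rank_subject_groups(text, thesaurus):
    """Match thesaurus labels against a full text and return DDC
    subject groups ranked by how often their labels occur.

    thesaurus: dict mapping a term label to its DDC subject group.
    """
    lowered = text.lower()
    scores = Counter()
    for label, group in thesaurus.items():
        hits = lowered.count(label.lower())  # naive string matching
        if hits:
            scores[group] += hits
    # subject groups in descending order of matched-label frequency
    return [group for group, _ in scores.most_common()]

# Invented toy thesaurus: label -> DDC subject group
thesaurus = {
    "Thesaurus": "020 Library and information sciences",
    "Ontologie": "000 Computer science, knowledge",
    "Grammatik": "400 Language",
}

text = "Der Thesaurus ordnet Begriffe; ein Thesaurus ist keine Ontologie."
print(rank_subject_groups(text, thesaurus))
# → ['020 Library and information sciences', '000 Computer science, knowledge']
```

The thesis's validation assumption is visible even in this sketch: the correct subject group only has to appear among the top-ranked entries of the returned list, not necessarily first.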
    Content
    Master thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. Cf. also the presentation at: https://www.google.com/url?sa=i&rct=j&q=&esrc=s&source=web&cd=&ved=0CAIQw7AJahcKEwjwoZzzytz_AhUAAAAAHQAAAAAQAg&url=https%3A%2F%2Fwiki.dnb.de%2Fdownload%2Fattachments%2F252121510%2FDA3%2520Workshop-Gabler.pdf%3Fversion%3D1%26modificationDate%3D1671093170000%26api%3Dv2&psig=AOvVaw0szwENK1or3HevgvIDOfjx&ust=1687719410889597&opi=89978449.
  3. Linked data and user interaction : the road ahead (2015) 0.02
    
    Abstract
    This collection of research papers provides extensive information on deploying services, concepts, and approaches for using open linked data from libraries and other cultural heritage institutions, with special emphasis on how such institutions can create effective end-user interfaces using open linked data or other datasets. These papers are essential reading for anyone interested in user interface design or the Semantic Web.
    Content
    H. Frank Cervone: Linked data and user interaction : an introduction -- Paola Di Maio: Linked Data Beyond Libraries Towards Universal Interfaces and Knowledge Unification -- Emmanuelle Bermes: Following the user's flow in the Digital Pompidou -- Patrick Le Boeuf: Customized OPACs on the Semantic Web : the OpenCat prototype -- Ryan Shaw, Patrick Golden and Michael Buckland: Using linked library data in working research notes -- Timm Heuss, Bernhard Humm, Tilman Deuschel, Torsten Frohlich, Thomas Herth and Oliver Mitesser: Semantically guided, situation-aware literature research -- Niklas Lindström and Martin Malmsten: Building interfaces on a networked graph -- Natasha Simons, Arve Solland and Jan Hettenhausen: Griffith Research Hub. Cf.: http://d-nb.info/1032799889.
    LCSH
    Semantic Web
    RSWK
    Bibliothek / Linked Data / Benutzer / Mensch-Maschine-Kommunikation / Recherche / Suchverfahren / Aufsatzsammlung
    Linked Data / Online-Katalog / Semantic Web / Benutzeroberfläche / Kongress / Singapur <2013>
    Subject
    Bibliothek / Linked Data / Benutzer / Mensch-Maschine-Kommunikation / Recherche / Suchverfahren / Aufsatzsammlung
    Linked Data / Online-Katalog / Semantic Web / Benutzeroberfläche / Kongress / Singapur <2013>
    Semantic Web
    Theme
    Semantic Web
  4. Miller, E.; Schloss, B.; Lassila, O.; Swick, R.R.: Resource Description Framework (RDF) : model and syntax (1997) 0.02
    
    Abstract
    RDF - the Resource Description Framework - is a foundation for processing metadata; it provides interoperability between applications that exchange machine-understandable information on the Web. RDF emphasizes facilities to enable automated processing of Web resources. RDF metadata can be used in a variety of application areas; for example: in resource discovery to provide better search engine capabilities; in cataloging for describing the content and content relationships available at a particular Web site, page, or digital library; by intelligent software agents to facilitate knowledge sharing and exchange; in content rating; in describing collections of pages that represent a single logical "document"; for describing intellectual property rights of Web pages, and in many others. RDF with digital signatures will be key to building the "Web of Trust" for electronic commerce, collaboration, and other applications. Metadata is "data about data" or specifically in the context of RDF "data describing web resources." The distinction between "data" and "metadata" is not an absolute one; it is a distinction created primarily by a particular application. Many times the same resource will be interpreted in both ways simultaneously. RDF encourages this view by using XML as the encoding syntax for the metadata. The resources being described by RDF are, in general, anything that can be named via a URI. The broad goal of RDF is to define a mechanism for describing resources that makes no assumptions about a particular application domain, nor defines the semantics of any application domain. The definition of the mechanism should be domain neutral, yet the mechanism should be suitable for describing information about any domain. This document introduces a model for representing RDF metadata and one syntax for expressing and transporting this metadata in a manner that maximizes the interoperability of independently developed web servers and clients. 
The syntax described in this document is best considered as a "serialization syntax" for the underlying RDF representation model. The serialization syntax is XML, XML being the W3C's work-in-progress to define a richer Web syntax for a variety of applications. RDF and XML are complementary; there will be alternate ways to represent the same RDF data model, some more suitable for direct human authoring. Future work may lead to including such alternatives in this document.
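The abstract's core claim — that RDF's model is a set of statements about named resources, independent of any particular serialization — can be sketched in a few lines. This is a toy illustration, not the W3C data structures; the example resource URI is invented, though the Dublin Core property URIs and the author names are taken from the entry above:

```python
# RDF's core model: a graph of (subject, predicate, object) statements.
# Subjects and predicates are URIs; objects are URIs or literal values.
triples = set()

def add(subject, predicate, obj):
    triples.add((subject, predicate, obj))

def objects(subject, predicate):
    """Return all objects asserted for a given subject/predicate pair."""
    return {o for s, p, o in triples if s == subject and p == predicate}

# Invented example resource, described with Dublin Core properties
doc = "http://example.org/doc/1"
add(doc, "http://purl.org/dc/elements/1.1/creator", "Ora Lassila")
add(doc, "http://purl.org/dc/elements/1.1/creator", "Ralph Swick")
add(doc, "http://purl.org/dc/elements/1.1/title", "RDF Model and Syntax")

print(objects(doc, "http://purl.org/dc/elements/1.1/creator"))
```

Any serialization — the XML syntax the document defines, or the alternatives it anticipates — is just a way of writing this same triple set down.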
    Theme
    Semantic Web
  5. Schneider, R.: Web 3.0 ante portas? : Integration von Social Web und Semantic Web (2008) 0.02
    
    Abstract
    The Internet as a medium is changing, and with it its conditions of publication and reception. What opportunities are offered by the two visions of its future currently being discussed in parallel, the Social Web and the Semantic Web? To answer this question, the article examines the foundations of both models in terms of application and technology, but also highlights their shortcomings as well as the added value of a combination appropriate to the medium. Using the grammatical online information system grammis as an example, a strategy for integratively exploiting the respective strengths is sketched.
    Date
    22. 1.2011 10:38:28
    Source
    Kommunikation, Partizipation und Wirkungen im Social Web, Band 1. Ed.: A. Zerfaß et al.
    Theme
    Semantic Web
  6. Altenhöner, R.; Hengel, C.; Jahns, Y.; Junger, U.; Mahnke, C.; Oehlschläger, S.; Werner, C.: Weltkongress Bibliothek und Information, 74. IFLA-Generalkonferenz in Quebec, Kanada : Aus den Veranstaltungen der Division IV Bibliographic Control, der Core Activities ICADS und UNIMARC sowie der Information Technology Section (2008) 0.02
    
    Abstract
    The 74th World Congress of the International Federation of Library Associations and Institutions (IFLA) took place from 10 to 14 August 2008 in Quebec, Canada, under the motto "Libraries without borders: Navigating towards global understanding". More than 3,000 librarians from 150 countries met there for a total of 224 events and various satellite conferences. IFLA President Prof. Dr. Claudia Lux from Berlin chaired the conference and was at the same time the most prominent member of the German delegation, which consisted of more than 80 colleagues and was thus gratifyingly large compared with the previous year. Those who could not attend but would still like an impression of the conference and its atmosphere can get one online: besides the programme and a large number of papers, the IFLA website also links to photos, videos, and blogs. Within IFLA, a reorganization and, connected with it, a new constitution are currently being worked on; among other things, the internal and external flow of information is to be improved. To this end, a newly designed website with a content management system is to go live at the beginning of 2009. The design of the new site was presented in Quebec; a presentation can be found on IFLAnet. As in previous years, this report covers the events of Division IV Bibliographic Control with its sections Bibliography, Cataloguing, Classification and Indexing, and Knowledge Management.
    Content
    Classification and Indexing Section. German member of the Standing Committee: Yvonne Jahns (2005-2009; Deutsche Nationalbibliothek). The section, which sees itself as a forum for international exchange on methods of subject indexing and the importance of subject access to documents and knowledge, offered an interesting programme of papers in Quebec. Three presentations approached the topic "Classification and indexing without language borders" from different angles. Anila Angjeli of the Bibliothèque nationale de France (BnF) presented work from the STITCH project, which deals with semantic searching across differently indexed holdings of cultural heritage institutions. The thesauri and classifications used were converted into a comparable format by means of SKOS and are thus available for searching as a Semantic Web application. Anila Angjeli illustrated how this works very vividly with example searches for medieval manuscripts of the BnF and the Royal Library of the Netherlands. Vivien Petras of the GESIS Social Science Information Centre, Bonn, spoke about the large number of intellectually created cross-concordances between thesauri in the social sciences. She presented the evaluation of the KOMOHE project results and clearly demonstrated the improvement in search results achieved by drawing on the concordances when searching across heterogeneously indexed holdings. Finally, Michael Kreyche of Kent State University, Ohio/USA, presented his impressive long-standing commitment to the accessibility of English-Spanish subject headings. The lcsh-es.org project succeeded in bringing together much preliminary work by American and Spanish libraries in order to build a database of Spanish equivalents of the Library of Congress Subject Headings.
    This database is intended to help indexers and, of course, to benefit the many Spanish-speaking library users in the USA. Spanish is not only one of the most widely spoken languages in the world but, owing to the many immigrants to the USA, of great importance for library work.
    The Standing Committee is already working on the programme for the next World Congress, which stands under the motto "Foundations to Build Future Subject Access". A satellite conference entitled "Past Lessons, Future Challenges in Subject Access", to which everyone interested in classification and indexing processes is cordially invited, is also planned for 20 and 21 August 2009 in Florence. The section's working groups met in Quebec but have not yet been able to present final results: the guidelines for multilingual thesauri have not yet been edited and published, nor can the guidelines for subject access data in national bibliographies be expected to appear before 2009. The participants agreed that continued work on a multilingual dictionary of cataloguing in the age of FRBR and RDA is more important than ever. After the release of the new IFLA website, this online reference work is to find a home on its web pages and awaits contributions from cataloguers all over the world. The working group on the Functional Requirements for Subject Authority Records (FRSAR) met several times in 2008 and most recently put its results up for discussion during the conference of the International Society for Knowledge Organization (ISKO) in Montreal. Unfortunately, no current papers on FRSAR are available online; an international commenting procedure on the model for records of subjects of works within the FRBR model can, however, be expected in 2009. More information can be found, for example, in the new publication "New Perspectives on Subject Indexing and Classification", a memorial volume for our late colleague and former Standing Committee member Magda Heiner-Freiling. The idea for it arose during the IFLA congress in Durban.
    Thanks to the numerous contributors from all over the world, it was possible over the past year to compile an interesting collection on the topics of DDC, verbal subject indexing, terminology work, and multilingual subject searching.
  7. Balakrishnan, U.; Voß, J.: ¬The Cocoda mapping tool (2015)
    
    Abstract
    Since the 90s, we have seen an explosion of information, and with it an increasing need for data and information aggregation systems that store and manage information. However, most information sources apply different Knowledge Organization Systems (KOS) to describe the content of stored data. This heterogeneous mix of KOS across systems complicates access and seamless sharing of information and knowledge. Concordances, also known as cross-concordances or terminology mappings, map different KOS to each other to improve information retrieval in such a heterogeneous mix of systems (Mayr 2010, Keil 2012). Mappings are also considered a valuable and essential working tool for coherent indexing with different terminologies. However, despite efforts at standardization (e.g. SKOS, ISO 25964-2, Keil 2012, Soergel 2011), there is a significant scarcity of concordances, which has made it impossible to establish uniform exchange formats as well as methods and tools for maintaining mappings and making them easily accessible. This is particularly true in the field of library classification schemes. In essence, there is a lack of infrastructure for the provision and exchange of concordances, for their management and quality assessment, and of tools that would enable semi-automatic generation of mappings. The project "coli-conc" therefore aims to address this gap by creating the necessary infrastructure. This includes the specification of a data format for the exchange of concordances (JSKOS), the specification and implementation of web APIs to query concordance databases (JSKOS-API), and a modular web application providing uniform access to knowledge organization systems, concordances and concordance assessments (Cocoda).
    The focus of the project "coli-conc" lies in the semi-automatic creation of mappings between different KOS in general, and between two important library classification schemes in particular: the Dewey Decimal Classification (DDC) and the Regensburger Verbundklassifikation (RVK). In the year 2000, the national libraries of Germany, Austria and Switzerland adopted DDC in an endeavor to develop a nation-wide classification scheme. Historically, however, academic libraries in the German-speaking regions have been using their own home-grown systems, the most prominent and popular being the RVK. With the adoption of DDC, building concordances between DDC and RVK has become imperative, although such concordances are still rare. The delay in building comprehensive concordances between these two systems is due to the major challenges posed by their sheer size (38,000 classes in DDC and ca. 860,000 classes in RVK), the strong disparity in their respective structures, and variations in how concepts are perceived and represented. The challenge is compounded geometrically for any manual attempt in this direction. Although there have been efforts on automatic mapping in recent years (OAEI Library Track 2012-2014 and e.g. Pfeffer 2013), such concordances carry the risk of inaccurate mappings, and the approaches are more suitable for generating mapping suggestions than for the automatic generation of concordances (Lauser 2008; Reiner 2010). The project "coli-conc" will facilitate the creation, evaluation, and reuse of mappings with a public collection of concordances and a web application for mapping management. The proposed presentation will give an introduction to the tools and standards created and planned in the project "coli-conc", including preliminary work on DDC concordances (Balakrishnan 2013), an overview of the software concept and technical architecture (Voß 2015), and a demonstration of the Cocoda web application.
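The abstract above names JSKOS as a planned exchange format for concordances. As a rough illustration only (the field names and URIs below are assumptions modeled on SKOS-style mapping relations, not the normative JSKOS specification), a single DDC-RVK mapping record might be sketched like this:

```python
import json

# Simplified sketch of one mapping in a JSKOS-style concordance.
# Field names and URIs are illustrative assumptions, not the spec.
mapping = {
    "from": {"memberSet": [{"uri": "http://dewey.info/class/025.43/",
                            "notation": ["025.43"]}]},
    "to": {"memberSet": [{"uri": "http://uri.gbv.de/terminology/rvk/AN-93000",
                          "notation": ["AN 93000"]}]},
    "fromScheme": {"uri": "http://dewey.info/scheme/edition/e23/"},
    "toScheme": {"uri": "http://uri.gbv.de/terminology/rvk/"},
    # SKOS mapping relation expressing the strength of the correspondence
    "type": ["http://www.w3.org/2004/02/skos/core#closeMatch"],
}

def serialize(m):
    """Serialize a mapping for exchange between concordance databases."""
    return json.dumps(m, sort_keys=True)

# A receiving system can parse the record back without loss.
roundtrip = json.loads(serialize(mapping))
```

Exchanging mappings as plain JSON objects of this shape is what makes a uniform web API over heterogeneous concordance databases feasible.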
  8. Proceedings of the 2nd International Workshop on Evaluation of Ontology-based Tools (2004)
    
    Content
    Table of Contents
    Part I: Accepted Papers
    Christoph Tempich and Raphael Volz: Towards a benchmark for Semantic Web reasoners - an analysis of the DAML ontology library
    M. Carmen Suarez-Figueroa and Asuncion Gomez-Perez: Results of Taxonomic Evaluation of RDF(S) and DAML+OIL ontologies using RDF(S) and DAML+OIL Validation Tools and Ontology Platforms import services
    Volker Haarslev and Ralf Möller: Racer: A Core Inference Engine for the Semantic Web
    Mikhail Kazakov and Habib Abdulrab: DL-workbench: a metamodeling approach to ontology manipulation
    Thorsten Liebig and Olaf Noppens: OntoTrack: Fast Browsing and Easy Editing of Large Ontologies
    Frederic Fürst, Michel Leclere, and Francky Trichet: TooCoM: a Tool to Operationalize an Ontology with the Conceptual Graph Model
    Naoki Sugiura, Masaki Kurematsu, Naoki Fukuta, Noriaki Izumi, and Takahira Yamaguchi: A domain ontology engineering tool with general ontologies and text corpus
    Howard Goldberg, Alfredo Morales, David MacMillan, and Matthew Quinlan: An Ontology-Driven Application to Improve the Prescription of Educational Resources to Parents of Premature Infants
    Part II: Experiment Contributions
    Domain natural language description for the experiment
    Raphael Troncy, Antoine Isaac, and Veronique Malaise: Using XSLT for Interoperability: DOE and The Travelling Domain Experiment
    Christian Fillies: SemTalk EON2003 Semantic Web Export / Import Interface Test
    Óscar Corcho, Asunción Gómez-Pérez, Danilo José Guerrero-Rodríguez, David Pérez-Rey, Alberto Ruiz-Cristina, Teresa Sastre-Toral, M. Carmen Suárez-Figueroa: Evaluation experiment of ontology tools' interoperability with the WebODE ontology engineering workbench
    Holger Knublauch: Case Study: Using Protege to Convert the Travel Ontology to UML and OWL
    Franz Calvo and John Gennari: Interoperability of Protege 2.0 beta and OilEd 3.5 in the Domain Knowledge of Osteoporosis
    Theme
    Semantic Web
  9. Kempf, A.O.; Ritze, D.; Eckert, K.; Zapilko, B.: New ways of mapping knowledge organization systems : using a semi-automatic matching procedure for building up vocabulary crosswalks (2014)
    
    Abstract
    Crosswalks between different vocabularies are an indispensable prerequisite for integrated, high-quality search scenarios in distributed data environments where more than one controlled vocabulary is in use. Offered through the web and linked with each other, they act as a central link that lets users move back and forth between different online data sources. In the past, crosswalks between different thesauri have usually been developed manually, and in the long run the intellectual updating of such crosswalks is expensive. An obvious solution would be to apply automatic matching procedures, such as the so-called ontology matching tools. On the basis of computer-generated correspondences between the Thesaurus for the Social Sciences (TSS) and the Thesaurus for Economics (STW), our contribution explores the trade-off between IT-assisted tools and procedures on the one hand and external quality evaluation by domain experts on the other. This paper presents techniques for the semi-automatic development and maintenance of vocabulary crosswalks. The performance of multiple matching tools was first evaluated against a reference set of correct mappings; the tools were then used to generate new mappings. We conclude that ontology matching tools can be used effectively to speed up the work of domain experts. By optimizing the workflow, the method promises to facilitate the sustained updating of high-quality vocabulary crosswalks.
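The evaluation step described above, scoring matching tools against a reference set of correct mappings, can be sketched with standard precision/recall measures. The concept pairs below are invented placeholders, not actual TSS or STW terms:

```python
# Hypothetical sketch of evaluating generated mappings against a
# reference set of correct mappings. All concept pairs are invented.

def evaluate(generated, reference):
    """Return (precision, recall, f1) of generated vs. reference mappings."""
    generated, reference = set(generated), set(reference)
    tp = len(generated & reference)                       # true positives
    precision = tp / len(generated) if generated else 0.0
    recall = tp / len(reference) if reference else 0.0
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1

reference = {("tss:Arbeitsmarkt", "stw:Arbeitsmarkt"),
             ("tss:Inflation", "stw:Inflation"),
             ("tss:Bildung", "stw:Bildungswesen")}
generated = {("tss:Arbeitsmarkt", "stw:Arbeitsmarkt"),   # correct
             ("tss:Inflation", "stw:Geldpolitik")}       # wrong target

p, r, f = evaluate(generated, reference)
# One of two generated mappings is correct: p = 0.5, r = 1/3
```

With scores like these per tool, the domain experts only need to review the preselected candidate mappings rather than building the crosswalk from scratch.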
  10. Si, L.E.; O'Brien, A.; Probets, S.: Integration of distributed terminology resources to facilitate subject cross-browsing for library portal systems (2009)
    
    Abstract
    Purpose: To develop a prototype middleware framework between different terminology resources in order to provide a subject cross-browsing service for library portal systems. Design/methodology/approach: Nine terminology experts were interviewed to collect appropriate knowledge to support the development of a theoretical framework for the research. Based on this, a simplified software-based prototype system was constructed incorporating the knowledge acquired. The prototype involved mappings between the computer science schedule of the Dewey Decimal Classification (which acted as a spine) and two controlled vocabularies, UKAT and the ACM Computing Classification. Subsequently, six further experts in the field were invited to evaluate the prototype system and provide feedback to improve the framework. Findings: The major findings showed that, given the large variety of terminology resources distributed on the web, the proposed middleware service is essential for integrating the different terminology resources both technically and semantically in order to facilitate subject cross-browsing. A set of recommendations is also made, outlining the important approaches and features that support such a cross-browsing middleware service.
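The spine idea described above, where each vocabulary is mapped to a DDC class so that a term from one vocabulary can be cross-resolved to the other, can be sketched as follows. All mappings and class numbers are invented for illustration and do not reproduce the prototype's actual data:

```python
# Toy sketch of spine-based subject cross-browsing: two vocabularies
# are each mapped onto DDC classes (the spine), and a term from one
# vocabulary resolves to target-vocabulary terms sharing its DDC class.
# The mappings below are invented examples.

ukat_to_ddc = {"Computer programming": "005.1"}
acm_to_ddc = {"Software engineering": "005.1",
              "Operating systems": "005.4"}

def cross_browse(term, source_to_ddc, target_to_ddc):
    """Resolve a source term to target terms via the shared DDC spine class."""
    ddc_class = source_to_ddc.get(term)       # None if term is unmapped
    return sorted(t for t, c in target_to_ddc.items() if c == ddc_class)

candidates = cross_browse("Computer programming", ukat_to_ddc, acm_to_ddc)
# candidates == ["Software engineering"]
```

The middleware's job in the paper is essentially to host such spine mappings for distributed terminology resources and expose the resolution step as a service.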
    Content
    This paper is a pre-print version presented at the ISKO UK 2009 conference, 22-23 June, prior to peer review and editing. For published proceedings see special issue of Aslib Proceedings journal.
  11. Mayr, P.: Information Retrieval-Mehrwertdienste für Digitale Bibliotheken : Crosskonkordanzen und Bradfordizing (2010)
    
    Abstract
    This work presents two value-added services for search systems that address typical problems in searching for scholarly literature. The two services, treatment of semantic heterogeneity using cross-concordances and re-ranking based on Bradfordizing, which come into play at different stages of the search process, are described and evaluated in detail in this book. Research questions and data from two evaluation projects (CLEF and KoMoHe) were used for the tests. The intellectually assessed documents come from a total of seven specialized databases covering the social sciences, political science, economics, psychology and medicine. The results of this work have been incorporated into the GESIS project IRM.
    RSWK
    Dokumentationssprache / Heterogenität / Information Retrieval / Ranking / Evaluation
    Subject
    Dokumentationssprache / Heterogenität / Information Retrieval / Ranking / Evaluation
  12. Hafner, R.; Schelling, B.: Automatisierung der Sacherschließung mit Semantic Web Technologie (2015)
    
    Date
    22. 6.2015 16:08:38
  13. Euzenat, J.; Shvaiko, P.: Ontology matching (2010)
    
    Abstract
    Ontologies are viewed as the silver bullet for many applications, but in open or evolving systems, different parties can adopt different ontologies. This increases heterogeneity problems rather than reducing heterogeneity. This book proposes ontology matching as a solution to the problem of semantic heterogeneity, offering researchers and practitioners a uniform framework of reference to currently available work. The techniques presented apply to database schema matching, catalog integration, XML schema matching and more. Ontologies tend to be found everywhere. They are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems, such as the semantic web, different parties would, in general, adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, artificial intelligence. With Ontology Matching, researchers and practitioners will find a reference book which presents currently available work in a uniform framework. In particular, the work and the techniques presented in this book can equally be applied to database schema matching, catalog integration, XML schema matching and other related problems. 
The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a detailed account of matching techniques and matching systems in a systematic way from theoretical, practical and application perspectives.
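A minimal sketch of the matching task the book addresses: finding correspondences between semantically related entities of two ontologies. Real matchers combine terminological, structural and semantic techniques; this toy version, with invented entity URIs, matches on normalized labels only:

```python
# Deliberately naive ontology matcher: emits correspondences between
# entities of two ontologies whose normalized labels are equal.
# Entity URIs and labels below are invented for illustration.

def match(onto_a, onto_b):
    """Yield (entity_a, entity_b, relation) correspondences by label equality."""
    norm = lambda label: label.lower().replace("-", " ").strip()
    index = {}
    for uri, label in onto_b.items():          # index target by label
        index.setdefault(norm(label), []).append(uri)
    for uri_a, label_a in onto_a.items():
        for uri_b in index.get(norm(label_a), []):
            yield (uri_a, uri_b, "exactMatch")  # SKOS-style relation name

a = {"a:Book": "Book", "a:Author": "Author"}
b = {"b:Book": "book", "b:Writer": "Writer"}
correspondences = list(match(a, b))
# correspondences == [("a:Book", "b:Book", "exactMatch")]
```

The relations found this way could equally be subsumption or disjointness in a real matcher; label equality is only the crudest of the techniques the book surveys.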
    Date
    20. 6.2012 19:08:22
    Footnote
    Online-Ausg.: Ontology Matching
    LCSH
    World wide web
    RSWK
    Datenintegration / Informationssystem / Matching / Ontologie <Wissensverarbeitung> / Schema <Informatik> / Semantic Web
    Subject
    Datenintegration / Informationssystem / Matching / Ontologie <Wissensverarbeitung> / Schema <Informatik> / Semantic Web
    World wide web
  14. Kempf, A.O.; Zapilko, B.: Normdatenpflege in Zeiten der Automatisierung : Erstellung und Evaluation automatisch aufgebauter Thesaurus-Crosskonkordanzen (2013)
    
    Abstract
    Thesaurus cross-concordances are an important prerequisite for integrated search across a distributed data structure, but building them requires considerable human resources. This paper presents evaluation results from the Library Track 2012 of the Ontology Alignment Evaluation Initiative (OAEI), in which cross-concordances between the Thesaurus for the Social Sciences (TheSoz) and the Standard Thesaurus for Economics (STW) were created automatically for the first time. The evaluation points to clear differences between the matching tools tested and highlights the qualitative differences between an automatically created cross-concordance and an intellectually created one. The results support the use of automatically generated thesaurus cross-concordances to offer domain experts a machine-generated preselection of possible equivalence relations.
    Date
    18. 8.2013 12:53:22
  15. Lösse, M.; Svensson, L.: "Classification at a Crossroad" : Internationales UDC-Seminar 2009 in Den Haag, Niederlande (2010)
    
    Abstract
    On 29 and 30 October 2009, the second international UDC seminar, on the theme "Classification at a Crossroad", took place at the Royal Library in The Hague. Like the first conference of this kind in 2007, it was organized by the UDC Consortium (UDCC). This year's event focused on indexing the World Wide Web through better use of classifications (in particular, of course, the UDC), including user-friendly representations of information and knowledge. Standards, new technologies and services, semantic search and multilingual access also played a role. 135 participants from 35 countries came to The Hague. With 22 papers from 14 different countries, the programme covered a broad range, with the United Kingdom most strongly represented at five contributions. On both conference days, the thematic focus was set by the opening talks, which were then explored further in a total of six thematic sessions.
    Date
    22. 1.2010 15:06:54
    Theme
    Klassifikationssysteme im Online-Retrieval
  16. Burstein, M.; McDermott, D.V.: Ontology translation for interoperability among Semantic Web services (2005)
    
    Abstract
    Research on semantic web services promises greater interoperability among software agents and web services by enabling content-based automated service discovery and interaction. Although this is to be based on shared ontologies published on the semantic web, services produced and described by different developers may well use different, perhaps partly overlapping, sets of ontologies. Interoperability will then depend on ontology mappings and on architectures supporting the associated translation processes. The question we ask is: does the traditional approach of introducing mediator agents to translate messages between requestors and services work in such an open environment? This article reviews some of the processing assumptions that were made in the development of the semantic web service modeling ontology OWL-S and argues that, as a practical matter, the translation function cannot always be isolated in mediators. Ontology mappings need to be published on the semantic web just as ontologies themselves are. The translation for service discovery, service process model interpretation, task negotiation, service invocation, and response interpretation may then be distributed to various places in the architecture so that translation can be done in the specific goal-oriented informational contexts of the agents performing these processes. We present arguments for assigning translation responsibility to particular agents in the cases of service invocation, response translation, and matchmaking.
  17. Bittner, T.; Donnelly, M.; Winter, S.: Ontology and semantic interoperability (2006)
    
    Abstract
    One of the major problems facing systems for Computer Aided Design (CAD), Architecture Engineering and Construction (AEC) and Geographic Information Systems (GIS) applications today is the lack of interoperability among the various systems. When integrating software applications, substantial difficulties can arise in translating information from one application to the other. In this paper, we focus on semantic difficulties that arise in software integration. Applications may use different terminologies to describe the same domain. Even when applications use the same terminology, they often associate different semantics with the terms. This obstructs information exchange among applications. To circumvent this obstacle, we need some way of explicitly specifying the semantics for each terminology in an unambiguous fashion. Ontologies can provide such specification. It will be the task of this paper to explain what ontologies are and how they can be used to facilitate interoperability between software systems used in computer aided design, architecture engineering and construction, and geographic information processing.
    Date
    3.12.2016 18:39:22
  18. Balakrishnan, U.; Krausz, A.; Voss, J.: Cocoda - ein Konkordanztool für bibliothekarische Klassifikationssysteme (2015)
    
    Abstract
    Cocoda (Colibri Concordance Database for library classification systems) is a semi-automatic, web-based tool for creating and managing concordances between library classification systems. The tool is being developed as open-source software at the head office of the Gemeinsamer Bibliotheksverbund (VZG), within the subproject "coli-conc" (Colibri concordance creation) of the VZG project Colibri/DDC. The subproject "coli-conc" initially focuses on building a concordance between the Dewey Decimal Classification (DDC) and the Regensburger Verbundklassifikation (RVK). The inherent structural and cultural differences between such finely subdivided library classification systems make the concordance creation process laborious and time-consuming when pursued purely intellectually. To simplify and accelerate it, some of the intellectual steps used in "coli-conc" are performed automatically by the concordance tool "Cocoda". The concordance suggestions generated by Cocoda are derived both from an automatic analysis of the notations assigned in the title records of various databases and from a comparative analysis of the conceptual structures of the classification systems. Furthermore, "Cocoda" is intended to serve as a platform for storing, providing and analyzing concordances, in order to increase the efficiency of their use. This presentation first introduces the concordance project "coli-conc", which forms the basis of the concordance tool "Cocoda". The tool's algorithm, user interface and technical details are then described, and the concordance creation process with Cocoda is demonstrated with examples.
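One of the suggestion sources described above, concordance hints derived from notations co-assigned in title records, can be sketched roughly as follows. The records, notations, and support threshold are all invented for illustration; this is not Cocoda's actual algorithm.

```python
from collections import Counter

# Hypothetical title records, each carrying both a DDC and an RVK notation.
# Co-assigned notations in catalogue records are one of the sources from
# which concordance suggestions can be derived.
records = [
    {"ddc": "020", "rvk": "AN 50000"},
    {"ddc": "020", "rvk": "AN 50000"},
    {"ddc": "020", "rvk": "AN 73000"},
    {"ddc": "004", "rvk": "ST 200"},
]

def suggest_mappings(records, min_support=2):
    """Suggest DDC -> RVK mapping candidates from co-assigned notations.

    A pair is suggested once it occurs in at least `min_support`
    records (a deliberately simple frequency threshold).
    """
    pairs = Counter((r["ddc"], r["rvk"]) for r in records)
    return [(ddc, rvk, n) for (ddc, rvk), n in pairs.items() if n >= min_support]

print(suggest_mappings(records))  # [('020', 'AN 50000', 2)]
```

A real tool would combine such frequency evidence with the comparative analysis of the classification systems' conceptual structures before presenting a suggestion for intellectual review.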
  19. Mayr, P.; Zapilko, B.; Sure, Y.: ¬Ein Mehr-Thesauri-Szenario auf Basis von SKOS und Crosskonkordanzen (2010)
    
    Abstract
    In August 2009, SKOS ("Simple Knowledge Organization System") was published by the W3C as a new standard for web-based controlled vocabularies. SKOS serves as a data model for offering controlled vocabularies on the web and for making them technically and semantically interoperable. In the long term, the heterogeneous landscape of indexing vocabularies can be unified via SKOS and, above all, the contents of classical databases (specialist information) can be made accessible to Semantic Web applications, for example as Linked Open Data (LOD), and linked more strongly with one another. Vocabularies in SKOS format can take on a relevant function here by serving as a standardized bridging vocabulary and establishing semantic links between indexed, published data. The following case study sketches a scenario with three thematically related thesauri that are converted into SKOS format and connected in terms of content via cross-concordances from the KoMoHe project. The mapping properties of SKOS provide standardized relations for this purpose that correspond to those of the cross-concordances. The thesauri involved in the case study are a) TheSoz (Thesaurus for the Social Sciences, GESIS), b) STW (Standard Thesaurus for Economics, ZBW) and c) the IBLK Thesaurus (SWP).
    Footnote
    Contribution to: 25. Oberhofer Kolloquium 2010: Recherche im Google-Zeitalter - Vollständig und präzise!?.
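A minimal, stdlib-only sketch of how SKOS mapping properties could encode cross-concordance relations of the kind described in the abstract above. The relation symbols and concept URIs are invented for illustration; only the SKOS property names come from the SKOS vocabulary.

```python
# Hypothetical sketch: cross-concordance relations expressed as SKOS
# mapping-property triples. Concept URIs below are invented.
SKOS = "http://www.w3.org/2004/02/skos/core#"

# Cross-concordance relation symbols (invented here) and the SKOS mapping
# property each one corresponds to.
CROSSWALK = {
    "=": SKOS + "exactMatch",    # equivalence
    "<": SKOS + "broadMatch",    # source concept is narrower than the target
    ">": SKOS + "narrowMatch",   # source concept is broader than the target
    "^": SKOS + "relatedMatch",  # associative relation
}

def to_triples(mappings):
    """Turn (source, relation, target) rows into SKOS mapping triples."""
    return [(src, CROSSWALK[rel], tgt) for src, rel, tgt in mappings]

rows = [
    ("http://example.org/thesoz/C123", "=", "http://example.org/stw/B456"),
    ("http://example.org/thesoz/C124", "<", "http://example.org/stw/B457"),
]
for s, p, o in to_triples(rows):
    print(f"<{s}> <{p}> <{o}> .")
```

Emitting the triples in N-Triples form, as above, is one simple way to publish such a bridging layer alongside the SKOS versions of the thesauri themselves.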
  20. Leiva-Mederos, A.; Senso, J.A.; Hidalgo-Delgado, Y.; Hipola, P.: Working framework of semantic interoperability for CRIS with heterogeneous data sources (2017)
    
    Abstract
    Purpose Information from Current Research Information Systems (CRIS) is stored in different formats, on platforms that are not compatible, or even in independent networks. It would be helpful to have a well-defined methodology that allows management data processing from a single site, so as to take advantage of the capacity to link dispersed data found in different systems, platforms, sources and/or formats. Based on functionalities and materials of the VLIR project, the purpose of this paper is to present a model that provides for interoperability by means of semantic alignment techniques and metadata crosswalks, and facilitates the fusion of information stored in diverse sources. Design/methodology/approach After reviewing the state of the art regarding the diverse mechanisms for achieving semantic interoperability, the paper analyzes the following: the specific coverage of the data sets (type of data, thematic coverage and geographic coverage); the technical specifications needed to retrieve and analyze a distribution of the data set (format, protocol, etc.); the conditions of re-utilization (copyright and licenses); and the "dimensions" included in the data set as well as the semantics of these dimensions (the syntax and the taxonomies of reference). The semantic interoperability framework presented here implements semantic alignment and metadata crosswalks to convert information from three different systems (ABCD, Moodle and DSpace) and to integrate all the databases in a single RDF file. Findings The paper also includes an evaluation based on the comparison - by means of calculations of recall and precision - of the proposed model with identical queries made on Open Archives Initiative and SQL, in order to estimate its efficiency. The results have been satisfactory, since semantic interoperability facilitates the exact retrieval of information. Originality/value The proposed model enhances management of the syntactic and semantic interoperability of the CRIS system designed. In a real usage setting it achieves very positive results.
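The field-level crosswalk step of such a framework can be sketched as follows. The source-system names echo two of the systems named in the abstract, but every field name, record, and table entry here is invented for illustration and is not taken from the paper's actual crosswalks.

```python
# Hypothetical metadata crosswalk: records from different systems, each with
# its own field names, are normalized to one shared schema before merging.
CROSSWALKS = {
    "dspace": {"dc.title": "title", "dc.creator": "author"},
    "moodle": {"name": "title", "teacher": "author"},
}

def normalize(record, system):
    """Map one record's fields to the shared schema; unknown fields are dropped."""
    mapping = CROSSWALKS[system]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

def merge(sources):
    """sources: iterable of (system, record) pairs -> list of unified records."""
    return [normalize(rec, sys_name) for sys_name, rec in sources]

unified = merge([
    ("dspace", {"dc.title": "Interoperability", "dc.creator": "Leiva-Mederos"}),
    ("moodle", {"name": "Semantics course", "teacher": "Senso"}),
])
print(unified)
```

In the framework described above, the unified records would subsequently be serialized into a single RDF file; the sketch stops at the crosswalk itself.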

Languages

  • e 139
  • d 50
  • pt 1

Types

  • a 122
  • el 58
  • m 15
  • x 9
  • r 8
  • s 7
  • p 2
  • n 1