Search (313 results, page 1 of 16)

  • theme_ss:"Semantische Interoperabilität"
  1. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.11
    Abstract
    This thesis presents the construction of a thematically ordered thesaurus based on the subject headings of the Gemeinsame Normdatei (GND), using the DDC notations contained in it. The top level of this thesaurus is formed by the DDC subject groups (Sachgruppen) of the Deutsche Nationalbibliothek (DNB). The thesaurus is constructed in a rule-based way, applying Linked Data principles in a SPARQL processor. It serves the automated extraction of metadata from scientific publications by means of a computational-linguistics extractor that processes digital full texts. The extractor identifies subject headings by comparing character strings against the labels in the thesaurus, ranks the hits by their relevance in the text, and returns the assigned subject groups in rank order. The underlying assumption is that the subject group sought is returned among the top ranks. The performance of the method is validated in a three-stage procedure. First, a gold standard is compiled from documents retrievable in the DNB online catalogue, on the basis of their metadata and the findings of a brief inspection. The documents are distributed over 14 of the subject groups, with a batch size of 50 documents each. All documents are indexed with the extractor and the results of the categorization are documented. Finally, the resulting retrieval performance is assessed both for a hard (binary) categorization and for a ranked return of the subject groups.
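A minimal sketch of the matching step described in the abstract: find thesaurus labels in a full text by string comparison, rank the hits by in-text frequency, and return the DDC subject groups of the top-ranked hits. The tiny thesaurus and the label-to-group assignments below are invented for illustration; the real system works on the GND subject headings.

```python
from collections import Counter

THESAURUS = {          # label -> DDC subject group (Sachgruppe); toy data
    "thesaurus": "020",
    "normdatei": "020",
    "sparql": "004",
    "linked data": "004",
}

def assign_subject_groups(fulltext: str, top_n: int = 3) -> list[str]:
    text = fulltext.lower()
    hits = Counter()
    for label, group in THESAURUS.items():
        occurrences = text.count(label)   # naive string comparison
        if occurrences:
            hits[group] += occurrences
    # subject groups in rank order, most relevant first
    return [group for group, _ in hits.most_common(top_n)]

print(assign_subject_groups(
    "Der Thesaurus nutzt Linked Data und SPARQL; SPARQL-Abfragen ..."))
# -> ['004', '020']
```

The validation idea then reduces to checking whether the gold-standard subject group appears among the top ranks of this list.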
    Content
    Master thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. Cf. the accompanying presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
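The figure after each hit (e.g. 0.11 for the first result) is a Lucene ClassicSimilarity score: a TF-IDF measure in which every matching query term contributes tf · idf² · queryNorm · fieldNorm, and the sum over terms is scaled by a coordination factor coord(m/n) for m matching clauses out of n. A minimal sketch of that arithmetic; the numeric values below are illustrative, of the kind Lucene reports in its "explain" output, not tied to any particular hit:

```python
import math

def term_weight(freq: float, idf: float, query_norm: float,
                field_norm: float) -> float:
    """One matching term's contribution:
    queryWeight * fieldWeight = (idf * queryNorm) * (tf * idf * fieldNorm),
    with tf(freq) = sqrt(termFreq)."""
    tf = math.sqrt(freq)
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

def classic_score(term_weights: list[float], matched: int, total: int) -> float:
    # coord(matched/total) scales the sum by the fraction of query
    # clauses the document matches
    return sum(term_weights) * matched / total

# illustrative values in the style of an explain tree
w = term_weight(freq=2.0, idf=8.478011,
                query_norm=0.02056547, field_norm=0.0390625)
print(w)   # ~0.0817, a typical single-term weight
```

A rare term (high idf, here 8.478011 for a term occurring in only 24 of 44218 documents) thus dominates the score even at low term frequency.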
  2. Reasoning Web : Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures (2017) 0.10
    Abstract
    This volume contains the lecture notes of the 13th Reasoning Web Summer School, RW 2017, held in London, UK, in July 2017. In 2017, the theme of the school was "Semantic Interoperability on the Web", which encompasses subjects such as data integration, open data management, reasoning over linked data, database to ontology mapping, query answering over ontologies, hybrid reasoning with rules and ontologies, and ontology-based dynamic systems. The papers of this volume focus on these topics and also address foundational reasoning techniques used in answer set programming and ontologies.
    Classification
    SS 4800
    Content
    Neumaier, Sebastian (et al.): Data Integration for Open Data on the Web - Stamou, Giorgos (et al.): Ontological Query Answering over Semantic Data - Calì, Andrea: Ontology Querying: Datalog Strikes Back - Sequeda, Juan F.: Integrating Relational Databases with the Semantic Web: A Reflection - Rousset, Marie-Christine (et al.): Datalog Revisited for Reasoning in Linked Data - Kaminski, Roland (et al.): A Tutorial on Hybrid Answer Set Solving with clingo - Eiter, Thomas (et al.): Answer Set Programming with External Source Access - Lukasiewicz, Thomas: Uncertainty Reasoning for the Semantic Web - Calvanese, Diego (et al.): OBDA for Log Extraction in Process Mining
    LCSH
    Computer science
    Computer Science
    RSWK
    RDF <Informatik> / Terminologische Logik
    OWL <Informatik>
    RVK
    SS 4800
    Series
    Lecture Notes in Computer Science; 10370 (Information Systems and Applications, incl. Internet/Web, and HCI)
    Subject
    RDF <Informatik> / Terminologische Logik
    OWL <Informatik>
    Computer science
    Computer Science
  3. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.09
    Abstract
    Although service-oriented architectures go a long way toward providing interoperability in distributed, heterogeneous environments, managing semantic differences in such environments remains a challenge. We give an overview of the issue of semantic interoperability (integration), provide a semantic characterization of services, and discuss the role of ontologies. Then we analyze four basic models of semantic interoperability that differ with respect to their mapping between service descriptions and ontologies and with respect to where the evaluation of the integration logic is performed. We also provide some guidelines for selecting one of the possible interoperability models.
    Content
    Cf.: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
  4. Stamou, G.; Chortaras, A.: Ontological query answering over semantic data (2017) 0.04
    Pages
    pp. 29-63
    Series
    Lecture Notes in Computer Science; 10370 (Information Systems and Applications, incl. Internet/Web, and HCI)
  5. Dobrev, P.; Kalaydjiev, O.; Angelova, G.: From conceptual structures to semantic interoperability of content (2007) 0.03
    Abstract
    Smart applications behave intelligently because they understand, at least partially, the context in which they operate. To do this, they need not only a formal domain model but also formal descriptions of the data they process and of their own operational behaviour. Interoperability of smart applications is based on formalised definitions of all their data and processes. This paper studies the semantic interoperability of data in the case of eLearning and describes an experiment and its assessment. New content is imported into a knowledge-based learning environment without real updates of the original domain model, which is encoded as a knowledge base of conceptual graphs. A component called a mediator enables the import by assigning dummy metadata annotations to the imported items. However, some functionality of the original system is lost when processing the imported content, because the proper metadata annotation cannot be assigned fully automatically. The paper therefore presents an interoperability scenario in which appropriate content items are viewed from the perspective of the original world and can be (partially) reused there.
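The mediator described in the abstract assigns dummy metadata annotations so that imported items can be handled by the host environment at all. A rough sketch of that idea; the class, the required keys, and the placeholder value are all invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ContentItem:
    ident: str
    metadata: dict = field(default_factory=dict)

# hypothetical keys the host learning environment expects on every item
REQUIRED_KEYS = ("domain_concept", "difficulty", "prerequisites")

def mediate(item: ContentItem) -> ContentItem:
    """Attach a placeholder (dummy) annotation for every metadata key the
    host environment expects but the imported item does not provide."""
    for key in REQUIRED_KEYS:
        item.metadata.setdefault(key, "UNSPECIFIED")
    return item

imported = mediate(ContentItem("lesson-42", {"difficulty": "easy"}))
print(imported.metadata)
# existing values survive; missing keys get the dummy annotation
```

The loss of functionality the abstract mentions corresponds to every place where the system later reads "UNSPECIFIED" instead of a real annotation.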
    Series
    Lecture notes in computer science: Lecture notes in artificial intelligence ; 4604
    Source
    Conceptual structures: knowledge architectures for smart applications: 15th International Conference on Conceptual Structures, ICCS 2007, Sheffield, UK, July 22-27, 2007; proceedings. Eds.: U. Priss et al.
  6. Widhalm, R.; Mueck, T.A.: Merging topics in well-formed XML topic maps (2003) 0.03
    Abstract
    Topic Maps are a standardized modelling approach for the semantic annotation and description of WWW resources. They enable improved search and navigational access to information objects stored in semi-structured information spaces like the WWW. However, the corresponding standards ISO 13250 and XTM (XML Topic Maps) lack formal semantics; several questions concerning, e.g., subclassing, inheritance, or merging of topics are left open. The proposed TMUML meta model, directly derived from the well-known UML meta model, is a meta model for Topic Maps that enables semantic constraints to be formulated in OCL (Object Constraint Language) in order to answer such open questions and overcome possible inconsistencies in Topic Map repositories. We examine the XTM merging conditions and show, in several examples, how the TMUML meta model enables semantic constraints for Topic Map merging to be formulated in OCL. Finally, we show how the TM validation process, i.e. checking whether a Topic Map is well formed, includes our merging conditions.
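The XTM merging conditions examined here say, roughly, that two topics denoting the same subject (e.g. via a shared subject identifier, or an equal base name) must be merged into one topic carrying the union of their characteristics. A heavily simplified sketch; scope handling is omitted and the data model and names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Topic:
    names: set = field(default_factory=set)
    subject_ids: set = field(default_factory=set)

def should_merge(a: Topic, b: Topic) -> bool:
    # simplified XTM-style conditions: a shared subject identifier,
    # or an equal base name (scopes are ignored in this sketch)
    return bool(a.subject_ids & b.subject_ids) or bool(a.names & b.names)

def merge(a: Topic, b: Topic) -> Topic:
    # the merged topic unions the characteristics of both topics
    return Topic(a.names | b.names, a.subject_ids | b.subject_ids)

t1 = Topic({"Topic Maps"}, {"http://example.org/tm"})
t2 = Topic({"ISO 13250"}, {"http://example.org/tm"})
assert should_merge(t1, t2)
print(merge(t1, t2).names)   # union of the two name sets
```

The paper's contribution is to state such conditions not in code but as OCL constraints over the TMUML meta model, so that a repository can be validated declaratively.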
    Series
    Lecture notes in computer science; vol. 2870
  7. Neumaier, S.: Data integration for open data on the Web (2017) 0.02
    Abstract
    In this lecture we discuss the challenges of integrating openly available Web data and how to solve them. Firstly, while we address this topic from the viewpoint of Semantic Web research, not all data is readily available as RDF or Linked Data, so we give an introduction to the different data formats prevalent on the Web, namely the standard formats for publishing and exchanging tabular, tree-shaped, and graph data. Secondly, not all Open Data is really completely open, so we discuss issues around licences and terms of usage associated with Open Data, as well as documentation of data provenance. Thirdly, we discuss (meta-)data quality issues associated with Open Data on the Web and how Semantic Web techniques and vocabularies can be used to describe and remedy them. Fourthly, we address issues of searchability and integration of Open Data and discuss to what extent semantic search can help to overcome these. We close by briefly summarizing further issues not covered explicitly herein, such as multi-linguality, temporal aspects (archiving, evolution, temporal querying), and how/whether OWL and RDFS reasoning on top of integrated open data could help.
    Series
    Lecture Notes in Computer Science; 10370 (Information Systems and Applications, incl. Internet/Web, and HCI)
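The three data shapes the abstract above names (tabular, tree-shaped, and graph data) can be illustrated with a minimal sketch; the sample record and the vocabulary namespace are invented for illustration, not taken from the lecture.

```python
import csv
import io
import json

# The same record in the three Web data shapes: tabular (CSV),
# tree-shaped (JSON), and graph data (RDF N-Triples).
table = "city,country\nVienna,Austria\n"

# Tabular -> list of dicts
rows = list(csv.DictReader(io.StringIO(table)))

# Tree-shaped: nest the rows under a root object
tree = json.dumps({"cities": rows})

# Graph: one triple per cell, using a hypothetical vocabulary namespace
EX = "http://example.org/vocab#"
triples = [
    f'<{EX}Vienna> <{EX}{key}> "{value}" .'
    for row in rows
    for key, value in row.items()
]
```

The point of the exercise is that the tabular and tree forms carry their semantics implicitly (column names, nesting), while the graph form makes every relation an explicit, globally addressable statement.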
  8. Mayr, P.: Information Retrieval-Mehrwertdienste für Digitale Bibliotheken : Crosskonkordanzen und Bradfordizing (2010) 0.02
    Abstract
    This work presents two value-added services for search systems that address typical problems in searching for scholarly literature. The two services, semantic heterogeneity treatment using cross-concordances as an example and re-ranking based on Bradfordizing, which come into play at different stages of the search process, are described and evaluated in detail in this book. The tests used questions and data from two evaluation projects (CLEF and KoMoHe). The intellectually assessed documents come from a total of seven subject databases covering the social sciences, political science, economics, psychology, and medicine. The results of this work have been incorporated into the GESIS project IRM.
    Footnote
    Rez. in: iwp 62(2011) H.6/7, S. 323-324 (D. Lewandowski)
    Series
    GESIS-Schriftenreihe; 5
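Bradfordizing, one of the two value-added services evaluated in the book above, re-ranks a result list so that articles from the most productive ("core") journals surface first. A minimal sketch, with invented sample records:

```python
from collections import Counter

# Invented result set: each hit carries the journal it appeared in.
results = [
    {"title": "A", "journal": "J1"},
    {"title": "B", "journal": "J2"},
    {"title": "C", "journal": "J1"},
    {"title": "D", "journal": "J3"},
    {"title": "E", "journal": "J1"},
]

# Journal productivity = number of hits per journal in this result set
productivity = Counter(rec["journal"] for rec in results)

# Stable sort: most productive journal first, original order as tie-break
bradfordized = sorted(results, key=lambda r: -productivity[r["journal"]])
```

Because Python's sort is stable, hits from equally productive journals keep their original relevance order, so Bradfordizing only regroups, it does not scramble, the ranking.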
  9. Ertugrul, M.: Probleme und Lösungen zur semantischen Interoperabilität (2013) 0.01
    Abstract
    Most business owners associate the introduction of EAI in their own company with positive results and effects in the operation of their IT systems. The coexistence of many different application systems, operating systems, and/or hardware now plays only a subordinate role. EAI guarantees that business processes keep running, not only within the company but also across its boundaries. Comparable to plug & play, companies are led to believe that EAI quickly and easily creates a homogeneous, if only virtual, environment.
    Footnote
    Student research paper from 2009 in the field of computer science - theoretical computer science, printed on one side, grade: 1.0, AKAD Fachhochschule Stuttgart.
  10. Si, L.E.; O'Brien, A.; Probets, S.: Integration of distributed terminology resources to facilitate subject cross-browsing for library portal systems (2010) 0.01
    Abstract
    Purpose - The paper aims to develop a prototype middleware framework between different terminology resources in order to provide a subject cross-browsing service for library portal systems. Design/methodology/approach - Nine terminology experts were interviewed to collect appropriate knowledge to support the development of a theoretical framework for the research. Based on this, a simplified software-based prototype system was constructed incorporating the knowledge acquired. The prototype involved mappings between the computer science schedule of the Dewey Decimal Classification (which acted as a spine) and two controlled vocabularies, UKAT and ACM Computing Classification. Subsequently, six further experts in the field were invited to evaluate the prototype system and provide feedback to improve the framework. Findings - The major findings showed that, given the large variety of terminology resources distributed throughout the web, the proposed middleware service is essential to integrate technically and semantically the different terminology resources in order to facilitate subject cross-browsing. A set of recommendations are also made, outlining the important approaches and features that support such a cross-browsing middleware service. Originality/value - Cross-browsing features are lacking in current library portal meta-search systems. Users are therefore deprived of this valuable retrieval provision. This research investigated the case for such a system and developed a prototype to fill this gap.
    Date
    29. 8.2010 10:19:18
    Field
    Informatik
    Footnote
    Contribution to a special issue: Content architecture: exploiting and managing diverse resources: proceedings of the first national conference of the United Kingdom chapter of the International Society for Knowledge Organization (ISKO)
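The spine-based mapping the prototype above describes (DDC as pivot between UKAT and the ACM classification) can be sketched as a pair of lookup tables. The class numbers and terms here are illustrative stand-ins, not the prototype's actual mappings:

```python
# Hypothetical terminology-to-spine mappings: each vocabulary maps its
# terms onto DDC classes, which act as the pivot ("spine").
ukat_to_ddc = {"Computer programming": "005", "Databases": "005.74"}
acm_to_ddc = {"Software and its engineering": "005"}

def cross_browse(term, source, target):
    """Follow a term from one vocabulary to another via the DDC spine."""
    ddc = source.get(term)
    return [t for t, cls in target.items() if cls == ddc]

# Browsing from a UKAT term to related ACM terms via the shared DDC class
related = cross_browse("Computer programming", ukat_to_ddc, acm_to_ddc)
```

The middleware role the paper argues for is exactly this indirection: each vocabulary only needs one mapping to the spine rather than pairwise mappings to every other vocabulary.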
  11. Galinski, C.: Fragen der semantischen Interoperabilität brechen jetzt überall auf (o.J.) 0.01
    Abstract
    Terminology standardization is a basic precondition for international trade and holds great savings potential for industrial production. Tassilo Pellegrini spoke with Dr. Christian Galinski, head of the International Information Centre for Terminology (Infoterm), about problems of multilingualism, the interplay between technology and terminology, and how semantic interoperability can improve the human-machine interface.
    Content
    "Der Begriff der semantischen Interoperabilität ist aufgetreten mit dem Semantic Web, einer Konzeption von Tim Berners-Lee, der sagt, das zunehmend die Computer miteinander über hochstandardisierte Sprachen, die wenig mit Natürlichsprachlichkeit zu tun haben, kommunizieren werden. Was er nicht sieht, ist dass rein technische Interoperabilität nicht ausreicht, um die semantische Interoperabilität herzustellen." ... "Der Begriff der semantischen Interoperabilität ist aufgetreten mit dem Semantic Web, einer Konzeption von Tim Berners-Lee, der sagt, das zunehmend die Computer miteinander über hochstandardisierte Sprachen, die wenig mit Natürlichsprachlichkeit zu tun haben, kommunizieren werden. Was er nicht sieht, ist dass rein technische Interoperabilität nicht ausreicht, um die semantische Interoperabilität herzustellen."
    Date
    22. 1.2011 10:16:32
  12. Köbler, J.; Niederklapfer, T.: Kreuzkonkordanzen zwischen RVK-BK-MSC-PACS der Fachbereiche Mathematik und Physik (2010) 0.01
    Abstract
    Our project aims to create a cross-concordance between the universal classifications "Regensburger Verbundklassifikation (RVK)" and "Basisklassifikation (BK)" as well as the subject classifications "Mathematics Subject Classification (MSC2010)" and "Physics and Astronomy Classification Scheme (PACS2010)" in the fields of mathematics and physics. Conclusion: "The classificatory agreement between the Regensburger Verbundklassifikation and the Physics and Astronomy Classification Scheme was quite good in some subject areas (e.g. nuclear physics). Other areas (e.g. polymer physics, mineralogy), however, showed very little agreement. In total we were able to create 890 simple mappings. Multiple mappings were not counted for technical reasons. The project as a whole was very extensive and could therefore not be treated exhaustively within the twenty project days. Further development, in particular with regard to collective access via a web form and to automatic classification, nevertheless appears worthwhile."
    Date
    29. 3.2011 10:47:10
    29. 3.2011 10:57:42
    Imprint
    Innsbruck : Universitäts- und Landesbibliothek Tirol
    Pages
    22 S.
  13. Woldering, B.: ¬Die Europäische Digitale Bibliothek nimmt Gestalt an (2007) 0.01
    Abstract
    In autumn 2007 the development of the European Digital Library was put on a solid footing: with the European Digital Library Foundation, a legally capable organization is now in place as the body responsible for the European Digital Library. It initially acts as the steering body for the EU-funded project EDLnet and will successively take over the tasks required for building and further developing the European Digital Library. The founding members are ten European umbrella organizations from the library, archive, audiovisual collection, and museum sectors. The board members are the chair Elisabeth Niggemann (CENL), the vice-chair Martine de Boisdeffre (EURBICA), the treasurer Edwin van Huis (FIAT), and Wim van Drimmelen, director general of the Koninklijke Bibliotheek, the national library of the Netherlands, which hosts the European Digital Library. The prototype for the European Digital Library is being developed within the EDLnet project. The first version of the prototype was presented at the international conference "One more step towards the European Digital Library", held at the German National Library (DNB) in Frankfurt am Main on 31 January and 1 February 2008. The final version of the prototype will be presented in Paris in November 2008 by Viviane Reding, the EU Commissioner for Information Society and Media. This prototype will offer direct access to at least two million digitized books, photographs, maps, sound recordings, film footage, and archival materials from Europe's libraries, archives, audiovisual collections, and museums.
    Content
    Includes, among other things, "Interoperability at the core": "Technical and semantic interoperability thus form the core of a functioning European Digital Library. But before ways can be found to make something work, it must first be determined what is supposed to work. Here user requirements are the measure of all things, which is why an entire EDLnet work package deals with the user perspective, user requirements, and the usability of the European Digital Library, formulates requirements, and has them implemented in the 'Interoperability' work package. Deciding which content is presented and how is not merely a matter of technical and semantic questions, however; a business model must also be developed that defines what the participating institutions and organizations contribute to the European Digital Library, in what form and under what conditions. The business model will in turn affect technical and semantic interoperability and feeds the requirements derived from it to the corresponding work package for implementation. The EDLnet project thus runs a continuous working cycle in which the requirements for the European Digital Library are formulated, passed on to the interoperability work package, and implemented there. The solution is fed back to the 'user perspective' and 'business model' work packages, tested, and commented on, and technical solutions are sought for the comments in turn. This is a form of rapid prototyping: functionality is extended step by step according to the feedback of future users and the project partners, while the prototype is kept running at all times and developed to production maturity. This promises a quick result with a low risk of misdevelopment thanks to the constant feedback."
    Date
    22. 2.2009 19:10:56
    Source
    Dialog mit Bibliotheken. 20(2008) H.1, S.29-31
  14. Lösse, M.; Svensson, L.: "Classification at a Crossroad" : Internationales UDC-Seminar 2009 in Den Haag, Niederlande (2010) 0.01
    Abstract
    On 29 and 30 October 2009 the second international UDC seminar, on the topic "Classification at a Crossroad", took place at the Royal Library in The Hague. As with the first conference of this kind in 2007, it was organized by the UDC Consortium (UDCC). This year's event focused on indexing the World Wide Web with better use of classifications (especially, of course, the UDC), including user-friendly representations of information and knowledge. Standards, new technologies and services, semantic search, and multilingual access also played a role. 135 participants from 35 countries came to The Hague. With 22 papers from 14 different countries the programme covered a broad range, with the United Kingdom most strongly represented with five contributions. On both conference days the day's focus was set by the opening talks, which were then explored further in a total of six thematic sessions.
    Date
    22. 1.2010 15:06:54
  15. Euzenat, J.; Shvaiko, P.: Ontology matching (2010) 0.01
    Abstract
    Ontologies tend to be found everywhere. They are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems, such as the semantic web, different parties would, in general, adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, artificial intelligence. With Ontology Matching, researchers and practitioners will find a reference book which presents currently available work in a uniform framework. In particular, the work and the techniques presented in this book can equally be applied to database schema matching, catalog integration, XML schema matching and other related problems. The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a detailed account of matching techniques and matching systems in a systematic way from theoretical, practical and application perspectives.
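    The core idea the abstract describes, finding correspondences between semantically related entities of different ontologies, can be sketched with a toy, purely label-based matcher. This is a minimal illustration of the simplest terminological technique the book surveys, not an implementation of any specific system; the entity ids and labels below are invented for the example.

```python
# Naive label-based ontology matching: compare normalized entity labels
# with a string-similarity measure and keep pairs above a threshold.
from difflib import SequenceMatcher

def normalize(label: str) -> str:
    # Lowercase and drop everything except letters and digits.
    return "".join(ch for ch in label.lower() if ch.isalnum())

def match(onto_a: dict, onto_b: dict, threshold: float = 0.8) -> list:
    """Return candidate equivalence correspondences (id_a, id_b, score)."""
    correspondences = []
    for id_a, label_a in onto_a.items():
        for id_b, label_b in onto_b.items():
            score = SequenceMatcher(
                None, normalize(label_a), normalize(label_b)
            ).ratio()
            if score >= threshold:
                correspondences.append((id_a, id_b, round(score, 2)))
    return correspondences

# Two hypothetical mini-ontologies covering the same domain:
library = {"a1": "Book", "a2": "Author", "a3": "Publishing House"}
shop = {"b1": "Books", "b2": "Writer", "b3": "Publisher"}
print(match(library, shop))  # → [('a1', 'b1', 0.89)]
```

    Real matchers combine such terminological evidence with structural, semantic and instance-based techniques, which is exactly the design space the book maps out.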
    Date
    20. 6.2012 19:08:22
    LCSH
    Semantic integration (Computer systems)
    RSWK
    Datenintegration / Informationssystem / Matching / Ontologie <Wissensverarbeitung> / Schema <Informatik> / Semantic Web
    Subject
    Datenintegration / Informationssystem / Matching / Ontologie <Wissensverarbeitung> / Schema <Informatik> / Semantic Web
    Semantic integration (Computer systems)
  16. Hubrich, J.: CrissCross: SWD-DDC-Mapping (2008)
    Abstract
    With the growing use of network-based systems and increasing international cooperation, information spaces are being created in which differently indexed information resources from various countries are made freely accessible via the Internet. Because the indexing tools used differ in language and structure, subject searching in these information spaces is cumbersome and time-consuming. In the CrissCross project, funded by the Deutsche Forschungsgemeinschaft (DFG) and carried out by the Deutsche Nationalbibliothek (DNB) in cooperation with the Fachhochschule Köln, a multilingual, thesaurus-based and user-oriented retrieval vocabulary is being created that enables considerably more efficient access to heterogeneously indexed information resources.
    Date
    22. 8.2009 10:35:21
    Source
    Mitteilungen der Vereinigung Österreichischer Bibliothekarinnen und Bibliothekare. 61(2008) H.3, S.50-58
  17. Ehrig, M.; Studer, R.: Wissensvernetzung durch Ontologien (2006)
    Abstract
    In computer science, ontologies are formal models of an application domain that support communication between human and/or machine agents and thus facilitate the exchange and sharing of knowledge in organisations. Using ontologies for the structured representation of knowledge has therefore become increasingly widespread in recent years, and thousands of ontologies already exist worldwide. To enable interoperability between software agents or web services built on them, the semantic integration of these ontologies is an essential prerequisite. As is easy to see, purely manual creation of the mappings is no longer feasible beyond a certain size, complexity and rate of change of the ontologies; automatic or semi-automatic technologies must support the user. The integration problem has occupied research and industry for many years, for example in the field of database integration. What is new, however, is the possibility of drawing on the complex semantic information contained in ontologies. This chapter introduces a general six-step process for ontology integration based on these semantic structures. Extensions address efficiency and the optimal involvement of users in the process. Two applications in which the process has been successfully implemented are also presented, and a concluding section discusses current trends. Since the approaches can in principle be transferred to any schema with a semantic basis, the scope of this research extends well beyond pure ontology applications.
  18. Celli, F. et al.: Enabling multilingual search through controlled vocabularies : the AGRIS approach (2016)
    Series
    Communications in computer and information science; 672
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  19. Metadata and semantics research : 8th Research Conference, MTSR 2014, Karlsruhe, Germany, November 27-29, 2014, Proceedings (2014)
    Abstract
    This book constitutes the refereed proceedings of the 8th Metadata and Semantics Research Conference, MTSR 2014, held in Karlsruhe, Germany, in November 2014. The 23 full papers and 9 short papers presented were carefully reviewed and selected from 57 submissions. The papers are organized in several sessions and tracks. They cover the following topics: metadata and linked data: tools and models; (meta) data quality assessment and curation; semantic interoperability, ontology-based data access and representation; big data and digital libraries in health, science and technology; metadata and semantics for open repositories, research information systems and data infrastructure; metadata and semantics for cultural collections and applications; semantics for agriculture, food and environment.
    Content
    Metadata and linked data.- Tools and models.- (Meta)data quality assessment and curation.- Semantic interoperability, ontology-based data access and representation.- Big data and digital libraries in health, science and technology.- Metadata and semantics for open repositories, research information systems and data infrastructure.- Metadata and semantics for cultural collections and applications.- Semantics for agriculture, food and environment.
    LCSH
    Computer science
    Text processing (Computer science)
    Series
    Communications in computer and information science; 478
    Subject
    Computer science
    Text processing (Computer science)
  20. Huckstorf, A.; Petras, V.: Mind the lexical gap : EuroVoc Building Block of the Semantic Web (2011)
    Abstract
    A conference event of a special kind took place in Luxembourg on 18 and 19 November 2010. Initiated by the Publications Office of the European Union (http://publications.europa.eu), librarians and information professionals were invited to discuss the future of multilingual controlled vocabularies in information systems and, in particular, their contribution to the Semantic Web. The conference was organised by the EuroVoc team, which maintains the thesaurus of the European Union. The previous EuroVoc conference took place in 2006. Since then, EuroVoc has moved to an ontology-based thesaurus management system, has systematically begun to use Semantic Web technologies for its maintenance and representation, and has started linking itself with other vocabularies. There was a productive exchange with the producers of other European and international vocabularies (e.g. the United Nations or the FAO) as well as with representatives of projects working on automatic indexing (here especially of parliamentary and legal documents) and on interoperability between vocabularies.
    Date
    29. 3.2013 17:46:08
    Source
    Information - Wissenschaft und Praxis. 62(2011) H.2/3, S.125-126
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
