Search (245 results, page 1 of 13)

  • Active filter: theme_ss:"Semantische Interoperabilität"
  1. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.27
    
    Content
     Master thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. See: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the accompanying presentation: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
    Imprint
    Wien / Library and Information Studies : Universität
  2. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.25
    
    Content
     See: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5386707.
  3. Linked data and user interaction : the road ahead (2015) 0.05
    
    Abstract
     This collection of research papers provides extensive information on deploying services, concepts, and approaches for using open linked data from libraries and other cultural heritage institutions, with a special emphasis on how these institutions can create effective end-user interfaces using open linked data or other datasets. These papers are essential reading for anyone interested in user interface design or the semantic web.
    Content
     H. Frank Cervone: Linked data and user interaction : an introduction -- Paola Di Maio: Linked Data Beyond Libraries : Towards Universal Interfaces and Knowledge Unification -- Emmanuelle Bermes: Following the user's flow in the Digital Pompidou -- Patrick Le Boeuf: Customized OPACs on the Semantic Web : the OpenCat prototype -- Ryan Shaw, Patrick Golden and Michael Buckland: Using linked library data in working research notes -- Timm Heuss, Bernhard Humm, Tilman Deuschel, Torsten Fröhlich, Thomas Herth and Oliver Mitesser: Semantically guided, situation-aware literature research -- Niklas Lindström and Martin Malmsten: Building interfaces on a networked graph -- Natasha Simons, Arve Solland and Jan Hettenhausen: Griffith Research Hub. See: http://d-nb.info/1032799889.
    LCSH
    Semantic Web
    RSWK
    Bibliothek / Linked Data / Benutzer / Mensch-Maschine-Kommunikation / Recherche / Suchverfahren / Aufsatzsammlung
    Linked Data / Online-Katalog / Semantic Web / Benutzeroberfläche / Kongress / Singapur <2013>
    Subject
    Bibliothek / Linked Data / Benutzer / Mensch-Maschine-Kommunikation / Recherche / Suchverfahren / Aufsatzsammlung
    Linked Data / Online-Katalog / Semantic Web / Benutzeroberfläche / Kongress / Singapur <2013>
    Semantic Web
    Theme
    Semantic Web
  4. Kempf, A.O.; Baum, K.: Von der Ein-Datenbank-Suche zum verteilten Suchszenario : Zum Aufbau von Crosskonkordanzen zwischen der Fachklassifikation Sozialwissenschaften und der Dewey-Dezimalklassifikation (2013) 0.04
    
    Content
     Slides of a talk given at the 5th Kongress Bibliothek & Information Deutschland, Leipzig, 11-14 March 2013.
  5. Lösse, M.; Svensson, L.: "Classification at a Crossroad" : Internationales UDC-Seminar 2009 in Den Haag, Niederlande (2010) 0.04
    
    Abstract
     On 29 and 30 October 2009, the second international UDC Seminar on the theme "Classification at a Crossroad" took place at the Royal Library in The Hague. Like the first conference of this kind in 2007, it was organized by the UDC Consortium (UDCC). This year's event focused on indexing the World Wide Web through better use of classifications (in particular, of course, the UDC), including user-friendly representations of information and knowledge. Standards, new technologies and services, semantic search, and multilingual access also played a role. 135 participants from 35 countries came to The Hague for the event. With 22 papers from 14 different countries, the programme covered a broad range of topics; the United Kingdom was most strongly represented, with five contributions. On both conference days, the opening lectures set the daily focus, which was then explored in greater depth in a total of six thematic sessions.
    Date
    22. 1.2010 15:06:54
    Theme
    Klassifikationssysteme im Online-Retrieval
  6. Concepts in Context : Proceedings of the Cologne Conference on Interoperability and Semantics in Knowledge Organization July 19th - 20th, 2010 (2011) 0.04
    
    Content
     Winfried Gödert: Programmatic Issues and Introduction - Dagobert Soergel: Conceptual Foundations for Semantic Mapping and Semantic Search - Jan-Helge Jacobs, Tina Mengel, Katrin Müller: Insights and Outlooks: A Retrospective View on the CrissCross Project - Yvonne Jahns, Helga Karg: Translingual Retrieval: Moving between Vocabularies - MACS 2010 - Jessica Hubrich: Intersystem Relations: Characteristics and Functionalities - Stella G. Dextre Clarke: In Pursuit of Interoperability: Can We Standardize Mapping Types? - Philipp Mayr, Philipp Schaer, Peter Mutschke: A Science Model Driven Retrieval Prototype - Claudia Effenberger, Julia Hauser: Would an Explicit Versioning of the DDC Bring Advantages for Retrieval? - Gordon Dunsire: Interoperability and Semantics in RDF Representations of FRBR, FRAD and FRSAD - Maja Žumer: FRSAD: Challenges of Modeling the Aboutness - Michael Panzer: Two Tales of a Concept: Aligning FRSAD with SKOS - Felix Boteram: Integrating Semantic Interoperability into FRSAD
    Date
    22. 2.2013 11:34:18
    RSWK
    Wissensorganisation / Information Retrieval / Kongress / Köln <2010>
    Subject
    Wissensorganisation / Information Retrieval / Kongress / Köln <2010>
  7. Euzenat, J.; Shvaiko, P.: Ontology matching (2010) 0.04
    
    Abstract
    Ontologies are viewed as the silver bullet for many applications, but in open or evolving systems, different parties can adopt different ontologies. This increases heterogeneity problems rather than reducing heterogeneity. This book proposes ontology matching as a solution to the problem of semantic heterogeneity, offering researchers and practitioners a uniform framework of reference to currently available work. The techniques presented apply to database schema matching, catalog integration, XML schema matching and more. Ontologies tend to be found everywhere. They are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems, such as the semantic web, different parties would, in general, adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, artificial intelligence. With Ontology Matching, researchers and practitioners will find a reference book which presents currently available work in a uniform framework. In particular, the work and the techniques presented in this book can equally be applied to database schema matching, catalog integration, XML schema matching and other related problems. The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a detailed account of matching techniques and matching systems in a systematic way from theoretical, practical and application perspectives.
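     As a hedged illustration of the simplest family of techniques the book surveys - element-level, label-based matching - the following Python sketch proposes candidate equivalence correspondences between entities of two schemas whenever their labels are sufficiently similar as strings. It is not the authors' method, and the two schema fragments are invented:

     from difflib import SequenceMatcher

     def label_similarity(a: str, b: str) -> float:
         """Normalized string similarity between two entity labels."""
         return SequenceMatcher(None, a.lower(), b.lower()).ratio()

     def match(onto1: dict[str, str], onto2: dict[str, str],
               threshold: float = 0.8) -> list[tuple[str, str, float]]:
         """Candidate equivalence correspondences (entity1, entity2, confidence)."""
         return [
             (e1, e2, round(label_similarity(l1, l2), 2))
             for e1, l1 in onto1.items()
             for e2, l2 in onto2.items()
             if label_similarity(l1, l2) >= threshold
         ]

     # Hypothetical fragments of two catalog schemas to be integrated
     products = {"p:Book": "Book", "p:AudioBook": "Audio book"}
     catalog = {"c:PrintedBook": "Printed book", "c:Audiobook": "Audiobook"}

     print(match(products, catalog))  # [('p:AudioBook', 'c:Audiobook', 0.95)]

     Real matchers layer structural, semantic, and instance-based evidence on top of such element-level similarity; the book treats these combinations systematically.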
    Date
    20. 6.2012 19:08:22
    LCSH
    Ontologies (Information retrieval)
    World wide web
    RSWK
    Datenintegration / Informationssystem / Matching / Ontologie <Wissensverarbeitung> / Schema <Informatik> / Semantic Web
    Subject
    Datenintegration / Informationssystem / Matching / Ontologie <Wissensverarbeitung> / Schema <Informatik> / Semantic Web
    Ontologies (Information retrieval)
    World wide web
  8. Krause, J.: Shell Model, Semantic Web and Web Information Retrieval (2006) 0.04
    
    Abstract
     The mid-1990s were marked by growing enthusiasm for the possibilities of the WWW, which has only recently given way - at least with respect to scientific information - to a differentiated weighing of its advantages and disadvantages. Web Information Retrieval originated as a specialized discipline with great commercial significance (for an overview see Lewandowski 2005). Besides the new technological infrastructure that enables the indexing and searching (in seconds) of unimaginable amounts of data worldwide, new assessment processes for the ranking of search results are being developed that use the link structures of the Web. They are the main innovation with respect to the traditional "mother discipline" of Information Retrieval. From the beginning, link structures of Web pages have been applied in commercial search engines in a wide array of variations. From the perspective of scientific information, link-topology-based approaches were in essence trying to solve a self-created problem: on the one hand, it quickly became clear that the openness of the Web led to a hitherto unknown increase in available information, but this also caused the quality of the Web pages searched to become a problem - and with it the relevance of the results. The gatekeeper function of traditional information providers, which narrows every user query down to high-quality sources, was lacking. The recognition of the "authoritativeness" of Web pages by general search engines such as Google was therefore one of the most important factors for their success.
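     The best-known of these link-topology-based measures of authoritativeness is PageRank. As a hedged illustration - a textbook power iteration over an invented toy graph, not any search engine's production algorithm - the following Python sketch shows how rank flows along links, so that pages that are linked to more often come out on top:

     def pagerank(links: dict[str, list[str]], damping: float = 0.85,
                  iterations: int = 50) -> dict[str, float]:
         """Power-iteration PageRank over a {page: [outlinks]} graph."""
         pages = list(links)
         rank = {p: 1.0 / len(pages) for p in pages}
         for _ in range(iterations):
             new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
             for page, outlinks in links.items():
                 if outlinks:  # distribute this page's rank over its outlinks
                     share = damping * rank[page] / len(outlinks)
                     for target in outlinks:
                         new_rank[target] += share
                 else:  # dangling page: spread its rank over all pages
                     for p in pages:
                         new_rank[p] += damping * rank[page] / len(pages)
             rank = new_rank
         return rank

     # Toy web: C is linked to from both A and B, so it ranks highest.
     print(pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]}))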
    Source
    Information und Sprache: Beiträge zu Informationswissenschaft, Computerlinguistik, Bibliothekswesen und verwandten Fächern. Festschrift für Harald H. Zimmermann. Herausgegeben von Ilse Harms, Heinz-Dirk Luckhardt und Hans W. Giessen
    Theme
    Semantic Web
  9. Mayr, P.; Walter, A.-K.: Einsatzmöglichkeiten von Crosskonkordanzen (2007) 0.03
    
    Abstract
     This paper presents application scenarios and specific problem areas of cross-concordances (CK) in the project "Kompetenznetzwerk Modellbildung und Heterogenitätsbehandlung" (KoMoHe), as well as the network of terminology mappings created to date. The cross-concordances produced at the IZ are to be offered in the future through a terminology service implemented as a web service, which the paper presents by way of example. In addition, a test scenario and evaluation design are described through which the added value of cross-concordances can be investigated empirically.
    Content
     Also in: Lokal-Global: Vernetzung wissenschaftlicher Infrastrukturen; 12. Kongress der IuK-Initiative der Wissenschaftlichen Fachgesellschaft in Deutschland, Tagungsberichte, GESIS-IZ Sozialwissenschaft, Bonn, pp. 149-166.
    Source
    http://www.gesis.org/Information/Forschungsuebersichten/Tagungsberichte/Vernetzung/Mayr-Walter.pdf
  10. Niggemann, E.: Wer suchet, der findet? : Verbesserung der inhaltlichen Suchmöglichkeiten im Informationssystem Der Deutschen Bibliothek (2006) 0.03
    
    Abstract
     Electronic library catalogues and bibliographies have lost their monopoly on searching for books, articles, musical works, and the like. Global search engines are strong competitors, and libraries today must plan so that their services remain attractive tomorrow. Die Deutsche Bibliothek (DDB) will extend its traditional catalogue search into a global, network-based information system that seeks to combine the advantages of neutral, quality-based catalogue searching with those of modern search engines. This paper deals with improving the subject search capabilities of the information system of Die Deutsche Bibliothek. Further lines of development are only briefly sketched in the outlook.
    Source
    Information und Sprache: Beiträge zu Informationswissenschaft, Computerlinguistik, Bibliothekswesen und verwandten Fächern. Festschrift für Harald H. Zimmermann. Herausgegeben von Ilse Harms, Heinz-Dirk Luckhardt und Hans W. Giessen
  11. Semantic search over the Web (2012) 0.03
    
    Abstract
     The Web has become the world's largest database, with search being the main tool that allows organizations and individuals to exploit its huge amount of information. Search on the Web has traditionally been based on textual and structural similarities, ignoring to a large degree the semantic dimension, i.e., understanding the meaning of the query and of the document content. Combining search and semantics gives birth to the idea of semantic search. Traditional search engines have already advertised some semantic dimensions. Some of them, for instance, can enhance their generated result sets with documents that are semantically related to the query terms even though they may not include these terms. Nevertheless, the exploitation of semantic search has not yet reached its full potential. In this book, Roberto De Virgilio, Francesco Guerra and Yannis Velegrakis present an extensive overview of the work done in semantic search and other related areas. They explore different technologies and solutions in depth, making their collection valuable and stimulating reading for both academic and industrial researchers. The book is divided into three parts. The first introduces the reader to the basic notions of the Web of Data. It describes the different kinds of data that exist, their topology, and techniques for storing and indexing them. The second part is dedicated to Web search. It presents different types of search, such as exploratory and path-oriented search, alongside methods for their efficient and effective implementation. Other related topics included in this part are the use of uncertainty in query answering, the exploitation of ontologies, and the use of semantics in mashup design and operation. The focus of the third part is on linked data and, more specifically, on applying ideas originating in recommender systems to linked data management, and on techniques for efficient query answering over linked data.
    Content
     Contents: Introduction.- Part I Introduction to Web of Data.- Topology of the Web of Data.- Storing and Indexing Massive RDF Data Sets.- Designing Exploratory Search Applications upon Web Data Sources.- Part II Search over the Web.- Path-oriented Keyword Search query over RDF.- Interactive Query Construction for Keyword Search on the Semantic Web.- Understanding the Semantics of Keyword Queries on Relational Data Without Accessing the Instance.- Keyword-Based Search over Semantic Data.- Semantic Link Discovery over Relational Data.- Embracing Uncertainty in Entity Linking.- The Return of the Entity-Relationship Model: Ontological Query Answering.- Linked Data Services and Semantics-enabled Mashup.- Part III Linked Data Search Engines.- A Recommender System for Linked Data.- Flint: from Web Pages to Probabilistic Semantic Data.- Searching and Browsing Linked Data with SWSE.
    Footnote
     Electronic edition at: http://springer.r.delivery.net/r/r?2.1.Ee.2Tp.1gd0L5.C3WE8i..N.WdtM.3uq2.bW89MQ%5f%5fCYKEFOP0.
    Theme
    Semantic Web
    Semantisches Umfeld in Indexierung u. Retrieval
  12. Tennis, J.T.: Versioning concept schemes for persistent retrieval (2006) 0.02
    
    Abstract
     Things change. Words change, meaning changes and use changes both words and meaning. In information access systems this means concept schemes such as thesauri or classification schemes change. They always have. Concept schemes that have survived have evolved over time, moving from one version, often called an edition, to the next. If we want to manage how words and meanings - and as a consequence use - change in an effective manner, and if we want to be able to search across versions of concept schemes, we have to track these changes. This paper explores how we might expand SKOS, a World Wide Web Consortium (W3C) draft recommendation, in order to do that kind of tracking. The Simple Knowledge Organization System (SKOS) Core Guide is sponsored by the Semantic Web Best Practices and Deployment Working Group. The second draft, edited by Alistair Miles and Dan Brickley, was issued in November 2005. SKOS is a "model for expressing the basic structure and content of concept schemes such as thesauri, classification schemes, subject heading lists, taxonomies, folksonomies, other types of controlled vocabulary and also concept schemes embedded in glossaries and terminologies" in RDF. How SKOS handles versioning of concept schemes is an open issue. The current draft guide suggests using OWL and DCTERMS as mechanisms for concept scheme revision. As it stands, an editor of a concept scheme can make notes or declare in OWL that more than one version exists. This paper adds to the SKOS Core by introducing a tracking system for changes in concept schemes. We call this tracking system vocabulary ontogeny. Ontogeny is a biological term for the development of an organism during its lifetime. Here we use the ontogeny metaphor to describe how vocabularies change over their lifetime. Our purpose here is to create a conceptual mechanism that will track these changes and in so doing enhance information retrieval and prevent document loss through versioning, thereby enabling persistent retrieval.
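     The paper's actual SKOS extension is not reproduced in this abstract, but the idea it motivates can be sketched generically. In the following Python fragment - all names and data invented - a change log records label changes per concept across editions, so that a query using an obsolete term still resolves to the persistent concept:

     from dataclasses import dataclass

     @dataclass(frozen=True)
     class ChangeEvent:
         concept_uri: str  # identifier that persists across versions
         version: str      # edition in which the change took effect
         old_label: str
         new_label: str

     CHANGE_LOG = [
         ChangeEvent("ex:c42", "edition-2", "aeroplanes", "airplanes"),
         ChangeEvent("ex:c42", "edition-3", "airplanes", "aircraft"),
     ]

     def resolve(term: str) -> str | None:
         """Map a current or outdated label to the concept's stable URI."""
         for event in CHANGE_LOG:
             if term in (event.old_label, event.new_label):
                 return event.concept_uri
         return None

     # Documents indexed under old editions remain retrievable.
     print(resolve("aeroplanes"))  # ex:c42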
    Source
    Bulletin of the American Society for Information Science and Technology. 33(2006) no.5, S.xx-xx
  13. Smith, A.: Simple Knowledge Organization System (SKOS) (2022) 0.02
    
    Abstract
    SKOS (Simple Knowledge Organization System) is a recommendation from the World Wide Web Consortium (W3C) for representing controlled vocabularies, taxonomies, thesauri, classifications, and similar systems for organizing and indexing information as linked data elements in the Semantic Web, using the Resource Description Framework (RDF). The SKOS data model is centered on "concepts", which can have preferred and alternate labels in any language as well as other metadata, and which are identified by addresses on the World Wide Web (URIs). Concepts are grouped into hierarchies through "broader" and "narrower" relations, with "top concepts" at the broadest conceptual level. Concepts are also organized into "concept schemes", also identified by URIs. Other relations, mappings, and groupings are also supported. This article discusses the history of the development of SKOS and provides notes on adoption, uses, and limitations.
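     As a hedged illustration of the data model just described - the vocabulary URI, concepts, and labels below are invented - the following Python sketch uses the rdflib library to build a tiny SKOS concept scheme with preferred and alternate labels, a top concept, and a broader link, then serializes it as RDF Turtle:

     from rdflib import Graph, Literal, Namespace
     from rdflib.namespace import RDF, SKOS

     EX = Namespace("http://example.org/vocab/")  # hypothetical base URI

     g = Graph()
     g.bind("skos", SKOS)

     scheme = EX["animals"]  # a concept scheme, identified by a URI
     mammal = EX["mammal"]   # a top concept at the broadest level
     dog = EX["dog"]         # a narrower concept

     g.add((scheme, RDF.type, SKOS.ConceptScheme))
     g.add((mammal, RDF.type, SKOS.Concept))
     g.add((mammal, SKOS.topConceptOf, scheme))

     # Preferred labels in more than one language, as SKOS allows
     g.add((mammal, SKOS.prefLabel, Literal("mammal", lang="en")))
     g.add((mammal, SKOS.prefLabel, Literal("Säugetier", lang="de")))

     # A narrower concept, linked into the scheme and up the hierarchy
     g.add((dog, RDF.type, SKOS.Concept))
     g.add((dog, SKOS.inScheme, scheme))
     g.add((dog, SKOS.broader, mammal))
     g.add((dog, SKOS.prefLabel, Literal("dog", lang="en")))
     g.add((dog, SKOS.altLabel, Literal("domestic dog", lang="en")))

     print(g.serialize(format="turtle"))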
  14. Stamou, G.; Chortaras, A.: Ontological query answering over semantic data (2017) 0.02
    
    Abstract
    Modern information retrieval systems advance user experience on the basis of concept-based rather than keyword-based query answering.
    Series
     Lecture Notes in Computer Science; 10370 (Information Systems and Applications, incl. Internet/Web, and HCI)
    Source
    Reasoning Web: Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures. Eds.: Ianni, G. et al
    Theme
    Semantic Web
  15. Metadata and semantics research : 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings (2016) 0.02
    
    Abstract
    This book constitutes the refereed proceedings of the 10th Metadata and Semantics Research Conference, MTSR 2016, held in Göttingen, Germany, in November 2016. The 26 full papers and 6 short papers presented were carefully reviewed and selected from 67 submissions. The papers are organized in several sessions and tracks: Digital Libraries, Information Retrieval, Linked and Social Data, Metadata and Semantics for Open Repositories, Research Information Systems and Data Infrastructures, Metadata and Semantics for Agriculture, Food and Environment, Metadata and Semantics for Cultural Collections and Applications, European and National Projects.
    Series
    Communications in computer and information science; 672
    Theme
    Semantic Web
  16. Balakrishnan, U.; Voß, J.: The Cocoda mapping tool (2015) 0.02
    
    Abstract
    Since the 1990s we have seen an explosion of information, and with it a growing need for aggregation systems that store and manage data and information. However, most information sources apply different Knowledge Organization Systems (KOS) to describe the content of the stored data. This heterogeneous mix of KOS across systems complicates access and the seamless sharing of information and knowledge. Concordances, also known as cross-concordances or terminology mappings, map different KOS to each other to improve information retrieval in such a heterogeneous landscape (Mayr 2010, Keil 2012). Mappings are also considered a valuable and essential working tool for coherent indexing with different terminologies. However, despite efforts at standardization (e.g. SKOS, ISO 25964-2, Keil 2012, Soergel 2011), there is a significant scarcity of concordances, which has prevented the establishment of uniform exchange formats as well as methods and tools for maintaining mappings and making them easily accessible. This is particularly true in the field of library classification schemes. In essence, there is a lack of infrastructure for the provision and exchange of concordances, for their management and quality assessment, and of tools that would enable the semi-automatic generation of mappings. The project "coli-conc" therefore aims to address this gap by creating the necessary infrastructure. This includes the specification of a data format for the exchange of concordances (JSKOS), the specification and implementation of web APIs to query concordance databases (JSKOS-API), and a modular web application providing uniform access to knowledge organization systems, concordances and concordance assessments (Cocoda).
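    To make the JSKOS exchange format mentioned above concrete, here is a minimal sketch of what a single DDC-to-RVK mapping could look like as a JSKOS-style record, written as a Python dict. This is an illustrative reconstruction from the abstract's description, not an excerpt from the JSKOS specification; the field names and the RVK concept URI are assumptions to be checked against the spec.

      # Hedged sketch of a JSKOS-style mapping record; field names and the
      # RVK URI are assumptions for illustration, not spec excerpts.
      mapping = {
          "fromScheme": {"notation": ["DDC"]},
          "toScheme": {"notation": ["RVK"]},
          "from": {"memberSet": [{
              "uri": "http://dewey.info/class/025.4/e23/",
              "notation": ["025.4"],
          }]},
          "to": {"memberSet": [{
              "uri": "http://example.org/rvk/AN93000",   # hypothetical URI
              "notation": ["AN 93000"],
          }]},
          "type": ["http://www.w3.org/2004/02/skos/core#closeMatch"],
      }
      print(mapping["from"]["memberSet"][0]["notation"], "->",
            mapping["to"]["memberSet"][0]["notation"])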
    The focus of the project "coli-conc" lies in the semi-automatic creation of mappings between different KOS in general, and between two important library classification schemes in particular: the Dewey Decimal Classification (DDC) and the Regensburg classification (RVK). In the year 2000, the national libraries of Germany, Austria and Switzerland adopted DDC in an endeavor to develop a nation-wide classification scheme. Historically, however, academic libraries in the German-speaking regions have used their own home-grown systems, the most prominent and popular being the RVK. With the introduction of DDC, building concordances between DDC and RVK has therefore become imperative, yet such concordances remain rare. Building comprehensive concordances between the two systems has been delayed by major challenges: the sheer size of the systems (38,000 classes in DDC and ca. 860,000 classes in RVK), the strong disparity in their structures, and variation in how concepts are perceived and represented. For any manual attempt, the challenge is compounded geometrically. Although there have been efforts on automatic mappings in recent years (OAEI Library Track 2012-2014 and e.g. Pfeffer 2013), such concordances carry the risk of inaccurate mappings, and the approaches are more suitable for generating mapping suggestions than for the automatic generation of concordances (Lauser 2008; Reiner 2010). The project "coli-conc" will facilitate the creation, evaluation, and reuse of mappings with a public collection of concordances and a web application for mapping management. The proposed presentation will give an introduction to the tools and standards created and planned in the project "coli-conc". This includes preliminary work on DDC concordances (Balakrishnan 2013), an overview of the software concept and technical architecture (Voß 2015), and a demonstration of the Cocoda web application.
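    The JSKOS-API mentioned above is specified for querying concordance databases over HTTP. The sketch below shows roughly what such a lookup might look like; the base URL and query parameter are hypothetical placeholders for illustration, not the documented coli-conc endpoints.

      import requests

      # Hypothetical JSKOS-API endpoint; URL and parameter names are
      # placeholders, not the real coli-conc API.
      BASE_URL = "https://example.org/jskos-api/mappings"

      def mappings_for_notation(notation: str) -> list:
          """Fetch JSKOS mappings whose source concept has this notation."""
          resp = requests.get(BASE_URL, params={"from": notation}, timeout=10)
          resp.raise_for_status()
          return resp.json()   # expected: a list of JSKOS mapping objects

      if __name__ == "__main__":
          for m in mappings_for_notation("025.4"):
              print(m.get("to"))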
  17. Reasoning Web : Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures (2017) 0.02
    0.018802978 = product of:
      0.08774723 = sum of:
        0.057825863 = weight(_text_:web in 3934) [ClassicSimilarity], result of:
          0.057825863 = score(doc=3934,freq=22.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.59793836 = fieldWeight in 3934, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3934)
        0.008737902 = weight(_text_:information in 3934) [ClassicSimilarity], result of:
          0.008737902 = score(doc=3934,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.16796975 = fieldWeight in 3934, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3934)
        0.021183468 = weight(_text_:retrieval in 3934) [ClassicSimilarity], result of:
          0.021183468 = score(doc=3934,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.23632148 = fieldWeight in 3934, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3934)
      0.21428572 = coord(3/14)
    
    Abstract
    This volume contains the lecture notes of the 13th Reasoning Web Summer School, RW 2017, held in London, UK, in July 2017. In 2017, the theme of the school was "Semantic Interoperability on the Web", which encompasses subjects such as data integration, open data management, reasoning over linked data, database to ontology mapping, query answering over ontologies, hybrid reasoning with rules and ontologies, and ontology-based dynamic systems. The papers of this volume focus on these topics and also address foundational reasoning techniques used in answer set programming and ontologies.
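    Several of the school's themes (reasoning over linked data, hybrid reasoning with rules and ontologies, answer set programming) reduce to rule-based inference over a fact base. The toy Python sketch below forward-chains a single Datalog-style rule, the transitive closure of a "broader" relation, over RDF-like triples to a fixpoint; it illustrates the technique only and is not taken from the lectures.

      # Rule: broader(X, Z) :- broader(X, Y), broader(Y, Z).
      triples = {
          ("dog", "broader", "mammal"),
          ("mammal", "broader", "animal"),
      }

      def forward_chain(facts: set) -> set:
          """Apply the transitivity rule until no new facts are derived."""
          derived = set(facts)
          while True:
              new = {(x, "broader", z)
                     for (x, p, y) in derived if p == "broader"
                     for (y2, p2, z) in derived if p2 == "broader" and y2 == y}
              if new <= derived:        # fixpoint reached
                  return derived
              derived |= new

      for fact in sorted(forward_chain(triples)):
          print(fact)                   # includes ("dog", "broader", "animal")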
    Content
    Neumaier, Sebastian (et al.): Data Integration for Open Data on the Web - Stamou, Giorgos (et al.): Ontological Query Answering over Semantic Data - Calì, Andrea: Ontology Querying: Datalog Strikes Back - Sequeda, Juan F.: Integrating Relational Databases with the Semantic Web: A Reflection - Rousset, Marie-Christine (et al.): Datalog Revisited for Reasoning in Linked Data - Kaminski, Roland (et al.): A Tutorial on Hybrid Answer Set Solving with clingo - Eiter, Thomas (et al.): Answer Set Programming with External Source Access - Lukasiewicz, Thomas: Uncertainty Reasoning for the Semantic Web - Calvanese, Diego (et al.): OBDA for Log Extraction in Process Mining
    LCSH
    Information storage and retrieval
    RSWK
    Ontologie <Wissensverarbeitung> / Semantic Web
    Series
    Lecture Notes in Computer Science; 10370 (Information Systems and Applications, incl. Internet/Web, and HCI)
    Subject
    Ontologie <Wissensverarbeitung> / Semantic Web
    Information storage and retrieval
    Theme
    Semantic Web
  18. Vocht, L. De: Exploring semantic relationships in the Web of Data : Semantische relaties verkennen in data op het web (2017) 0.02
    0.018600633 = product of:
      0.06510221 = sum of:
        0.016068742 = weight(_text_:wide in 4232) [ClassicSimilarity], result of:
          0.016068742 = score(doc=4232,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.122383565 = fieldWeight in 4232, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.01953125 = fieldNorm(doc=4232)
        0.034870304 = weight(_text_:web in 4232) [ClassicSimilarity], result of:
          0.034870304 = score(doc=4232,freq=32.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.36057037 = fieldWeight in 4232, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=4232)
        0.006673682 = weight(_text_:information in 4232) [ClassicSimilarity], result of:
          0.006673682 = score(doc=4232,freq=14.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.128289 = fieldWeight in 4232, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.01953125 = fieldNorm(doc=4232)
        0.007489487 = weight(_text_:retrieval in 4232) [ClassicSimilarity], result of:
          0.007489487 = score(doc=4232,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.08355226 = fieldWeight in 4232, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.01953125 = fieldNorm(doc=4232)
      0.2857143 = coord(4/14)
    
    Abstract
    After the launch of the World Wide Web, it became clear that searching documents on the Web would not be trivial. Well-known web search engines such as Google focus on searching web documents by keywords. The documents are structured and indexed to ensure that keywords match documents as accurately as possible. However, searching by keywords does not always suffice. Users often do not know exactly how to formulate the search query or which keywords guarantee retrieving the most relevant documents. Moreover, users sometimes want to browse information rather than look up something specific. It turned out that there is a need for systems that enable more interactivity and facilitate the gradual refinement of search queries to explore the Web. Users expect more from the Web, because the short keyword-based queries they pose during search do not suffice for all cases. On top of that, the Web is changing structurally. Apart from a collection of documents, the Web comprises more and more linked data: pieces of information structured so that they can be processed by machines. The semantics applied in this way allow users to indicate their search intentions to machines precisely. This is made possible by describing data following controlled vocabularies, concept lists composed by experts and published on the Web with unique identifiers. Even so, it is still not trivial to explore data on the Web: there is a large variety of vocabularies, and various data sources use different terms to identify the same concepts.
    This PhD thesis describes how to effectively explore linked data on the Web. The main focus is on scenarios where users want to discover relationships between resources rather than find out more about something specific. Searching for a specific document or piece of information fits in the theoretical framework of information retrieval and is associated with exploratory search. Exploratory search goes beyond "looking up something": users seek deeper understanding, further investigation, or navigation of the initial search results. The ideas behind exploratory search and querying linked data merge when it comes to the way knowledge is represented and indexed by machines - how data is structured and stored for optimal searchability. Queries and information should be aligned so that searches also reveal connections between results. This implies that they take into account the same semantic entities relevant at that moment. To realize this, we research three techniques, each evaluated in an experimental set-up to assess how well it succeeds in its goals. In the end, the techniques are applied to a practical use case that forms a bridge between the Web and the use of digital libraries in scientific research. Our first technique focuses on the interactive visualization of search results. Linked data resources can be brought into relation with each other at will, which leads to complex and diverse graph structures. Our technique facilitates navigation and supports a workflow that starts from a broad overview of the data and allows narrowing down to the desired level of detail, and then broadening again. To validate the flow, two visualizations were implemented and presented to test users. The users judged the usability of the visualizations, how the visualizations fit in the workflow, and to what degree their features seemed useful for the exploration of linked data.
    When we speak about finding relationships between resources, it is necessary to dive deeper into the structure. The graph structure of linked data, where semantics give meaning to the relationships between resources, enables the execution of pathfinding algorithms. The assigned weights and heuristics are base components of such algorithms and ultimately determine which resources are included in a path, and in which order. These paths explain indirect connections between resources. Our third technique proposes an algorithm that optimizes the choice of resources in terms of serendipity. Some optimizations guard the consistency of candidate paths: the coherence of consecutive connections is maximized to avoid trivial and overly arbitrary paths. The implementation uses the A* algorithm, the de facto reference for heuristically optimized minimal-cost paths. The effectiveness of paths was measured with common automatic metrics and with surveys in which users could indicate their preference among paths generated in different ways. Finally, all our techniques are applied to a use case about publications in digital libraries, where they are aligned with information about scientific conferences and researchers. This use case is a practical example because the different aspects of exploratory search come together in it; in fact, the techniques also evolved from the experience of implementing the use case. Practical details about the semantic model are explained, and the implementation of the search system is clarified module by module. The evaluation positions the result, a prototype of a tool to explore scientific publications, researchers and conferences, next to some important alternatives.
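    Since the abstract names the A* algorithm as the basis of the pathfinding technique, a minimal self-contained A* sketch follows. The toy graph, the unit edge costs and the zero heuristic (which makes A* behave like Dijkstra) are assumptions for illustration, not the weighting scheme of the thesis.

      import heapq

      # Toy linked-data graph as an adjacency list: node -> [(neighbor, cost)].
      GRAPH = {
          "dbr:Tim_Berners-Lee": [("dbr:World_Wide_Web", 1.0)],
          "dbr:World_Wide_Web": [("dbr:Semantic_Web", 1.0), ("dbr:HTML", 1.0)],
          "dbr:Semantic_Web": [("dbr:Linked_data", 1.0)],
          "dbr:HTML": [],
          "dbr:Linked_data": [],
      }

      def a_star(start, goal, heuristic=lambda n: 0.0):
          """A* search; with a zero heuristic this degenerates to Dijkstra."""
          frontier = [(heuristic(start), 0.0, start, [start])]
          best_cost = {start: 0.0}
          while frontier:
              _, cost, node, path = heapq.heappop(frontier)
              if node == goal:
                  return path, cost
              for neighbor, edge_cost in GRAPH.get(node, []):
                  new_cost = cost + edge_cost
                  if new_cost < best_cost.get(neighbor, float("inf")):
                      best_cost[neighbor] = new_cost
                      heapq.heappush(frontier,
                                     (new_cost + heuristic(neighbor), new_cost,
                                      neighbor, path + [neighbor]))
          return None, float("inf")

      path, cost = a_star("dbr:Tim_Berners-Lee", "dbr:Linked_data")
      print(path, cost)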
    Theme
    Semantic Web
  19. Mao, M.: Ontology mapping : towards semantic interoperability in distributed and heterogeneous environments (2008) 0.02
    0.018016124 = product of:
      0.06305643 = sum of:
        0.025709987 = weight(_text_:wide in 4659) [ClassicSimilarity], result of:
          0.025709987 = score(doc=4659,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.1958137 = fieldWeight in 4659, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=4659)
        0.013948122 = weight(_text_:web in 4659) [ClassicSimilarity], result of:
          0.013948122 = score(doc=4659,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.14422815 = fieldWeight in 4659, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=4659)
        0.011415146 = weight(_text_:information in 4659) [ClassicSimilarity], result of:
          0.011415146 = score(doc=4659,freq=16.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.21943474 = fieldWeight in 4659, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=4659)
        0.0119831795 = weight(_text_:retrieval in 4659) [ClassicSimilarity], result of:
          0.0119831795 = score(doc=4659,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.13368362 = fieldWeight in 4659, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=4659)
      0.2857143 = coord(4/14)
    
    Abstract
    This dissertation studies ontology mapping: the problem of finding semantic correspondences between similar elements of different ontologies. In the dissertation, elements denote classes or properties of ontologies. The goal of this research is to use ontology mapping to make heterogeneous information more accessible. The World Wide Web (WWW) is now widely used as a universal medium for information exchange. Semantic interoperability among different information systems in the WWW is limited due to information heterogeneity and the non-semantic nature of HTML and URLs. Ontologies have been suggested as a way to solve the problem of information heterogeneity by providing formal, explicit definitions of data and the ability to reason over related concepts. Given that no universal ontology exists for the WWW, work has focused on finding semantic correspondences between similar elements of different ontologies, i.e., ontology mapping. Ontology mapping can be done either by hand or using automated tools. Manual mapping becomes impractical as the size and complexity of ontologies increase. Fully or semi-automated mapping approaches have been examined in several research studies; previous approaches include analyzing linguistic information of elements in ontologies, treating ontologies as structural graphs, applying heuristic rules and machine learning techniques, and using probabilistic and reasoning methods. In this dissertation, two generic ontology mapping approaches are proposed. One is the PRIOR+ approach, which utilizes both information retrieval and artificial intelligence techniques in the context of ontology mapping. The other is a non-instance learning-based approach, which experimentally explores machine learning algorithms to solve the ontology mapping problem without requiring any instances. The results of PRIOR+ on different tests at the OAEI ontology matching campaign 2007 are encouraging. The non-instance learning-based approach has shown potential for solving the ontology mapping problem on the OAEI benchmark tests.
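    As a deliberately naive illustration of the information-retrieval side of such approaches, the sketch below matches elements of two toy ontologies by cosine similarity over character-trigram profiles of their labels. This baseline is my own construction for illustration; it is not the PRIOR+ algorithm.

      import math
      from collections import Counter

      def trigrams(label: str) -> Counter:
          """Character-trigram profile of a label, with boundary padding."""
          s = f"  {label.lower()} "
          return Counter(s[i:i + 3] for i in range(len(s) - 2))

      def cosine(a: Counter, b: Counter) -> float:
          dot = sum(a[t] * b[t] for t in a)
          norm = (math.sqrt(sum(v * v for v in a.values()))
                  * math.sqrt(sum(v * v for v in b.values())))
          return dot / norm if norm else 0.0

      # Toy ontology element labels (illustrative, not from the dissertation).
      onto_a = ["Author", "Publication", "Conference"]
      onto_b = ["Writer", "PublishedWork", "ConferenceEvent"]

      for la in onto_a:
          best = max(onto_b, key=lambda lb: cosine(trigrams(la), trigrams(lb)))
          print(f"{la} -> {best} ({cosine(trigrams(la), trigrams(best)):.2f})")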
    Content
    Submitted to the Graduate Faculty of the School of Information Sciences in partial fulfillment of the requirements for the degree of Doctor of Philosophy.
  20. Woldering, B.: ¬Die Europäische Digitale Bibliothek nimmt Gestalt an (2007) 0.02
    0.017700171 = product of:
      0.082600795 = sum of:
        0.07321172 = weight(_text_:bibliothek in 2439) [ClassicSimilarity], result of:
          0.07321172 = score(doc=2439,freq=22.0), product of:
            0.121660605 = queryWeight, product of:
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.029633347 = queryNorm
            0.60177016 = fieldWeight in 2439, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.03125 = fieldNorm(doc=2439)
        0.0040358636 = weight(_text_:information in 2439) [ClassicSimilarity], result of:
          0.0040358636 = score(doc=2439,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.0775819 = fieldWeight in 2439, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=2439)
        0.0053532133 = product of:
          0.016059639 = sum of:
            0.016059639 = weight(_text_:22 in 2439) [ClassicSimilarity], result of:
              0.016059639 = score(doc=2439,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.15476047 = fieldWeight in 2439, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2439)
          0.33333334 = coord(1/3)
      0.21428572 = coord(3/14)
    
    Abstract
    In autumn 2007, the development of the European Digital Library was put on a solid footing: with the European Digital Library Foundation, a legally competent organization is now available as the body responsible for the European Digital Library. It initially acts as the steering body for the EU-funded project EDLnet and will successively take over the tasks necessary for building and further developing the European Digital Library. The founding members are ten European umbrella organizations from the fields of libraries, archives, audiovisual collections and museums. The board members are the chair Elisabeth Niggemann (CENL), the vice-chair Martine de Boisdeffre (EURBICA), the treasurer Edwin van Huis (FIAT), and Wim van Drimmelen, Director General of the Koninklijke Bibliotheek, the national library of the Netherlands, which hosts the European Digital Library. The prototype for the European Digital Library is being developed within the EDLnet project. The first version of the prototype was presented at the international conference »One more step towards the European Digital Library«, held on 31 January and 1 February 2008 at the Deutsche Nationalbibliothek (DNB) in Frankfurt am Main. The final version of the prototype will be presented in Paris in November 2008 by Viviane Reding, the EU Commissioner for Information Society and Media. This prototype will offer direct access to at least two million digitized books, photographs, maps, sound recordings, films and archival materials from Europe's libraries, archives, audiovisual collections and museums.
    Content
    Includes, among other things: "Interoperability as the centerpiece - Technical and semantic interoperability thus form the centerpiece of a functioning European Digital Library. But before ways can be found for how something can work, it must first be determined what is supposed to work. Here, user requirements are the measure of all things, which is why an entire work package in EDLnet deals with the user perspective, user requirements and the usability of the European Digital Library, formulates requirements, and has them implemented in the work package »interoperability«. To decide which content is presented and how, however, not only technical and semantic questions need to be resolved; a business model must also be developed that defines what the participating institutions and organizations contribute to the European Digital Library, in what form and under what conditions. The business model will in turn affect technical and semantic interoperability and delivers the requirements derived from it to the corresponding work package for implementation. The EDLnet project thus operates a continuous working cycle in which requirements for the European Digital Library are formulated, passed on to the interoperability work package, and implemented there. The solution is in turn reported back to the work packages »user perspective« and »business model«, tested and commented on, and technical solutions are again sought for the comments. This is a form of rapid prototyping: the functionalities are extended step by step according to feedback from future users and the project partners, while the prototype is kept running at all times and developed further to production maturity. Thanks to the constant feedback, this promises quick results with a low risk of misdevelopment."
    Date
    22. 2.2009 19:10:56
    Theme
    Information Gateway

Languages

  • e 186
  • d 54
  • pt 1

Types

  • a 174
  • el 64
  • m 15
  • x 8
  • s 7
  • r 5
  • n 2
  • p 2