Search (174 results, page 1 of 9)

  • × theme_ss:"Semantische Interoperabilität"
  • × type_ss:"a"
  1. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.25
    0.25493872 = product of:
      0.71382844 = sum of:
        0.05490988 = product of:
          0.16472964 = sum of:
            0.16472964 = weight(_text_:3a in 306) [ClassicSimilarity], result of:
              0.16472964 = score(doc=306,freq=2.0), product of:
                0.25123185 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.029633347 = queryNorm
                0.65568775 = fieldWeight in 306, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=306)
          0.33333334 = coord(1/3)
        0.16472964 = weight(_text_:2f in 306) [ClassicSimilarity], result of:
          0.16472964 = score(doc=306,freq=2.0), product of:
            0.25123185 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.029633347 = queryNorm
            0.65568775 = fieldWeight in 306, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0546875 = fieldNorm(doc=306)
        0.16472964 = weight(_text_:2f in 306) [ClassicSimilarity], result of:
          0.16472964 = score(doc=306,freq=2.0), product of:
            0.25123185 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.029633347 = queryNorm
            0.65568775 = fieldWeight in 306, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0546875 = fieldNorm(doc=306)
        0.16472964 = weight(_text_:2f in 306) [ClassicSimilarity], result of:
          0.16472964 = score(doc=306,freq=2.0), product of:
            0.25123185 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.029633347 = queryNorm
            0.65568775 = fieldWeight in 306, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0546875 = fieldNorm(doc=306)
        0.16472964 = weight(_text_:2f in 306) [ClassicSimilarity], result of:
          0.16472964 = score(doc=306,freq=2.0), product of:
            0.25123185 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.029633347 = queryNorm
            0.65568775 = fieldWeight in 306, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0546875 = fieldNorm(doc=306)
      0.35714287 = coord(5/14)
    
    Content
    Cf.: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
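The explain tree above is Lucene's ClassicSimilarity breakdown: each matching term contributes queryWeight x fieldWeight, with queryWeight = idf x queryNorm and fieldWeight = tf x idf x fieldNorm, and the clause sum is then scaled by a coordination factor. A minimal Python sketch, using the figures displayed above (the formulas are the standard Lucene ones, not something stated in the record itself), reproduces the 0.25 score of the first hit; the repeated "2f" clauses presumably stem from URL-encoded slashes (%2F) in the indexed content note.

```python
import math

# Reproducing the first result's ClassicSimilarity score (0.25493872)
# from the numbers shown in the explain tree above.
num_docs   = 44218        # maxDocs
doc_freq   = 24           # docFreq of the matched term
freq       = 2.0          # termFreq within the field
field_norm = 0.0546875    # fieldNorm(doc=306)
query_norm = 0.029633347  # queryNorm, taken from the explain output

idf = 1.0 + math.log(num_docs / (doc_freq + 1.0))   # ~8.478, as displayed
tf  = math.sqrt(freq)                                # 1.4142135

query_weight = idf * query_norm                      # ~0.25123185
field_weight = tf * idf * field_norm                 # ~0.65568775
term_score   = query_weight * field_weight           # ~0.16472964

# One "3a" clause (scaled by an inner coord of 1/3) plus four "2f" clauses,
# then the outer coord factor 5/14 because 5 of 14 query clauses matched.
total = (term_score * (1.0 / 3.0) + 4 * term_score) * (5.0 / 14.0)
print(total)   # ~0.2549, matching the 0.25493872 shown for result 1
```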
  2. Kempf, A.O.; Baum, K.: Von der Ein-Datenbank-Suche zum verteilten Suchszenario : Zum Aufbau von Crosskonkordanzen zwischen der Fachklassifikation Sozialwissenschaften und der Dewey-Dezimalklassifikation (2013) 0.04
    0.04418917 = product of:
      0.20621613 = sum of:
        0.055185407 = weight(_text_:bibliothek in 1654) [ClassicSimilarity], result of:
          0.055185407 = score(doc=1654,freq=2.0), product of:
            0.121660605 = queryWeight, product of:
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.029633347 = queryNorm
            0.4536013 = fieldWeight in 1654, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.078125 = fieldNorm(doc=1654)
        0.010089659 = weight(_text_:information in 1654) [ClassicSimilarity], result of:
          0.010089659 = score(doc=1654,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.19395474 = fieldWeight in 1654, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=1654)
        0.14094105 = weight(_text_:kongress in 1654) [ClassicSimilarity], result of:
          0.14094105 = score(doc=1654,freq=2.0), product of:
            0.19442701 = queryWeight, product of:
              6.5610886 = idf(docFreq=169, maxDocs=44218)
              0.029633347 = queryNorm
            0.72490466 = fieldWeight in 1654, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.5610886 = idf(docFreq=169, maxDocs=44218)
              0.078125 = fieldNorm(doc=1654)
      0.21428572 = coord(3/14)
    
    Content
    Slides of a presentation, 5th Congress Bibliothek & Information Deutschland, Leipzig, 11-14 March 2013.
  3. Lösse, M.; Svensson, L.: "Classification at a Crossroad" : Internationales UDC-Seminar 2009 in Den Haag, Niederlande (2010) 0.04
    0.04354609 = product of:
      0.12192906 = sum of:
        0.03856498 = weight(_text_:wide in 4379) [ClassicSimilarity], result of:
          0.03856498 = score(doc=4379,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.29372054 = fieldWeight in 4379, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=4379)
        0.020922182 = weight(_text_:web in 4379) [ClassicSimilarity], result of:
          0.020922182 = score(doc=4379,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.21634221 = fieldWeight in 4379, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4379)
        0.033111244 = weight(_text_:bibliothek in 4379) [ClassicSimilarity], result of:
          0.033111244 = score(doc=4379,freq=2.0), product of:
            0.121660605 = queryWeight, product of:
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.029633347 = queryNorm
            0.27216077 = fieldWeight in 4379, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.046875 = fieldNorm(doc=4379)
        0.01797477 = weight(_text_:retrieval in 4379) [ClassicSimilarity], result of:
          0.01797477 = score(doc=4379,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.20052543 = fieldWeight in 4379, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=4379)
        0.01135588 = product of:
          0.03406764 = sum of:
            0.03406764 = weight(_text_:22 in 4379) [ClassicSimilarity], result of:
              0.03406764 = score(doc=4379,freq=4.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.32829654 = fieldWeight in 4379, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4379)
          0.33333334 = coord(1/3)
      0.35714287 = coord(5/14)
    
    Abstract
    On 29 and 30 October 2009 the second international UDC seminar on the topic "Classification at a Crossroad" took place at the Royal Library in The Hague. As with the first conference of this kind in 2007, it was organized by the UDC Consortium (UDCC). This year's event focused on indexing the World Wide Web by making better use of classifications (in particular, of course, the UDC), including user-friendly representations of information and knowledge. Standards, new technologies and services, semantic search and multilingual access also played a role. 135 participants from 35 countries came to The Hague for the event. With 22 presentations from 14 different countries the programme covered a broad range, with the United Kingdom most strongly represented at five contributions. On both conference days the thematic emphasis was set by the opening keynotes, which were then explored further in a total of six thematic sessions.
    Date
    22. 1.2010 15:06:54
    Theme
    Klassifikationssysteme im Online-Retrieval
  4. Krause, J.: Shell Model, Semantic Web and Web Information Retrieval (2006) 0.04
    0.035616066 = product of:
      0.12465622 = sum of:
        0.032137483 = weight(_text_:wide in 6061) [ClassicSimilarity], result of:
          0.032137483 = score(doc=6061,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.24476713 = fieldWeight in 6061, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6061)
        0.052305456 = weight(_text_:web in 6061) [ClassicSimilarity], result of:
          0.052305456 = score(doc=6061,freq=18.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.5408555 = fieldWeight in 6061, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6061)
        0.014268933 = weight(_text_:information in 6061) [ClassicSimilarity], result of:
          0.014268933 = score(doc=6061,freq=16.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.27429342 = fieldWeight in 6061, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6061)
        0.025944345 = weight(_text_:retrieval in 6061) [ClassicSimilarity], result of:
          0.025944345 = score(doc=6061,freq=6.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.28943354 = fieldWeight in 6061, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6061)
      0.2857143 = coord(4/14)
    
    Abstract
    The mid-1990s were marked by growing enthusiasm for the possibilities of the WWW, which has only recently given way - at least with respect to scientific information - to a more differentiated weighing of its advantages and disadvantages. Web Information Retrieval originated as a specialized discipline with great commercial significance (for an overview see Lewandowski 2005). Besides the new technological infrastructure that enables the indexing and searching (in seconds) of unimaginable amounts of data worldwide, new assessment processes for the ranking of search results have been developed which exploit the link structure of the Web. These are the main innovation with respect to the traditional "mother discipline" of Information Retrieval. From the beginning, the link structures of Web pages have been applied in commercial search engines in a wide array of variations. From the perspective of scientific information, link-topology-based approaches were in essence trying to solve a self-created problem: it quickly became clear that the openness of the Web led to a hitherto unknown increase in available information, but this also made the quality of the Web pages searched - and with it the relevance of the results - a problem. The gatekeeper function of traditional information providers, which narrows every user query down to high-quality sources, was lacking. Therefore, the recognition of the "authoritativeness" of Web pages by general search engines such as Google was one of the most important factors in their success.
    Source
    Information und Sprache: Beiträge zu Informationswissenschaft, Computerlinguistik, Bibliothekswesen und verwandten Fächern. Festschrift für Harald H. Zimmermann. Herausgegeben von Ilse Harms, Heinz-Dirk Luckhardt und Hans W. Giessen
    Theme
    Semantic Web
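Krause's abstract above credits link-topology-based measures of "authoritativeness" for the success of general web search engines. PageRank is the best-known such measure; the following sketch (a toy graph and the common damping factor of 0.85, not anything taken from the article) shows the basic power iteration.

```python
# Minimal PageRank power iteration over a toy link graph
# (illustrative only; the article discusses the idea, not this code).
links = {                      # page -> pages it links to
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
pages = list(links)
d = 0.85                       # damping factor, a common default
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):            # iterate until the ranks stabilise
    new = {}
    for p in pages:
        incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        new[p] = (1 - d) / len(pages) + d * incoming
    rank = new

print(sorted(rank.items(), key=lambda kv: -kv[1]))  # "c" ends up most authoritative
```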
  5. Niggemann, E.: Wer suchet, der findet? : Verbesserung der inhaltlichen Suchmöglichkeiten im Informationssystem Der Deutschen Bibliothek (2006) 0.03
    0.026833802 = product of:
      0.12522441 = sum of:
        0.051252894 = weight(_text_:elektronische in 5812) [ClassicSimilarity], result of:
          0.051252894 = score(doc=5812,freq=2.0), product of:
            0.14013545 = queryWeight, product of:
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.029633347 = queryNorm
            0.36573824 = fieldWeight in 5812, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5812)
        0.066908754 = weight(_text_:bibliothek in 5812) [ClassicSimilarity], result of:
          0.066908754 = score(doc=5812,freq=6.0), product of:
            0.121660605 = queryWeight, product of:
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.029633347 = queryNorm
            0.54996234 = fieldWeight in 5812, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5812)
        0.0070627616 = weight(_text_:information in 5812) [ClassicSimilarity], result of:
          0.0070627616 = score(doc=5812,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.13576832 = fieldWeight in 5812, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5812)
      0.21428572 = coord(3/14)
    
    Abstract
    Electronic library catalogues and bibliographies have lost their monopoly on searching for books, articles, musical works and the like. Global search engines are strong competitors, and libraries today have to plan so that their services remain attractive tomorrow. Die Deutsche Bibliothek (DDB) will extend its traditional catalogue search into a global, network-based information system that seeks to combine the advantages of neutral, quality-based catalogue searching with the advantages of modern search engines. This contribution deals with the improvement of the subject search options in the information system of Die Deutsche Bibliothek. Further lines of development are only briefly sketched in the outlook.
    Source
    Information und Sprache: Beiträge zu Informationswissenschaft, Computerlinguistik, Bibliothekswesen und verwandten Fächern. Festschrift für Harald H. Zimmermann. Herausgegeben von Ilse Harms, Heinz-Dirk Luckhardt und Hans W. Giessen
  6. Tennis, J.T.: Versioning concept schemes for persistent retrieval (2006) 0.02
    0.020908974 = product of:
      0.073181406 = sum of:
        0.025709987 = weight(_text_:wide in 1956) [ClassicSimilarity], result of:
          0.025709987 = score(doc=1956,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.1958137 = fieldWeight in 1956, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=1956)
        0.019725623 = weight(_text_:web in 1956) [ClassicSimilarity], result of:
          0.019725623 = score(doc=1956,freq=4.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.2039694 = fieldWeight in 1956, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=1956)
        0.0069903214 = weight(_text_:information in 1956) [ClassicSimilarity], result of:
          0.0069903214 = score(doc=1956,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.1343758 = fieldWeight in 1956, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=1956)
        0.020755477 = weight(_text_:retrieval in 1956) [ClassicSimilarity], result of:
          0.020755477 = score(doc=1956,freq=6.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.23154683 = fieldWeight in 1956, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=1956)
      0.2857143 = coord(4/14)
    
    Abstract
    Things change. Words change, meaning changes and use changes both words and meaning. In information access systems this means concept schemes such as thesauri or classification schemes change. They always have. Concept schemes that have survived have evolved over time, moving from one version, often called an edition, to the next. If we want to manage how words and meanings - and as a consequence use - change in an effective manner, and if we want to be able to search across versions of concept schemes, we have to track these changes. This paper explores how we might expand SKOS, a World Wide Web Consortium (W3C) draft recommendation, in order to do that kind of tracking. The Simple Knowledge Organization System (SKOS) Core Guide is sponsored by the Semantic Web Best Practices and Deployment Working Group. The second draft, edited by Alistair Miles and Dan Brickley, was issued in November 2005. SKOS is a "model for expressing the basic structure and content of concept schemes such as thesauri, classification schemes, subject heading lists, taxonomies, folksonomies, other types of controlled vocabulary and also concept schemes embedded in glossaries and terminologies" in RDF. How SKOS handles versioning in concept schemes is an open issue. The current draft guide suggests using OWL and DCTERMS as mechanisms for concept scheme revision. As it stands, an editor of a concept scheme can make notes or declare in OWL that more than one version exists. This paper adds to the SKOS Core by introducing a tracking system for changes in concept schemes. We call this tracking system vocabulary ontogeny. Ontogeny is a biological term for the development of an organism during its lifetime. Here we use the ontogeny metaphor to describe how vocabularies change over their lifetime. Our purpose here is to create a conceptual mechanism that will track these changes and in so doing enhance information retrieval and prevent document loss through versioning, thereby enabling persistent retrieval.
    Source
    Bulletin of the American Society for Information Science and Technology. 33(2006) no.5, S.xx-xx
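Tennis's abstract notes that the SKOS Core draft of the time only offered OWL and DCTERMS as the means to declare that revisions of a concept scheme exist. A minimal rdflib sketch of that baseline, with made-up example URIs, is shown below; it is the status quo the paper criticises, not the "vocabulary ontogeny" mechanism it proposes.

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS, OWL, RDF, SKOS

g = Graph()
v1 = URIRef("http://example.org/scheme/2005-edition")  # hypothetical earlier edition
v2 = URIRef("http://example.org/scheme/2006-edition")  # hypothetical later edition

for scheme, version in ((v1, "1.0"), (v2, "2.0")):
    g.add((scheme, RDF.type, SKOS.ConceptScheme))
    g.add((scheme, OWL.versionInfo, Literal(version)))

# DCTERMS can state that one edition supersedes the other ...
g.add((v2, DCTERMS.replaces, v1))
g.add((v1, DCTERMS.isReplacedBy, v2))
# ... but nothing above records which concepts changed between editions,
# which is the gap the paper's "vocabulary ontogeny" tracking aims to fill.

print(g.serialize(format="turtle"))
```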
  7. Smith, A.: Simple Knowledge Organization System (SKOS) (2022) 0.02
    0.020749543 = product of:
      0.0968312 = sum of:
        0.054539118 = weight(_text_:wide in 1094) [ClassicSimilarity], result of:
          0.054539118 = score(doc=1094,freq=4.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.4153836 = fieldWeight in 1094, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=1094)
        0.036238287 = weight(_text_:web in 1094) [ClassicSimilarity], result of:
          0.036238287 = score(doc=1094,freq=6.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.37471575 = fieldWeight in 1094, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1094)
        0.0060537956 = weight(_text_:information in 1094) [ClassicSimilarity], result of:
          0.0060537956 = score(doc=1094,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.116372846 = fieldWeight in 1094, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1094)
      0.21428572 = coord(3/14)
    
    Abstract
    SKOS (Simple Knowledge Organization System) is a recommendation from the World Wide Web Consortium (W3C) for representing controlled vocabularies, taxonomies, thesauri, classifications, and similar systems for organizing and indexing information as linked data elements in the Semantic Web, using the Resource Description Framework (RDF). The SKOS data model is centered on "concepts", which can have preferred and alternate labels in any language as well as other metadata, and which are identified by addresses on the World Wide Web (URIs). Concepts are grouped into hierarchies through "broader" and "narrower" relations, with "top concepts" at the broadest conceptual level. Concepts are also organized into "concept schemes", also identified by URIs. Other relations, mappings, and groupings are also supported. This article discusses the history of the development of SKOS and provides notes on adoption, uses, and limitations.
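As a concrete illustration of the data model summarised above (concepts identified by URIs, preferred and alternative labels, broader/narrower relations, and concept schemes), here is a small rdflib sketch with invented example URIs and labels.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/vocab/")   # hypothetical vocabulary namespace
g = Graph()

scheme = EX.animals
g.add((scheme, RDF.type, SKOS.ConceptScheme))

mammal, dog = EX.mammal, EX.dog
for concept in (mammal, dog):
    g.add((concept, RDF.type, SKOS.Concept))
    g.add((concept, SKOS.inScheme, scheme))

g.add((mammal, SKOS.prefLabel, Literal("Mammals", lang="en")))
g.add((mammal, SKOS.topConceptOf, scheme))        # broadest conceptual level
g.add((dog, SKOS.prefLabel, Literal("Dogs", lang="en")))
g.add((dog, SKOS.altLabel, Literal("Canis familiaris", lang="la")))
g.add((dog, SKOS.broader, mammal))                # hierarchy via broader/narrower
g.add((mammal, SKOS.narrower, dog))

print(g.serialize(format="turtle"))
```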
  8. Stamou, G.; Chortaras, A.: Ontological query answering over semantic data (2017) 0.02
    0.019537285 = product of:
      0.09117399 = sum of:
        0.05579249 = weight(_text_:web in 3926) [ClassicSimilarity], result of:
          0.05579249 = score(doc=3926,freq=8.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.5769126 = fieldWeight in 3926, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=3926)
        0.011415146 = weight(_text_:information in 3926) [ClassicSimilarity], result of:
          0.011415146 = score(doc=3926,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.21943474 = fieldWeight in 3926, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=3926)
        0.023966359 = weight(_text_:retrieval in 3926) [ClassicSimilarity], result of:
          0.023966359 = score(doc=3926,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.26736724 = fieldWeight in 3926, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=3926)
      0.21428572 = coord(3/14)
    
    Abstract
    Modern information retrieval systems advance user experience on the basis of concept-based rather than keyword-based query answering.
    Series
    Lecture Notes in Computer Science; 10370 (Information Systems and Applications, incl. Internet/Web, and HCI)
    Source
    Reasoning Web: Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures. Eds.: Ianni, G. et al
    Theme
    Semantic Web
  9. Woldering, B.: Die Europäische Digitale Bibliothek nimmt Gestalt an (2007) 0.02
    0.017700171 = product of:
      0.082600795 = sum of:
        0.07321172 = weight(_text_:bibliothek in 2439) [ClassicSimilarity], result of:
          0.07321172 = score(doc=2439,freq=22.0), product of:
            0.121660605 = queryWeight, product of:
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.029633347 = queryNorm
            0.60177016 = fieldWeight in 2439, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.03125 = fieldNorm(doc=2439)
        0.0040358636 = weight(_text_:information in 2439) [ClassicSimilarity], result of:
          0.0040358636 = score(doc=2439,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.0775819 = fieldWeight in 2439, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=2439)
        0.0053532133 = product of:
          0.016059639 = sum of:
            0.016059639 = weight(_text_:22 in 2439) [ClassicSimilarity], result of:
              0.016059639 = score(doc=2439,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.15476047 = fieldWeight in 2439, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2439)
          0.33333334 = coord(1/3)
      0.21428572 = coord(3/14)
    
    Abstract
    The construction of the European Digital Library was put on a solid footing in autumn 2007: with the European Digital Library Foundation, a legally capable organization is now in place as the body responsible for the European Digital Library. Initially it acts as the steering body for the EU-funded project EDLnet and will gradually take over the tasks required for building and further developing the European Digital Library. The founding members are ten European umbrella organizations from the fields of libraries, archives, audiovisual collections and museums. The board members are the chair Elisabeth Niggemann (CENL), the vice-chair Martine de Boisdeffre (EURBICA), the treasurer Edwin van Huis (FIAT) and Wim van Drimmelen, director general of the Koninklijke Bibliotheek, the national library of the Netherlands, which hosts the European Digital Library. The prototype for the European Digital Library is being developed within the EDLnet project. The first version of the prototype was presented at the international conference "One more step towards the European Digital Library", which took place on 31 January and 1 February 2008 at the Deutsche Nationalbibliothek (DNB) in Frankfurt am Main. The final version of the prototype will be presented in Paris in November 2008 by the EU Commissioner for Information Society and Media, Viviane Reding. This prototype will offer direct access to at least two million digitized books, photographs, maps, sound recordings, films and archival materials from Europe's libraries, archives, audiovisual collections and museums.
    Content
    Darin u.a. "Interoperabilität als Kernstück - Technische und semantische Interoperabilität bilden somit das Kernstück für das Funktionieren der Europäischen Digitalen Bibliothek. Doch bevor Wege gefunden werden können, wie etwas funktionieren kann, muss zunächst einmal festgelegt werden, was funktionieren soll. Hierfür sind die Nutzeranforderungen das Maß aller Dinge, weshalb sich ein ganzes Arbeitspaket in EDLnet mit der Nutzersicht, den Nutzeranforderungen und der Nutzbarkeit der Europäischen Digitalen Bibliothek befasst, Anforderungen formuliert und diese im Arbeitspaket »Interoperabilität« umgesetzt werden. Für die Entscheidung, welche Inhalte wie präsentiert werden, sind jedoch nicht allein technische und semantische Fragestellungen zu klären, sondern auch ein Geschäftsmodell zu entwickeln, das festlegt, was die beteiligten Institutionen und Organisationen in welcher Form zu welchen Bedingungen zur Europäischen Digitalen Bibliothek beitragen. Auch das Geschäftsmodell wird Auswirkungen auf technische und semantische Interoperabilität haben und liefert die daraus abgeleiteten Anforderungen zur Umsetzung an das entsprechende Arbeitspaket. Im EDLnet-Projekt ist somit ein ständiger Arbeitskreislauf installiert, in welchem die Anforderungen an die Europäische Digitale Bibliothek formuliert, an das Interoperabilitäts-Arbeitspaket weitergegeben und dort umgesetzt werden. Diese Lösung wird wiederum an die Arbeitspakete »Nutzersicht« und »Geschäftsmodell« zurückgemeldet, getestet, kommentiert und für die Kommentare wiederum technische Lösungen gesucht. Dies ist eine Form des »rapid prototyping«, das hier zur Anwendung kommt, d. h. die Funktionalitäten werden schrittweise gemäß des Feedbacks der zukünftigen Nutzer sowie der Projektpartner erweitert und gleichzeitig wird der Prototyp stets lauffähig gehalten und bis zur Produktreife weiterentwickelt. Hierdurch verspricht man sich ein schnelles Ergebnis bei geringem Risiko einer Fehlentwicklung durch das ständige Feedback."
    Date
    22. 2.2009 19:10:56
    Theme
    Information Gateway
  10. Stempfhuber, M.; Zapilko, B.: Modelling text-fact-integration in digital libraries (2009) 0.02
    0.016611824 = product of:
      0.077521846 = sum of:
        0.03856498 = weight(_text_:wide in 3393) [ClassicSimilarity], result of:
          0.03856498 = score(doc=3393,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.29372054 = fieldWeight in 3393, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=3393)
        0.013536699 = weight(_text_:information in 3393) [ClassicSimilarity], result of:
          0.013536699 = score(doc=3393,freq=10.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.2602176 = fieldWeight in 3393, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3393)
        0.025420163 = weight(_text_:retrieval in 3393) [ClassicSimilarity], result of:
          0.025420163 = score(doc=3393,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.2835858 = fieldWeight in 3393, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=3393)
      0.21428572 = coord(3/14)
    
    Abstract
    Digital Libraries currently face the challenge of integrating many different types of research information (e.g. publications, primary data, experts' profiles, institutional profiles, project information etc.) according to their scientific users' needs. To date, no general, integrated model for knowledge organization and retrieval in Digital Libraries exists. This causes the problem of structural and semantic heterogeneity due to the wide range of metadata standards, indexing vocabularies and indexing approaches used for different types of information. The research presented in this paper focuses on areas in which activities are being undertaken in the field of Digital Libraries in order to treat semantic interoperability problems. We present a model for the integrated retrieval of factual and textual data which combines multiple approaches to semantic interoperability and sets them in context. Embedded in the research cycle, traditional content indexing methods for publications meet the newer, but rarely used, ontology-based approaches which seem to be better suited for representing complex information such as that contained in survey data. The benefits of our model are (1) easy re-use of available knowledge organisation systems and (2) reduced efforts for domain modelling with ontologies.
    Theme
    Information Gateway
  11. Gabler, S.: Thesauri - a Toolbox for Information Retrieval (2023) 0.02
    0.01632566 = product of:
      0.07618641 = sum of:
        0.044148326 = weight(_text_:bibliothek in 114) [ClassicSimilarity], result of:
          0.044148326 = score(doc=114,freq=2.0), product of:
            0.121660605 = queryWeight, product of:
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.029633347 = queryNorm
            0.36288103 = fieldWeight in 114, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.0625 = fieldNorm(doc=114)
        0.008071727 = weight(_text_:information in 114) [ClassicSimilarity], result of:
          0.008071727 = score(doc=114,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.1551638 = fieldWeight in 114, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=114)
        0.023966359 = weight(_text_:retrieval in 114) [ClassicSimilarity], result of:
          0.023966359 = score(doc=114,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.26736724 = fieldWeight in 114, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=114)
      0.21428572 = coord(3/14)
    
    Source
    Bibliothek: Forschung und Praxis. 47(2023) H.2, S.189-199
  12. Nicholson, D.; Wake, S.: HILT: subject retrieval in a distributed environment (2003) 0.02
    0.015008343 = product of:
      0.07003894 = sum of:
        0.03856498 = weight(_text_:wide in 3810) [ClassicSimilarity], result of:
          0.03856498 = score(doc=3810,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.29372054 = fieldWeight in 3810, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=3810)
        0.0060537956 = weight(_text_:information in 3810) [ClassicSimilarity], result of:
          0.0060537956 = score(doc=3810,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.116372846 = fieldWeight in 3810, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3810)
        0.025420163 = weight(_text_:retrieval in 3810) [ClassicSimilarity], result of:
          0.025420163 = score(doc=3810,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.2835858 = fieldWeight in 3810, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=3810)
      0.21428572 = coord(3/14)
    
    Abstract
    The HILT High Level Thesaurus Project aims to study and report on the problem of cross-searching and browsing by subject across a range of communities, services, and service or resource types in the UK, given the wide range of subject schemes and associated practices in place in the communities in question (Libraries, Museums, Archives, and Internet Services) and taking the international context into consideration. The paper reports on progress to date, focusing particularly on the inter-community consensus reached at a recent Stakeholder Workshop.
    Source
    Subject retrieval in a networked environment: Proceedings of the IFLA Satellite Meeting held in Dublin, OH, 14-16 August 2001 and sponsored by the IFLA Classification and Indexing Section, the IFLA Information Technology Section and OCLC. Ed.: I.C. McIlwaine
  13. Fiala, S.: Deutscher Bibliothekartag Leipzig 2007 : Sacherschließung - Informationsdienstleistung nach Mass (2007) 0.01
    0.014994896 = product of:
      0.06997618 = sum of:
        0.023413187 = weight(_text_:bibliothek in 415) [ClassicSimilarity], result of:
          0.023413187 = score(doc=415,freq=4.0), product of:
            0.121660605 = queryWeight, product of:
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.029633347 = queryNorm
            0.19244674 = fieldWeight in 415, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.0234375 = fieldNorm(doc=415)
        0.00428068 = weight(_text_:information in 415) [ClassicSimilarity], result of:
          0.00428068 = score(doc=415,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.08228803 = fieldWeight in 415, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0234375 = fieldNorm(doc=415)
        0.042282317 = weight(_text_:kongress in 415) [ClassicSimilarity], result of:
          0.042282317 = score(doc=415,freq=2.0), product of:
            0.19442701 = queryWeight, product of:
              6.5610886 = idf(docFreq=169, maxDocs=44218)
              0.029633347 = queryNorm
            0.2174714 = fieldWeight in 415, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.5610886 = idf(docFreq=169, maxDocs=44218)
              0.0234375 = fieldNorm(doc=415)
      0.21428572 = coord(3/14)
    
    Content
    ""Sacherschließung - Informationsdienstleistung nach Maß": unter diesem Titel fand am 3. Leipziger Kongress für Information und Bibliothek ("Information und Ethik") eine sehr aufschlussreiche Vortragsreihe statt. Neue Projekte der Vernetzung unterschiedlichst erschlossener Bestände wurden vorgestellt. Auch die Frage, inwieweit man die Nutzerinnen und Nutzer in die Erschließung einbinden kann, wurde behandelt. Die Arbeit der Bibliothekare kann wertvolle Ausgangssituationen für alternative Methoden bieten. Das Zusammenwirken von intellektueller und maschineller Erschließung wird in Zukunft eine große Rolle spielen. Ein Ausweg, um die Erschließung der ständig wachsenden Informationsquellen zu ermöglichen, könnte eine arbeitsteilige Erschließung und eine Kooperation mit anderen Informationseinrichtungen darstellen. Im Mittelpunkt all dieser Überlegungen standen die Heterogenitätsprobleme, die sich durch unterschiedliche Erschließungsregeln, verschiedene Arbeitsinstrumente, verschiedene Sprachen und durch die unterschiedliche Bedeutung der Begriffe ergeben können. Der Nachmittag begann mit einem konkreten Beispiel: "Zum Stand der Heterogenitätsbehandlung in vascoda" (Philipp Mayr, Bonn und Anne-Kathrin Walter, Bonn). Das Wissenschaftsportal vascoda beinhaltet verschiedene Fachportale, und es kann entweder interdisziplinär oder fachspezifisch gesucht werden. Durch die verschiedenen Informationsangebote, die in einem Fachportal vorhanden sind und die in dem Wissenschaftsportal vascoda zusammengefasst sind, entsteht semantische Heterogenität. Oberstes Ziel ist somit die Heterogenitätsbehandlung. Die Erstellung von Crosskonkordanzen (zwischen Indexierungssprachen innerhalb eines Fachgebiets und zwischen Indexierungssprachen unterschiedlicher Fachgebiete) und dem sogenannten Heterogenitätsservice (Term-Umschlüsselungs-Dienst) wurden anhand dieses Wissenschaftsportals vorgestellt. "Crosskonkordanzen sind gerichtete, relevanzbewertete Relationen zwischen Termen zweier Thesauri, Klassifikationen oder auch anderer kontrollierter Vokabulare." Im Heterogenitätsservice soll die Suchanfrage so transformiert werden, dass sie alle relevanten Dokumente in den verschiedenen Datenbanken erreicht. Bei der Evaluierung der Crosskonkordanzen stellt sich die Frage der Zielgenauigkeit der Relationen, sowie die Frage nach der Relevanz der durch die Crosskonkordanz zusätzlich gefundenen Treffer. Drei Schritte der Evaluation werden durchgeführt: Zum einen mit natürlicher Sprache in der Freitextsuche, dann übersetzt in Deskriptoren in der Schlagwortsuche und zuletzt mit Deskriptoren in der Schlagwortsuche mit Einsatz der Crosskonkordanzen. Im Laufe des Sommers werden erste Ergebnisse der Evaluation der Crosskonkordanzen erwartet.
    The third presentation of this series, entitled "Anfragetransfers zur Integration von Internetquellen in digitalen Bibliotheken auf der Grundlage statistischer Termrelationen" (Robert Strötgen, Hildesheim), demonstrated an automatic method for integrating selected, but not subject-indexed, collections of Internet documents into digital libraries. The meeting of well-indexed specialist databases with Internet documents is at the centre of this research project. "If selected subject-specific Internet documents are to be integrated in order to extend a search in a digital library, this is possible either by restricting oneself to high-quality and laboriously built clearinghouses or by 'naively' forwarding the user query." The project description continues: "Using machine learning methods, semantic relations are created between classes of different ontologies, which make transitions between these ontologies possible. Of particular importance for this research project is the transfer between ontologies and free-text terms." Building on the CARMEN project, semantic relations are computed automatically in this project by means of statistical machine learning. In this way a user query formulated with a thesaurus is transformed for querying collections of Internet documents.
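The report above defines cross-concordances as directed, relevance-rated relations between the terms of two thesauri or classifications, which a heterogeneity service uses to transform a query so that it reaches relevant documents in differently indexed databases. A minimal sketch of that idea follows, with invented terms and weights rather than the actual vascoda mappings.

```python
# Toy cross-concordance: directed, relevance-rated term relations between
# vocabulary A and vocabulary B (all terms and weights are invented).
crosswalk = {
    "Informationskompetenz": [("information literacy", 1.0),
                              ("user education", 0.6)],
    "Klassifikation":        [("classification", 1.0)],
}

def expand_query(terms, min_weight=0.5):
    """Rewrite descriptors from vocabulary A into descriptors of vocabulary B."""
    expanded = set()
    for term in terms:
        for target, weight in crosswalk.get(term, []):
            if weight >= min_weight:
                expanded.add(target)
    return expanded

print(expand_query(["Informationskompetenz"]))
# {'information literacy', 'user education'}
```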
  14. Wenige, L.; Ruhland, J.: Similarity-based knowledge graph queries for recommendation retrieval (2019) 0.01
    0.013971718 = product of:
      0.06520135 = sum of:
        0.03019857 = weight(_text_:web in 5864) [ClassicSimilarity], result of:
          0.03019857 = score(doc=5864,freq=6.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.3122631 = fieldWeight in 5864, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5864)
        0.0050448296 = weight(_text_:information in 5864) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=5864,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 5864, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5864)
        0.029957948 = weight(_text_:retrieval in 5864) [ClassicSimilarity], result of:
          0.029957948 = score(doc=5864,freq=8.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.33420905 = fieldWeight in 5864, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5864)
      0.21428572 = coord(3/14)
    
    Abstract
    Current retrieval and recommendation approaches rely on hard-wired data models. This hinders personalized customizations to meet information needs of users in a more flexible manner. Therefore, the paper investigates how similarity-based retrieval strategies can be combined with graph queries to enable users or system providers to explore repositories in the Linked Open Data (LOD) cloud more thoroughly. For this purpose, we developed novel content-based recommendation approaches. They rely on concept annotations of Simple Knowledge Organization System (SKOS) vocabularies and a SPARQL-based query language that facilitates advanced and personalized requests for openly available knowledge graphs. We have comprehensively evaluated the novel search strategies in several test cases and example application domains (i.e., travel search and multimedia retrieval). The results of the web-based online experiments showed that our approaches increase the recall and diversity of recommendations or at least provide a competitive alternative strategy of resource access when conventional methods do not provide helpful suggestions. The findings may be of use for Linked Data-enabled recommender systems (LDRS) as well as for semantic search engines that can consume LOD resources.
    Content
    Cf.: https://www.researchgate.net/publication/333358714_Similarity-based_knowledge_graph_queries_for_recommendation_retrieval. See also: http://semantic-web-journal.net/content/similarity-based-knowledge-graph-queries-recommendation-retrieval-1.
    Source
    Semantic Web. 10(2019) 6, S.1007-1037
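The abstract describes SPARQL requests against openly available knowledge graphs as the basis of the recommendation approach. Below is a minimal sketch of such a request, using the public DBpedia endpoint and a generic shared-subject query as an assumed example; it is not the authors' actual query language or evaluation setup.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical example: fetch resources sharing a subject with a seed resource,
# a simple content-based neighbourhood that one could rank by concept overlap.
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dct: <http://purl.org/dc/terms/>
    SELECT DISTINCT ?other WHERE {
        <http://dbpedia.org/resource/The_Lord_of_the_Rings> dct:subject ?subject .
        ?other dct:subject ?subject .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["other"]["value"])
```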
  15. Dunsire, G.: Enhancing information services using machine-to-machine terminology services (2011) 0.01
    0.013725927 = product of:
      0.064054325 = sum of:
        0.024409214 = weight(_text_:web in 1805) [ClassicSimilarity], result of:
          0.024409214 = score(doc=1805,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.25239927 = fieldWeight in 1805, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1805)
        0.009988253 = weight(_text_:information in 1805) [ClassicSimilarity], result of:
          0.009988253 = score(doc=1805,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.1920054 = fieldWeight in 1805, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1805)
        0.029656855 = weight(_text_:retrieval in 1805) [ClassicSimilarity], result of:
          0.029656855 = score(doc=1805,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.33085006 = fieldWeight in 1805, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1805)
      0.21428572 = coord(3/14)
    
    Abstract
    This paper describes the basic concepts of terminology services and their role in information retrieval interfaces. Terminology services are consumed by other software applications using machine-to-machine protocols, rather than directly by end-users. An example of a terminology service is the pilot developed by the High Level Thesaurus (HILT) project which has successfully demonstrated its potential for enhancing subject retrieval in operational services. Examples of enhancements in three such services are given. The paper discusses the future development of terminology services in relation to the Semantic Web.
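A terminology service of the kind Dunsire describes is consumed over machine-to-machine protocols, for instance by a search interface that expands a user's subject query before running it. The sketch below assumes a purely hypothetical JSON endpoint and response shape; the HILT pilot's real API is not documented in this record.

```python
import requests

# Hypothetical machine-to-machine terminology service: given a subject term,
# return mapped or related terms that a search interface can add to the query.
TERMINOLOGY_ENDPOINT = "https://terminology.example.org/lookup"   # invented URL

def expand_subject(term: str) -> list[str]:
    resp = requests.get(TERMINOLOGY_ENDPOINT,
                        params={"term": term, "format": "json"},
                        timeout=10)
    resp.raise_for_status()
    # Assumed response shape: {"term": "...", "mappings": ["...", ...]}
    return resp.json().get("mappings", [])

# A search front end would call this before querying its own index:
# query_terms = [user_term] + expand_subject(user_term)
```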
  16. Neubauer, G.: Visualization of typed links in linked data (2017) 0.01
    0.013537618 = product of:
      0.09476332 = sum of:
        0.045449268 = weight(_text_:wide in 3912) [ClassicSimilarity], result of:
          0.045449268 = score(doc=3912,freq=4.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.34615302 = fieldWeight in 3912, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3912)
        0.04931406 = weight(_text_:web in 3912) [ClassicSimilarity], result of:
          0.04931406 = score(doc=3912,freq=16.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.5099235 = fieldWeight in 3912, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3912)
      0.14285715 = coord(2/14)
    
    Abstract
    The subject of this thesis is the visualization of typed links in Linked Data. The scientific fields that broadly delimit its content are the Semantic Web, the Web of Data and information visualization. The Semantic Web, proposed by Tim Berners-Lee in 2001, is an extension of the World Wide Web. Current research is concerned with how information on the World Wide Web can be interlinked. To make such connections perceivable and processable, visualizations are a central requirement of data processing. In the context of the Semantic Web, representations of interconnected information are handled by means of graphs. The primary motivation for this thesis is to describe the design of Linked Data visualization concepts, whose principles are introduced in a theoretical approach. Building on this context, the information is extended step by step with the aim of offering practical guidelines, leading to the interlinking of the design guidelines developed. By describing the designs of two alternative visualizations of a standardized web application that visualizes Linked Data as a network, a test of their compatibility could be carried out. The practical part therefore covers the design phase, the results, and future requirements of the project that emerged from the testing.
    Theme
    Semantic Web
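Since the thesis above is about rendering typed links in Linked Data as a network, here is a small sketch of the underlying idea: a graph whose edges carry their RDF property names as labels. The toy data and the networkx/matplotlib tooling are assumptions for illustration, not what the thesis itself uses.

```python
import matplotlib.pyplot as plt
import networkx as nx

# Toy linked-data excerpt: nodes are resources, edge labels are the link types.
G = nx.DiGraph()
G.add_edge("ex:Dog", "ex:Mammal", label="skos:broader")
G.add_edge("ex:Dog", "dbpedia:Dog", label="owl:sameAs")
G.add_edge("ex:Mammal", "ex:AnimalScheme", label="skos:inScheme")

pos = nx.spring_layout(G, seed=42)
nx.draw_networkx(G, pos, node_color="lightgrey", node_size=1800, font_size=8)
nx.draw_networkx_edge_labels(G, pos,
                             edge_labels=nx.get_edge_attributes(G, "label"),
                             font_size=7)
plt.axis("off")
plt.show()
```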
  17. Liang, A.; Salokhe, G.; Sini, M.; Keizer, J.: Towards an infrastructure for semantic applications : methodologies for semantic integration of heterogeneous resources (2006) 0.01
    0.013451662 = product of:
      0.06277442 = sum of:
        0.036238287 = weight(_text_:web in 241) [ClassicSimilarity], result of:
          0.036238287 = score(doc=241,freq=6.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.37471575 = fieldWeight in 241, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=241)
        0.00856136 = weight(_text_:information in 241) [ClassicSimilarity], result of:
          0.00856136 = score(doc=241,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.16457605 = fieldWeight in 241, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=241)
        0.01797477 = weight(_text_:retrieval in 241) [ClassicSimilarity], result of:
          0.01797477 = score(doc=241,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.20052543 = fieldWeight in 241, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=241)
      0.21428572 = coord(3/14)
    
    Abstract
    The semantic heterogeneity of Web information in the agricultural domain presents tremendous information retrieval challenges. This article presents work taking place at the Food and Agriculture Organization (FAO) which addresses this challenge. Based on the analysis of resources in the domain of agriculture, this paper proposes (a) an application profile (AP) for dealing with the problem of heterogeneity originating from differences in terminologies, domain coverage, and domain modelling, and (b) a root application ontology (AAO) based on the application profile which can serve as a basis for extending knowledge of the domain. The paper explains how even a small investment in the enhancement of relations between vocabularies, both metadata and domain-specific, yields a relatively large return on investment.
    Footnote
    Simultaneously published as Knitting the Semantic Web
    Theme
    Semantic Web
  18. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.01
    0.012580475 = product of:
      0.058708884 = sum of:
        0.042278 = weight(_text_:web in 759) [ClassicSimilarity], result of:
          0.042278 = score(doc=759,freq=6.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.43716836 = fieldWeight in 759, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=759)
        0.0070627616 = weight(_text_:information in 759) [ClassicSimilarity], result of:
          0.0070627616 = score(doc=759,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.13576832 = fieldWeight in 759, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=759)
        0.009368123 = product of:
          0.028104367 = sum of:
            0.028104367 = weight(_text_:22 in 759) [ClassicSimilarity], result of:
              0.028104367 = score(doc=759,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.2708308 = fieldWeight in 759, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=759)
          0.33333334 = coord(1/3)
      0.21428572 = coord(3/14)
    
    Abstract
    XML will have a profound impact on the way data is exchanged on the Internet. An important feature of this language is the separation of content from presentation, which makes it easier to select and/or reformat the data. However, due to the likelihood of numerous industry and domain specific DTDs, those who wish to integrate information will still be faced with the problem of semantic interoperability. In this paper we discuss why this problem is not solved by XML, and then discuss why the Resource Description Framework is only a partial solution. We then present the SHOE language, which we feel has many of the features necessary to enable a semantic web, and describe an existing set of tools that make it easy to use the language.
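    The core problem can be made concrete with a small, hypothetical example: two well-formed XML documents encode the same fact under different, DTD-specific element names, so purely syntactic processing cannot recognize them as equivalent. The Python sketch below hard-codes the mapping inside the application, which is exactly the work an ontology language such as RDF or SHOE is meant to move out of the code; the element names are invented for illustration.

      import xml.etree.ElementTree as ET

      doc_a = "<paper><author>J. Heflin</author></paper>"
      doc_b = "<article><creator>J. Heflin</creator></article>"

      def author_of(xml_text):
          """Return the author, trying every vocabulary the application knows about."""
          root = ET.fromstring(xml_text)
          # Without shared semantics, each new DTD means another hard-coded tag here;
          # with an ontology we could state once that 'creator' is equivalent to 'author'.
          for tag in ("author", "creator"):
              node = root.find(tag)
              if node is not None:
                  return node.text
          return None

      print(author_of(doc_a))  # J. Heflin
      print(author_of(doc_b))  # J. Heflin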
    Date
    11. 5.2013 19:22:18
    Theme
    Semantic Web
  19. Li, K.W.; Yang, C.C.: Automatic crosslingual thesaurus generated from the Hong Kong SAR Police Department Web Corpus for Crime Analysis (2005) 0.01
    0.012563232 = product of:
      0.058628418 = sum of:
        0.019725623 = weight(_text_:web in 3391) [ClassicSimilarity], result of:
          0.019725623 = score(doc=3391,freq=4.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.2039694 = fieldWeight in 3391, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=3391)
        0.01210759 = weight(_text_:information in 3391) [ClassicSimilarity], result of:
          0.01210759 = score(doc=3391,freq=18.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23274568 = fieldWeight in 3391, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=3391)
        0.026795205 = weight(_text_:retrieval in 3391) [ClassicSimilarity], result of:
          0.026795205 = score(doc=3391,freq=10.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.29892567 = fieldWeight in 3391, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=3391)
      0.21428572 = coord(3/14)
    
    Abstract
    For the sake of national security, very large volumes of data and information are generated and gathered daily. Much of this data and information is written in different languages, stored in different locations, and may be seemingly unconnected. Crosslingual semantic interoperability is a major challenge to generate an overview of this disparate data and information so that it can be analyzed, shared, searched, and summarized. The recent terrorist attacks and the tragic events of September 11, 2001, have prompted increased attention on national security and criminal analysis. Many Asian countries and cities, such as Japan, Taiwan, and Singapore, have been advised that they may become the next targets of terrorist attacks. Semantic interoperability has been a focus in digital library research. Traditional information retrieval (IR) approaches normally require a document to share some common keywords with the query. Generating the associations for the related terms between the two term spaces of users and documents is an important issue. The problem can be viewed as the creation of a thesaurus. Apart from this, terrorists and criminals may communicate through letters, e-mails, and faxes in languages other than English. The translation ambiguity significantly exacerbates the retrieval problem. The problem is expanded to crosslingual semantic interoperability. In this paper, we focus on the English/Chinese crosslingual semantic interoperability problem. However, the developed techniques are not limited to English and Chinese languages but can be applied to many other languages. English and Chinese are popular languages in the Asian region. Much information about national security or crime is communicated in these languages. An efficient automatically generated thesaurus between these languages is important to crosslingual information retrieval between English and Chinese languages. To facilitate crosslingual information retrieval, a corpus-based approach uses the term co-occurrence statistics in parallel or comparable corpora to construct a statistical translation model to cross the language boundary. In this paper, the text-based approach to align English/Chinese Hong Kong Police press release documents from the Web is first presented. We also introduce an algorithmic approach to generate a robust knowledge base based on statistical correlation analysis of the semantics (knowledge) embedded in the bilingual press release corpus. The research output consisted of a thesaurus-like, semantic network knowledge base, which can aid in semantics-based crosslingual information management and retrieval.
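    A toy sketch of the corpus-based idea (not the authors' implementation): given aligned English/Chinese document pairs, candidate translations can be scored by how often terms co-occur in aligned pairs, here with a simple Dice coefficient. The miniature corpus and the choice of score are illustrative assumptions.

      from collections import Counter
      from itertools import product

      # Each tuple is one aligned pair: (English terms, Chinese terms)
      aligned_pairs = [
          ({"police", "arrest"}, {"警方", "拘捕"}),
          ({"police", "report"}, {"警方", "报告"}),
          ({"arrest", "suspect"}, {"拘捕", "疑犯"}),
      ]

      en_freq, zh_freq, co_freq = Counter(), Counter(), Counter()
      for en_terms, zh_terms in aligned_pairs:
          en_freq.update(en_terms)
          zh_freq.update(zh_terms)
          co_freq.update(product(en_terms, zh_terms))

      def dice(en, zh):
          """Association strength between an English and a Chinese term."""
          return 2 * co_freq[(en, zh)] / (en_freq[en] + zh_freq[zh])

      # Best Chinese associations for 'police' (highest score first)
      print(sorted(((dice("police", zh), zh) for zh in zh_freq), reverse=True))

    On a large bilingual corpus, the highest-scoring pairs form the thesaurus-like semantic network the abstract describes.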
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.3, S.272-281
  20. Tudhope, D.; Binding, C.: Toward terminology services : experiences with a pilot Web service thesaurus browser (2006) 0.01
    0.0107538905 = product of:
      0.050184824 = sum of:
        0.03416578 = weight(_text_:web in 1955) [ClassicSimilarity], result of:
          0.03416578 = score(doc=1955,freq=12.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.35328537 = fieldWeight in 1955, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=1955)
        0.0040358636 = weight(_text_:information in 1955) [ClassicSimilarity], result of:
          0.0040358636 = score(doc=1955,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.0775819 = fieldWeight in 1955, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=1955)
        0.0119831795 = weight(_text_:retrieval in 1955) [ClassicSimilarity], result of:
          0.0119831795 = score(doc=1955,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.13368362 = fieldWeight in 1955, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=1955)
      0.21428572 = coord(3/14)
    
    Abstract
    Dublin Core recommends controlled terminology for the subject of a resource. Knowledge organization systems (KOS), such as classifications, gazetteers, taxonomies and thesauri, provide controlled vocabularies that organize and structure concepts for indexing, classifying, browsing and search. For example, a thesaurus employs a set of standard semantic relationships (ISO 2788, ISO 5964), and major thesauri have a large entry vocabulary of terms considered equivalent for retrieval purposes. Many KOS have been made available for Web-based access. However, they are often not fully integrated into indexing and search systems and the full potential for networked and programmatic access remains untapped. The lack of standardized access and interchange formats impedes wider use of KOS resources. We developed a Web demonstrator (www.comp.glam.ac.uk/~FACET/webdemo/) for the FACET project (www.comp.glam.ac.uk/~facet/facetproject.html) that explored thesaurus-based query expansion with the Getty Art and Architecture Thesaurus. A Web demonstrator was implemented via Active Server Pages (ASP) with server-side scripting and compiled server-side components for database access, and cascading style sheets for presentation. The browser-based interactive interface permits dynamic control of query term expansion. However, being based on a custom thesaurus representation and API, the techniques cannot be applied directly to thesauri in other formats on the Web. General programmatic access requires commonly agreed protocols, for example, building on Web and Grid services. The development of common KOS representation formats and service protocols are closely linked. Linda Hill and colleagues argued in 2002 for a general KOS service protocol from which protocols for specific types of KOS can be derived. Thus, in the future, a combination of thesaurus and query protocols might permit a thesaurus to be used with a choice of search tools on various kinds of databases. Service-oriented architectures bring an opportunity for moving toward a clearer separation of interface components from the underlying data sources. In our view, basing distributed protocol services on the atomic elements of thesaurus data structures and relationships is not necessarily the best approach because client operations that require multiple client-server calls would carry too much overhead. This would limit the interfaces that could be offered by applications following such a protocol. Advanced interactive interfaces require protocols that group primitive thesaurus data elements (via their relationships) into composites to achieve reasonable response.
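    The kind of operation such a terminology service would expose can be sketched with a miniature thesaurus and a query-expansion function that resolves an entry term to its preferred form and adds narrower terms down to a chosen depth; the data and function names below are invented for illustration and are not the FACET project's API.

      # Equivalence relation: entry term -> preferred term (USE/UF)
      USE = {"car": "automobiles", "motorcar": "automobiles"}

      # Hierarchy: preferred term -> narrower terms (NT)
      NARROWER = {
          "automobiles": ["electric automobiles", "sports cars"],
          "sports cars": ["roadsters"],
      }

      def expand(term, depth=1):
          """Return the preferred form of `term` plus narrower terms down to `depth`."""
          preferred = USE.get(term, term)
          expanded = {preferred}
          frontier = [preferred]
          for _ in range(depth):
              frontier = [nt for t in frontier for nt in NARROWER.get(t, [])]
              expanded.update(frontier)
          return expanded

      # One service call per query term; a search engine would OR these together.
      print(expand("car", depth=2))

    Grouping the USE and NT lookups into one composite call, as in this sketch, is the kind of coarser-grained protocol operation the abstract argues for, rather than one client-server round trip per atomic thesaurus relationship.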
    Source
    Bulletin of the American Society for Information Science and Technology. 33(2006) no.5, S.xx-xx

Languages

  • e 131
  • d 42
  • pt 1