Search (305 results, page 1 of 16)

  • Active filter: theme_ss:"Semantische Interoperabilität"
  1. Schubert, C.; Kinkeldey, C.; Reich, H.: Handbuch Datenbankanwendung zur Wissensrepräsentation im Verbundprojekt DeCOVER (2006) 0.20
    0.2031126 = product of:
      0.3808361 = sum of:
        0.037118454 = weight(_text_:23 in 4256) [ClassicSimilarity], result of:
          0.037118454 = score(doc=4256,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.31678912 = fieldWeight in 4256, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0625 = fieldNorm(doc=4256)
        0.037118454 = weight(_text_:23 in 4256) [ClassicSimilarity], result of:
          0.037118454 = score(doc=4256,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.31678912 = fieldWeight in 4256, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0625 = fieldNorm(doc=4256)
        0.020074176 = weight(_text_:und in 4256) [ClassicSimilarity], result of:
          0.020074176 = score(doc=4256,freq=4.0), product of:
            0.07245795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.032692216 = queryNorm
            0.27704588 = fieldWeight in 4256, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=4256)
        0.037118454 = weight(_text_:23 in 4256) [ClassicSimilarity], result of:
          0.037118454 = score(doc=4256,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.31678912 = fieldWeight in 4256, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0625 = fieldNorm(doc=4256)
        0.17086051 = weight(_text_:sonstiges in 4256) [ClassicSimilarity], result of:
          0.17086051 = score(doc=4256,freq=2.0), product of:
            0.25138858 = queryWeight, product of:
              7.689554 = idf(docFreq=54, maxDocs=44218)
              0.032692216 = queryNorm
            0.679667 = fieldWeight in 4256, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.689554 = idf(docFreq=54, maxDocs=44218)
              0.0625 = fieldNorm(doc=4256)
        0.038744405 = weight(_text_:zur in 4256) [ClassicSimilarity], result of:
          0.038744405 = score(doc=4256,freq=4.0), product of:
            0.100663416 = queryWeight, product of:
              3.079125 = idf(docFreq=5528, maxDocs=44218)
              0.032692216 = queryNorm
            0.38489062 = fieldWeight in 4256, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.079125 = idf(docFreq=5528, maxDocs=44218)
              0.0625 = fieldNorm(doc=4256)
        0.00756127 = weight(_text_:in in 4256) [ClassicSimilarity], result of:
          0.00756127 = score(doc=4256,freq=4.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.17003182 = fieldWeight in 4256, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=4256)
        0.03224037 = weight(_text_:der in 4256) [ClassicSimilarity], result of:
          0.03224037 = score(doc=4256,freq=10.0), product of:
            0.073026784 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.032692216 = queryNorm
            0.44148692 = fieldWeight in 4256, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0625 = fieldNorm(doc=4256)
      0.53333336 = coord(8/15)
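     The nested listing above is standard Lucene "explain" output for ClassicSimilarity scoring, and its numbers can be reproduced by hand. The following minimal Python sketch recomputes the first "_text_:23" weight of this record from the quantities shown (termFreq, docFreq, maxDocs, queryNorm, fieldNorm); it only illustrates the TF-IDF arithmetic and is not code from the search application itself.

```python
import math

# Values taken from the explanation tree for doc 4256 above.
freq = 2.0              # termFreq of "23" in the matched field
doc_freq = 3336         # docFreq from the idf(...) line
max_docs = 44218        # maxDocs from the idf(...) line
query_norm = 0.032692216
field_norm = 0.0625

tf = math.sqrt(freq)                            # 1.4142135
idf = math.log(max_docs / (doc_freq + 1)) + 1   # 3.5840597
query_weight = idf * query_norm                 # 0.117170855  (queryWeight)
field_weight = tf * idf * field_norm            # 0.31678912   (fieldWeight)
term_score = query_weight * field_weight        # 0.037118454  (weight of _text_:23)

# The record score is the sum of all matching term weights (0.3808361 here),
# multiplied by the coordination factor coord(8/15) = 0.53333336.
print(tf, idf, query_weight, field_weight, term_score)
```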
    
    Abstract
     The database-based object class description serves the property-based recording of all object classes of the BNTK, CLC, GMES M 2.1 and ATKIS catalogues and of the DeCOVER proposal. The goal of the database application is the 'manual' evaluation of relationships and the presentation of all object classes with respect to the knowledge representation that was created. On the basis of a hierarchically structured knowledge representation, ontologies can be used to realize transfers between object classes, which is the objective pursued in the joint project DeCOVER in the sense of semantic interoperability.
    Date
    29. 1.2011 18:45:23
    Source
    http://www.eftas.com/www.de_cover.de/Nutzer/Sonstiges/tutorial_db-wr_v3.1.pdf
  2. Balakrishnan, U.; Peters, S.; Voß, J.: Coli-conc : eine Infrastruktur zur Nutzung und Erstellung von Konkordanzen (2021) 0.10
    0.09749259 = product of:
      0.20891269 = sum of:
        0.032478645 = weight(_text_:23 in 368) [ClassicSimilarity], result of:
          0.032478645 = score(doc=368,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.27719048 = fieldWeight in 368, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0546875 = fieldNorm(doc=368)
        0.032478645 = weight(_text_:23 in 368) [ClassicSimilarity], result of:
          0.032478645 = score(doc=368,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.27719048 = fieldWeight in 368, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0546875 = fieldNorm(doc=368)
        0.035129808 = weight(_text_:und in 368) [ClassicSimilarity], result of:
          0.035129808 = score(doc=368,freq=16.0), product of:
            0.07245795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.032692216 = queryNorm
            0.4848303 = fieldWeight in 368, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=368)
        0.032478645 = weight(_text_:23 in 368) [ClassicSimilarity], result of:
          0.032478645 = score(doc=368,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.27719048 = fieldWeight in 368, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0546875 = fieldNorm(doc=368)
        0.04152051 = weight(_text_:zur in 368) [ClassicSimilarity], result of:
          0.04152051 = score(doc=368,freq=6.0), product of:
            0.100663416 = queryWeight, product of:
              3.079125 = idf(docFreq=5528, maxDocs=44218)
              0.032692216 = queryNorm
            0.4124687 = fieldWeight in 368, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.079125 = idf(docFreq=5528, maxDocs=44218)
              0.0546875 = fieldNorm(doc=368)
        0.0066161114 = weight(_text_:in in 368) [ClassicSimilarity], result of:
          0.0066161114 = score(doc=368,freq=4.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.14877784 = fieldWeight in 368, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=368)
        0.028210325 = weight(_text_:der in 368) [ClassicSimilarity], result of:
          0.028210325 = score(doc=368,freq=10.0), product of:
            0.073026784 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.032692216 = queryNorm
            0.38630107 = fieldWeight in 368, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0546875 = fieldNorm(doc=368)
      0.46666667 = coord(7/15)
    
    Abstract
     coli-conc is a service of the Verbundzentrale of the Gemeinsamer Bibliotheksverbund (VZG). It provides web-based services for a more effective exchange of knowledge organization systems and for the efficient creation and maintenance of mappings. The focus is on the library classifications and authority files common in the German-speaking countries, above all the major universal classifications such as the Dewey Decimal Classification (DDC), the Regensburger Verbundklassifikation (RVK), the Basisklassifikation (BK) and the subject groups of the Deutsche Nationalbibliografie (SDNB). This report describes the background, the architecture and the functionalities of coli-conc as well as the heart of the infrastructure, the mapping tool Cocoda. It also discusses quality assurance measures and gives an insight into the new mapping procedure based on the Konzept-Hub (concept hub).
    Date
    23. 9.2021 16:09:53
    Series
    Bibliotheks- und Informationspraxis; 70
    Source
    Qualität in der Inhaltserschließung. Hrsg.: M. Franke-Maier, u.a
  3. Dini, L.: CACAO : multilingual access to bibliographic records (2007) 0.09
    0.08731924 = product of:
      0.2619577 = sum of:
        0.07874013 = weight(_text_:23 in 126) [ClassicSimilarity], result of:
          0.07874013 = score(doc=126,freq=4.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.6720112 = fieldWeight in 126, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.09375 = fieldNorm(doc=126)
        0.07874013 = weight(_text_:23 in 126) [ClassicSimilarity], result of:
          0.07874013 = score(doc=126,freq=4.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.6720112 = fieldWeight in 126, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.09375 = fieldNorm(doc=126)
        0.07874013 = weight(_text_:23 in 126) [ClassicSimilarity], result of:
          0.07874013 = score(doc=126,freq=4.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.6720112 = fieldWeight in 126, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.09375 = fieldNorm(doc=126)
        0.008019937 = weight(_text_:in in 126) [ClassicSimilarity], result of:
          0.008019937 = score(doc=126,freq=2.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.18034597 = fieldWeight in 126, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.09375 = fieldNorm(doc=126)
        0.017717376 = product of:
          0.05315213 = sum of:
            0.05315213 = weight(_text_:22 in 126) [ClassicSimilarity], result of:
              0.05315213 = score(doc=126,freq=2.0), product of:
                0.114482574 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032692216 = queryNorm
                0.46428138 = fieldWeight in 126, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=126)
          0.33333334 = coord(1/3)
      0.33333334 = coord(5/15)
    
    Content
     Presentation given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
    Date
    20. 1.2008 17:42:23
  4. Neubauer, G.: Visualization of typed links in linked data (2017) 0.09
    0.08524386 = product of:
      0.15983222 = sum of:
        0.023199033 = weight(_text_:23 in 3912) [ClassicSimilarity], result of:
          0.023199033 = score(doc=3912,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.1979932 = fieldWeight in 3912, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3912)
        0.023199033 = weight(_text_:23 in 3912) [ClassicSimilarity], result of:
          0.023199033 = score(doc=3912,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.1979932 = fieldWeight in 3912, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3912)
        0.017743232 = weight(_text_:und in 3912) [ClassicSimilarity], result of:
          0.017743232 = score(doc=3912,freq=8.0), product of:
            0.07245795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.032692216 = queryNorm
            0.24487628 = fieldWeight in 3912, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3912)
        0.023199033 = weight(_text_:23 in 3912) [ClassicSimilarity], result of:
          0.023199033 = score(doc=3912,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.1979932 = fieldWeight in 3912, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3912)
        0.01712277 = weight(_text_:zur in 3912) [ClassicSimilarity], result of:
          0.01712277 = score(doc=3912,freq=2.0), product of:
            0.100663416 = queryWeight, product of:
              3.079125 = idf(docFreq=5528, maxDocs=44218)
              0.032692216 = queryNorm
            0.17009923 = fieldWeight in 3912, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.079125 = idf(docFreq=5528, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3912)
        0.0057878923 = weight(_text_:in in 3912) [ClassicSimilarity], result of:
          0.0057878923 = score(doc=3912,freq=6.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.1301535 = fieldWeight in 3912, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3912)
        0.02548825 = weight(_text_:der in 3912) [ClassicSimilarity], result of:
          0.02548825 = score(doc=3912,freq=16.0), product of:
            0.073026784 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.032692216 = queryNorm
            0.34902605 = fieldWeight in 3912, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3912)
        0.02409299 = product of:
          0.07227897 = sum of:
            0.07227897 = weight(_text_:datenverarbeitung in 3912) [ClassicSimilarity], result of:
              0.07227897 = score(doc=3912,freq=2.0), product of:
                0.2068191 = queryWeight, product of:
                  6.326249 = idf(docFreq=214, maxDocs=44218)
                  0.032692216 = queryNorm
                0.34947917 = fieldWeight in 3912, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.326249 = idf(docFreq=214, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3912)
          0.33333334 = coord(1/3)
      0.53333336 = coord(8/15)
    
    Abstract
     This work deals with visualizations of typed links in Linked Data. The scientific fields that broadly delimit its content are the Semantic Web, the Web of Data and information visualization. The Semantic Web, invented by Tim Berners-Lee in 2001, represents an extension of the World Wide Web (Web 2.0). Current research is concerned with how information on the World Wide Web can be interlinked. To make such connections perceivable and processable, visualizations are the central requirement and a main part of the data processing. In the context of the Semantic Web, representations of interconnected information are handled by means of graphs. The primary purpose of this work is to describe the design of Linked Data visualization concepts, whose principles are introduced in a theoretical approach. Building on this context, the information is expanded step by step, with the aim of offering practical guidelines, into an interconnected set of design guidelines. By describing the designs of two alternative visualizations for a standardized web application that visualizes Linked Data as a network, their compatibility could be tested. The practical part therefore covers the design phase, the results and the future requirements of the project that were worked out through this testing.
    Date
    5. 6.2016 17:23:26
    Source
    Mitteilungen der Vereinigung Österreichischer Bibliothekarinnen und Bibliothekare. 70(2017) H.2, S.179-199
  5. Klasing, M.: Crosskonkordanzen als Möglichkeit zur Heterogenitätsbehandlung : dargestellt am Projekt CrissCross (2008) 0.08
    0.0783045 = product of:
      0.16779536 = sum of:
        0.023199033 = weight(_text_:23 in 2460) [ClassicSimilarity], result of:
          0.023199033 = score(doc=2460,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.1979932 = fieldWeight in 2460, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2460)
        0.023199033 = weight(_text_:23 in 2460) [ClassicSimilarity], result of:
          0.023199033 = score(doc=2460,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.1979932 = fieldWeight in 2460, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2460)
        0.026614847 = weight(_text_:und in 2460) [ClassicSimilarity], result of:
          0.026614847 = score(doc=2460,freq=18.0), product of:
            0.07245795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.032692216 = queryNorm
            0.3673144 = fieldWeight in 2460, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2460)
        0.023199033 = weight(_text_:23 in 2460) [ClassicSimilarity], result of:
          0.023199033 = score(doc=2460,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.1979932 = fieldWeight in 2460, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2460)
        0.03424554 = weight(_text_:zur in 2460) [ClassicSimilarity], result of:
          0.03424554 = score(doc=2460,freq=8.0), product of:
            0.100663416 = queryWeight, product of:
              3.079125 = idf(docFreq=5528, maxDocs=44218)
              0.032692216 = queryNorm
            0.34019846 = fieldWeight in 2460, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.079125 = idf(docFreq=5528, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2460)
        0.008841151 = weight(_text_:in in 2460) [ClassicSimilarity], result of:
          0.008841151 = score(doc=2460,freq=14.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.19881277 = fieldWeight in 2460, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2460)
        0.028496731 = weight(_text_:der in 2460) [ClassicSimilarity], result of:
          0.028496731 = score(doc=2460,freq=20.0), product of:
            0.073026784 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.032692216 = queryNorm
            0.390223 = fieldWeight in 2460, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2460)
      0.46666667 = coord(7/15)
    
    Abstract
     In times in which fast and simple access to structured and extensive data holdings is becoming increasingly important, the heterogeneity of these very holdings poses a major problem. Most library catalogues, databases etc. are subject-indexed with different instruments, which means that users need time to familiarize themselves with each indexing instrument and have to repeat and reformulate their searches for each differently indexed data holding. Cross-concordances are one way of dealing with this heterogeneity: semantically corresponding subject headings, descriptors or notations from different indexing instruments are linked intellectually, so that several differently indexed data holdings can be searched with one and the same query. This thesis describes the problem of heterogeneous data holdings and how cross-concordances can address it. The concrete presentation is based on the project CrissCross, a joint project of the Deutsche Nationalbibliothek and the Fachhochschule Köln, in which cross-concordances between the indexing instruments SWD, DDC as well as LCSH and RAMEAU are created. Special features of the CrissCross project are, besides its multilingualism and the linking of verbal and classificatory indexing instruments, an intellectual weighting of the strength of the relation between two linked terms, the so-called determinedness ('Determiniertheit'). In addition to the indexing instruments involved, the concrete procedure for linking them is explained, as is sketched in the example below. Furthermore, the problem areas of the project and, in particular, conceivable applications of the project results are presented, which can contribute substantially to solving the heterogeneity problem and thus to improving retrieval for the user.
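     As a schematic illustration of such weighted cross-concordances, the following Python sketch models a few links between a verbal and a classificatory indexing instrument, each carrying a determinedness value; the terms, notations and the weighting scale are invented for illustration and are not taken from the actual CrissCross data.

```python
from dataclasses import dataclass

@dataclass
class Mapping:
    """One intellectually created link between two indexing instruments."""
    source_vocab: str     # e.g. "SWD" (subject headings)
    source_term: str
    target_vocab: str     # e.g. "DDC" (classification)
    target_notation: str
    determinedness: int   # weighting of the relation strength (hypothetical scale, 1 = weak ... 4 = strong)

# Invented example links, not actual CrissCross data.
mappings = [
    Mapping("SWD", "Bibliothek", "DDC", "027", 4),
    Mapping("SWD", "Semantisches Netz", "DDC", "006.332", 3),
]

def expand_query(term: str, min_determinedness: int = 1) -> list[str]:
    """Translate one query term into target notations, so a single query can search differently indexed collections."""
    return [m.target_notation for m in mappings
            if m.source_term == term and m.determinedness >= min_determinedness]

print(expand_query("Bibliothek"))  # ['027']
```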
    Date
    23. 2.2005 10:27:09
  6. Hubrich, J.: Thematische Suche in heterogenen Informationsräumen (2010) 0.08
    0.075564966 = product of:
      0.16192493 = sum of:
        0.023199033 = weight(_text_:23 in 4377) [ClassicSimilarity], result of:
          0.023199033 = score(doc=4377,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.1979932 = fieldWeight in 4377, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4377)
        0.023199033 = weight(_text_:23 in 4377) [ClassicSimilarity], result of:
          0.023199033 = score(doc=4377,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.1979932 = fieldWeight in 4377, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4377)
        0.029423824 = weight(_text_:und in 4377) [ClassicSimilarity], result of:
          0.029423824 = score(doc=4377,freq=22.0), product of:
            0.07245795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.032692216 = queryNorm
            0.40608138 = fieldWeight in 4377, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4377)
        0.023199033 = weight(_text_:23 in 4377) [ClassicSimilarity], result of:
          0.023199033 = score(doc=4377,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.1979932 = fieldWeight in 4377, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4377)
        0.024215251 = weight(_text_:zur in 4377) [ClassicSimilarity], result of:
          0.024215251 = score(doc=4377,freq=4.0), product of:
            0.100663416 = queryWeight, product of:
              3.079125 = idf(docFreq=5528, maxDocs=44218)
              0.032692216 = queryNorm
            0.24055663 = fieldWeight in 4377, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.079125 = idf(docFreq=5528, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4377)
        0.007472136 = weight(_text_:in in 4377) [ClassicSimilarity], result of:
          0.007472136 = score(doc=4377,freq=10.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.16802745 = fieldWeight in 4377, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4377)
        0.031216605 = weight(_text_:der in 4377) [ClassicSimilarity], result of:
          0.031216605 = score(doc=4377,freq=24.0), product of:
            0.073026784 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.032692216 = queryNorm
            0.42746788 = fieldWeight in 4377, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4377)
      0.46666667 = coord(7/15)
    
    Abstract
     Knowledge organization systems support content-oriented search and exploration processes by stating a priori relations between concepts and by providing standardized access to information. In heterogeneously indexed information spaces, however, their functionality is limited, since they only make the search for those information resources that were indexed with them more efficient. Documents whose content was described on the basis of linguistically, structurally and typologically different concept systems are not found. This leads to insufficient recall and makes additional searches necessary. By establishing links between the concepts of different documentation languages, standardized access to information is extended and the potential of individual indexing instruments becomes usable beyond their own subsets. Depending on the semantic quality of the links created, three types of semantic interoperability can be distinguished, each with different implications for retrieval scenarios: word-based, conceptual and differentiated interoperability.
     In the project CrissCross, funded by the Deutsche Forschungsgemeinschaft (DFG) and carried out by the Deutsche Nationalbibliothek (DNB) in cooperation with the Fachhochschule Köln, conceptual and differentiated interoperability between the Schlagwortnormdatei (SWD) and the Dewey Decimal Classification (DDC) is being realized. Taking the characteristics of the two concept systems into account, this contribution shows which possibilities the SWD-DDC mappings offer both for supporting content-oriented search and exploration processes and for structuring result sets.
    Date
    20. 3.2011 13:23:08
    Series
    Schriften der Vereinigung Österreichischer Bibliothekarinnen und Bibliothekare (VÖB); Band 7
    Source
     The Ne(x)t generation: das Angebot der Bibliotheken; 30. Österreichischer Bibliothekartag, Graz, 15.-18.9.2009. Hrsg.: Ute Bergner u. Erhard Göbel
  7. Ehrig, M.; Studer, R.: Wissensvernetzung durch Ontologien (2006) 0.07
    0.0718816 = product of:
      0.15403199 = sum of:
        0.023199033 = weight(_text_:23 in 5901) [ClassicSimilarity], result of:
          0.023199033 = score(doc=5901,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.1979932 = fieldWeight in 5901, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5901)
        0.023199033 = weight(_text_:23 in 5901) [ClassicSimilarity], result of:
          0.023199033 = score(doc=5901,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.1979932 = fieldWeight in 5901, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5901)
        0.019837536 = weight(_text_:und in 5901) [ClassicSimilarity], result of:
          0.019837536 = score(doc=5901,freq=10.0), product of:
            0.07245795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.032692216 = queryNorm
            0.27378 = fieldWeight in 5901, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5901)
        0.023199033 = weight(_text_:23 in 5901) [ClassicSimilarity], result of:
          0.023199033 = score(doc=5901,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.1979932 = fieldWeight in 5901, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5901)
        0.029657507 = weight(_text_:zur in 5901) [ClassicSimilarity], result of:
          0.029657507 = score(doc=5901,freq=6.0), product of:
            0.100663416 = queryWeight, product of:
              3.079125 = idf(docFreq=5528, maxDocs=44218)
              0.032692216 = queryNorm
            0.2946205 = fieldWeight in 5901, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.079125 = idf(docFreq=5528, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5901)
        0.009451588 = weight(_text_:in in 5901) [ClassicSimilarity], result of:
          0.009451588 = score(doc=5901,freq=16.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.21253976 = fieldWeight in 5901, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5901)
        0.02548825 = weight(_text_:der in 5901) [ClassicSimilarity], result of:
          0.02548825 = score(doc=5901,freq=16.0), product of:
            0.073026784 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.032692216 = queryNorm
            0.34902605 = fieldWeight in 5901, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5901)
      0.46666667 = coord(7/15)
    
    Abstract
     In computer science, ontologies are formal models of an application domain that support communication between human and/or machine actors and thus facilitate the exchange and sharing of knowledge in organizations. Using ontologies for the structured representation of knowledge has therefore become increasingly widespread in recent years. Thousands of ontologies already exist worldwide. To enable interoperability between software agents or web services built on them, the semantic integration of these ontologies is an indispensable prerequisite. As is easy to see, purely manual creation of the mappings is no longer feasible beyond a certain size, complexity and rate of change of the ontologies; automatic or semi-automatic technologies have to support the user. The integration problem has occupied research and industry for many years, for example in the area of database integration. What is new, however, is the possibility of including complex semantic information as it is present in ontologies. For ontology integration, this chapter introduces a general six-step process based on the semantic structures. Extensions deal with efficiency and with involving the user optimally in this process. Two applications are also presented in which this process has been implemented successfully. A concluding summary addresses current trends. Since the approaches can in principle be transferred to any schema that contains a semantic basis, the field of application of this research extends far beyond pure ontology applications.
    Date
    13. 8.2006 19:43:23
    Source
    Semantic Web: Wege zur vernetzten Wissensgesellschaft. Hrsg.: T. Pellegrini, u. A. Blumauer
  8. Krause, J.: Heterogenität und Integration : Zur Weiterentwicklung von Inhaltserschließung und Retrieval in sich veränderten Kontexten (2001) 0.06
    0.061714064 = product of:
      0.13224442 = sum of:
        0.023199033 = weight(_text_:23 in 6071) [ClassicSimilarity], result of:
          0.023199033 = score(doc=6071,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.1979932 = fieldWeight in 6071, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6071)
        0.023199033 = weight(_text_:23 in 6071) [ClassicSimilarity], result of:
          0.023199033 = score(doc=6071,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.1979932 = fieldWeight in 6071, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6071)
        0.021730933 = weight(_text_:und in 6071) [ClassicSimilarity], result of:
          0.021730933 = score(doc=6071,freq=12.0), product of:
            0.07245795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.032692216 = queryNorm
            0.29991096 = fieldWeight in 6071, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6071)
        0.023199033 = weight(_text_:23 in 6071) [ClassicSimilarity], result of:
          0.023199033 = score(doc=6071,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.1979932 = fieldWeight in 6071, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6071)
        0.01712277 = weight(_text_:zur in 6071) [ClassicSimilarity], result of:
          0.01712277 = score(doc=6071,freq=2.0), product of:
            0.100663416 = queryWeight, product of:
              3.079125 = idf(docFreq=5528, maxDocs=44218)
              0.032692216 = queryNorm
            0.17009923 = fieldWeight in 6071, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.079125 = idf(docFreq=5528, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6071)
        0.008185315 = weight(_text_:in in 6071) [ClassicSimilarity], result of:
          0.008185315 = score(doc=6071,freq=12.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.18406484 = fieldWeight in 6071, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6071)
        0.015608302 = weight(_text_:der in 6071) [ClassicSimilarity], result of:
          0.015608302 = score(doc=6071,freq=6.0), product of:
            0.073026784 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.032692216 = queryNorm
            0.21373394 = fieldWeight in 6071, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6071)
      0.46666667 = coord(7/15)
    
    Abstract
     As an important support tool in science research, specialized information systems are rapidly changing their character. The potential for improvement compared with today's usual systems is enormous. This fact will be demonstrated by means of two problem complexes: - WWW search engines, which were developed without any government grants, are increasingly dominating the scene. Does the WWW displace information centers with their high-quality databases? What results can we get nowadays using general WWW search engines? - In addition to the WWW and specialized databases, scientists now use WWW library catalogues of digital libraries, which combine the catalogues of an entire region or country. At the same time, however, they are faced with highly decentralized, heterogeneous databases which contain the widest range of textual sources and data, e.g. from surveys. One consequence is the presence of serious inconsistencies in quality, relevance and content analysis. Thus, the main problem to be solved is this: users must be supplied with heterogeneous data from different sources, modalities and content development processes via a visual user interface, without inconsistencies in content development seriously impairing the quality of the search results, e.g. when users phrase their search query in the terminology to which they are accustomed.
    Series
    Tagungen der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis; 4
    Source
    Information Research & Content Management: Orientierung, Ordnung und Organisation im Wissensmarkt; 23. DGI-Online-Tagung der DGI und 53. Jahrestagung der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis e.V. DGI, Frankfurt am Main, 8.-10.5.2001. Proceedings. Hrsg.: R. Schmidt
  9. Menzel, S.; Schnaitter, H.; Zinck, J.; Petras, V.; Neudecker, C.; Labusch, K.; Leitner, E.; Rehm, G.: Named Entity Linking mit Wikidata und GND : das Potenzial handkuratierter und strukturierter Datenquellen für die semantische Anreicherung von Volltexten (2021) 0.06
    0.060789894 = product of:
      0.15197474 = sum of:
        0.03280839 = weight(_text_:23 in 373) [ClassicSimilarity], result of:
          0.03280839 = score(doc=373,freq=4.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.28000468 = fieldWeight in 373, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0390625 = fieldNorm(doc=373)
        0.03280839 = weight(_text_:23 in 373) [ClassicSimilarity], result of:
          0.03280839 = score(doc=373,freq=4.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.28000468 = fieldWeight in 373, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0390625 = fieldNorm(doc=373)
        0.028054515 = weight(_text_:und in 373) [ClassicSimilarity], result of:
          0.028054515 = score(doc=373,freq=20.0), product of:
            0.07245795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.032692216 = queryNorm
            0.3871834 = fieldWeight in 373, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=373)
        0.03280839 = weight(_text_:23 in 373) [ClassicSimilarity], result of:
          0.03280839 = score(doc=373,freq=4.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.28000468 = fieldWeight in 373, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0390625 = fieldNorm(doc=373)
        0.007472136 = weight(_text_:in in 373) [ClassicSimilarity], result of:
          0.007472136 = score(doc=373,freq=10.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.16802745 = fieldWeight in 373, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=373)
        0.018022915 = weight(_text_:der in 373) [ClassicSimilarity], result of:
          0.018022915 = score(doc=373,freq=8.0), product of:
            0.073026784 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.032692216 = queryNorm
            0.2467987 = fieldWeight in 373, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0390625 = fieldNorm(doc=373)
      0.4 = coord(6/15)
    
    Abstract
     Named entities - such as persons, organizations, places, events and works - are important content-bearing components of a document and are therefore essential for good subject indexing. Recognizing named entities, annotating them and making them available for search are important instruments for improving applications such as content-based or semantic search in texts, cross-document contextualization or automatic text summarization. However, the named entities recognized in a document are only indexed precisely and sustainably once they are linked to one or more sources (the basic principle of Linked Data, Berners-Lee 2006) that identify the entity unambiguously and disambiguate it from entities with the same name (compare, for example, Berlin as the capital of Germany with the composer Irving Berlin). For this purpose, the entity recognized in the document is linked to the corresponding entry of an authority file or another previously defined knowledge base (e.g. a gazetteer for geographic entities), usually via the persistent identifier of the respective knowledge base or authority file. Linking to an authority file not only disambiguates and identifies the entity, it also establishes interoperability with other systems that use the same authority file, e.g. when searching for the capital Berlin across different databases or portals. Named entity linking (NEL) has the further advantage that authority files often contain relations between entities, so that documents in which named entities were recognized can additionally be situated, and made searchable, within the context of a larger network structure of entities, as sketched below.
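     As a rough sketch of the linking step described above, the following Python example resolves a recognized surface form against a tiny hand-made knowledge base and disambiguates it by context overlap; all identifiers, labels and context terms are invented placeholders rather than real GND or Wikidata data, and real systems use far more sophisticated candidate ranking.

```python
# Minimal named-entity-linking sketch against an invented mini knowledge base.
knowledge_base = {
    "ID-PLACE-BERLIN":  {"label": "Berlin", "type": "place",  "context": {"hauptstadt", "stadt", "deutschland"}},
    "ID-PERSON-BERLIN": {"label": "Berlin", "type": "person", "context": {"komponist", "musical", "songwriter"}},
}

def link_entity(surface_form: str, context_words: set[str]) -> str | None:
    """Return the identifier of the best-matching entry, or None if the form is unknown."""
    candidates = [(eid, entry) for eid, entry in knowledge_base.items()
                  if entry["label"].lower() == surface_form.lower()]
    if not candidates:
        return None
    # Disambiguate by overlap between the document context and each entry's context terms.
    best_id, _ = max(candidates, key=lambda c: len(c[1]["context"] & context_words))
    return best_id

print(link_entity("Berlin", {"hauptstadt", "deutschland"}))  # ID-PLACE-BERLIN
```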
    Date
    23. 9.2021 16:09:53
    23. 9.2021 19:37:38
    Series
    Bibliotheks- und Informationspraxis; 70
    Source
    Qualität in der Inhaltserschließung. Hrsg.: M. Franke-Maier, u.a
  10. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.06
    0.056158274 = product of:
      0.16847482 = sum of:
        0.06057789 = product of:
          0.18173367 = sum of:
            0.18173367 = weight(_text_:3a in 306) [ClassicSimilarity], result of:
              0.18173367 = score(doc=306,freq=2.0), product of:
                0.27716497 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.032692216 = queryNorm
                0.65568775 = fieldWeight in 306, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=306)
          0.33333334 = coord(1/3)
        0.032478645 = weight(_text_:23 in 306) [ClassicSimilarity], result of:
          0.032478645 = score(doc=306,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.27719048 = fieldWeight in 306, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0546875 = fieldNorm(doc=306)
        0.032478645 = weight(_text_:23 in 306) [ClassicSimilarity], result of:
          0.032478645 = score(doc=306,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.27719048 = fieldWeight in 306, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0546875 = fieldNorm(doc=306)
        0.032478645 = weight(_text_:23 in 306) [ClassicSimilarity], result of:
          0.032478645 = score(doc=306,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.27719048 = fieldWeight in 306, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0546875 = fieldNorm(doc=306)
        0.010460991 = weight(_text_:in in 306) [ClassicSimilarity], result of:
          0.010460991 = score(doc=306,freq=10.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.23523843 = fieldWeight in 306, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=306)
      0.33333334 = coord(5/15)
    
    Abstract
    Although service-oriented architectures go a long way toward providing interoperability in distributed, heterogeneous environments, managing semantic differences in such environments remains a challenge. We give an overview of the issue of semantic interoperability (integration), provide a semantic characterization of services, and discuss the role of ontologies. Then we analyze four basic models of semantic interoperability that differ in respect to their mapping between service descriptions and ontologies and in respect to where the evaluation of the integration logic is performed. We also provide some guidelines for selecting one of the possible interoperability models.
    Content
    Vgl.: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
    Date
    14. 7.2012 17:23:34
  11. Heel, F.: Abbildungen zwischen der Dewey-Dezimalklassifikation (DDC), der Regensburger Verbundklassifikation (RVK) und der Schlagwortnormdatei (SWD) für die Recherche in heterogen erschlossenen Datenbeständen : Möglichkeiten und Problembereiche (2007) 0.06
    0.055789478 = product of:
      0.13947369 = sum of:
        0.023199033 = weight(_text_:23 in 4434) [ClassicSimilarity], result of:
          0.023199033 = score(doc=4434,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.1979932 = fieldWeight in 4434, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4434)
        0.023199033 = weight(_text_:23 in 4434) [ClassicSimilarity], result of:
          0.023199033 = score(doc=4434,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.1979932 = fieldWeight in 4434, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4434)
        0.03319455 = weight(_text_:und in 4434) [ClassicSimilarity], result of:
          0.03319455 = score(doc=4434,freq=28.0), product of:
            0.07245795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.032692216 = queryNorm
            0.45812157 = fieldWeight in 4434, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4434)
        0.023199033 = weight(_text_:23 in 4434) [ClassicSimilarity], result of:
          0.023199033 = score(doc=4434,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.1979932 = fieldWeight in 4434, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4434)
        0.008185315 = weight(_text_:in in 4434) [ClassicSimilarity], result of:
          0.008185315 = score(doc=4434,freq=12.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.18406484 = fieldWeight in 4434, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4434)
        0.028496731 = weight(_text_:der in 4434) [ClassicSimilarity], result of:
          0.028496731 = score(doc=4434,freq=20.0), product of:
            0.073026784 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.032692216 = queryNorm
            0.390223 = fieldWeight in 4434, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4434)
      0.4 = coord(6/15)
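    The relevance figures in these entries follow the pattern of Lucene's classic TF-IDF scoring ("ClassicSimilarity" explain output). The sketch below recomputes one of the term weights listed above from its printed factors; it is only an illustration of how the numbers combine, not part of the search service itself.
    ```python
    import math

    # The input values are read off the breakdown for the term "23" in
    # document 4434 (entry 11 above).
    def term_score(freq: float, idf: float, query_norm: float, field_norm: float) -> float:
        tf = math.sqrt(freq)                  # 1.4142135 for freq = 2.0
        query_weight = idf * query_norm       # 0.117170855
        field_weight = tf * idf * field_norm  # 0.1979932
        return query_weight * field_weight

    score = term_score(freq=2.0, idf=3.5840597,
                       query_norm=0.032692216, field_norm=0.0390625)
    print(score)             # ~0.0232, the 0.023199033 shown above up to float precision
    print(0.4 * 0.13947369)  # coord(6/15) * sum of term scores ~ 0.0557895, the entry score
    ```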
    
    Abstract
    Uniform subject indexing in Germany is hampered by the large number of existing and actively used indexing systems, universal and special classifications, and subject thesauri. Users of library catalogues and databases therefore find it difficult to run subject-specific searches across heterogeneously indexed collections: at present they have to learn several indexing instruments and formulate different queries to obtain the desired results across databases. To give users a uniform entry point to heterogeneously indexed collections, and at the same time to reduce the workload for librarians, building a so-called "integrated retrieval" system makes sense. By linking the different subject indexing systems via concordances, users can carry out a subject search in differently indexed collections with a vocabulary they already know, without having to understand the specific peculiarities of the various indexing instruments. In this thesis, three example mappings for the field of library and information science have been created between the subject indexing systems most important for Germany: the Dewey Decimal Classification (DDC), the Regensburger Verbundklassifikation (RVK) and the Schlagwortnormdatei (SWD). The results are intended to give a first overview of specific problem areas and possibilities of the concordances created here (DDC - RVK, SWD - DDC and SWD - RVK), in order to advance the development of a future search tool (and possibly a classification aid). The concordances are included as an appendix to the thesis.
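    To make the idea of concordance-based "integrated retrieval" concrete, here is a minimal sketch; the mapping entries, collection layout and document identifiers are invented placeholders, not taken from the thesis or its appendix.
    ```python
    # A user searches with a familiar SWD heading; the concordance supplies the
    # equivalent DDC and RVK notations for the other collections.
    CONCORDANCE = {
        "Informationsmanagement": {"DDC": ["025.5"], "RVK": ["AN 96300"]},
        "Bibliothek":             {"DDC": ["027"],   "RVK": ["AN 65100"]},
    }

    def integrated_search(swd_term: str, collections: dict) -> list:
        """Search differently indexed collections with one SWD heading."""
        hits = list(collections.get("SWD", {}).get(swd_term, []))
        for scheme, notations in CONCORDANCE.get(swd_term, {}).items():
            for notation in notations:
                hits.extend(collections.get(scheme, {}).get(notation, []))
        return hits

    collections = {
        "SWD": {"Informationsmanagement": ["doc-1"]},
        "DDC": {"025.5": ["doc-2"]},
        "RVK": {"AN 96300": ["doc-3"]},
    }
    print(integrated_search("Informationsmanagement", collections))  # ['doc-1', 'doc-2', 'doc-3']
    ```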
    Content
    Bachelor's thesis in the degree programme Bibliotheks- und Informationsmanagement, Fakultät Information und Kommunikation, Hochschule der Medien Stuttgart
    Date
    23.10.2007 19:23:28
    Imprint
    Stuttgart : Hochschule der Medien / Fakultät Information und Kommunikation
  12. Jahns, Y.: Sacherschließung - zeitgemäß und zukunftsfähig (2010) 0.05
    0.05454023 = product of:
      0.1636207 = sum of:
        0.046398066 = weight(_text_:23 in 3278) [ClassicSimilarity], result of:
          0.046398066 = score(doc=3278,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.3959864 = fieldWeight in 3278, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.078125 = fieldNorm(doc=3278)
        0.046398066 = weight(_text_:23 in 3278) [ClassicSimilarity], result of:
          0.046398066 = score(doc=3278,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.3959864 = fieldWeight in 3278, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.078125 = fieldNorm(doc=3278)
        0.017743232 = weight(_text_:und in 3278) [ClassicSimilarity], result of:
          0.017743232 = score(doc=3278,freq=2.0), product of:
            0.07245795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.032692216 = queryNorm
            0.24487628 = fieldWeight in 3278, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=3278)
        0.046398066 = weight(_text_:23 in 3278) [ClassicSimilarity], result of:
          0.046398066 = score(doc=3278,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.3959864 = fieldWeight in 3278, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.078125 = fieldNorm(doc=3278)
        0.0066832816 = weight(_text_:in in 3278) [ClassicSimilarity], result of:
          0.0066832816 = score(doc=3278,freq=2.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.15028831 = fieldWeight in 3278, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=3278)
      0.33333334 = coord(5/15)
    
    Abstract
    Report on the presentations given at the Bibliothekskongress 2010 in Leipzig.
    Date
    16. 5.2010 17:23:40
  13. Landry, P.: MACS: multilingual access to subject and link management : Extending the Multilingual Capacity of TEL in the EDL Project (2007) 0.05
    0.054470085 = product of:
      0.16341025 = sum of:
        0.046398066 = weight(_text_:23 in 1287) [ClassicSimilarity], result of:
          0.046398066 = score(doc=1287,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.3959864 = fieldWeight in 1287, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.078125 = fieldNorm(doc=1287)
        0.046398066 = weight(_text_:23 in 1287) [ClassicSimilarity], result of:
          0.046398066 = score(doc=1287,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.3959864 = fieldWeight in 1287, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.078125 = fieldNorm(doc=1287)
        0.046398066 = weight(_text_:23 in 1287) [ClassicSimilarity], result of:
          0.046398066 = score(doc=1287,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.3959864 = fieldWeight in 1287, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.078125 = fieldNorm(doc=1287)
        0.009451588 = weight(_text_:in in 1287) [ClassicSimilarity], result of:
          0.009451588 = score(doc=1287,freq=4.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.21253976 = fieldWeight in 1287, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=1287)
        0.01476448 = product of:
          0.04429344 = sum of:
            0.04429344 = weight(_text_:22 in 1287) [ClassicSimilarity], result of:
              0.04429344 = score(doc=1287,freq=2.0), product of:
                0.114482574 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032692216 = queryNorm
                0.38690117 = fieldWeight in 1287, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1287)
          0.33333334 = coord(1/3)
      0.33333334 = coord(5/15)
    
    Content
    Presentation given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  14. Niggemann, E.: Wer suchet, der findet? : Verbesserung der inhaltlichen Suchmöglichkeiten im Informationssystem Der Deutschen Bibliothek (2006) 0.05
    0.05363068 = product of:
      0.16089204 = sum of:
        0.032478645 = weight(_text_:23 in 5812) [ClassicSimilarity], result of:
          0.032478645 = score(doc=5812,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.27719048 = fieldWeight in 5812, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5812)
        0.032478645 = weight(_text_:23 in 5812) [ClassicSimilarity], result of:
          0.032478645 = score(doc=5812,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.27719048 = fieldWeight in 5812, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5812)
        0.027772553 = weight(_text_:und in 5812) [ClassicSimilarity], result of:
          0.027772553 = score(doc=5812,freq=10.0), product of:
            0.07245795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.032692216 = queryNorm
            0.38329202 = fieldWeight in 5812, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5812)
        0.032478645 = weight(_text_:23 in 5812) [ClassicSimilarity], result of:
          0.032478645 = score(doc=5812,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.27719048 = fieldWeight in 5812, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5812)
        0.03568355 = weight(_text_:der in 5812) [ClassicSimilarity], result of:
          0.03568355 = score(doc=5812,freq=16.0), product of:
            0.073026784 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.032692216 = queryNorm
            0.4886365 = fieldWeight in 5812, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5812)
      0.33333334 = coord(5/15)
    
    Abstract
    Electronic library catalogues and bibliographies have lost their monopoly on searches for books, articles, musical works and the like. Global search engines are strong competitors, and libraries today have to plan so that their services remain attractive tomorrow. Die Deutsche Bibliothek (DDB) will extend its traditional catalogue search into a global, network-based information system that seeks to combine the advantages of neutral, quality-based catalogue searching with those of modern search engines. This article deals with improving the subject search options in the information system of Die Deutsche Bibliothek; further lines of development are only briefly sketched in the outlook.
    Date
    13.10.2006 9:35:23
    Source
    Information und Sprache: Beiträge zu Informationswissenschaft, Computerlinguistik, Bibliothekswesen und verwandten Fächern. Festschrift für Harald H. Zimmermann. Herausgegeben von Ilse Harms, Heinz-Dirk Luckhardt und Hans W. Giessen
  15. Mayr, P.: Re-Ranking auf Basis von Bradfordizing für die verteilte Suche in Digitalen Bibliotheken (2009) 0.04
    0.044816084 = product of:
      0.13444825 = sum of:
        0.06420939 = weight(_text_:monographien in 4302) [ClassicSimilarity], result of:
          0.06420939 = score(doc=4302,freq=2.0), product of:
            0.217941 = queryWeight, product of:
              6.666449 = idf(docFreq=152, maxDocs=44218)
              0.032692216 = queryNorm
            0.2946182 = fieldWeight in 4302, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.666449 = idf(docFreq=152, maxDocs=44218)
              0.03125 = fieldNorm(doc=4302)
        0.025589654 = weight(_text_:und in 4302) [ClassicSimilarity], result of:
          0.025589654 = score(doc=4302,freq=26.0), product of:
            0.07245795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.032692216 = queryNorm
            0.3531656 = fieldWeight in 4302, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=4302)
        0.013698216 = weight(_text_:zur in 4302) [ClassicSimilarity], result of:
          0.013698216 = score(doc=4302,freq=2.0), product of:
            0.100663416 = queryWeight, product of:
              3.079125 = idf(docFreq=5528, maxDocs=44218)
              0.032692216 = queryNorm
            0.13607939 = fieldWeight in 4302, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.079125 = idf(docFreq=5528, maxDocs=44218)
              0.03125 = fieldNorm(doc=4302)
        0.0059777093 = weight(_text_:in in 4302) [ClassicSimilarity], result of:
          0.0059777093 = score(doc=4302,freq=10.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.13442196 = fieldWeight in 4302, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=4302)
        0.024973284 = weight(_text_:der in 4302) [ClassicSimilarity], result of:
          0.024973284 = score(doc=4302,freq=24.0), product of:
            0.073026784 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.032692216 = queryNorm
            0.34197432 = fieldWeight in 4302, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.03125 = fieldNorm(doc=4302)
      0.33333334 = coord(5/15)
    
    Abstract
    Despite the large document volumes involved in cross-database literature searches, academic users expect as high a share as possible of relevant, high-quality documents in their result sets. Besides direct full-text access, the order and structure of the listed results (the ranking) now plays a decisive role in the design of search systems. Users also expect flexible information systems that, among other things, let them influence the ranking of documents or switch to alternative ranking methods. This thesis presents two value-added services for search systems that address typical problems in searching for scholarly literature and can thereby measurably improve the search situation. The two services, treatment of semantic heterogeneity using cross-concordances and re-ranking based on Bradfordizing, come into play at different stages of the search; they are described here in detail and evaluated in the empirical part of the thesis with respect to their effectiveness for typical subject-oriented searches. The primary aim of the dissertation is to investigate whether the alternative re-ranking method Bradfordizing is, first, operable in the domain of bibliographic databases and, second, likely to be deployed profitably in information systems and offered to users. The tests used topics and data from two evaluation projects (CLEF and KoMoHe). The intellectually assessed documents come from a total of seven scholarly databases covering the social sciences, political science, economics, psychology and medicine. The evaluation of the cross-concordances (82 topics in total) shows that retrieval results improve significantly for all cross-concordances; interdisciplinary cross-concordances have the strongest (positive) effect on the search results. The evaluation of re-ranking by Bradfordizing (164 topics in total) shows that, for most test series, documents from the core zone (core journals) yield significantly higher precision than documents from zone 2 and zone 3 (peripheral journals). For journals as well as monographs, this relevance advantage of Bradfordizing can be demonstrated empirically on a very broad basis of topics and queries across two independent document corpora.
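    As a rough illustration of the Bradfordizing step described in this abstract, the sketch below re-ranks a result list by journal productivity; it mirrors only the basic idea and is not the implementation or the zone thresholds evaluated in the dissertation.
    ```python
    from collections import Counter

    def bradfordize(results):
        """Re-rank documents so that hits from the most productive journals
        (the Bradford core) come first; ties keep their original order."""
        journal_freq = Counter(doc["journal"] for doc in results)
        rank_of = {j: i for i, (j, _) in enumerate(journal_freq.most_common())}
        return sorted(results, key=lambda doc: rank_of[doc["journal"]])

    hits = [{"id": 1, "journal": "Journal B"}, {"id": 2, "journal": "Journal A"},
            {"id": 3, "journal": "Journal A"}, {"id": 4, "journal": "Journal C"},
            {"id": 5, "journal": "Journal A"}]
    print([d["id"] for d in bradfordize(hits)])  # -> [2, 3, 5, 1, 4]: the "Journal A" hits move to the front
    ```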
    Footnote
    Dissertation submitted to the Philosophische Fakultät I in fulfilment of the requirements for the academic degree of Doctor philosophiae (Dr. phil.)
    Imprint
    Berlin : Humboldt-Universität zu Berlin / Institut für Bibliotheks- und Informationswissenschaft
  16. Si, L.E.; O'Brien, A.; Probets, S.: Integration of distributed terminology resources to facilitate subject cross-browsing for library portal systems (2009) 0.04
    0.04447628 = product of:
      0.11119071 = sum of:
        0.023199033 = weight(_text_:23 in 3628) [ClassicSimilarity], result of:
          0.023199033 = score(doc=3628,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.1979932 = fieldWeight in 3628, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3628)
        0.023199033 = weight(_text_:23 in 3628) [ClassicSimilarity], result of:
          0.023199033 = score(doc=3628,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.1979932 = fieldWeight in 3628, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3628)
        0.028423464 = weight(_text_:software in 3628) [ClassicSimilarity], result of:
          0.028423464 = score(doc=3628,freq=2.0), product of:
            0.12969498 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.032692216 = queryNorm
            0.21915624 = fieldWeight in 3628, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3628)
        0.023199033 = weight(_text_:23 in 3628) [ClassicSimilarity], result of:
          0.023199033 = score(doc=3628,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.1979932 = fieldWeight in 3628, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3628)
        0.0057878923 = weight(_text_:in in 3628) [ClassicSimilarity], result of:
          0.0057878923 = score(doc=3628,freq=6.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.1301535 = fieldWeight in 3628, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3628)
        0.00738224 = product of:
          0.02214672 = sum of:
            0.02214672 = weight(_text_:22 in 3628) [ClassicSimilarity], result of:
              0.02214672 = score(doc=3628,freq=2.0), product of:
                0.114482574 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032692216 = queryNorm
                0.19345059 = fieldWeight in 3628, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3628)
          0.33333334 = coord(1/3)
      0.4 = coord(6/15)
    
    Abstract
    Purpose: To develop a prototype middleware framework between different terminology resources in order to provide a subject cross-browsing service for library portal systems. Design/methodology/approach: Nine terminology experts were interviewed to collect appropriate knowledge to support the development of a theoretical framework for the research. Based on this, a simplified software-based prototype system was constructed incorporating the knowledge acquired. The prototype involved mappings between the computer science schedule of the Dewey Decimal Classification (which acted as a spine) and two controlled vocabularies, UKAT and the ACM Computing Classification. Subsequently, six further experts in the field were invited to evaluate the prototype system and provide feedback to improve the framework. Findings: The major findings showed that, given the large variety of terminology resources distributed on the web, the proposed middleware service is essential for integrating the different terminology resources both technically and semantically in order to facilitate subject cross-browsing. A set of recommendations is also made outlining the important approaches and features that support such a cross-browsing middleware service.
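    A toy version of the spine-based mapping described above might look as follows; the DDC class numbers and vocabulary terms are invented placeholders, not mappings from the actual prototype.
    ```python
    # Both vocabularies are mapped onto DDC classes (the "spine"); cross-browsing
    # then goes UKAT term -> DDC class -> ACM CCS classes.
    UKAT_TO_DDC = {"Computer programming": "005.1", "Operating systems": "005.4"}
    ACM_TO_DDC = {"D.1 Programming Techniques": "005.1", "D.4 Operating Systems": "005.4"}

    def cross_browse(ukat_term: str) -> list[str]:
        """Return ACM CCS classes that share a DDC spine class with a UKAT term."""
        ddc = UKAT_TO_DDC.get(ukat_term)
        return [acm for acm, mapped in ACM_TO_DDC.items() if mapped == ddc] if ddc else []

    print(cross_browse("Computer programming"))  # ['D.1 Programming Techniques']
    ```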
    Content
    This paper is a pre-print version presented at the ISKO UK 2009 conference, 22-23 June, prior to peer review and editing. For published proceedings see special issue of Aslib Proceedings journal.
  17. Bellotto, A.; Bekesi, J.: Enriching metadata for a university repository by modelling and infrastructure : a new vocabulary server for Phaidra (2019) 0.04
    0.044258863 = product of:
      0.11064716 = sum of:
        0.02783884 = weight(_text_:23 in 5693) [ClassicSimilarity], result of:
          0.02783884 = score(doc=5693,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.23759183 = fieldWeight in 5693, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.046875 = fieldNorm(doc=5693)
        0.02783884 = weight(_text_:23 in 5693) [ClassicSimilarity], result of:
          0.02783884 = score(doc=5693,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.23759183 = fieldWeight in 5693, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.046875 = fieldNorm(doc=5693)
        0.010645939 = weight(_text_:und in 5693) [ClassicSimilarity], result of:
          0.010645939 = score(doc=5693,freq=2.0), product of:
            0.07245795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.032692216 = queryNorm
            0.14692576 = fieldWeight in 5693, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=5693)
        0.02783884 = weight(_text_:23 in 5693) [ClassicSimilarity], result of:
          0.02783884 = score(doc=5693,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.23759183 = fieldWeight in 5693, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.046875 = fieldNorm(doc=5693)
        0.005670953 = weight(_text_:in in 5693) [ClassicSimilarity], result of:
          0.005670953 = score(doc=5693,freq=4.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.12752387 = fieldWeight in 5693, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=5693)
        0.010813748 = weight(_text_:der in 5693) [ClassicSimilarity], result of:
          0.010813748 = score(doc=5693,freq=2.0), product of:
            0.073026784 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.032692216 = queryNorm
            0.14807922 = fieldWeight in 5693, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.046875 = fieldNorm(doc=5693)
      0.4 = coord(6/15)
    
    Abstract
    This paper illustrates an initial step towards the "semantic enrichment" of the University of Vienna's Phaidra repository, one of the valuable and up-to-date strategies for enhancing its role and usage. First, a technical report points out the choice made in the local context, i.e. the deployment of the vocabulary server iQvoc instead of the formerly used SKOSMOS, explaining the design decisions behind the current tool and the additional features that the implementation required. Afterwards, some modelling characteristics of the local LOD controlled vocabulary are described according to SKOS documentation and best practices, highlighting which approaches can be pursued to make a LOD KOS available on the Web, as well as issues that may be encountered.
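    To give an idea of the SKOS modelling that a vocabulary server such as iQvoc exposes, here is a small rdflib sketch; the vocabulary URI, the concept and its labels are invented and do not reproduce the actual Phaidra vocabularies.
    ```python
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, SKOS

    # Hypothetical vocabulary namespace in the style of a repository vocabulary server.
    VOCAB = Namespace("https://example.org/phaidra/vocabulary/")

    g = Graph()
    concept = VOCAB["object-type/image"]
    g.add((concept, RDF.type, SKOS.Concept))
    g.add((concept, SKOS.prefLabel, Literal("Image", lang="en")))
    g.add((concept, SKOS.prefLabel, Literal("Bild", lang="de")))
    g.add((concept, SKOS.inScheme, VOCAB["object-type"]))

    print(g.serialize(format="turtle"))  # rdflib 6+: serialize() returns a string
    ```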
    Date
    5. 6.2016 17:23:26
    Source
    Mitteilungen der Vereinigung Österreichischer Bibliothekarinnen und Bibliothekare. 72(2019) H.2-4, S.446-459
  18. Wake, S.; Nicholson, D.: HILT : subject access across domains (2001) 0.04
    0.038974375 = product of:
      0.19487187 = sum of:
        0.06495729 = weight(_text_:23 in 502) [ClassicSimilarity], result of:
          0.06495729 = score(doc=502,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.55438095 = fieldWeight in 502, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.109375 = fieldNorm(doc=502)
        0.06495729 = weight(_text_:23 in 502) [ClassicSimilarity], result of:
          0.06495729 = score(doc=502,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.55438095 = fieldWeight in 502, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.109375 = fieldNorm(doc=502)
        0.06495729 = weight(_text_:23 in 502) [ClassicSimilarity], result of:
          0.06495729 = score(doc=502,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.55438095 = fieldWeight in 502, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.109375 = fieldNorm(doc=502)
      0.2 = coord(3/15)
    
    Source
    SCONUL newsletter. 2001, no.23, S.14-18
  19. Hoffmann, P.; Médini, L.; Ghodous, P.: Using context to improve semantic interoperability (2006) 0.04
    0.0389062 = product of:
      0.14589825 = sum of:
        0.045931738 = weight(_text_:23 in 4434) [ClassicSimilarity], result of:
          0.045931738 = score(doc=4434,freq=4.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.39200652 = fieldWeight in 4434, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4434)
        0.045931738 = weight(_text_:23 in 4434) [ClassicSimilarity], result of:
          0.045931738 = score(doc=4434,freq=4.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.39200652 = fieldWeight in 4434, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4434)
        0.045931738 = weight(_text_:23 in 4434) [ClassicSimilarity], result of:
          0.045931738 = score(doc=4434,freq=4.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.39200652 = fieldWeight in 4434, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4434)
        0.008103048 = weight(_text_:in in 4434) [ClassicSimilarity], result of:
          0.008103048 = score(doc=4434,freq=6.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.1822149 = fieldWeight in 4434, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4434)
      0.26666668 = coord(4/15)
    
    Abstract
    This paper presents an approach to enhancing interoperability between heterogeneous ontologies. It consists in adapting the ranking of concepts to the end users and their work context. The computations are based on an upper domain ontology, a task hierarchy and a user profile. As prerequisites, OWL ontologies have to be given, and an articulation ontology has to be built.
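    A very small sketch of context-sensitive concept ranking in the spirit of this approach; the data structures and scoring weights are purely illustrative and are not taken from the paper.
    ```python
    def rank_concepts(concepts, user_profile, task_context):
        """Rank ontology concepts higher when their related terms overlap with
        the current task context (weighted double) and the user profile."""
        def score(concept):
            terms = set(concept["related_terms"])
            return 2 * len(terms & task_context) + len(terms & user_profile)
        return sorted(concepts, key=score, reverse=True)

    concepts = [{"id": "Gearbox", "related_terms": ["transmission", "mechanics"]},
                {"id": "Invoice", "related_terms": ["billing", "accounting"]}]
    # "Gearbox" ranks first for a mechanics-oriented user working on a transmission task.
    print(rank_concepts(concepts, user_profile={"mechanics"}, task_context={"transmission"}))
    ```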
    Date
    23. 6.2011 13:23:58
    Series
    Frontiers in artificial intelligence and applications; vol 143
    Source
    Leading the Web in concurrent engineering: next generation concurrent engineering. Proceeding of the 2006 ISPE Conference on Concurrent Engineering. Edited by Parisa Ghodous, Rose Dieng-Kuntz, Geilson Loureiro
  20. Wicaksana, I.W.S.; Wahyudi, B.: Comparison Latent Semantic and WordNet approach for semantic similarity calculation (2011) 0.04
    0.03850969 = product of:
      0.14441133 = sum of:
        0.045931738 = weight(_text_:23 in 689) [ClassicSimilarity], result of:
          0.045931738 = score(doc=689,freq=4.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.39200652 = fieldWeight in 689, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0546875 = fieldNorm(doc=689)
        0.045931738 = weight(_text_:23 in 689) [ClassicSimilarity], result of:
          0.045931738 = score(doc=689,freq=4.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.39200652 = fieldWeight in 689, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0546875 = fieldNorm(doc=689)
        0.045931738 = weight(_text_:23 in 689) [ClassicSimilarity], result of:
          0.045931738 = score(doc=689,freq=4.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.39200652 = fieldWeight in 689, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0546875 = fieldNorm(doc=689)
        0.0066161114 = weight(_text_:in in 689) [ClassicSimilarity], result of:
          0.0066161114 = score(doc=689,freq=4.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.14877784 = fieldWeight in 689, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=689)
      0.26666668 = coord(4/15)
    
    Abstract
    Information exchange among the many sources on the Internet is becoming more autonomous, dynamic and free. This situation leads to differing views of concepts across sources: the word "bank", for example, denotes an economic institution in the economics domain, whereas in the ecology domain it is defined as the slope of a river or lake. In this paper we evaluate a latent semantic approach and a WordNet approach for calculating semantic similarity. The evaluation is run on concepts from different domains, with judgements by experts (humans) as the reference. The results of the evaluation can contribute to concept mapping, query rewriting, interoperability, etc.
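    For the WordNet side of such a comparison, a similarity score can be sketched with NLTK as below (the latent semantic side would instead take the cosine of term vectors in a reduced LSA space); this assumes the NLTK WordNet corpus has been downloaded and is not the authors' implementation.
    ```python
    from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet") beforehand

    def wordnet_similarity(word1: str, word2: str) -> float:
        """Best path similarity over all synset pairs of the two words."""
        scores = [
            s1.path_similarity(s2) or 0.0
            for s1 in wn.synsets(word1)
            for s2 in wn.synsets(word2)
        ]
        return max(scores, default=0.0)

    # e.g. compare wordnet_similarity("bank", "river") with wordnet_similarity("bank", "money")
    print(wordnet_similarity("bank", "river"), wordnet_similarity("bank", "money"))
    ```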
    Date
    23. 3.2013 13:23:48

Languages

  • e 224
  • d 75
  • no 1
  • pt 1

Types

  • a 206
  • el 94
  • m 18
  • x 11
  • r 8
  • s 7
  • n 2
  • p 2

Subjects