Search (90 results, page 1 of 5)

  • × theme_ss:"Wissensrepräsentation"
  • × year_i:[2010 TO 2020}
  1. Boteram, F.: Semantische Relationen in Dokumentationssprachen : vom Thesaurus zum semantischen Netz (2010) 0.02
    0.024250532 = product of:
      0.060626328 = sum of:
        0.014781064 = product of:
          0.044343192 = sum of:
            0.044343192 = weight(_text_:f in 4792) [ClassicSimilarity], result of:
              0.044343192 = score(doc=4792,freq=2.0), product of:
                0.14385001 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.036090754 = queryNorm
                0.3082599 = fieldWeight in 4792, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4792)
          0.33333334 = coord(1/3)
        0.045845263 = product of:
          0.06876789 = sum of:
            0.034539293 = weight(_text_:29 in 4792) [ClassicSimilarity], result of:
              0.034539293 = score(doc=4792,freq=2.0), product of:
                0.12695599 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.036090754 = queryNorm
                0.27205724 = fieldWeight in 4792, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4792)
            0.034228593 = weight(_text_:22 in 4792) [ClassicSimilarity], result of:
              0.034228593 = score(doc=4792,freq=2.0), product of:
                0.12638368 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036090754 = queryNorm
                0.2708308 = fieldWeight in 4792, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4792)
          0.6666667 = coord(2/3)
      0.4 = coord(2/5)
    
    Date
    2. 3.2013 12:29:05
    Source
     Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Hrsg.: J. Sieglerschmidt u. H.P. Ohly
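     The relevance figure shown for this entry (0.02) is the rounded value of the Lucene ClassicSimilarity "explain" tree printed above. A minimal Python sketch, assuming only the values quoted in that tree (idf, queryNorm, fieldNorm, freq), reproduces the arithmetic; the helper name term_score is mine and not part of Lucene.

       from math import sqrt

       # Values copied from the explain output for doc 4792 above.
       QUERY_NORM = 0.036090754
       FIELD_NORM = 0.0546875

       def term_score(idf, freq):
           # ClassicSimilarity: tf = sqrt(freq); queryWeight = idf * queryNorm;
           # fieldWeight = tf * idf * fieldNorm; term score = queryWeight * fieldWeight.
           tf = sqrt(freq)
           return (idf * QUERY_NORM) * (tf * idf * FIELD_NORM)

       w_f  = term_score(3.985786,  2.0)   # weight(_text_:f)  ~ 0.044343
       w_29 = term_score(3.5176873, 2.0)   # weight(_text_:29) ~ 0.034539
       w_22 = term_score(3.5018296, 2.0)   # weight(_text_:22) ~ 0.034229

       # coord() scales each clause group by the fraction of its clauses that matched.
       score = (w_f * (1 / 3) + (w_29 + w_22) * (2 / 3)) * (2 / 5)
       print(score)   # ~0.0242505, i.e. the 0.024250532 shown above, up to float rounding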
  2. Nielsen, M.: Neuronale Netze : Alpha Go - Computer lernen Intuition (2018) 0.02
    0.019622812 = product of:
      0.04905703 = sum of:
        0.0327577 = weight(_text_:den in 4523) [ClassicSimilarity], result of:
          0.0327577 = score(doc=4523,freq=2.0), product of:
            0.10344325 = queryWeight, product of:
              2.866198 = idf(docFreq=6840, maxDocs=44218)
              0.036090754 = queryNorm
            0.31667316 = fieldWeight in 4523, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.866198 = idf(docFreq=6840, maxDocs=44218)
              0.078125 = fieldNorm(doc=4523)
        0.016299332 = product of:
          0.048897993 = sum of:
            0.048897993 = weight(_text_:22 in 4523) [ClassicSimilarity], result of:
              0.048897993 = score(doc=4523,freq=2.0), product of:
                0.12638368 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036090754 = queryNorm
                0.38690117 = fieldWeight in 4523, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4523)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Content
    Vgl. auch den Beitrag: Sokol, J.: Spielend lernen. In: Spektrum der Wissenschaft. 2018, H.11, S.72-76.
    Source
    Spektrum der Wissenschaft. 2018, H.1, S.22-27
  3. Hoppe, T.: Semantische Filterung : ein Werkzeug zur Steigerung der Effizienz im Wissensmanagement (2013) 0.02
    0.015745595 = product of:
      0.039363988 = sum of:
        0.02620616 = weight(_text_:den in 2245) [ClassicSimilarity], result of:
          0.02620616 = score(doc=2245,freq=2.0), product of:
            0.10344325 = queryWeight, product of:
              2.866198 = idf(docFreq=6840, maxDocs=44218)
              0.036090754 = queryNorm
            0.25333852 = fieldWeight in 2245, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.866198 = idf(docFreq=6840, maxDocs=44218)
              0.0625 = fieldNorm(doc=2245)
        0.013157828 = product of:
          0.03947348 = sum of:
            0.03947348 = weight(_text_:29 in 2245) [ClassicSimilarity], result of:
              0.03947348 = score(doc=2245,freq=2.0), product of:
                0.12695599 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.036090754 = queryNorm
                0.31092256 = fieldWeight in 2245, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2245)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
     Dieser Artikel adressiert einen Randbereich des Wissensmanagements: die Schnittstelle zwischen unternehmensexternen Informationen im Internet und den Leistungsprozessen eines Unternehmens. Diese Schnittstelle ist besonders für Unternehmen von Interesse, deren Leistungsprozesse von externen Informationen abhängen und die auf diese Prozesse angewiesen sind. Wir zeigen an zwei Fallbeispielen, dass die inhaltliche Filterung von Informationen beim Eintritt ins Unternehmen ein wichtiges Werkzeug darstellt, um daran anschließende Wissens- und Informationsmanagementprozesse effizient zu gestalten.
    Date
    29. 9.2015 18:56:44
  4. Hohmann, G.: ¬Die Anwendung des CIDOC-CRM für die semantische Wissensrepräsentation in den Kulturwissenschaften (2010) 0.02
    0.015030173 = product of:
      0.03757543 = sum of:
        0.02779583 = weight(_text_:den in 4011) [ClassicSimilarity], result of:
          0.02779583 = score(doc=4011,freq=4.0), product of:
            0.10344325 = queryWeight, product of:
              2.866198 = idf(docFreq=6840, maxDocs=44218)
              0.036090754 = queryNorm
            0.26870608 = fieldWeight in 4011, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.866198 = idf(docFreq=6840, maxDocs=44218)
              0.046875 = fieldNorm(doc=4011)
        0.009779599 = product of:
          0.029338794 = sum of:
            0.029338794 = weight(_text_:22 in 4011) [ClassicSimilarity], result of:
              0.029338794 = score(doc=4011,freq=2.0), product of:
                0.12638368 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036090754 = queryNorm
                0.23214069 = fieldWeight in 4011, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4011)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
     Das CIDOC Conceptual Reference Model (CRM) ist eine Ontologie für den Bereich des kulturellen Erbes, die als ISO 21127 standardisiert ist. Inzwischen liegen auch OWL-DL-Implementationen des CRM vor, die seinen Einsatz auch im Semantic Web ermöglichen. OWL-DL ist eine entscheidbare Untermenge der Web Ontology Language, die vom W3C spezifiziert wurde. Lokale Anwendungsontologien, die ebenfalls in OWL-DL modelliert werden, können über Subklassenbeziehungen mit dem CRM als Referenzontologie verbunden werden. Dadurch wird es automatischen Prozessen ermöglicht, autonom heterogene Daten semantisch zu validieren, zueinander in Bezug zu setzen und Anfragen über verschiedene Datenbestände innerhalb der Wissensdomäne zu verarbeiten und zu beantworten.
    Source
     Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Hrsg.: J. Sieglerschmidt u. H.P. Ohly
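     The linking mechanism described in the abstract above (a local application ontology attached to the CRM reference ontology via subclass relations) can be sketched with rdflib. This is only an illustration: the local namespace, the class name Altarbild and the chosen CRM class are my assumptions, not taken from the paper.

       from rdflib import Graph, Namespace
       from rdflib.namespace import OWL, RDF, RDFS

       CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")      # CRM as reference ontology
       APP = Namespace("http://example.org/local-ontology#")       # hypothetical application ontology

       g = Graph()
       g.bind("crm", CRM)
       g.bind("app", APP)

       # Declare a local class and connect it to the CRM via rdfs:subClassOf,
       # so that CRM-aware queries and reasoners can interpret local instance data.
       g.add((APP.Altarbild, RDF.type, OWL.Class))
       g.add((APP.Altarbild, RDFS.subClassOf, CRM["E22_Man-Made_Object"]))

       print(g.serialize(format="turtle"))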
  5. Drewer, P.; Massion, F.; Pulitano, D.: Was haben Wissensmodellierung, Wissensstrukturierung, künstliche Intelligenz und Terminologie miteinander zu tun? (2017) 0.01
    0.014966056 = product of:
      0.03741514 = sum of:
        0.021115808 = product of:
          0.06334742 = sum of:
            0.06334742 = weight(_text_:f in 5576) [ClassicSimilarity], result of:
              0.06334742 = score(doc=5576,freq=2.0), product of:
                0.14385001 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.036090754 = queryNorm
                0.4403713 = fieldWeight in 5576, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5576)
          0.33333334 = coord(1/3)
        0.016299332 = product of:
          0.048897993 = sum of:
            0.048897993 = weight(_text_:22 in 5576) [ClassicSimilarity], result of:
              0.048897993 = score(doc=5576,freq=2.0), product of:
                0.12638368 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036090754 = queryNorm
                0.38690117 = fieldWeight in 5576, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5576)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Date
    13.12.2017 14:17:22
  6. Boteram, F.: Typisierung semantischer Relationen in integrierten Systemen der Wissensorganisation (2013) 0.01
    0.013488437 = product of:
      0.033721093 = sum of:
        0.010557904 = product of:
          0.03167371 = sum of:
            0.03167371 = weight(_text_:f in 919) [ClassicSimilarity], result of:
              0.03167371 = score(doc=919,freq=2.0), product of:
                0.14385001 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.036090754 = queryNorm
                0.22018565 = fieldWeight in 919, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=919)
          0.33333334 = coord(1/3)
        0.02316319 = weight(_text_:den in 919) [ClassicSimilarity], result of:
          0.02316319 = score(doc=919,freq=4.0), product of:
            0.10344325 = queryWeight, product of:
              2.866198 = idf(docFreq=6840, maxDocs=44218)
              0.036090754 = queryNorm
            0.22392172 = fieldWeight in 919, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.866198 = idf(docFreq=6840, maxDocs=44218)
              0.0390625 = fieldNorm(doc=919)
      0.4 = coord(2/5)
    
    Abstract
     Die differenzierte Typisierung semantischer Relationen hinsichtlich ihrer bedeutungstragenden inhaltlichen und formallogischen Eigenschaften in Systemen der Wissensorganisation ist eine Voraussetzung für leistungsstarke und benutzerfreundliche Modelle des Information Retrieval und der Wissensexploration. Systeme, die mehrere Dokumentationssprachen miteinander verknüpfen und funktional integrieren, erfordern besondere Ansätze für die Typisierung der verwendeten oder benötigten Relationen. Aufbauend auf vorangegangenen Überlegungen zu Modellen der semantischen Interoperabilität in verteilten Systemen, welche durch ein zentrales Kernsystem miteinander verbunden und so in den übergeordneten Funktionszusammenhang der Wissensorganisation gestellt werden, werden differenzierte und funktionale Strategien zur Typisierung und stratifizierten Definition der unterschiedlichen Relationen in diesem System entwickelt. Um die von fortschrittlichen Retrievalparadigmen geforderten Funktionalitäten im Kontext vernetzter Systeme zur Wissensorganisation unterstützen zu können, werden die formallogischen, typologischen und strukturellen Eigenschaften sowie der eigentliche semantische Gehalt aller Relationstypen definiert, die zur Darstellung von Begriffsbeziehungen verwendet werden. Um die Vielzahl unterschiedlicher, aber im Funktionszusammenhang des Gesamtsystems aufeinander bezogener Relationstypen präzise und effizient ordnen zu können, wird eine mehrfach gegliederte Struktur benötigt, welche die angestrebten Inventare in einer für den Nutzer übersichtlichen und intuitiv handhabbaren Form präsentieren und somit für eine Verwendung in explorativen Systemen vorhalten kann.
  7. Ricci, F.; Schneider, R.: ¬Die Verwendung von SKOS-Daten zur semantischen Suchfragenerweiterung im Kontext des individualisierbaren Informationsportals RODIN (2010) 0.01
    0.01245661 = product of:
      0.031141523 = sum of:
        0.008446323 = product of:
          0.025338966 = sum of:
            0.025338966 = weight(_text_:f in 4261) [ClassicSimilarity], result of:
              0.025338966 = score(doc=4261,freq=2.0), product of:
                0.14385001 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.036090754 = queryNorm
                0.17614852 = fieldWeight in 4261, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4261)
          0.33333334 = coord(1/3)
        0.0226952 = weight(_text_:den in 4261) [ClassicSimilarity], result of:
          0.0226952 = score(doc=4261,freq=6.0), product of:
            0.10344325 = queryWeight, product of:
              2.866198 = idf(docFreq=6840, maxDocs=44218)
              0.036090754 = queryNorm
            0.21939759 = fieldWeight in 4261, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.866198 = idf(docFreq=6840, maxDocs=44218)
              0.03125 = fieldNorm(doc=4261)
      0.4 = coord(2/5)
    
    Content
    "Im Projekt RODIN (Roue d'information) wird die Realisierung einer alternativen Portalidee im Rahmen der E-lib.ch-Initiative (www.e-lib.ch) angestrebt. Dahinter verbirgt sich die Idee eines personalisierbaren Informationsportals zur Aggregation heterogener Datenquellen unter Verwendung von SemanticWeb-Technologie. Das allgemeine wissenschaftliche Interesse von RODIN besteht darin, zu überprüfen, inwieweit bibliografische Ontologien als Bestandteil des SemanticWeb für die Informationssuche gewinnbringend eingesetzt werden können. Den Benutzern werden hierbei unterschiedliche Funktionalitäten zur Verfügung gestellt. So können sie zunächst aus unterschiedlichen Informationsquellen jene auswählen, die für ihre Recherche von Relevanz sind und diese in Form von Widgets auf der Startseite des Informationsportals zusammenstellen. Konkret handelt es sich hierbei um Informationsquellen, die im Kontext von E-lib.ch bereitgestellt werden (bspw. Swissbib.ch, Rero-Doc) sowie um allgemeine Webressourcen (etwa Delicious oder Google-Books). Anschließend besteht die Möglichkeit, simultan über alle spezifizierten Quellen eine parallele Suche anzustoßen und - nach Beendigung dieser Metasuche - die Ergebnisse zu verfeinern.
    Der Ausgangspunkt für die Suchverfeinerung ist ein Ergebnis aus der Treffermenge, für welches ein Ontologie-Mapping durchgeführt wird, d.h. dass ausgehend von den Daten und Metadaten des Suchergebnisses der semantische Kontext für dieses Dokument innerhalb einer Ontologie ermittelt wird. Bei diesen Ontologien handelt es sich um in SKOS-Daten überführte Thesauri und Taxonomien, die aus einem bibliothekswissenschaftlichen Umfeld stammen. Durch Ermittlung des ontologischen Kontexts stehen eine Reihe von Termen in Form von Synonymen, Hypernymen und Hyponymen zur Verfügung, die es dem Benutzer ermöglichen, seine Ergebnismenge gezielt einzuschränken, zu verallgemeinern oder auf ähnliche Begriffe auszuweiten. Nach der Bestimmung des weiteren Suchkontexts wird dann in allen vom Benutzer bereits ausgewählten Widgets eine neue Suche angestoßen und das zur Suchverfeinerung ausgewählte Dokument in seiner semantischen Ausrichtung zu den übrigen Informationsquellen kontextualisiert."
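     The SKOS-based refinement step described above can be sketched with rdflib: starting from the concept matched for a search result, the labels of its broader terms (hypernyms), narrower terms (hyponyms) and alternative labels (synonyms) are collected and offered for narrowing, widening or varying the query. The tiny in-memory vocabulary and all URIs below are invented for illustration.

       from rdflib import Graph, Literal, Namespace
       from rdflib.namespace import SKOS

       EX = Namespace("http://example.org/thesaurus/")   # hypothetical SKOS vocabulary

       g = Graph()
       g.add((EX.Informationsportal, SKOS.prefLabel, Literal("Informationsportal", lang="de")))
       g.add((EX.Informationsportal, SKOS.broader, EX.Informationssystem))
       g.add((EX.Informationsportal, SKOS.narrower, EX.Fachportal))
       g.add((EX.Informationssystem, SKOS.prefLabel, Literal("Informationssystem", lang="de")))
       g.add((EX.Fachportal, SKOS.prefLabel, Literal("Fachportal", lang="de")))
       g.add((EX.Fachportal, SKOS.altLabel, Literal("Themenportal", lang="de")))

       def expansion_terms(graph, concept):
           # Collect labels of the concept and of its skos:broader and skos:narrower
           # neighbours; prefLabel and altLabel both count as query candidates.
           neighbours = {concept}
           neighbours |= set(graph.objects(concept, SKOS.broader))
           neighbours |= set(graph.objects(concept, SKOS.narrower))
           terms = set()
           for c in neighbours:
               for prop in (SKOS.prefLabel, SKOS.altLabel):
                   terms |= {str(label) for label in graph.objects(c, prop)}
           return terms

       print(expansion_terms(g, EX.Informationsportal))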
  8. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.01
    0.011464337 = product of:
      0.057321683 = sum of:
        0.057321683 = product of:
          0.17196505 = sum of:
            0.17196505 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.17196505 = score(doc=400,freq=2.0), product of:
                0.30597782 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.036090754 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Content
     Vgl.: https://aclanthology.org/D19-5317.pdf.
  9. Assem, M. van; Rijgersberg, H.; Wigham, M.; Top, J.: Converting and annotating quantitative data tables (2010) 0.01
    0.010604188 = product of:
      0.02651047 = sum of:
        0.018286826 = product of:
          0.054860476 = sum of:
            0.054860476 = weight(_text_:f in 4705) [ClassicSimilarity], result of:
              0.054860476 = score(doc=4705,freq=6.0), product of:
                0.14385001 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.036090754 = queryNorm
                0.38137275 = fieldWeight in 4705, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4705)
          0.33333334 = coord(1/3)
        0.008223643 = product of:
          0.024670927 = sum of:
            0.024670927 = weight(_text_:29 in 4705) [ClassicSimilarity], result of:
              0.024670927 = score(doc=4705,freq=2.0), product of:
                0.12695599 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.036090754 = queryNorm
                0.19432661 = fieldWeight in 4705, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4705)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
     Companies, governmental agencies and scientists produce a large amount of quantitative (research) data, consisting of measurements ranging from e.g. the surface temperatures of an ocean to the viscosity of a sample of mayonnaise. Such measurements are stored in tables in e.g. spreadsheet files and research reports. To integrate and reuse such data, it is necessary to have a semantic description of the data. However, the notation used is often ambiguous, making automatic interpretation and conversion to RDF or another suitable format difficult. For example, the table header cell "f(Hz)" refers to frequency measured in Hertz, but the symbol "f" can also refer to the unit farad or the quantities force or luminous flux. Current annotation tools for this task either work on less ambiguous data or perform a more limited task. We introduce new disambiguation strategies based on an ontology, which make it possible to improve performance on "sloppy" datasets not yet targeted by existing systems.
    Date
    29. 7.2011 14:44:56
    Source
    The Semantic Web - ISWC 2010. 9th International Semantic Web Conference, ISWC 2010, Shanghai, China, November 7-11, 2010, Revised Selected Papers, Part I. Eds.: Peter F. Patel-Schneider et al
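     The kind of unit-driven disambiguation described in the abstract can be illustrated with a toy sketch: the unit given in parentheses in a header such as "f(Hz)" is used to choose among the possible readings of the bare symbol. The small lexicon and candidate lists below are invented stand-ins for the ontology the authors use.

       import re

       UNIT_TO_QUANTITY = {"Hz": "frequency", "N": "force", "lm": "luminous flux"}
       SYMBOL_CANDIDATES = {"f": ["frequency", "farad (unit)", "force", "luminous flux"]}

       def interpret_header(header):
           # Prefer the reading of the symbol that matches the quantity measured
           # by the unit in parentheses; fall back to all candidates otherwise.
           m = re.fullmatch(r"\s*(\w+)\s*\(\s*([^)]+)\s*\)\s*", header)
           if not m:
               return SYMBOL_CANDIDATES.get(header.strip(), ["unknown"])
           symbol, unit = m.group(1), m.group(2)
           quantity = UNIT_TO_QUANTITY.get(unit)
           candidates = SYMBOL_CANDIDATES.get(symbol, ["unknown"])
           narrowed = [c for c in candidates if quantity and quantity in c]
           return narrowed or candidates

       print(interpret_header("f(Hz)"))   # ['frequency']
       print(interpret_header("f"))       # still ambiguous: all four readings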
  10. Wen, B.; Horlings, E.; Zouwen, M. van der; Besselaar, P. van den: Mapping science through bibliometric triangulation : an experimental approach applied to water research (2017) 0.01
    0.009840997 = product of:
      0.024602491 = sum of:
        0.01637885 = weight(_text_:den in 3437) [ClassicSimilarity], result of:
          0.01637885 = score(doc=3437,freq=2.0), product of:
            0.10344325 = queryWeight, product of:
              2.866198 = idf(docFreq=6840, maxDocs=44218)
              0.036090754 = queryNorm
            0.15833658 = fieldWeight in 3437, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.866198 = idf(docFreq=6840, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3437)
        0.008223643 = product of:
          0.024670927 = sum of:
            0.024670927 = weight(_text_:29 in 3437) [ClassicSimilarity], result of:
              0.024670927 = score(doc=3437,freq=2.0), product of:
                0.12695599 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.036090754 = queryNorm
                0.19432661 = fieldWeight in 3437, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3437)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Date
    16.11.2017 13:29:12
  11. Kiren, T.; Shoaib, M.: ¬A novel ontology matching approach using key concepts (2016) 0.01
    0.009232319 = product of:
      0.023080796 = sum of:
        0.01493113 = product of:
          0.04479339 = sum of:
            0.04479339 = weight(_text_:f in 2589) [ClassicSimilarity], result of:
              0.04479339 = score(doc=2589,freq=4.0), product of:
                0.14385001 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.036090754 = queryNorm
                0.31138954 = fieldWeight in 2589, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2589)
          0.33333334 = coord(1/3)
        0.008149666 = product of:
          0.024448996 = sum of:
            0.024448996 = weight(_text_:22 in 2589) [ClassicSimilarity], result of:
              0.024448996 = score(doc=2589,freq=2.0), product of:
                0.12638368 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036090754 = queryNorm
                0.19345059 = fieldWeight in 2589, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2589)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
     Purpose: Ontologies are used to formally describe the concepts within a domain in a machine-understandable way. Matching of heterogeneous ontologies is often essential for many applications like semantic annotation, query answering or ontology integration. Some ontologies may include a large number of entities which make the ontology matching process very complex in terms of the search space and execution time requirements. The purpose of this paper is to present a technique for finding the degree of similarity between ontologies that trims down the search space by eliminating the ontology concepts that have less likelihood of being matched.
     Design/methodology/approach: Algorithms are written for finding key concepts, concept matching and relationship matching. WordNet is used for solving synonym problems during the matching process. The technique is evaluated using the reference alignments between ontologies from the Ontology Alignment Evaluation Initiative benchmark in terms of degree of similarity, Pearson's correlation coefficient and the IR measures precision, recall and F-measure.
     Findings: Positive correlation between the degree of similarity and the degree of similarity (reference alignment) and the computed values of precision, recall and F-measure showed that if only key concepts of ontologies are compared, a time and search space efficient ontology matching system can be developed.
     Originality/value: On the basis of the present novel approach for ontology matching, it is concluded that using key concepts for ontology matching gives comparable results in reduced time and space.
    Date
    20. 1.2015 18:30:22
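     The search-space reduction sketched in the abstract above (compare only "key concepts" instead of all concept pairs, with a synonym resource standing in for WordNet) can be illustrated as follows; the key-concept criterion, the toy ontologies and the synonym list are my assumptions, not the paper's actual algorithm.

       from itertools import product

       # Toy ontologies: concept -> directly related concepts.
       o1 = {"Person": ["Author", "Editor"], "Author": [], "Editor": []}
       o2 = {"Human": ["Writer"], "Writer": []}

       SYNONYMS = {("person", "human"), ("author", "writer")}   # stand-in for WordNet lookups

       def similarity(a, b):
           a, b = a.lower(), b.lower()
           return 1.0 if a == b or (a, b) in SYNONYMS or (b, a) in SYNONYMS else 0.0

       def key_concepts(ontology, k=2):
           # Illustrative notion of "key concept": the concepts with the most relations.
           return sorted(ontology, key=lambda c: len(ontology[c]), reverse=True)[:k]

       def match(src, tgt, threshold=0.8):
           # Compare only key concepts, trimming the search space before matching.
           return [(c1, c2) for c1, c2 in product(key_concepts(src), key_concepts(tgt))
                   if similarity(c1, c2) >= threshold]

       print(match(o1, o2))   # [('Person', 'Human'), ('Author', 'Writer')]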
  12. Zeh, T.: Ontologien in den Informationswissenschaften (2011) 0.01
    0.009078081 = product of:
      0.0453904 = sum of:
        0.0453904 = weight(_text_:den in 4981) [ClassicSimilarity], result of:
          0.0453904 = score(doc=4981,freq=6.0), product of:
            0.10344325 = queryWeight, product of:
              2.866198 = idf(docFreq=6840, maxDocs=44218)
              0.036090754 = queryNorm
            0.43879518 = fieldWeight in 4981, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.866198 = idf(docFreq=6840, maxDocs=44218)
              0.0625 = fieldNorm(doc=4981)
      0.2 = coord(1/5)
    
    Content
    "Seit etwa zwei Jahren gibt es im webbasieren sozialen Netzwerk XING (vormals openBC) die Arbeitsgruppe Ontologien in den Informationswissenschaften: Theorien, Methodologien, Technologien und Anwendungen. Die von Anatol Reibold initiierte Arbeitsgruppe mit inzwischen mehr als 800 Mitgliedern will vor allem das Thema Ontologien in den Informationswissenschaften voranbringen, ein Netzwerk von Ontologen autbauen und Wissen über das Thema austauschen. Im Forum werden grundsätzliche wie auch aktuelle Themen diskutiert, über neue Entwicklungen berichtet und Tipps zur Literatur gegeben."
  13. Sartori, F.; Grazioli, L.: Metadata guiding kowledge engineering : a practical approach (2014) 0.01
    0.009015142 = product of:
      0.022537854 = sum of:
        0.012669483 = product of:
          0.038008448 = sum of:
            0.038008448 = weight(_text_:f in 1572) [ClassicSimilarity], result of:
              0.038008448 = score(doc=1572,freq=2.0), product of:
                0.14385001 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.036090754 = queryNorm
                0.26422277 = fieldWeight in 1572, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1572)
          0.33333334 = coord(1/3)
        0.00986837 = product of:
          0.029605111 = sum of:
            0.029605111 = weight(_text_:29 in 1572) [ClassicSimilarity], result of:
              0.029605111 = score(doc=1572,freq=2.0), product of:
                0.12695599 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.036090754 = queryNorm
                0.23319192 = fieldWeight in 1572, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1572)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Source
    Metadata and semantics research: 8th Research Conference, MTSR 2014, Karlsruhe, Germany, November 27-29, 2014, Proceedings. Eds.: S. Closs et al
  14. Amarger, F.; Chanet, J.-P.; Haemmerlé, O.; Hernandez, N.; Roussey, C.: SKOS sources transformations for ontology engineering : agronomical taxonomy use case (2014) 0.01
    0.009015142 = product of:
      0.022537854 = sum of:
        0.012669483 = product of:
          0.038008448 = sum of:
            0.038008448 = weight(_text_:f in 1593) [ClassicSimilarity], result of:
              0.038008448 = score(doc=1593,freq=2.0), product of:
                0.14385001 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.036090754 = queryNorm
                0.26422277 = fieldWeight in 1593, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1593)
          0.33333334 = coord(1/3)
        0.00986837 = product of:
          0.029605111 = sum of:
            0.029605111 = weight(_text_:29 in 1593) [ClassicSimilarity], result of:
              0.029605111 = score(doc=1593,freq=2.0), product of:
                0.12695599 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.036090754 = queryNorm
                0.23319192 = fieldWeight in 1593, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1593)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Source
    Metadata and semantics research: 8th Research Conference, MTSR 2014, Karlsruhe, Germany, November 27-29, 2014, Proceedings. Eds.: S. Closs et al
  15. Ibekwe-SanJuan, F.: Semantic metadata annotation : tagging Medline abstracts for enhanced information access (2010) 0.01
    0.00848335 = product of:
      0.021208376 = sum of:
        0.014629461 = product of:
          0.043888383 = sum of:
            0.043888383 = weight(_text_:f in 3949) [ClassicSimilarity], result of:
              0.043888383 = score(doc=3949,freq=6.0), product of:
                0.14385001 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.036090754 = queryNorm
                0.3050982 = fieldWeight in 3949, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3949)
          0.33333334 = coord(1/3)
        0.006578914 = product of:
          0.01973674 = sum of:
            0.01973674 = weight(_text_:29 in 3949) [ClassicSimilarity], result of:
              0.01973674 = score(doc=3949,freq=2.0), product of:
                0.12695599 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.036090754 = queryNorm
                0.15546128 = fieldWeight in 3949, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3949)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
     Purpose - The object of this study is to develop methods for automatically annotating the argumentative role of sentences in scientific abstracts. Working from Medline abstracts, sentences were classified into four major argumentative roles: objective, method, result, and conclusion. The idea is that, if the role of each sentence can be marked up, then these metadata can be used during information retrieval to seek particular types of information such as novelty, conclusions, methodologies, aims/goals of a scientific piece of work.
     Design/methodology/approach - Two approaches were tested: linguistic cues and positional heuristics. Linguistic cues are lexico-syntactic patterns modelled as regular expressions implemented in a linguistic parser. Positional heuristics make use of the relative position of a sentence in the abstract to deduce its argumentative class.
     Findings - The experiments showed that positional heuristics attained a much higher degree of accuracy on Medline abstracts with an F-score of 64 per cent, whereas the linguistic cues only attained an F-score of 12 per cent. This is mostly because sentences from different argumentative roles are not always announced by surface linguistic cues.
     Research limitations/implications - A limitation to the study was the inability to test other methods to perform this task such as machine learning techniques which have been reported to perform better on Medline abstracts. Also, to compare the results of the study with earlier studies using Medline abstracts, the different argumentative roles present in Medline had to be mapped on to four major argumentative roles. This may have favourably biased the performance of the sentence classification by positional heuristics.
     Originality/value - To the best of one's knowledge, this study presents the first instance of evaluating linguistic cues and positional heuristics on the same corpus.
    Date
    29. 8.2010 12:21:49
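     The positional heuristic evaluated in the abstract above can be sketched as below; the cut-off points are invented for illustration and are not the thresholds used in the study.

       def classify_by_position(sentences):
           # Assign an argumentative role from a sentence's relative position in the abstract.
           n = len(sentences)
           roles = []
           for i, sentence in enumerate(sentences):
               pos = i / max(n - 1, 1)
               if pos < 0.2:
                   role = "objective"
               elif pos < 0.5:
                   role = "method"
               elif pos < 0.85:
                   role = "result"
               else:
                   role = "conclusion"
               roles.append((role, sentence))
           return roles

       abstract = ["We study X.", "We applied Y to Z.", "Accuracy improved by 10%.", "X is feasible."]
       for role, sentence in classify_by_position(abstract):
           print(role, "-", sentence)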
  16. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.01
    0.007859188 = product of:
      0.039295938 = sum of:
        0.039295938 = product of:
          0.058943905 = sum of:
            0.029605111 = weight(_text_:29 in 4649) [ClassicSimilarity], result of:
              0.029605111 = score(doc=4649,freq=2.0), product of:
                0.12695599 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.036090754 = queryNorm
                0.23319192 = fieldWeight in 4649, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4649)
            0.029338794 = weight(_text_:22 in 4649) [ClassicSimilarity], result of:
              0.029338794 = score(doc=4649,freq=2.0), product of:
                0.12638368 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036090754 = queryNorm
                0.23214069 = fieldWeight in 4649, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4649)
          0.6666667 = coord(2/3)
      0.2 = coord(1/5)
    
    Date
    29. 7.2011 14:44:56
    26.12.2011 13:40:22
  17. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.01
    0.007642892 = product of:
      0.03821446 = sum of:
        0.03821446 = product of:
          0.11464337 = sum of:
            0.11464337 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
              0.11464337 = score(doc=5820,freq=2.0), product of:
                0.30597782 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.036090754 = queryNorm
                0.3746787 = fieldWeight in 5820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Content
     Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Vgl.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  18. Ejei, F.; Beheshti, M.S.H.; Rajabi, T.; Ejehi, Z.: Enriching semantic relations of basic sciences ontology (2017) 0.01
    0.007512619 = product of:
      0.018781547 = sum of:
        0.010557904 = product of:
          0.03167371 = sum of:
            0.03167371 = weight(_text_:f in 3844) [ClassicSimilarity], result of:
              0.03167371 = score(doc=3844,freq=2.0), product of:
                0.14385001 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.036090754 = queryNorm
                0.22018565 = fieldWeight in 3844, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3844)
          0.33333334 = coord(1/3)
        0.008223643 = product of:
          0.024670927 = sum of:
            0.024670927 = weight(_text_:29 in 3844) [ClassicSimilarity], result of:
              0.024670927 = score(doc=3844,freq=2.0), product of:
                0.12695599 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.036090754 = queryNorm
                0.19432661 = fieldWeight in 3844, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3844)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Date
    29. 9.2017 18:39:48
  19. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.01
    0.007483028 = product of:
      0.01870757 = sum of:
        0.010557904 = product of:
          0.03167371 = sum of:
            0.03167371 = weight(_text_:f in 4553) [ClassicSimilarity], result of:
              0.03167371 = score(doc=4553,freq=2.0), product of:
                0.14385001 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.036090754 = queryNorm
                0.22018565 = fieldWeight in 4553, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4553)
          0.33333334 = coord(1/3)
        0.008149666 = product of:
          0.024448996 = sum of:
            0.024448996 = weight(_text_:22 in 4553) [ClassicSimilarity], result of:
              0.024448996 = score(doc=4553,freq=2.0), product of:
                0.12638368 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036090754 = queryNorm
                0.19345059 = fieldWeight in 4553, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4553)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Date
    16.11.2018 14:22:01
  20. Gödert, W.: ¬Ein Ontologie basiertes Modell für Indexierung und Retrieval (2014) 0.01
    0.0074122213 = product of:
      0.037061106 = sum of:
        0.037061106 = weight(_text_:den in 1266) [ClassicSimilarity], result of:
          0.037061106 = score(doc=1266,freq=4.0), product of:
            0.10344325 = queryWeight, product of:
              2.866198 = idf(docFreq=6840, maxDocs=44218)
              0.036090754 = queryNorm
            0.35827476 = fieldWeight in 1266, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.866198 = idf(docFreq=6840, maxDocs=44218)
              0.0625 = fieldNorm(doc=1266)
      0.2 = coord(1/5)
    
    Abstract
    In diesem Beitrag wird ausgehend von einem ungelösten Problem der Informationserschließung ein Modell vorgestellt, das die Methoden und Erfahrungen zur inhaltlichen Dokumenterschließung mittels kognitiv zu interpretierender Dokumentationssprachen mit den Möglichkeiten formaler Wissensrepräsentation verbindet. Die Kernkomponente des Modells besteht aus der Nutzung von Inferenzen entlang der Pfade typisierter Relationen zwischen den in Facetten geordneten Entitäten innerhalb einer Wissensrepräsentation zur Bestimmung von Treffermengen im Rahmen von Retrievalprozessen. Es werden die möglichen Konsequenzen für das Indexieren und Retrieval diskutiert.
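     The retrieval mechanism sketched in the abstract above - inferences along paths of typed relations between entities of a knowledge representation to determine result sets - can be illustrated with a toy example; the concepts, relation types and document index below are invented.

       # Toy knowledge representation: typed relations between concepts.
       RELATIONS = {
           "Thesaurus": [("is_a", "Dokumentationssprache")],
           "Semantisches Netz": [("is_a", "Dokumentationssprache")],
           "Dokumentationssprache": [("used_for", "Indexierung")],
       }

       # Documents indexed with concepts from the knowledge representation.
       INDEX = {"doc1": {"Thesaurus"}, "doc2": {"Semantisches Netz"}, "doc3": {"Indexierung"}}

       def expand(concept, allowed_types, depth=2):
           # Follow only the selected relation types ("typed paths") to collect
           # concepts that should also count as hits for the query concept.
           seen = {concept}
           frontier = [concept]
           for _ in range(depth):
               next_frontier = []
               for c in frontier:
                   for rtype, other in RELATIONS.get(c, []):
                       if rtype in allowed_types and other not in seen:
                           seen.add(other)
                           next_frontier.append(other)
               frontier = next_frontier
           return seen

       def retrieve(query_concept, allowed_types):
           hits = expand(query_concept, allowed_types)
           return {doc for doc, terms in INDEX.items() if terms & hits}

       print(retrieve("Thesaurus", {"is_a"}))                # {'doc1'}
       print(retrieve("Thesaurus", {"is_a", "used_for"}))    # {'doc1', 'doc3'}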

Languages

  • e 62
  • d 26
  • f 1
  • sp 1

Types

  • a 70
  • el 14
  • m 8
  • x 8
  • s 4
  • p 1
  • r 1
