Search (549 results, page 1 of 28)

  • theme_ss:"Wissensrepräsentation"
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.05
    Abstract
    In a scientific concept hierarchy, a parent concept may have a few attributes, each of which takes multiple values that form a group of child concepts. We call these attributes facets: classification, for example, has facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, faceted relations are direct parent-to-child links, whereas the hypernym relation is a multi-hop, i.e., ancestor-to-descendant, link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations in a data science corpus, and we propose a hierarchy growth algorithm that infers parent-child links from these three types of relationships. It resolves conflicts by maintaining the acyclic structure of the hierarchy.
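The conflict-resolution step described above lends itself to a short sketch. This is a minimal, hedged illustration of the acyclicity-preserving idea, with invented concept names and an invented edge list; it is not the paper's actual algorithm:

```python
# Sketch: a candidate parent->child link is added only if the child
# cannot already reach the parent, so the hierarchy stays acyclic.

def reaches(graph, start, target):
    """Depth-first check whether `target` is reachable from `start`."""
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        if node == target:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(graph.get(node, ()))
    return False

def grow_hierarchy(candidate_links):
    """Insert parent->child candidates, skipping any link that closes a cycle."""
    graph = {}
    for parent, child in candidate_links:
        if reaches(graph, child, parent):   # this link would create a cycle
            continue
        graph.setdefault(parent, []).append(child)
    return graph

links = [("classification", "svm"), ("svm", "kernel"),
         ("kernel", "classification")]      # the last link closes a cycle
print(grow_hierarchy(links))                # → {'classification': ['svm'], 'svm': ['kernel']}
```

The reachability test stands in for the paper's conflict resolution; a real implementation would also weigh the three relation types (synonym, sibling, ancestor-descendant) when deciding which of two conflicting links to keep.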
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
    Type
    a
  2. Nielsen, M.: Neuronale Netze : Alpha Go - Computer lernen Intuition (2018) 0.05
    Content
    See also the article: Sokol, J.: Spielend lernen. In: Spektrum der Wissenschaft. 2018, H.11, S.72-76.
    Source
    Spektrum der Wissenschaft. 2018, H.1, S.22-27
    Type
    a
  3. Boteram, F.: Semantische Relationen in Dokumentationssprachen : vom Thesaurus zum semantischen Netz (2010) 0.03
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Eds.: J. Sieglerschmidt and H.P. Ohly
    Type
    a
  4. Järvelin, K.; Kristensen, J.; Niemi, T.; Sormunen, E.; Keskustalo, H.: ¬A deductive data model for query expansion (1996) 0.03
    Abstract
    We present a deductive data model for concept-based query expansion. It is based on three abstraction levels: the conceptual, linguistic and occurrence levels. Concepts and relationships among them are represented at the conceptual level. The linguistic level represents natural language expressions for concepts. Each expression has one or more matching models at the occurrence level; each model specifies how the expression is matched in database indices built in varying ways. The data model supports a concept-based query expansion and formulation tool, the ExpansionTool, for environments providing heterogeneous IR systems. Expansion is controlled by adjustable matching reliability.
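The three abstraction levels can be sketched in a few lines. All data and names below are invented for illustration, and the actual ExpansionTool is not reproduced; this only shows how a query concept could fan out to expressions with per-expression matching models:

```python
# Hedged sketch: conceptual level (relations between concepts),
# linguistic level (expressions per concept), occurrence level
# (a matching model per expression, defaulting to stemmed matching).

concept_relations = {          # conceptual level: related concepts
    "information retrieval": ["query expansion"],
    "query expansion": [],
}
expressions = {                # linguistic level: expressions per concept
    "information retrieval": ["information retrieval", "IR"],
    "query expansion": ["query expansion", "query reformulation"],
}
matching_model = {             # occurrence level: how to match in an index
    "IR": "exact",             # e.g., acronyms are matched exactly
}

def expand_terms(concept, depth=1):
    """Collect expressions for a concept and, up to `depth`, its relations."""
    terms = list(expressions.get(concept, []))
    if depth > 0:
        for related in concept_relations.get(concept, []):
            terms.extend(expand_terms(related, depth - 1))
    return terms

def expand(concept):
    """Pair each expansion term with its occurrence-level matching model."""
    return [(t, matching_model.get(t, "stemmed")) for t in expand_terms(concept)]

print(expand("information retrieval"))
```

The "adjustable matching reliability" of the paper would correspond to choosing how far down the relation graph (`depth`) and how loose a matching model one is willing to go.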
    Source
    Proceedings of the 19th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (ACM SIGIR '96), Zürich, Switzerland, August 18-22, 1996. Eds.: H.P. Frei et al
    Type
    a
  5. Gödert, W.; Hubrich, J.; Nagelschmidt, M.: Semantic knowledge representation for information retrieval (2014) 0.03
    Abstract
    This book covers the basics of semantic web technologies and indexing languages, and describes their contribution to improving such languages as a tool for subject queries and knowledge exploration. The book is relevant to information scientists, knowledge workers and indexers. It provides a suitable combination of theoretical foundations and practical applications.
    Date
    23. 7.2017 13:49:22
  6. Hohmann, G.: ¬Die Anwendung des CIDOC-CRM für die semantische Wissensrepräsentation in den Kulturwissenschaften (2010) 0.03
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Eds.: J. Sieglerschmidt and H.P. Ohly
    Type
    a
  7. Semenova, E.: Ontologie als Begriffssystem : Theoretische Überlegungen und ihre praktische Umsetzung bei der Entwicklung einer Ontologie der Wissenschaftsdisziplinen (2010) 0.03
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Eds.: J. Sieglerschmidt and H.P. Ohly
    Type
    a
  8. Bringsjord, S.; Clark, M.; Taylor, J.: Sophisticated knowledge representation and reasoning requires philosophy (2014) 0.03
    Abstract
    What is knowledge representation and reasoning (KR&R)? Alas, a thorough account would require a book, or at least a dedicated, full-length paper, but here we shall have to make do with something simpler. Since most readers are likely to have an intuitive grasp of the essence of KR&R, our simple account should suffice. The interesting thing is that this simple account itself makes reference to some of the foundational distinctions in the field of philosophy. These distinctions also play a central role in artificial intelligence (AI) and computer science. To begin with, the first distinction in KR&R is that we identify knowledge with knowledge that such-and-such holds (possibly to a degree), rather than knowing how. If you ask an expert tennis player how he manages to serve a ball at 130 miles per hour on his first serve, and then serve a safer, topspin serve on his second should the first be out, you may well receive a confession that, if truth be told, this athlete can't really tell you. He just does it; he does something he has been doing since his youth. Yet, there is no denying that he knows how to serve. In contrast, the knowledge in KR&R must be expressible in declarative statements. For example, our tennis player knows that if his first serve lands outside the service box, it's not in play. He thus knows a proposition, conditional in form.
    Date
    9. 2.2017 19:22:14
    Type
    a
  9. Jia, J.: From data to knowledge : the relationships between vocabularies, linked data and knowledge graphs (2021) 0.02
    Abstract
    Purpose - The purpose of this paper is to identify the concepts, component parts and relationships between vocabularies, linked data and knowledge graphs (KGs) from the perspectives of data and knowledge transitions. Design/methodology/approach - This paper uses conceptual analysis methods. This study focuses on distinguishing concepts and analyzing composition and intercorrelations to explore data and knowledge transitions. Findings - Vocabularies are the cornerstone for accurately building understanding of the meaning of data. Vocabularies provide for a data-sharing model and play an important role in supporting the semantic expression of linked data and defining the schema layer; they are also used for entity recognition, alignment and linkage for KGs. KGs, which consist of a schema layer and a data layer, are presented as cubes that organically combine vocabularies, linked data and big data. Originality/value - This paper first describes the composition of vocabularies, linked data and KGs. More importantly, this paper innovatively analyzes and summarizes the interrelatedness of these factors, which comes from frequent interactions between data and knowledge. The three factors empower each other and can ultimately empower the Semantic Web.
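The schema-layer/data-layer split described in the findings can be made concrete with a toy example. This is an illustrative sketch, not from the paper, and every identifier in it is invented: a vocabulary supplies the classes and properties of the schema layer, which then constrain the typed entities and triples of the data layer.

```python
# Sketch: a knowledge graph as schema layer plus data layer, with the
# vocabulary-defined schema validating data-layer triples.

schema = {                              # schema layer (from a vocabulary)
    "classes": {"Book", "Person"},
    "properties": {"author": ("Book", "Person")},  # property: (domain, range)
}
types = {"D19-5317": "Book", "Q. Zeng": "Person"}  # entity typing
data = [("D19-5317", "author", "Q. Zeng")]         # data layer: s-p-o triples

def validate(triple):
    """Check a data-layer triple against the schema layer's domain/range."""
    s, p, o = triple
    domain, rng = schema["properties"][p]
    return types.get(s) == domain and types.get(o) == rng

print(all(validate(t) for t in data))   # → True
```

This mirrors the paper's point that vocabularies are what make data-layer statements interpretable: without the domain/range declarations, the triple is just three strings.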
    Date
    22. 1.2021 14:24:32
    Type
    a
  10. OWL Web Ontology Language Test Cases (2004) 0.02
    Date
    14. 8.2011 13:33:22
    Editor
    Carroll, J.J. and J. de Roo
  11. Kless, D.: Erstellung eines allgemeinen Standards zur Wissensorganisation : Nutzen, Möglichkeiten, Herausforderungen, Wege (2010) 0.02
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Eds.: J. Sieglerschmidt and H.P. Ohly
    Type
    a
  12. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.02
    Abstract
    Purpose - Ontologies are prone to wide semantic variability due to subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of diverse ontologies for controversial domains by their relations. Design/methodology/approach - Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many semantic web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques were proposed in the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. However, matching individual entities cannot result in full integration of ontologies' semantics without matching their inter-relations with all other related classes (and instances), and semantic matching of ontological relations still constitutes a major research challenge. Therefore, in this paper the authors propose a new paradigm for assessment of maximal possible matching and unification of ontological relations. To this end, several unification rules for ontological relations were devised based on ontological reference rules, and lexical and textual entailment. These rules were semi-automatically implemented to extend a given ontology with semantically matching relations from another ontology for a similar domain. Then, the ontologies were unified through these similar pairs of relations. The authors observe that these rules can also be applied to reveal contradictory relations in different ontologies. Findings - To assess the feasibility of the approach, two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects. The results for about 50 distinct ontology pairs demonstrate a good potential of the methodology for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies. Research limitations/implications - This is a conceptual study that presents a new approach for semantic unification of ontologies by a devised set of rules along with initial experimental evidence of its feasibility and effectiveness. However, this methodology has to be fully automatically implemented and tested on a larger dataset in future research. Practical implications - This result has implications for semantic search, since a richer ontology, comprising multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results. Originality/value - To the best of our knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.
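The relation-unification idea can be sketched in miniature. In this hedged toy version (all names and data invented, not the authors' rules), two (subject, relation, object) triples from different ontologies unify when their endpoint labels agree after synonym normalization and their relation names match; the paper's lexical/textual-entailment machinery is reduced here to a hand-made synonym table:

```python
# Sketch: relation-level matching between two ontologies via label
# normalization through a synonym table.

synonyms = {"automobile": "car", "motor": "engine"}

def norm(label):
    """Normalize a label: lowercase, then map through the synonym table."""
    label = label.lower()
    return synonyms.get(label, label)

def unify(rel_a, rel_b):
    """Unify two (subject, relation, object) triples if endpoints and relation match."""
    (s1, p1, o1), (s2, p2, o2) = rel_a, rel_b
    return norm(s1) == norm(s2) and p1 == p2 and norm(o1) == norm(o2)

print(unify(("Automobile", "has_part", "Motor"),
            ("car", "has_part", "engine")))   # → True
```

The same machinery points at the paper's observation about contradictions: when endpoints unify but the relation names conflict (e.g., `has_part` versus `part_of` in opposite directions), the pair flags a contradictory relation rather than a match.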
    Date
    20. 1.2015 18:30:22
    Type
    a
  13. Hocker, J.; Schindler, C.; Rittberger, M.: Participatory design for ontologies : a case study of an open science ontology for qualitative coding schemas (2020) 0.02
    Abstract
    Purpose The open science movement calls for transparent and retraceable research processes. While infrastructures to support these practices in qualitative research are lacking, their design needs to consider different approaches and workflows. The paper builds on the definition of ontologies as shared conceptualizations of knowledge (Borst, 1999). The authors argue that participatory design is a good way to create these shared conceptualizations because it gives domain experts and future users a voice in the design process via interviews, workshops and observations. Design/methodology/approach This paper presents a novel approach for creating ontologies in the field of open science using participatory design. As a case study, the creation of an ontology for qualitative coding schemas is presented. Coding schemas are an important result of qualitative research, and their reuse holds great potential for open science: it makes qualitative research more transparent, enhances the sharing of coding schemas and supports the teaching of qualitative methods. The participatory design process consisted of three parts: a requirement analysis using interviews and an observation, a design phase accompanied by interviews, and an evaluation phase based on user tests as well as interviews. Findings The research showed several positive outcomes of participatory design: higher commitment of users, mutual learning, high-quality feedback and better quality of the resulting ontology. However, there are two obstacles to this approach: first, contradictory answers by the interviewees, which need to be balanced; second, the approach takes more time due to interview planning and analysis. Practical implications The long-term implication of the paper is to decentralize the design of open science infrastructures and to involve the affected parties on several levels. Originality/value In ontology design, several methods exist that use user-centered design or participatory design through workshops.
In this paper, the authors outline the potential of participatory design, using mainly interviews, for creating an ontology for open science. The authors focus on close contact with researchers in order to build the ontology upon the experts' knowledge.
    Date
    20. 1.2015 18:30:22
    Type
    a
  14. Mayfield, J.; Finin, T.: Information retrieval on the Semantic Web : integrating inference and retrieval 0.02
    Date
    12. 2.2011 17:35:22
  15. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.02
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity which can no longer be managed by traditional modelling approaches. Due to their purely syntactical nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning, rather than its representation). This leads to very low usefulness of the results of a retrieval process for a user's task at hand. In the last ten years, ontologies have emerged from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the very ambiguous nature of the retrieval process, in which a user, due to unfamiliarity with the underlying repository and/or query syntax, merely approximates his information need in a query, implies the necessity of involving the user more actively in the retrieval process, in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with a user in order to conceptually interpret the meaning of his query, while the underlying domain ontology drives the conceptualisation process. In that way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively.
Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure, strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics automatically emerges from the results. Our evaluation studies have shown that the ability to conceptualize a user's information need in the right manner, and to interpret the retrieval results accordingly, is a key issue for realizing much more meaningful information retrieval systems.
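The interactive refinement loop sketched in the abstract can be illustrated in a few lines: an ambiguous query term is mapped to ontology concepts, and narrower concepts are offered back to the user as candidate refinements. The tiny taxonomy below is invented for illustration and is not the thesis' Librarian Agent implementation.

```python
# Hedged sketch of ontology-driven query refinement: given an ambiguous
# query term, suggest narrower ontology concepts the user can pick from.

# Invented toy taxonomy: concept -> narrower concepts.
TAXONOMY = {
    "jaguar": ["jaguar (animal)", "jaguar (car)"],
    "jaguar (animal)": ["habitat", "diet"],
    "jaguar (car)": ["models", "dealers"],
}


def refine(query: str) -> list[str]:
    """Return narrower ontology concepts as refinement suggestions."""
    return TAXONOMY.get(query, [])


print(refine("jaguar"))  # → ['jaguar (animal)', 'jaguar (car)']
```

In the thesis, the refinement step is driven by inferencing over the domain ontology and by the user's preferences rather than a static lookup, but the interaction pattern (query, suggested conceptualisations, user choice) is the same.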
    Content
    Vgl.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  16. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.02
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words provides only a shallow understanding of text; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge-based text representations, which embed external and carefully curated information from knowledge bases, and provide richer and more structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. We then present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and the internal ranking features cooperatively. To further improve the quality of the relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and ranking is performed in the entity space.
This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations, with their uncertainties taken into account. Finally, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity graph representations for texts, and to rank documents using these structured representations. This dissertation overcomes the limitations of word-based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards a new generation of intelligent, semantic, and structured information retrieval.
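The bag-of-entities model described above can be sketched minimally: queries and documents are represented by their entity annotations, and ranking is done by overlap in the entity space. The entity links here are hand-supplied toy data; the dissertation's system obtains them from an automatic entity linker and a knowledge base.

```python
# Minimal sketch of bag-of-entities ranking: represent query and documents
# as multisets of entity annotations, score by shared entity frequency.
from collections import Counter


def bag_of_entities(annotations: list[str]) -> Counter:
    """Build a bag-of-entities representation from entity annotations."""
    return Counter(annotations)


def score(query_ents: Counter, doc_ents: Counter) -> int:
    """Coordinate match in entity space: sum of shared entity frequencies."""
    return sum(min(count, doc_ents[e]) for e, count in query_ents.items())


query = bag_of_entities(["Barack_Obama", "Family"])
docs = {
    "d1": bag_of_entities(["Barack_Obama", "Michelle_Obama", "Family"]),
    "d2": bag_of_entities(["United_States", "Congress"]),
}
ranking = sorted(docs, key=lambda d: score(query, docs[d]), reverse=True)
print(ranking)  # → ['d1', 'd2']
```

The thesis goes well beyond this coordinate match (latent spaces, learning to rank, uncertainty-aware hybrids with word-based evidence), but the sketch shows what "ranking in the entity space" means at its core.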
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Vgl.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  17. Schmitz-Esser, W.: Language of general communication and concept compatibility (1996) 0.02
    Pages
    S.11-22
    Type
    a
  18. Tudhope, D.; Hodge, G.: Terminology registries (2007) 0.02
    Abstract
    A discussion on current initiatives regarding terminology registries.
    Date
    26.12.2011 13:22:07
  19. Schreiber, G.; Amin, A.; Assem, M. van; Boer, V. de; Hardman, L.; Hildebrand, M.; Hollink, L.; Huang, Z.; Kersen, J. van; Niet, M. de; Omelayenko, B.; Ossenbruggen, J. van; Siebes, R.; Taekema, J.; Wielemaker, J.; Wielinga, B.: MultimediaN E-Culture demonstrator (2006) 0.02
  20. Davies, J.; Fensel, D.; Harmelen, F. van: Conclusions: ontology-driven knowledge management : towards the Semantic Web? (2004) 0.02
    Source
    Towards the semantic Web: ontology-driven knowledge management. Eds.: J. Davies et al.
    Type
    a

Languages

  • e 439
  • d 94
  • pt 5
  • el 1
  • f 1
  • sp 1

Types

  • a 419
  • el 145
  • m 24
  • x 23
  • n 13
  • s 11
  • p 5
  • r 5
  • A 1
  • EL 1
