Search (158 results, page 1 of 8)

  • Filter: theme_ss:"Wissensrepräsentation"
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.32
    Content
    See: https://aclanthology.org/D19-5317.pdf
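The per-result numbers (e.g. 0.32 for the first hit) are Lucene ClassicSimilarity (tf-idf) relevance scores. A minimal sketch of one per-term contribution, using exactly the constants the engine reported for result 1 (docFreq=24, maxDocs=44218, queryNorm=0.03067635, fieldNorm=0.046875, freq=2):

```python
import math

# Reproducing one per-term contribution from the engine's scoring
# output for result 1 (Lucene ClassicSimilarity, tf-idf). The
# constants are the values the search engine itself reported.
doc_freq, max_docs = 24, 44218
query_norm = 0.03067635
field_norm = 0.046875
freq = 2.0  # the term occurs twice in the field

idf = 1 + math.log(max_docs / (doc_freq + 1))  # 8.478011
tf = math.sqrt(freq)                           # 1.4142135
query_weight = idf * query_norm                # 0.26007444
field_weight = tf * idf * field_norm           # 0.56201804
score = query_weight * field_weight            # 0.14616652
print(round(score, 8))
```

The engine sums such contributions over the matching terms and multiplies by a coordination factor (coord(5/10) = 0.5 here) to arrive at the displayed 0.3166941 ≈ 0.32.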
  2. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.29
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. See: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf
  3. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.26
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Because of their purely syntactic nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning, rather than its representation), which makes the results of a retrieval process of very low usefulness for the user's task at hand. Over the last ten years, ontologies have evolved from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, so that the retrieval process can be driven by the meaning of the content. However, the retrieval process is inherently ambiguous: a user who is unfamiliar with the underlying repository and/or query syntax only approximates his information need in a query. It is therefore necessary to involve the user more actively in the retrieval process in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to interpret the meaning of his query conceptually, while the underlying domain ontology drives the conceptualisation process. In this way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver relevant content proactively.
    Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure that is strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics emerges automatically from the results. Our evaluation studies have shown that the ability to conceptualise a user's information need correctly and to interpret the retrieval results accordingly is a key issue in realizing much more meaningful information retrieval systems.
    Content
    See: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627
  4. Khalifa, M.; Shen, K.N.: Applying semantic networks to hypertext design : effects on knowledge structure acquisition and problem solving (2010) 0.01
    Abstract
    One of the key objectives of knowledge management is to transfer knowledge quickly and efficiently from experts to novices, who are different in terms of the structural properties of domain knowledge or knowledge structure. This study applies experts' semantic networks to hypertext navigation design and examines the potential of the resulting design, i.e., semantic hypertext, in facilitating knowledge structure acquisition and problem solving. Moreover, we argue that the level of sophistication of the knowledge structure acquired by learners is an important mediator influencing the learning outcomes (in this case, problem solving). The research model was empirically tested with a situated experiment involving 80 business professionals. The results of the empirical study provided strong support for the effectiveness of semantic hypertext in transferring knowledge structure and reported a significant full mediating effect of knowledge structure sophistication. Both theoretical and practical implications of this research are discussed.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.8, S.1673-1685
    Year
    2010
  5. Halpin, H.; Hayes, P.J.: When owl:sameAs isn't the same : an analysis of identity links on the Semantic Web (2010) 0.01
    Abstract
    In Linked Data, the use of owl:sameAs is ubiquitous in 'inter-linking' data-sets. However, there is a lurking suspicion within the Linked Data community that this use of owl:sameAs may be somehow incorrect, in particular with regard to its interactions with inference. In fact, owl:sameAs can be considered just one type of 'identity link', a link that declares two items to be identical in some fashion. After reviewing the definitions and history of the problem of identity in philosophy and knowledge representation, we outline four alternative readings of owl:sameAs, showing with examples how it is being (ab)used on the Web of data. Then we present possible solutions to this problem by introducing alternative identity links that rely on named graphs.
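Read strictly, owl:sameAs is an equivalence relation, so chains of identity links merge resources into clusters. A hypothetical sketch of that strict reading using union-find (the URIs below are invented examples, not taken from any real data set):

```python
# Hypothetical sketch: treating owl:sameAs strictly as an equivalence
# relation and merging identity links into clusters with union-find.
# The URIs are invented examples.
def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def same_as_clusters(links):
    parent = {}
    for a, b in links:
        parent.setdefault(a, a)
        parent.setdefault(b, b)
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:
            parent[ra] = rb  # union the two identity clusters
    clusters = {}
    for node in parent:
        clusters.setdefault(find(parent, node), set()).add(node)
    return list(clusters.values())

links = [("dbpedia:Berlin", "wikidata:Q64"),
         ("wikidata:Q64", "geonames:2950159")]
print(same_as_clusters(links))
```

Under this reading all three URIs collapse into a single cluster, which is precisely the behaviour the paper questions when the links only mean "identical in some fashion".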
    Source
    Linked Data on the Web (LDOW2010). Proceedings of the WWW2010 Workshop on Linked Data on the Web. Raleigh, USA, April 27, 2010. Edited by Christian Bizer et al
    Year
    2010
  6. Deokattey, S.; Neelameghan, A.; Kumar, V.: ¬A method for developing a domain ontology : a case study for a multidisciplinary subject (2010) 0.01
    Date
    22. 7.2010 19:41:16
    Source
    Knowledge organization. 37(2010) no.3, S.173-184
    Year
    2010
  7. Stock, W.G.: Wissensrepräsentation (2010) 0.01
    Content
    PowerPoint presentation, HHU Düsseldorf, summer semester 2010
    Year
    2010
  8. Information and communication technologies : international conference; proceedings / ICT 2010, Kochi, Kerala, India, September 7 - 9, 2010 (2010) 0.01
    Abstract
    This book constitutes the proceedings of the International Conference on Information and Communication Technologies held in Kochi, Kerala, India in September 2010.
    RSWK
    Telekommunikationsnetz / Netzwerktopologie / Kongress / Cochin <Kerala, 2010>
    Informationstechnik / Kongress / Cochin <Kerala, 2010>
    Informatik / Kongress / Cochin <Kerala, 2010>
    Data Mining / Kongress / Cochin <Kerala, 2010>
    Year
    2010
  9. Boteram, F.: Semantische Relationen in Dokumentationssprachen : vom Thesaurus zum semantischen Netz (2010) 0.01
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Hrsg.: J. Sieglerschmidt u. H.P.Ohly
    Year
    2010
  10. Fischer, W.; Bauer, B.: Combining ontologies and natural language (2010) 0.01
    Abstract
    Ontologies are a popular concept for capturing semantic knowledge of the world in a computer-understandable way. Today's ontological standards have been designed primarily with logical formalisms in mind, leaving linguistic information aside. However, knowledge is rarely just about the semantic information itself: in order to create and modify existing ontologies, users have to be able to understand the information they represent. Other problem domains (e.g. Natural Language Processing, NLP) can build on ontological information, but a bridge to syntactic information is missing. In this paper we therefore argue that the possibilities of today's standards such as OWL, RDF, etc. are not enough to provide a sound combination of syntax and semantics, and we present an approach for the linguistic enrichment of ontologies inspired by cognitive linguistics. The goal is to provide a generic, language-independent approach to modelling semantics which can be annotated with arbitrary linguistic information. This knowledge can then be used for better documentation of ontologies as well as for NLP and other Information Extraction (IE) related tasks.
    Source
    Advances in ontologies: Proceedings of the Sixth Australasian Ontology Workshop, Adelaide, Australia, 7 December 2010. Eds.: K. Taylor, T. Meyer and M. Orgun [http://krr.meraka.org.za/~aow2010/AOW2010-preproceedings.pdf]
    Year
    2010
  11. Crystal, D.: Semantic targeting : past, present, and future (2010) 0.01
    Abstract
    Purpose - This paper seeks to explicate the notion of "semantics", especially as it is used in the context of the internet in general and advertising in particular. Design/methodology/approach - The conception of semantics as it evolved within linguistics is placed in its historical context. In the field of online advertising, the paper shows the limitations of keyword-based approaches and of those where a limited amount of context is taken into account (contextual advertising). A more sophisticated notion of semantic targeting is explained, in which the whole page is taken into account in arriving at a semantic categorization. This is achieved through a combination of lexicological analysis and a purpose-built semantic taxonomy. Findings - The combination of a lexical analysis (derived from a dictionary) and a taxonomy (derived from a general encyclopedia, and subsequently refined) resulted in the construction of a "sense engine", which was then applied to online advertising. Examples of the application illustrate how the relevance and sensitivity (brand protection) of ad placement can be improved. Several areas of potential further application are outlined. Originality/value - This is the first systematic application of linguistics to provide a solution to the problem of inappropriate ad placement online.
    Source
    Aslib proceedings. 62(2010) nos.4/5, S.355-365
    Year
    2010
  12. Stock, W.G.: Concepts and semantic relations in information science (2010) 0.01
    Abstract
    Concept-based information retrieval and knowledge representation are in need of a theory of concepts and semantic relations. Guidelines for the construction and maintenance of knowledge organization systems (KOS) (such as ANSI/NISO Z39.19-2005 in the U.S.A. or DIN 2331:1980 in Germany) do not consider results of concept theory and theory of relations to the full extent. They are not able to unify the currently different worlds of traditional controlled vocabularies, of the social web (tagging and folksonomies) and of the semantic web (ontologies). Concept definitions as well as semantic relations are based on epistemological theories (empiricism, rationalism, hermeneutics, pragmatism, and critical theory). A concept is determined via its intension and extension as well as by definition. We will meet the problem of vagueness by introducing prototypes. Some important definitions are concept explanations (after Aristotle) and the definition of family resemblances (in the sense of Wittgenstein). We will model concepts as frames (according to Barsalou). The most important paradigmatic relation in KOS is hierarchy, which must be divided into different classes: hyponymy consists of taxonomy and simple hyponymy; meronymy consists of many different part-whole relations. For practical application purposes, the transitivity of the given relation is very important. Unspecific associative relations are of little help to our focused applications and should be replaced by generalizable and domain-specific relations. We will discuss the reflexivity, symmetry, and transitivity of paradigmatic relations as well as the appearance of specific semantic relations in the different kinds of KOS (folksonomies, nomenclatures, classification systems, thesauri, and ontologies). Finally, we will pick out KOS as a central theme of the Semantic Web.
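The transitivity stressed in the abstract for hierarchical relations can be made concrete: a transitive hyponymy ("is-a") relation can be closed mechanically. A small illustrative sketch, where the toy taxonomy is an invented example:

```python
# Sketch of closing a hyponymy ("is-a") relation under transitivity.
# The toy taxonomy below is an invented example.
def transitive_closure(pairs):
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))  # a is-a b and b is-a d => a is-a d
                    changed = True
    return closure

hyponymy = {("poodle", "dog"), ("dog", "mammal"), ("mammal", "animal")}
closure = transitive_closure(hyponymy)
print(("poodle", "animal") in closure)  # prints True
```

This is why transitivity matters in practice: a query for "animal" can safely be expanded to all transitive hyponyms, which does not hold for unspecific associative relations.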
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.10, S.1951-1969
    Year
    2010
  13. Sánchez, D.; Batet, M.; Valls, A.; Gibert, K.: Ontology-driven web-based semantic similarity (2010)
    Abstract
    Estimation of the degree of semantic similarity/distance between concepts is a very common problem in research areas such as natural language processing, knowledge acquisition, information retrieval or data mining. In the past, many similarity measures have been proposed, exploiting explicit knowledge - such as the structure of a taxonomy - or implicit knowledge - such as information distribution. In the former case, taxonomies and/or ontologies are used to introduce additional semantics; in the latter case, frequencies of term appearances in a corpus are considered. Classical measures based on those premises suffer from some problems: in the first case, their excessive dependency on the taxonomical/ontological structure; in the second case, the lack of semantics of a pure statistical analysis of occurrences and/or the ambiguity of estimating concept statistical distribution from term appearances. Measures based on Information Content (IC) of taxonomical concepts combine both approaches. However, they heavily depend on a corpus that is properly pre-tagged and disambiguated according to the ontological entities in order to compute accurate concept appearance probabilities. This limits the applicability of those measures to other ontologies - like specific domain ontologies - and massive corpora - like the Web. In this paper, several of the presented issues are analyzed. Modifications of classical similarity measures are also proposed. They are based on a contextualized and scalable version of IC computation in the Web by exploiting taxonomical knowledge. The goal is to avoid the measures' dependency on corpus pre-processing, to achieve reliable results and to minimize language ambiguity. Our proposals are able to outperform classical approaches when using the Web for estimating concept probabilities.
    Source
    Journal of intelligent information systems. 35(2010) no.x, S.383-413
    Year
    2010
  14. Assem, M. van: Converting and integrating vocabularies for the Semantic Web (2010)
    Abstract
    This thesis focuses on conversion of vocabularies for representation and integration of collections on the Semantic Web. A secondary focus is how to represent metadata schemas (RDF Schemas representing metadata element sets) such that they interoperate with vocabularies. The primary domain in which we operate is that of cultural heritage collections. The background worldview in which a solution is sought is that of the Semantic Web research paradigm with its associated theories, methods, tools and use cases. In other words, we assume the Semantic Web is in principle able to provide the context to realize interoperable collections. Interoperability is dependent on the interplay between representations and the applications that use them. We mean applications in the widest sense, such as "search" and "annotation". These applications or tasks are often present in software applications, such as the E-Culture application. It is therefore necessary that the applications' requirements on the vocabulary representation are met. This leads us to formulate the following problem statement: HOW CAN EXISTING VOCABULARIES BE MADE AVAILABLE TO SEMANTIC WEB APPLICATIONS?
    We refine the problem statement into three research questions. The first two focus on the problem of converting a vocabulary to a Semantic Web representation from its original format. Conversion of a vocabulary to a representation in a Semantic Web language is necessary to make the vocabulary available to Semantic Web applications. In the last question we focus on integration of collection metadata schemas in a way that allows for vocabulary representations as produced by our methods. Academic dissertation for the degree of Doctor at the Vrije Universiteit Amsterdam, Dutch Research School for Information and Knowledge Systems.
    Series
    SIKS Dissertation Series No. 2010-40
    Year
    2010
  15. Nagao, M.: Knowledge and inference (1990)
    Abstract
    Knowledge and Inference discusses an important problem for software systems: How do we treat knowledge and ideas on a computer, and how do we use inference to solve problems on a computer? The book addresses the problems of knowledge and inference for the purpose of merging artificial intelligence and library science. It begins by clarifying the concept of "knowledge" from many points of view, followed by a chapter on the current state of library science and the place of artificial intelligence in library science. Subsequent chapters cover central topics in artificial intelligence: search and problem solving, methods of making proofs, and the use of knowledge in looking for a proof. There is also a discussion of how to use the knowledge system. The final chapter describes a popular expert system and the tools for building expert systems, using an example based on Expert Systems - A Practical Introduction by P. Sell (Macmillan, 1985). This type of software is called an "expert system shell." This book was written as a textbook for undergraduate students, covering only the basics but explaining as much detail as possible.
    Year
    1990
  16. Cui, H.: Competency evaluation of plant character ontologies against domain literature (2010)
    Date
    1. 6.2010 9:55:22
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.6, S.1144-1165
    Year
    2010
  17. Hohmann, G.: ¬Die Anwendung des CIDOC-CRM für die semantische Wissensrepräsentation in den Kulturwissenschaften (2010)
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Hrsg.: J. Sieglerschmidt u. H.P.Ohly
    Year
    2010
  18. Semenova, E.: Ontologie als Begriffssystem : Theoretische Überlegungen und ihre praktische Umsetzung bei der Entwicklung einer Ontologie der Wissenschaftsdisziplinen (2010)
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Hrsg.: J. Sieglerschmidt u. H.P.Ohly
    Year
    2010
  19. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010)
    Date
    26.12.2011 13:40:22
    Year
    2010
  20. ¬The Semantic Web - ISWC 2010 : 9th International Semantic Web Conference, ISWC 2010, Shanghai, China, November 7-11, 2010, Revised Selected Papers, Part 2. (2010)
    Abstract
    The two-volume set LNCS 6496 and 6497 constitutes the refereed proceedings of the 9th International Semantic Web Conference, ISWC 2010, held in Shanghai, China, during November 7-11, 2010. Part I contains 51 papers out of 578 submissions to the research track. Part II contains 18 papers out of 66 submissions to the Semantic Web in-use track, 6 papers out of 26 submissions to the doctoral consortium track, and also 4 invited talks. Each submitted paper was carefully reviewed. The International Semantic Web Conferences (ISWC) constitute the major international venue where the latest research results and technical innovations on all aspects of the Semantic Web are presented. ISWC brings together researchers, practitioners, and users from the areas of artificial intelligence, databases, social networks, distributed computing, Web engineering, information systems, natural language processing, soft computing, and human computer interaction to discuss the major challenges and proposed solutions, the success stories and failures, as well as the visions that can advance research and drive innovation in the Semantic Web.
    RSWK
    Semantic Web / Kongress / Schanghai <2010>
    Semantic Web / Ontologie <Wissensverarbeitung> / Kongress / Schanghai <2010>
    Semantic Web / Datenverwaltung / Wissensmanagement / Kongress / Schanghai <2010>
    Semantic Web / Anwendungssystem / Kongress / Schanghai <2010>
    Semantic Web / World Wide Web 2.0 / Kongress / Schanghai <2010>
    Year
    2010

Languages

  • e 121
  • d 32
  • f 1
  • pt 1

Types

  • a 115
  • el 39
  • x 16
  • m 9
  • s 4
  • n 1
  • r 1