Search (78 results, page 1 of 4)

  • language_ss:"e"
  • theme_ss:"Wissensrepräsentation"
  • year_i:[2010 TO 2020}
  1. Xu, G.; Cao, Y.; Ren, Y.; Li, X.; Feng, Z.: Network security situation awareness based on semantic ontology and user-defined rules for Internet of Things (2017) 0.06
    0.05572748 = product of:
      0.11145496 = sum of:
        0.08408859 = weight(_text_:description in 306) [ClassicSimilarity], result of:
          0.08408859 = score(doc=306,freq=4.0), product of:
            0.23150103 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.04979191 = queryNorm
            0.36323205 = fieldWeight in 306, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.0390625 = fieldNorm(doc=306)
        0.027366372 = product of:
          0.054732744 = sum of:
            0.054732744 = weight(_text_:access in 306) [ClassicSimilarity], result of:
              0.054732744 = score(doc=306,freq=6.0), product of:
                0.16876608 = queryWeight, product of:
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.04979191 = queryNorm
                0.3243113 = fieldWeight in 306, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=306)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
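    As a reading aid: each hit is followed by the Lucene ClassicSimilarity "explain" tree for its relevance score. The minimal Python sketch below (our illustration, not part of the catalogue output; all values are copied from the tree above) recomputes the score of this first hit, where each term weight is queryWeight * fieldWeight = (idf * queryNorm) * (tf * idf * fieldNorm) and the coord factors scale down partial matches.

      import math

      QUERY_NORM = 0.04979191   # queryNorm from the tree above
      FIELD_NORM = 0.0390625    # fieldNorm(doc=306)

      def term_weight(freq, idf):
          tf = math.sqrt(freq)                   # tf(freq) = sqrt(freq)
          query_weight = idf * QUERY_NORM        # e.g. 0.23150103 for "description"
          field_weight = tf * idf * FIELD_NORM   # e.g. 0.36323205 for "description"
          return query_weight * field_weight

      description = term_weight(freq=4.0, idf=4.64937)      # ~0.08408859
      access = term_weight(freq=6.0, idf=3.389428) * 0.5    # coord(1/2) -> ~0.02736637
      print(round((description + access) * 0.5, 8))         # coord(2/4); approx. 0.0557275, matching the score shown above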
    
    Abstract
    The Internet of Things (IoT) brings the third development wave of the global information industry, which makes users, networks and perception devices cooperate more closely. However, if IoT has security problems, it may cause a variety of damage and even threaten human lives and property. To improve the abilities of monitoring, providing emergency response and predicting the development trend of IoT security, a new paradigm called network security situation awareness (NSSA) is proposed. However, it is limited by its ability to mine and evaluate security situation elements from multi-source heterogeneous network security information. To solve this problem, this paper proposes an IoT network security situation awareness model using a situation reasoning method based on semantic ontology and user-defined rules. Ontology technology can provide a unified and formalized description to solve the problem of semantic heterogeneity in the IoT security domain. In this paper, four key sub-domains are proposed to reflect an IoT security situation: context, attack, vulnerability and network flow. Further, user-defined rules can compensate for the limited description ability of ontology, and hence can enhance the reasoning ability of our proposed ontology model. Examples in real IoT scenarios show that network security situation awareness adopting our situation reasoning method is more comprehensive and offers more powerful reasoning than traditional NSSA methods. [http://ieeexplore.ieee.org/abstract/document/7999187/]
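    To make the "ontology plus user-defined rules" idea concrete, here is a minimal sketch under our own assumptions (the iot: terms and the single toy rule are invented for illustration; this is not the authors' model or rule language):

      from rdflib import Graph, Namespace, RDF

      IOT = Namespace("http://example.org/iot#")
      g = Graph()
      g.bind("iot", IOT)

      # A few facts stated against a small, hypothetical IoT security vocabulary.
      g.add((IOT.camera1, RDF.type, IOT.PerceptionDevice))
      g.add((IOT.camera1, IOT.hasVulnerability, IOT.weakPassword))
      g.add((IOT.bruteForce, IOT.exploits, IOT.weakPassword))

      def apply_threat_rule(graph):
          """User-defined rule: ?d hasVulnerability ?v and ?a exploits ?v => ?d threatenedBy ?a."""
          for device, vuln in graph.subject_objects(IOT.hasVulnerability):
              for attack in graph.subjects(IOT.exploits, vuln):
                  graph.add((device, IOT.threatenedBy, attack))

      apply_threat_rule(g)
      print((IOT.camera1, IOT.threatenedBy, IOT.bruteForce) in g)  # True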
    Content
    DOI 10.1109/ACCESS.2017.2734681.
    Source
    IEEE Access. 5(2017), S.21046-21056 [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7999187]
  2. Prud'hommeaux, E.; Gayo, E.: RDF ventures to boldly meet your most pedestrian needs (2015) 0.05
    0.04579494 = product of:
      0.09158988 = sum of:
        0.071351536 = weight(_text_:description in 2024) [ClassicSimilarity], result of:
          0.071351536 = score(doc=2024,freq=2.0), product of:
            0.23150103 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.04979191 = queryNorm
            0.3082126 = fieldWeight in 2024, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.046875 = fieldNorm(doc=2024)
        0.020238347 = product of:
          0.040476695 = sum of:
            0.040476695 = weight(_text_:22 in 2024) [ClassicSimilarity], result of:
              0.040476695 = score(doc=2024,freq=2.0), product of:
                0.17436278 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04979191 = queryNorm
                0.23214069 = fieldWeight in 2024, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2024)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Defined in 1999 and paired with XML, the Resource Description Framework (RDF) has been cast as an RDF Schema, producing data that is well-structured but not validated, permitting certain illogical relationships. When stakeholders convened in 2014 to consider solutions to the data validation challenge, a W3C working group proposed Resource Shapes and Shape Expressions to describe the properties expected for an RDF node. Resistance rose from concerns about data and schema reuse, key principles in RDF. Ideally data types and properties are designed for broad use, but they are increasingly adopted with local restrictions for specific purposes. Resource Shapes are commonly treated as record classes, standing in for data structures but losing flexibility for later reuse. Of various solutions to the resulting tensions, the concept of record classes may be the most reasonable basis for agreement, satisfying stakeholders' objectives while allowing for variations with constraints.
    Source
    Bulletin of the Association for Information Science and Technology. 41(2015) no.4, S.18-22
  3. Soergel, D.: Towards a relation ontology for the Semantic Web (2011) 0.05
    0.045155756 = product of:
      0.09031151 = sum of:
        0.071351536 = weight(_text_:description in 4342) [ClassicSimilarity], result of:
          0.071351536 = score(doc=4342,freq=2.0), product of:
            0.23150103 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.04979191 = queryNorm
            0.3082126 = fieldWeight in 4342, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.046875 = fieldNorm(doc=4342)
        0.018959979 = product of:
          0.037919957 = sum of:
            0.037919957 = weight(_text_:access in 4342) [ClassicSimilarity], result of:
              0.037919957 = score(doc=4342,freq=2.0), product of:
                0.16876608 = queryWeight, product of:
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.04979191 = queryNorm
                0.22468945 = fieldWeight in 4342, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4342)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The Semantic Web consists of data structured for use by computer programs, such as data sets made available under the Linked Open Data initiative. Much of this data is structured following the entity-relationship model encoded in RDF for syntactic interoperability. For semantic interoperability, the semantics of the relationships used in any given dataset needs to be made explicit. Ultimately this requires an inventory of these relationships structured around a relation ontology. This talk will outline a blueprint for such an inventory, including a format for the description/definition of binary and n-ary relations, drawing on ideas put forth in the classification and thesaurus community over the last 60 years, upper level ontologies, systems like FrameNet, the Buffalo Relation Ontology, and an analysis of linked data sets.
    Source
    Classification and ontology: formal approaches and access to knowledge: proceedings of the International UDC Seminar, 19-20 September 2011, The Hague, The Netherlands. Eds.: A. Slavic and E. Civallero
  4. Sperber, W.; Ion, P.D.F.: Content analysis and classification in mathematics (2011) 0.05
    0.045155756 = product of:
      0.09031151 = sum of:
        0.071351536 = weight(_text_:description in 4818) [ClassicSimilarity], result of:
          0.071351536 = score(doc=4818,freq=2.0), product of:
            0.23150103 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.04979191 = queryNorm
            0.3082126 = fieldWeight in 4818, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.046875 = fieldNorm(doc=4818)
        0.018959979 = product of:
          0.037919957 = sum of:
            0.037919957 = weight(_text_:access in 4818) [ClassicSimilarity], result of:
              0.037919957 = score(doc=4818,freq=2.0), product of:
                0.16876608 = queryWeight, product of:
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.04979191 = queryNorm
                0.22468945 = fieldWeight in 4818, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4818)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The number of publications in mathematics increases faster each year. Presently far more than 100,000 mathematically relevant journal articles and books are published annually. Efficient and high-quality content analysis of this material is important for mathematical bibliographic services such as ZBMath or MathSciNet. Content analysis has different facets and levels: classification, keywords, abstracts and reviews, and (in the future) formula analysis. It is the opinion of the authors that the different levels have to be enhanced and combined using the methods and technology of the Semantic Web. In the presentation, the problems and deficits of the existing methods and tools, the state of the art and current activities are discussed. As a first step, the Mathematical Subject Classification Scheme (MSC) has been encoded with Simple Knowledge Organization System (SKOS) and Resource Description Framework (RDF) at its recent revision to MSC2010. The use of SKOS principally opens new possibilities for the enrichment and wider deployment of this classification scheme and for machine-based content analysis of mathematical publications.
    Source
    Classification and ontology: formal approaches and access to knowledge: proceedings of the International UDC Seminar, 19-20 September 2011, The Hague, The Netherlands. Eds.: A. Slavic and E. Civallero
  5. Marcondes, C.H.; Costa, L.C. da.: ¬A model to represent and process scientific knowledge in biomedical articles with semantic Web technologies (2016) 0.04
    0.03816245 = product of:
      0.0763249 = sum of:
        0.05945961 = weight(_text_:description in 2829) [ClassicSimilarity], result of:
          0.05945961 = score(doc=2829,freq=2.0), product of:
            0.23150103 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.04979191 = queryNorm
            0.25684384 = fieldWeight in 2829, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2829)
        0.01686529 = product of:
          0.03373058 = sum of:
            0.03373058 = weight(_text_:22 in 2829) [ClassicSimilarity], result of:
              0.03373058 = score(doc=2829,freq=2.0), product of:
                0.17436278 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04979191 = queryNorm
                0.19345059 = fieldWeight in 2829, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2829)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Knowledge organization faces the challenge of managing the amount of knowledge available on the Web. Published literature in biomedical sciences is a huge source of knowledge, which can only efficiently be managed through automatic methods. The conventional channel for reporting scientific results is Web electronic publishing. Despite its advances, scientific articles are still published in print formats such as portable document format (PDF). Semantic Web and Linked Data technologies provide new opportunities for communicating, sharing, and integrating scientific knowledge that can overcome the limitations of the current print format. Here, a semantic model of scholarly electronic articles in biomedical sciences is proposed that can overcome the limitations of traditional flat record formats. Scientific knowledge consists of claims made throughout article texts, especially when semantic elements such as questions, hypotheses and conclusions are stated. These elements, although having different roles, express relationships between phenomena. Once such knowledge units are extracted and represented with technologies such as RDF (Resource Description Framework) and linked data, they may be integrated in reasoning chains. Thereby, the results of scientific research can be published and shared in structured formats, enabling crawling by software agents, semantic retrieval, knowledge reuse, validation of scientific results, and identification of traces of scientific discoveries.
    Date
    12. 3.2016 13:17:22
  6. Miller, S.: Introduction to ontology concepts and terminology : DC-2013 Tutorial, September 2, 2013. (2013) 0.03
    0.033635437 = product of:
      0.13454175 = sum of:
        0.13454175 = weight(_text_:description in 1075) [ClassicSimilarity], result of:
          0.13454175 = score(doc=1075,freq=4.0), product of:
            0.23150103 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.04979191 = queryNorm
            0.5811713 = fieldWeight in 1075, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.0625 = fieldNorm(doc=1075)
      0.25 = coord(1/4)
    
    Content
    Tutorial topics and outline:
    1. Tutorial Background Overview: the Semantic Web, Linked Data, and the Resource Description Framework.
    2. Ontology Basics and RDFS Tutorial: semantic modeling, domain ontologies, and RDF Vocabulary Description Language (RDFS) concepts and terminology; examples (domain ontologies, models, and schemas); exercises (a small illustrative sketch follows below).
    3. OWL Overview Tutorial: Web Ontology Language (OWL), selected concepts and terminology; exercises.
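    For readers unfamiliar with the RDFS constructs named in part 2, a minimal, hypothetical domain-ontology sketch (the ex: vocabulary is invented for illustration and is not part of the tutorial materials):

      from rdflib import Graph, Literal, Namespace, RDF, RDFS

      EX = Namespace("http://example.org/vocab#")
      g = Graph()
      g.bind("ex", EX)

      # Classes and a small class hierarchy.
      g.add((EX.Document, RDF.type, RDFS.Class))
      g.add((EX.Ontology, RDF.type, RDFS.Class))
      g.add((EX.Ontology, RDFS.subClassOf, EX.Document))   # every ontology is a document

      # A property with domain, range and a human-readable label.
      g.add((EX.describes, RDF.type, RDF.Property))
      g.add((EX.describes, RDFS.domain, EX.Ontology))
      g.add((EX.describes, RDFS.range, RDFS.Resource))
      g.add((EX.describes, RDFS.label, Literal("describes", lang="en")))

      print(g.serialize(format="turtle"))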
  7. Frické, M.: Logic and the organization of information (2012) 0.03
    0.032817632 = product of:
      0.065635264 = sum of:
        0.04162173 = weight(_text_:description in 1782) [ClassicSimilarity], result of:
          0.04162173 = score(doc=1782,freq=2.0), product of:
            0.23150103 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.04979191 = queryNorm
            0.17979069 = fieldWeight in 1782, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1782)
        0.024013536 = weight(_text_:26 in 1782) [ClassicSimilarity], result of:
          0.024013536 = score(doc=1782,freq=2.0), product of:
            0.17584132 = queryWeight, product of:
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.04979191 = queryNorm
            0.13656367 = fieldWeight in 1782, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1782)
      0.5 = coord(2/4)
    
    Date
    16. 3.2012 11:26:29
    Footnote
    Rev. in: J. Doc. 70(2014) no.4: "Books on the organization of information and knowledge, aimed at a library/information audience, tend to fall into two clear categories. Most are practical and pragmatic, explaining the "how" as much or more than the "why". Some are theoretical, in part or in whole, showing how the practice of classification, indexing, resource description and the like relates to philosophy, logic, and other foundational bases; the books by Langridge (1992) and by Svenonius (2000) are well-known examples of this latter kind. To this category certainly belongs a recent book by Martin Frické (2012). The author takes the reader for an extended tour through a variety of aspects of information organization, including classification and taxonomy, alphabetical vocabularies and indexing, cataloguing and FRBR, and aspects of the semantic web. The emphasis throughout is on showing how practice is, or should be, underpinned by formal structures; there is a particular emphasis on first order predicate calculus. The advantages of a greater, and more explicit, use of symbolic logic are a recurring theme of the book. There is a particularly commendable historical dimension, often omitted in texts on this subject. It cannot be said that this book is entirely an easy read, although it is well written with a helpful index, and its arguments are generally well supported by clear and relevant examples. It is thorough and detailed, but thereby seems better geared to the needs of advanced students and researchers than to the practitioners who are suggested as a main market. For graduate students in library/information science and related disciplines, in particular, this will be a valuable resource. I would place it alongside Svenonius' book as the best insight into the theoretical "why" of information organization. It has evoked a good deal of interest, including a set of essay commentaries in Journal of Information Science (Gilchrist et al., 2013). Introducing these, Alan Gilchrist rightly says that Frické deserves a salute for making explicit the fundamental relationship between the ancient discipline of logic and modern information organization. If information science is to continue to develop, and make a contribution to the organization of the information environments of the future, then this book sets the groundwork for the kind of studies which will be needed." (D. Bawden)
  8. Kohne, J.: Ontology, its origins and its meaning in information science (2014) 0.03
    0.028324801 = product of:
      0.056649603 = sum of:
        0.03430505 = weight(_text_:26 in 3401) [ClassicSimilarity], result of:
          0.03430505 = score(doc=3401,freq=2.0), product of:
            0.17584132 = queryWeight, product of:
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.04979191 = queryNorm
            0.19509095 = fieldWeight in 3401, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3401)
        0.02234455 = product of:
          0.0446891 = sum of:
            0.0446891 = weight(_text_:access in 3401) [ClassicSimilarity], result of:
              0.0446891 = score(doc=3401,freq=4.0), product of:
                0.16876608 = queryWeight, product of:
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.04979191 = queryNorm
                0.26479906 = fieldWeight in 3401, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3401)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Ontology - in Aristotelian terms the science of being qua being - as a classical branch of philosophy describes the foundations of being in general. In this context, ontology is general metaphysics: the science of everything. Pursuing ontology means establishing some systematic order among the being, i.e. dividing things into categories or conceptual frameworks. Explaining the reasons why there are things or even anything, however, is part of what is called special metaphysics (theology, cosmology and psychology). If putting things into categories is the key issue of ontology, then general structures are its main level of analysis. To categorize things is to put them into a structural order. Such categorization of things enables one to understand what reality is about. If this is true, and characterizing the general structures of being is a reasonable access for us to reality, then two kinds of analysis of those structures are available: (i) realism and (ii) nominalism. In a realist (Aristotelian) ontology the general structures of being are understood as a kind of mirror reflecting things in their natural order. Those categories, as they are called in realism, then represent or show the structure of being. Ontological realism understands the relation between categories and being as a kind of correspondence or mapping which gives access to reality itself.
    Date
    26. 1.2017 9:53:39
  9. Kiren, T.; Shoaib, M.: ¬A novel ontology matching approach using key concepts (2016) 0.03
    0.02558517 = product of:
      0.05117034 = sum of:
        0.03430505 = weight(_text_:26 in 2589) [ClassicSimilarity], result of:
          0.03430505 = score(doc=2589,freq=2.0), product of:
            0.17584132 = queryWeight, product of:
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.04979191 = queryNorm
            0.19509095 = fieldWeight in 2589, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2589)
        0.01686529 = product of:
          0.03373058 = sum of:
            0.03373058 = weight(_text_:22 in 2589) [ClassicSimilarity], result of:
              0.03373058 = score(doc=2589,freq=2.0), product of:
                0.17436278 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04979191 = queryNorm
                0.19345059 = fieldWeight in 2589, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2589)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    20. 1.2015 18:30:22
    18. 9.2018 18:47:26
  10. Bringsjord, S.; Clark, M.; Taylor, J.: Sophisticated knowledge representation and reasoning requires philosophy (2014) 0.03
    0.02558517 = product of:
      0.05117034 = sum of:
        0.03430505 = weight(_text_:26 in 3403) [ClassicSimilarity], result of:
          0.03430505 = score(doc=3403,freq=2.0), product of:
            0.17584132 = queryWeight, product of:
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.04979191 = queryNorm
            0.19509095 = fieldWeight in 3403, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3403)
        0.01686529 = product of:
          0.03373058 = sum of:
            0.03373058 = weight(_text_:22 in 3403) [ClassicSimilarity], result of:
              0.03373058 = score(doc=3403,freq=2.0), product of:
                0.17436278 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04979191 = queryNorm
                0.19345059 = fieldWeight in 3403, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3403)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    26. 1.2017 9:53:39
    9. 2.2017 19:22:14
  11. Gödert, W.; Hubrich, J.; Nagelschmidt, M.: Semantic knowledge representation for information retrieval (2014) 0.02
    0.023525903 = product of:
      0.09410361 = sum of:
        0.09410361 = sum of:
          0.053626917 = weight(_text_:access in 987) [ClassicSimilarity], result of:
            0.053626917 = score(doc=987,freq=4.0), product of:
              0.16876608 = queryWeight, product of:
                3.389428 = idf(docFreq=4053, maxDocs=44218)
                0.04979191 = queryNorm
              0.31775886 = fieldWeight in 987, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.389428 = idf(docFreq=4053, maxDocs=44218)
                0.046875 = fieldNorm(doc=987)
          0.040476695 = weight(_text_:22 in 987) [ClassicSimilarity], result of:
            0.040476695 = score(doc=987,freq=2.0), product of:
              0.17436278 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04979191 = queryNorm
              0.23214069 = fieldWeight in 987, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=987)
      0.25 = coord(1/4)
    
    Date
    23. 7.2017 13:49:22
    LCSH
    World Wide Web / Subject access
    Subject
    World Wide Web / Subject access
  12. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.02
    0.01977069 = product of:
      0.07908276 = sum of:
        0.07908276 = product of:
          0.23724826 = sum of:
            0.23724826 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.23724826 = score(doc=400,freq=2.0), product of:
                0.42213637 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04979191 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
  13. Schulz, S.; Schober, D.; Tudose, I.; Stenzhorn, H.: ¬The pitfalls of thesaurus ontologization : the case of the NCI thesaurus (2010) 0.02
    0.017837884 = product of:
      0.071351536 = sum of:
        0.071351536 = weight(_text_:description in 4885) [ClassicSimilarity], result of:
          0.071351536 = score(doc=4885,freq=2.0), product of:
            0.23150103 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.04979191 = queryNorm
            0.3082126 = fieldWeight in 4885, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.046875 = fieldNorm(doc=4885)
      0.25 = coord(1/4)
    
    Abstract
    Thesauri that are "ontologized" into OWL-DL semantics are highly amenable to modeling errors resulting from falsely interpreting existential restrictions. We investigated the OWL-DL representation of the NCI Thesaurus (NCIT) in order to assess the correctness of existential restrictions. A random sample of 354 axioms using the someValuesFrom operator was taken. According to a rating performed by two domain experts, roughly half of these examples, and in consequence more than 76,000 axioms in the OWL-DL version, make incorrect assertions if interpreted according to description logics semantics. These axioms therefore constitute a huge source for unintended models, rendering most logic-based reasoning unreliable. After identifying typical error patterns we discuss some possible improvements. Our recommendation is to either amend the problematic axioms in the OWL-DL formalization or to consider some less strict representational format.
  14. Cui, H.: Competency evaluation of plant character ontologies against domain literature (2010) 0.02
    0.016332638 = product of:
      0.06533055 = sum of:
        0.06533055 = sum of:
          0.031599965 = weight(_text_:access in 3466) [ClassicSimilarity], result of:
            0.031599965 = score(doc=3466,freq=2.0), product of:
              0.16876608 = queryWeight, product of:
                3.389428 = idf(docFreq=4053, maxDocs=44218)
                0.04979191 = queryNorm
              0.18724121 = fieldWeight in 3466, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.389428 = idf(docFreq=4053, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3466)
          0.03373058 = weight(_text_:22 in 3466) [ClassicSimilarity], result of:
            0.03373058 = score(doc=3466,freq=2.0), product of:
              0.17436278 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04979191 = queryNorm
              0.19345059 = fieldWeight in 3466, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3466)
      0.25 = coord(1/4)
    
    Abstract
    Specimen identification keys are still the most commonly created tools used by systematic biologists to access biodiversity information. Creating identification keys requires analyzing and synthesizing large amounts of information from specimens and their descriptions and is a very labor-intensive and time-consuming activity. Automating the generation of identification keys from text descriptions becomes a highly attractive text mining application in the biodiversity domain. Fine-grained semantic annotation of morphological descriptions of organisms is a necessary first step in generating keys from text. Machine-readable ontologies are needed in this process because most biological characters are only implied (i.e., not stated) in descriptions. The immediate question to ask is How well do existing ontologies support semantic annotation and automated key generation? With the intention to either select an existing ontology or develop a unified ontology based on existing ones, this paper evaluates the coverage, semantic consistency, and inter-ontology agreement of a biodiversity character ontology and three plant glossaries that may be turned into ontologies. The coverage and semantic consistency of the ontology/glossaries are checked against the authoritative domain literature, namely, Flora of North America and Flora of China. The evaluation results suggest that more work is needed to improve the coverage and interoperability of the ontology/glossaries. More concepts need to be added to the ontology/glossaries and careful work is needed to improve the semantic consistency. The method used in this paper to evaluate the ontology/glossaries can be used to propose new candidate concepts from the domain literature and suggest appropriate definitions.
    Date
    1. 6.2010 9:55:22
  15. Das, S.; Roy, S.: Faceted ontological model for brain tumour study (2016) 0.02
    0.016332638 = product of:
      0.06533055 = sum of:
        0.06533055 = sum of:
          0.031599965 = weight(_text_:access in 2831) [ClassicSimilarity], result of:
            0.031599965 = score(doc=2831,freq=2.0), product of:
              0.16876608 = queryWeight, product of:
                3.389428 = idf(docFreq=4053, maxDocs=44218)
                0.04979191 = queryNorm
              0.18724121 = fieldWeight in 2831, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.389428 = idf(docFreq=4053, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2831)
          0.03373058 = weight(_text_:22 in 2831) [ClassicSimilarity], result of:
            0.03373058 = score(doc=2831,freq=2.0), product of:
              0.17436278 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04979191 = queryNorm
              0.19345059 = fieldWeight in 2831, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2831)
      0.25 = coord(1/4)
    
    Abstract
    The purpose of this work is to develop an ontology-based framework for developing an information retrieval system to cater to specific queries of users. For creating such an ontology, information was obtained from a wide range of information sources involved with brain tumour study and research. The information thus obtained was compiled and analysed to provide a standard, reliable and relevant information base to aid our proposed system. Facet-based methodology has been used for ontology formalization for quite some time. Ontology formalization involves different steps such as identification of the terminology, analysis, synthesis, standardization and ordering. A vast majority of the ontologies being developed nowadays lack flexibility. This becomes a formidable constraint when it comes to interoperability. We found that a facet-based method provides a distinct guideline for the development of a robust and flexible model concerning the domain of brain tumours. Our attempt has been to bridge library and information science and computer science, which itself involved an experimental approach. It was discovered that a faceted approach is really enduring, as it helps in the achievement of properties like navigation, exploration and faceted browsing. Computer-based brain tumour ontology supports the work of researchers towards gathering information on brain tumour research and allows users across the world to intelligently access new scientific information quickly and efficiently.
    Date
    12. 3.2016 13:21:22
  16. Assem, M. van; Rijgersberg, H.; Wigham, M.; Top, J.: Converting and annotating quantitative data tables (2010) 0.01
    0.014864903 = product of:
      0.05945961 = sum of:
        0.05945961 = weight(_text_:description in 4705) [ClassicSimilarity], result of:
          0.05945961 = score(doc=4705,freq=2.0), product of:
            0.23150103 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.04979191 = queryNorm
            0.25684384 = fieldWeight in 4705, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4705)
      0.25 = coord(1/4)
    
    Abstract
    Companies, governmental agencies and scientists produce a large amount of quantitative (research) data, consisting of measurements ranging from e.g. the surface temperatures of an ocean to the viscosity of a sample of mayonnaise. Such measurements are stored in tables in e.g. spreadsheet files and research reports. To integrate and reuse such data, it is necessary to have a semantic description of the data. However, the notation used is often ambiguous, making automatic interpretation and conversion to RDF or another suitable format difficult. For example, the table header cell "f(Hz)" refers to frequency measured in Hertz, but the symbol "f" can also refer to the unit farad or the quantities force or luminous flux. Current annotation tools for this task either work on less ambiguous data or perform a more limited task. We introduce new disambiguation strategies based on an ontology, which allow us to improve performance on "sloppy" datasets not yet targeted by existing systems.
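    The disambiguation idea can be pictured with a toy sketch under our own assumptions (the mini "ontology" dictionaries and the symbol(unit) header format are ours, not the authors' data model or tool):

      QUANTITIES = {                       # symbol -> candidate quantities
          "f": ["frequency", "force", "luminous flux"],
      }
      UNITS = {                            # unit symbol -> quantities it measures
          "Hz": ["frequency"],
          "N": ["force"],
          "lm": ["luminous flux"],
      }

      def disambiguate(header):
          """Resolve a header like 'f(Hz)' by intersecting symbol and unit candidates."""
          symbol, unit = header.rstrip(")").split("(")
          candidates = set(QUANTITIES.get(symbol.strip(), []))
          return sorted(candidates & set(UNITS.get(unit.strip(), [])))

      print(disambiguate("f(Hz)"))         # ['frequency']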
  17. Iorio, A. di; Peroni, S.; Vitali, F.: ¬A Semantic Web approach to everyday overlapping markup (2011) 0.01
    0.014864903 = product of:
      0.05945961 = sum of:
        0.05945961 = weight(_text_:description in 4749) [ClassicSimilarity], result of:
          0.05945961 = score(doc=4749,freq=2.0), product of:
            0.23150103 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.04979191 = queryNorm
            0.25684384 = fieldWeight in 4749, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4749)
      0.25 = coord(1/4)
    
    Abstract
    Overlapping structures in XML are not symptoms of a misunderstanding of the intrinsic characteristics of a text document nor evidence of extreme scholarly requirements far beyond those needed by the most common XML-based applications. On the contrary, overlaps have started to appear in a large number of incredibly popular applications hidden under the guise of syntactical tricks to the basic hierarchy of the XML data format. Unfortunately, syntactical tricks have the drawback that the affected structures require complicated workarounds to support even the simplest query or usage. In this article, we present Extremely Annotational Resource Description Framework (RDF) Markup (EARMARK), an approach to overlapping markup that simplifies and streamlines the management of multiple hierarchies on the same content, and provides an approach to sophisticated queries and usages over such structures without the need of ad-hoc applications, simply by using Semantic Web tools and languages. We compare how relevant tasks (e.g., the identification of the contribution of an author in a word processor document) are of some substantial complexity when using the original data format and become more or less trivial when using EARMARK. We finally evaluate positively the memory and disk requirements of EARMARK documents in comparison to Open Office and Microsoft Word XML-based formats.
  18. Guns, R.: Tracing the origins of the semantic web (2013) 0.01
    0.014864903 = product of:
      0.05945961 = sum of:
        0.05945961 = weight(_text_:description in 1093) [ClassicSimilarity], result of:
          0.05945961 = score(doc=1093,freq=2.0), product of:
            0.23150103 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.04979191 = queryNorm
            0.25684384 = fieldWeight in 1093, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1093)
      0.25 = coord(1/4)
    
    Abstract
    The Semantic Web has been criticized for not being semantic. This article examines the questions of why and how the Web of Data, expressed in the Resource Description Framework (RDF), has come to be known as the Semantic Web. Contrary to previous papers, we deliberately take a descriptive stance and do not start from preconceived ideas about the nature of semantics. Instead, we mainly base our analysis on early design documents of the (Semantic) Web. The main determining factor is shown to be link typing, coupled with the influence of online metadata. Both factors already were present in early web standards and drafts. Our findings indicate that the Semantic Web is directly linked to older artificial intelligence work, despite occasional claims to the contrary. Because of link typing, the Semantic Web can be considered an example of a semantic network. Originally network representations of the meaning of natural language utterances, semantic networks have eventually come to refer to any networks with typed (usually directed) links. We discuss possible causes for this shift and suggest that it may be due to confounding paradigmatic and syntagmatic semantic relations.
  19. Aparecida Moura, M.: Emerging discursive formations, folksonomy and social semantic information spaces (SSIS) : the contributions of the theory of integrative levels in the studies carried out by the Classification Research Group (CRG) (2014) 0.01
    0.014864903 = product of:
      0.05945961 = sum of:
        0.05945961 = weight(_text_:description in 1395) [ClassicSimilarity], result of:
          0.05945961 = score(doc=1395,freq=2.0), product of:
            0.23150103 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.04979191 = queryNorm
            0.25684384 = fieldWeight in 1395, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1395)
      0.25 = coord(1/4)
    
    Abstract
    This paper focuses on the discursive formations emerging from the Social Semantic Information Spaces (SSIS) in light of the concept of emergence in the theory of integrative levels. The study aims to identify the opportunities and challenges of incorporating epistemological considerations in the act of acquiring knowledge into the consolidation of knowledge organization and mediation processes and devices in the emergence of phenomena. The goal was to analyze the effects of that concept on the actions of a sample of researchers registered in an emerging research domain in SSIS in order to understand this type of indexing done by the users and communities as a classification of integrating levels. The methodology was established by triangulation through social network analysis, consensus analysis and archaeology of knowledge. It was possible to conclude that there is a collective effort to settle a semantic interoperability model for the labeling of contents based on best practices regarding the description of the objects shared in SSIS.
  20. Netto, C.M.; Borém de Oliveira Lima, G.A.; Pierozzi Júnior, I.: ¬An application of facet analysis theory and concept maps for faceted search in a domain ontology : preliminary studies (2016) 0.01
    0.014864903 = product of:
      0.05945961 = sum of:
        0.05945961 = weight(_text_:description in 2966) [ClassicSimilarity], result of:
          0.05945961 = score(doc=2966,freq=2.0), product of:
            0.23150103 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.04979191 = queryNorm
            0.25684384 = fieldWeight in 2966, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2966)
      0.25 = coord(1/4)
    
    Abstract
    This paper presents partial results of a research project still in development that aims to study the theory of faceted analysis and concept maps for faceted search in a domain ontology. The research shows a solution that enables abstraction levels for users in order to retrieve information in the domain area represented in the ontology. The problem is challenging because the formal computational structure of the ontology's semantic description is not readily accessible to its users' cognition. This paper presents the results using a web tool prototype for faceted navigation in a sample ontology that was created for the organization of domain knowledge regarding the impact of agriculture and climatic changes on water resources. The results show the feasibility of navigation and information retrieval in the ontology using the web faceted prototype. It is believed that through this study a computational solution can be developed that facilitates the creation of the conceptual model for faceted and concept-map navigation of the area represented by the ontology, so that human learning in the domain can be assessed, as well as the retrieval and sharing of information among user groups.

Types

  • a 63
  • el 13
  • m 9
  • s 3
  • x 2
