Search (57 results, page 1 of 3)

  • theme_ss:"Wissensrepräsentation"
  • type_ss:"el"
  1. Rindflesch, T.C.; Aronson, A.R.: Semantic processing in information retrieval (1993) 0.05
    (Relevance scoring: Lucene ClassicSimilarity tf-idf. This entry matched the query terms "processing" (freq 6, idf 4.048) and "29" (freq 2, idf 3.518); the per-term weights, scaled by coordination factors, yield the 0.0546 total. The engine's full per-entry score breakdowns are omitted here and for the entries below.)
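    The arithmetic behind these scores can be reproduced from the values the engine reports. A minimal sketch in Python using this entry's numbers; the function name is ours, but the formula is exactly what ClassicSimilarity reports: each matched term contributes (idf * queryNorm) * (sqrt(tf) * idf * fieldNorm), and partial matches are scaled by coordination factors.

    import math

    def term_contribution(freq, idf, query_norm, field_norm):
        query_weight = idf * query_norm                     # idf * queryNorm
        field_weight = math.sqrt(freq) * idf * field_norm   # tf * idf * fieldNorm
        return query_weight * field_weight

    QUERY_NORM = 0.043425296  # shared query normalizer from the dump

    w_processing = term_contribution(6, 4.048147, QUERY_NORM, 0.0546875)
    w_29 = term_contribution(2, 3.5176873, QUERY_NORM, 0.0546875) / 3  # coord(1/3)

    print((w_processing + w_29) / 2)  # coord(2/4) -> ~0.054590274, shown as 0.05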
    Abstract
    Intuition suggests that one way to enhance the information retrieval process would be the use of phrases to characterize the contents of text. A number of researchers, however, have noted that phrases alone do not improve retrieval effectiveness. In this paper we briefly review the use of phrases in information retrieval and then suggest extensions to this paradigm using semantic information. We claim that semantic processing, which can be viewed as expressing relations between the concepts represented by phrases, will in fact enhance retrieval effectiveness. The availability of the UMLS® domain model, which we exploit extensively, significantly contributes to the feasibility of this processing.
    Date
    29. 6.2015 14:51:28
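    The semantic matching the authors propose can be pictured as retrieval over concept-relation triples rather than bare phrases. A hypothetical sketch; the triple inventory, the weighting, and the function name are illustrative assumptions, not the paper's actual implementation:

    # Hypothetical illustration: score documents by shared concept-relation
    # triples (UMLS-style) as well as shared concepts.
    Triple = tuple[str, str, str]  # (concept, relation, concept)

    def semantic_score(query: set[Triple], doc: set[Triple]) -> float:
        """Reward shared relational triples more than shared concepts alone."""
        shared_triples = query & doc
        q_concepts = {c for s, _, o in query for c in (s, o)}
        d_concepts = {c for s, _, o in doc for c in (s, o)}
        return 2.0 * len(shared_triples) + len(q_concepts & d_concepts)

    doc = {("streptococcus", "causes", "pneumonia"),
           ("penicillin", "treats", "pneumonia")}
    print(semantic_score({("penicillin", "treats", "pneumonia")}, doc))  # 4.0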
  2. Bittner, T.; Donnelly, M.; Winter, S.: Ontology and semantic interoperability (2006) 0.03
    Abstract
    One of the major problems facing systems for Computer Aided Design (CAD), Architecture Engineering and Construction (AEC) and Geographic Information Systems (GIS) applications today is the lack of interoperability among the various systems. When integrating software applications, substantial difficulties can arise in translating information from one application to the other. In this paper, we focus on semantic difficulties that arise in software integration. Applications may use different terminologies to describe the same domain. Even when applications use the same terminology, they often associate different semantics with the terms. This obstructs information exchange among applications. To circumvent this obstacle, we need some way of explicitly specifying the semantics for each terminology in an unambiguous fashion. Ontologies can provide such specification. It will be the task of this paper to explain what ontologies are and how they can be used to facilitate interoperability between software systems used in computer aided design, architecture engineering and construction, and geographic information processing.
    Date
    3.12.2016 18:39:22
  3. Tramullas, J.; Garrido-Picazo, P.; Sánchez-Casabón, A.I.: Use of Wikipedia categories on information retrieval research : a brief review (2020) 0.03
    Abstract
    Wikipedia categories, a classification scheme built for organizing and describing Wikipedia articles, are being applied in computer science research. This paper adopts a systematic literature review approach to identify the different approaches to and uses of Wikipedia categories in information retrieval research. Several types of work are identified, depending on whether they study the category structure itself or use it as a tool for processing and analyzing document corpora other than Wikipedia. Information retrieval is identified as one of the major areas of use, in particular the refinement and improvement of search expressions and the construction of textual corpora. However, the available work shows that in many cases the research approaches applied and the results obtained can be integrated into a comprehensive and inclusive concept of information retrieval.
  4. Gayathri, R.; Uma, V.: Ontology based knowledge representation technique, domain modeling languages and planners for robotic path planning : a survey (2018) 0.03
    Abstract
    Knowledge Representation and Reasoning (KR & R) has become one of the promising fields of Artificial Intelligence. KR is dedicated to representing information about the domain that can be utilized in path planning. Ontology-based knowledge representation and reasoning techniques provide sophisticated knowledge about the environment for processing tasks or methods. Ontology helps in representing knowledge about the environment, events and actions that aid path planning and make robots more autonomous. Knowledge reasoning techniques can infer new conclusions and thus aid planning dynamically in a non-deterministic environment. In the initial sections, the representation of knowledge using ontology and the reasoning techniques that could contribute to path planning are discussed in detail. In the following section, we also provide a comparison of various planning domain modeling languages, ontology editors, planners and robot simulation tools.
    Source
    ICT express. 4(2018), no.2, S.69-74 [https://www.sciencedirect.com/science/article/pii/S2405959518300985]
  5. Styltsvig, H.B.: Ontology-based information retrieval (2006) 0.02
    Abstract
    In this thesis, we present methods for introducing ontologies in information retrieval. The main hypothesis is that the inclusion of conceptual knowledge such as ontologies in the information retrieval process can contribute to the solution of major problems currently found in information retrieval. This utilization of ontologies has a number of challenges. Our focus is on the use of similarity measures derived from the knowledge about relations between concepts in ontologies, the recognition of semantic information in texts and the mapping of this knowledge into the ontologies in use, as well as how to fuse together the ideas of ontological similarity and ontological indexing into a realistic information retrieval scenario. To achieve the recognition of semantic knowledge in a text, shallow natural language processing is used during indexing that reveals knowledge to the level of noun phrases. Furthermore, we briefly cover the identification of semantic relations inside and between noun phrases, as well as discuss which kinds of problems are caused by an increase in compoundness with respect to the structure of concepts in the evaluation of queries. Measuring similarity between concepts based on distances in the structure of the ontology is discussed. In addition, a shared nodes measure is introduced and, based on a set of intuitive similarity properties, compared to a number of different measures. In this comparison the shared nodes measure appears to be superior, though more computationally complex. Some major problems with shared nodes are discussed; they relate to the way relations differ in the degree to which they bring the concepts they connect closer together. A generalized measure called weighted shared nodes is introduced to deal with these problems. Finally, the utilization of concept similarity in query evaluation is discussed. A semantic expansion approach that incorporates concept similarity is introduced, and a generalized fuzzy set retrieval model that applies expansion during query evaluation is presented. While not commonly used in present information retrieval systems, the fuzzy set model appears to provide the flexibility needed when generalizing to an ontology-based retrieval model and, with the introduction of a hierarchical fuzzy aggregation principle, compound concepts can be handled in a straightforward and natural manner.
    Imprint
    Roskilde : Roskilde University, Computer Science Section
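    The shared-nodes measure described in the abstract can be illustrated with a toy hierarchy. A minimal sketch, assuming the measure compares the upward closures of two concepts; the thesis's exact normalization may differ:

    # Toy is-a hierarchy: child -> parents.
    ONTOLOGY = {
        "broccoli": ["vegetable"],
        "carrot": ["vegetable"],
        "vegetable": ["plant"],
        "plant": ["thing"],
        "thing": [],
    }

    def ancestors(concept: str) -> set[str]:
        """All nodes reachable upward from the concept, including itself."""
        seen, stack = set(), [concept]
        while stack:
            c = stack.pop()
            if c not in seen:
                seen.add(c)
                stack.extend(ONTOLOGY.get(c, []))
        return seen

    def shared_nodes_sim(a: str, b: str) -> float:
        """Overlap of the two upward closures, normalized by the larger one."""
        na, nb = ancestors(a), ancestors(b)
        return len(na & nb) / max(len(na), len(nb))

    print(shared_nodes_sim("broccoli", "carrot"))  # 3 shared of 4 -> 0.75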
  6. Gil-Berrozpe, J.C.: Description, categorization, and representation of hyponymy in environmental terminology (2022) 0.02
    Abstract
    Terminology has evolved from static and prescriptive theories to dynamic and cognitive approaches. Thanks to these approaches, there have been significant advances in the design and elaboration of terminological resources. This has resulted in the creation of tools such as terminological knowledge bases, which are able to show how concepts are interrelated through different semantic or conceptual relations. Of these relations, hyponymy is the most relevant to terminology work because it deals with concept categorization and term hierarchies. This doctoral thesis presents an enhancement of the semantic structure of EcoLexicon, a terminological knowledge base on environmental science. The aim of this research was to improve the description, categorization, and representation of hyponymy in environmental terminology. Therefore, we created HypoLexicon, a new stand-alone module for EcoLexicon in the form of a hyponymy-based terminological resource. This resource contains twelve terminological entries from four specialized domains (Biology, Chemistry, Civil Engineering, and Geology), which consist of 309 concepts and 465 terms associated with those concepts. This research was mainly based on the theoretical premises of Frame-based Terminology. This theory was combined with Cognitive Linguistics, for conceptual description and representation; Corpus Linguistics, for the extraction and processing of linguistic and terminological information; and Ontology, related to hyponymy and relevant for concept categorization. HypoLexicon was constructed from the following materials: (i) the EcoLexicon English Corpus; (ii) other specialized terminological resources, including EcoLexicon; (iii) Sketch Engine; and (iv) Lexonomy. This thesis explains the methodologies applied for corpus extraction and compilation, corpus analysis, the creation of conceptual hierarchies, and the design of the terminological template. The results of the creation of HypoLexicon are discussed by highlighting the information in the hyponymy-based terminological entries: (i) parent concept (hypernym); (ii) child concepts (hyponyms, with various hyponymy levels); (iii) terminological definitions; (iv) conceptual categories; (v) hyponymy subtypes; and (vi) hyponymic contexts. Furthermore, the features and the navigation within HypoLexicon are described from the user interface and the admin interface. In conclusion, this doctoral thesis lays the groundwork for developing a terminological resource that includes definitional, relational, ontological and contextual information about specialized hypernyms and hyponyms. All of this information on specialized knowledge is simple to follow thanks to the hierarchical structure of the terminological template used in HypoLexicon. Therefore, not only does it enhance knowledge representation, but it also facilitates its acquisition.
  7. Siebers, Q.H.J.F.: Implementing inference rules in the Topic maps model (2006) 0.02
    Abstract
    This paper presents a theoretical approach to implementing inference rules in the Topic Maps model. Topic Maps is an ISO standard that allows for the modeling and representation of knowledge in an interchangeable form, which can be extended by inference rules. These rules specify conditions for inferable facts. Any implementation requires a syntax for storage in a file, a storage model and method for processing, and a system to keep track of changes in the inferred facts. The most flexible and optimisable storage model is a controlled cache, giving options for processing. Keeping track of changes is done by listeners. One of the most powerful applications of inference rules in Topic Maps is interoperability. By mapping ontologies to each other using inference rules as converters, it is possible to exchange extendable knowledge. Any implementation must choose methods and options optimized for the system it runs on, with the facilities available. Further research is required to analyze optimization problems between options. A sketch of the controlled-cache idea follows.
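    A hedged sketch of the controlled cache with listeners: inferred facts are cached and invalidated via listeners when the base topic map changes. Class and method names are ours, not from the Topic Maps standard, and the transitivity rule is just one example of an inference rule:

    from typing import Callable

    class InferredFactCache:
        def __init__(self, rule: Callable[[set], set]):
            self.rule = rule                 # derives inferred facts from base facts
            self.base: set = set()
            self._cache: set | None = None
            self.listeners: list[Callable[[], None]] = []

        def add_fact(self, fact) -> None:
            self.base.add(fact)
            self._cache = None               # invalidate: inferences may have changed
            for notify in self.listeners:
                notify()

        @property
        def inferred(self) -> set:
            if self._cache is None:          # recompute lazily, only when asked
                self._cache = self.rule(self.base)
            return self._cache

    def transitive_broader(facts: set) -> set:
        """Example rule: transitive closure of a 'broader' relation."""
        inferred, changed = set(facts), True
        while changed:
            changed = False
            for (a, b) in list(inferred):
                for (c, d) in list(inferred):
                    if b == c and (a, d) not in inferred:
                        inferred.add((a, d))
                        changed = True
        return inferred

    cache = InferredFactCache(transitive_broader)
    cache.listeners.append(lambda: print("topic map changed; cache invalidated"))
    cache.add_fact(("opera", "music"))
    cache.add_fact(("music", "art"))
    print(("opera", "art") in cache.inferred)  # True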
  8. Jacobs, I.: From chaos, order: W3C standard helps organize knowledge : SKOS Connects Diverse Knowledge Organization Systems to Linked Data (2009) 0.02
    Content
    SKOS Adapts to the Diversity of Knowledge Organization Systems
    A useful starting point for understanding the role of SKOS is the set of subject headings published by the US Library of Congress (LOC) for categorizing books, videos, and other library resources. These headings can be used to broaden or narrow queries for discovering resources. For instance, one can narrow a query about books on "Chinese literature" to "Chinese drama," or further still to "Chinese children's plays." Library of Congress subject headings have evolved within a community of practice over a period of decades. By now publishing these subject headings in SKOS, the Library of Congress has made them available to the linked data community, which benefits from a time-tested set of concepts to re-use in their own data. This re-use adds value ("the network effect") to the collection. When people all over the Web re-use the same LOC concept for "Chinese drama," or a concept from some other vocabulary linked to it, this creates many new routes to the discovery of information, and increases the chances that relevant items will be found. As an example of mapping one vocabulary to another, a combined effort from the STITCH, TELplus and MACS Projects provides links between LOC concepts and RAMEAU, a collection of French subject headings used by the Bibliothèque Nationale de France and other institutions.
    SKOS can be used for subject headings but also for many other approaches to organizing knowledge. Because different communities are comfortable with different organization schemes, SKOS is designed to port diverse knowledge organization systems to the Web. "Active participation from the library and information science community in the development of SKOS over the past seven years has been key to ensuring that SKOS meets a variety of needs," said Thomas Baker, co-chair of the Semantic Web Deployment Working Group, which published SKOS. "One goal in creating SKOS was to provide new uses for well-established knowledge organization systems by providing a bridge to the linked data cloud." SKOS is part of the Semantic Web technology stack. Like the Web Ontology Language (OWL), SKOS can be used to define vocabularies. But the two technologies were designed to meet different needs. SKOS is a simple language with just a few features, tuned for sharing and linking knowledge organization systems such as thesauri and classification schemes. OWL offers a general and powerful framework for knowledge representation, where additional "rigor" can afford additional benefits (for instance, business rule processing). To get started with SKOS, see the SKOS Primer.
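    The Library of Congress example above translates naturally into SKOS triples. A minimal sketch using the rdflib library, with invented example URIs (real LCSH concepts are published under id.loc.gov):

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import SKOS

    EX = Namespace("http://example.org/headings/")  # illustrative namespace
    g = Graph()

    g.add((EX.ChineseDrama, SKOS.prefLabel, Literal("Chinese drama", lang="en")))
    g.add((EX.ChineseDrama, SKOS.broader, EX.ChineseLiterature))      # broaden a query
    g.add((EX.ChineseChildrensPlays, SKOS.broader, EX.ChineseDrama))  # narrow it further

    # Walk narrower concepts to refine a search for "Chinese literature".
    for narrower, _, _ in g.triples((None, SKOS.broader, EX.ChineseLiterature)):
        print(narrower)  # http://example.org/headings/ChineseDrama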
  9. Griffiths, T.L.; Steyvers, M.: ¬A probabilistic approach to semantic representation (2002) 0.02
    Content
    Paper, Proceedings of the 24th Annual Conference of the Cognitive Science Society. See also: https://cocosci.berkeley.edu/publications.php?author=Steyvers,%20M.
    Date
    29. 6.2015 14:55:01
    29. 6.2015 16:09:05
  10. Pepper, S.; Groenmo, G.O.: Towards a general theory of scope (2002) 0.01
    Abstract
    This paper is concerned with the issue of scope in topic maps. Topic maps are a form of knowledge representation suitable for solving a number of complex problems in the area of information management, ranging from findability (navigation and querying) to knowledge management and enterprise application integration (EAI). The topic map paradigm has its roots in efforts to understand the essential semantics of back-of-book indexes in order that they might be captured in a form suitable for computer processing. Once understood, the model of a back-of-book index was generalised in order to cover the needs of digital information, and extended to encompass glossaries and thesauri, as well as indexes. The resulting core model, of typed topics, associations, and occurrences, has many similarities with the semantic networks developed by the artificial intelligence community for representing knowledge structures. One key requirement of topic maps from the earliest days was to be able to merge indexes from disparate origins. This requirement accounts for two further concepts that greatly enhance the power of topic maps: subject identity and scope. This paper concentrates on scope, but also includes a brief discussion of the feature known as the topic naming constraint, with which it is closely related. It is based on the authors' experience in creating topic maps (in particular, the Italian Opera Topic Map) and in implementing processing systems for topic maps (in particular, the Ontopia Topic Map Engine and Navigator).
  11. Bast, H.; Bäurle, F.; Buchhold, B.; Haussmann, E.: Broccoli: semantic full-text search at your fingertips (2012) 0.01
    Abstract
    We present Broccoli, a fast and easy-to-use search engine for what we call semantic full-text search. Semantic full-text search combines the capabilities of standard full-text search and ontology search. The search operates on four kinds of objects: ordinary words (e.g., edible), classes (e.g., plants), instances (e.g., Broccoli), and relations (e.g., occurs-with or native-to). Queries are trees, where nodes are arbitrary bags of these objects, and arcs are relations. The user interface guides the user in incrementally constructing such trees by instant (search-as-you-type) suggestions of words, classes, instances, or relations that lead to good hits. Both standard full-text search and pure ontology search are included as special cases. In this paper, we describe the query language of Broccoli, a new kind of index that enables fast processing of queries from that language as well as fast query suggestion, the natural language processing required, and the user interface. We evaluated query times and result quality on the full version of the English Wikipedia (32 GB XML dump) combined with the YAGO ontology (26 million facts). We have implemented a fully functional prototype based on our ideas, see: http://broccoli.informatik.uni-freiburg.de.
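    The query trees described above suggest a simple data structure: nodes holding bags of words, classes, and instances, with relation-labelled arcs between them. An illustrative sketch; the field names are our own, not Broccoli's:

    from dataclasses import dataclass, field

    @dataclass
    class QueryArc:
        relation: str            # e.g. "occurs-with", "native-to"
        target: "QueryNode"

    @dataclass
    class QueryNode:
        items: set[str]          # bag of words/classes/instances, e.g. {"plant"}
        arcs: list[QueryArc] = field(default_factory=list)

    # "plants that are edible and native to Europe"
    query = QueryNode({"plant"}, [
        QueryArc("occurs-with", QueryNode({"edible"})),
        QueryArc("native-to", QueryNode({"Europe"})),
    ])
    print([a.relation for a in query.arcs])  # ['occurs-with', 'native-to']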
  12. Wong, W.; Liu, W.; Bennamoun, M.: Ontology learning from text : a look back and into the future (2010) 0.01
    Abstract
    Ontologies are often viewed as the answer to the need for interoperable semantics in modern information systems. The explosion of textual information on the "Read/Write" Web, coupled with the increasing demand for ontologies to power the Semantic Web, has made (semi-)automatic ontology learning from text a very promising research area. This, together with the advanced state of related areas such as natural language processing, has fuelled research into ontology learning over the past decade. This survey looks at how far we have come since the turn of the millennium, and discusses the remaining challenges that will define the research directions in this area in the near future.
  13. Gladun, A.; Rogushina, J.: Development of domain thesaurus as a set of ontology concepts with use of semantic similarity and elements of combinatorial optimization (2021) 0.01
    Abstract
    We consider the use of ontological background knowledge in intelligent information systems and analyze ways of reducing it in line with the specifics of a particular user task. Such reduction aims to simplify knowledge processing without losing significant information. We propose methods for generating task thesauri from a domain ontology; these thesauri contain the subset of ontological concepts and relations usable in solving the task. Combinatorial optimization is used to minimize the task thesaurus. In this approach, semantic similarity estimates determine the significance of a concept for the user task. Practical examples of applying optimized thesauri to semantic retrieval and competence analysis demonstrate the efficiency of the proposed approach.
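    The minimization step can be pictured with a simple selection procedure. A hedged sketch in which a greedy cut-off over similarity-based significance scores stands in for the paper's combinatorial optimization; names and thresholds are illustrative:

    def build_task_thesaurus(similarity: dict[str, float],
                             threshold: float = 0.5,
                             budget: int = 5) -> list[str]:
        """Keep the most task-relevant ontology concepts, at most `budget` of them."""
        relevant = [(c, s) for c, s in similarity.items() if s >= threshold]
        relevant.sort(key=lambda cs: cs[1], reverse=True)
        return [c for c, _ in relevant[:budget]]

    # Toy similarity of ontology concepts to the user task.
    sim_to_task = {"ontology": 0.9, "thesaurus": 0.8, "retrieval": 0.6, "hardware": 0.1}
    print(build_task_thesaurus(sim_to_task, threshold=0.5, budget=3))
    # ['ontology', 'thesaurus', 'retrieval']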
  14. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.01
    Date
    29. 7.2011 14:44:56
    26.12.2011 13:40:22
  15. Assem, M. van; Menken, M.R.; Schreiber, G.; Wielemaker, J.; Wielinga, B.: ¬A method for converting thesauri to RDF/OWL (2004) 0.01
    Date
    29. 7.2011 14:44:56
    Series
    Lecture notes in computer science; no.3298
  16. Schreiber, G.; Amin, A.; Assem, M. van; Boer, V. de; Hardman, L.; Hildebrand, M.; Omelayenko, B.; Ossenbruggen, J. van; Wielemaker, J.; Wielinga, B.; Tordai, A.; Aroyoa, L.: Semantic annotation and search of cultural-heritage collections : the MultimediaN E-Culture demonstrator (2008) 0.01
    Content
    See: http://www.sciencedirect.com/science/article/pii/S1570826808000620. Also at: http://www.cs.vu.nl/~mark/papers/Schreiber08a.pdf. The online version of the demonstrator can be found at: http://e-culture.multimedian.nl/demo/search.
    Date
    29. 7.2011 14:44:56
    Source
    Web Semantics: Science, Services and Agents on the World Wide Web 6(2008) no.4, S.243-249
  17. Rajasurya, S.; Muralidharan, T.; Devi, S.; Swamynathan, S.: Semantic information retrieval using ontology in university domain (2012) 0.01
    Abstract
    Today's conventional search engines hardly provide content relevant to the user's search query, because the context and semantics of the user's request are not analyzed to the full extent. Hence the need for semantic web search arises. SWS is an emerging area of web search which combines Natural Language Processing and Artificial Intelligence. The objective of the work done here is to design, develop and implement a semantic search engine, SIEU (Semantic Information Extraction in University Domain), confined to the university domain. SIEU uses an ontology as a knowledge base for the information retrieval process. It is not just a mere keyword search; it works one layer above what Google or any other search engine retrieves by analyzing just the keywords. Here the query is analyzed both syntactically and semantically. The developed system retrieves web results more relevant to the user query through keyword expansion. The results obtained are accurate enough to satisfy the request made by the user, and the level of accuracy is enhanced since the query is analyzed semantically. The system will be of great use to developers and researchers who work on the web. The Google results are re-ranked and optimized to provide the relevant links, using a ranking algorithm that fetches more apt results for the user query.
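    The keyword expansion SIEU performs can be illustrated with a toy ontology lookup; the system's actual expansion rules are not given in the abstract, so the vocabulary and function below are assumptions for illustration:

    # Toy university ontology: term -> related/synonymous concepts.
    ONTOLOGY_SYNONYMS = {
        "professor": {"faculty", "lecturer"},
        "course": {"subject", "module"},
    }

    def expand_query(query: str) -> set[str]:
        """Add ontology neighbours of each keyword to the query term set."""
        terms = set(query.lower().split())
        for term in list(terms):
            terms |= ONTOLOGY_SYNONYMS.get(term, set())
        return terms

    print(sorted(expand_query("professor course")))
    # ['course', 'faculty', 'lecturer', 'module', 'professor', 'subject']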
  18. Knowledge graphs : new directions for knowledge representation on the Semantic Web (2019) 0.01
    Abstract
    The increasingly pervasive nature of the Web, expanding to devices and things in everyday life, along with new trends in Artificial Intelligence, calls for new paradigms and a new look on Knowledge Representation and Processing at scale for the Semantic Web. The emerging, but still to be concretely shaped, concept of "Knowledge Graphs" provides an excellent unifying metaphor for this current status of Semantic Web research. More than two decades of Semantic Web research provide a solid basis and a promising technology and standards stack to interlink data, ontologies and knowledge on the Web. However, neither are applications for Knowledge Graphs as such limited to Linked Open Data, nor are instantiations of Knowledge Graphs in enterprises (while often inspired by it) limited to the core Semantic Web stack. This report documents the program and the outcomes of Dagstuhl Seminar 18371 "Knowledge Graphs: New Directions for Knowledge Representation on the Semantic Web", where a group of experts from academia and industry discussed fundamental questions around these topics for a week in early September 2018, including the following: what are knowledge graphs? Which applications do we see emerging? Which open research questions still need to be addressed, and which technology gaps still need to be closed?
  19. Favato Barcelos, P.P.; Sales, T.P.; Fumagalli, M.; Guizzardi, G.; Valle Sousa, I.; Fonseca, C.M.; Romanenko, E.; Kritz, J.: ¬A FAIR model catalog for ontology-driven conceptual modeling research (2022) 0.01
    Abstract
    Conceptual models are artifacts representing conceptualizations of particular domains. Hence, multi-domain model catalogs serve as empirical sources of knowledge and insights about specific domains, about the use of a modeling language's constructs, as well as about the patterns and anti-patterns recurrent in the models of that language crosscutting different domains. However, to support domain and language learning, model reuse, knowledge discovery for humans, and reliable automated processing and analysis by machines, these catalogs must be built following generally accepted quality requirements for scientific data management. In particular, all scientific (meta)data, including models, should be created following the FAIR principles (Findability, Accessibility, Interoperability, and Reusability). In this paper, we report on the construction of a FAIR model catalog for Ontology-Driven Conceptual Modeling research, a trending paradigm lying at the intersection of conceptual modeling and ontology engineering, in which the Unified Foundational Ontology (UFO) and OntoUML emerged among the most adopted technologies. In this initial release, the catalog includes over a hundred models, developed in a variety of contexts and domains. The paper also discusses the research implications for (ontology-driven) conceptual modeling of such a resource.
  20. Assem, M. van; Rijgersberg, H.; Wigham, M.; Top, J.: Converting and annotating quantitative data tables (2010) 0.01
    Date
    29. 7.2011 14:44:56
    Series
    Lecture notes in computer science; 6496

Languages

  • e 50
  • d 7

Types

  • a 30
  • p 2
  • r 2
  • x 2
  • n 1
  • s 1