Search (89 results, page 2 of 5)

  • language_ss:"e"
  • theme_ss:"Wissensrepräsentation"
  • type_ss:"el"
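
The chips above are Solr field filters (language, theme, and document type). As a rough sketch only - the host, core name, and page size below are assumptions, not taken from this page - an equivalent faceted request could look like this:

```python
import requests

# Hypothetical Solr endpoint; host and core name are placeholders.
SOLR_URL = "http://localhost:8983/solr/documents/select"

params = {
    "q": "*:*",                          # match all, then narrow by filters
    "fq": [                              # the three facet filters shown above
        'language_ss:"e"',
        'theme_ss:"Wissensrepräsentation"',
        'type_ss:"el"',
    ],
    "rows": 20,                          # assumed page size
    "start": 20,                         # page 2, i.e. skip the first page of hits
    "wt": "json",
}

hits = requests.get(SOLR_URL, params=params).json()["response"]
print(hits["numFound"])                  # 89 for this filter combination
```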
  1. Priss, U.: Description logic and faceted knowledge representation (1999) 0.02
    0.015395639 = product of:
      0.030791279 = sum of:
        0.01029941 = weight(_text_:information in 2655) [ClassicSimilarity], result of:
          0.01029941 = score(doc=2655,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.116372846 = fieldWeight in 2655, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2655)
        0.02049187 = product of:
          0.04098374 = sum of:
            0.04098374 = weight(_text_:22 in 2655) [ClassicSimilarity], result of:
              0.04098374 = score(doc=2655,freq=2.0), product of:
                0.17654699 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050415643 = queryNorm
                0.23214069 = fieldWeight in 2655, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2655)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The term "facet" was introduced into the field of library classification systems by Ranganathan in the 1930's [Ranganathan, 1962]. A facet is a viewpoint or aspect. In contrast to traditional classification systems, faceted systems are modular in that a domain is analyzed in terms of baseline facets which are then synthesized. In this paper, the term "facet" is used in a broader meaning. Facets can describe different aspects on the same level of abstraction or the same aspect on different levels of abstraction. The notion of facets is related to database views, multicontexts and conceptual scaling in formal concept analysis [Ganter and Wille, 1999], polymorphism in object-oriented design, aspect-oriented programming, views and contexts in description logic and semantic networks. This paper presents a definition of facets in terms of faceted knowledge representation that incorporates the traditional narrower notion of facets and potentially facilitates translation between different knowledge representation formalisms. A goal of this approach is a modular, machine-aided knowledge base design mechanism. A possible application is faceted thesaurus construction for information retrieval and data mining. Reasoning complexity depends on the size of the modules (facets). A more general analysis of complexity will be left for future research.
    Date
    22. 1.2016 17:30:31
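
The indented numbers under each hit are Lucene "explain" traces for the ClassicSimilarity (TF-IDF) ranking model. As a minimal sketch, assuming that model, the breakdown shown for entry 1 can be recomputed from its leaf values (tf, idf, queryNorm, fieldNorm and the coord factors):

```python
import math

QUERY_NORM = 0.050415643   # queryNorm from the trace above

def term_score(freq, idf, field_norm):
    """ClassicSimilarity leaf: (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)."""
    query_weight = idf * QUERY_NORM                 # e.g. 1.7554779 * queryNorm = 0.08850355
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight

w_information = term_score(2.0, 1.7554779, 0.046875)      # ~0.01029941
w_22 = term_score(2.0, 3.5018296, 0.046875) * 0.5         # inner coord(1/2) -> ~0.02049187

score = (w_information + w_22) * 0.5                      # outer coord(2/4)
print(round(score, 9))                                    # ~0.015395639, the value shown above
```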
  2. Broughton, V.: Facet analysis as a fundamental theory for structuring subject organization tools (2007) 0.01
    0.014161401 = product of:
      0.056645606 = sum of:
        0.056645606 = product of:
          0.11329121 = sum of:
            0.11329121 = weight(_text_:organization in 537) [ClassicSimilarity], result of:
              0.11329121 = score(doc=537,freq=8.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.6302719 = fieldWeight in 537, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.0625 = fieldNorm(doc=537)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The presentation will examine the potential of facet analysis as a basis for determining status and relationships of concepts in subject based tools using a controlled vocabulary, and the extent to which it can be used as a general theory of knowledge organization as opposed to a methodology for structuring classifications only.
    Content
    Presentation given during the event "Networked Knowledge Organization Systems and Services: The 6th European Networked Knowledge Organization Systems (NKOS) Workshop, Workshop at the 11th ECDL Conference, Budapest, Hungary, September 21st 2007".
  3. Knowledge graphs : new directions for knowledge representation on the Semantic Web (2019) 0.01
    0.0138311 = product of:
      0.0553244 = sum of:
        0.0553244 = weight(_text_:standards in 51) [ClassicSimilarity], result of:
          0.0553244 = score(doc=51,freq=2.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.24621427 = fieldWeight in 51, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.0390625 = fieldNorm(doc=51)
      0.25 = coord(1/4)
    
    Abstract
    The increasingly pervasive nature of the Web, expanding to devices and things in everyday life, along with new trends in Artificial Intelligence call for new paradigms and a new look on Knowledge Representation and Processing at scale for the Semantic Web. The emerging, but still to be concretely shaped concept of "Knowledge Graphs" provides an excellent unifying metaphor for this current status of Semantic Web research. More than two decades of Semantic Web research provides a solid basis and a promising technology and standards stack to interlink data, ontologies and knowledge on the Web. However, neither are applications for Knowledge Graphs as such limited to Linked Open Data, nor are instantiations of Knowledge Graphs in enterprises - while often inspired by - limited to the core Semantic Web stack. This report documents the program and the outcomes of Dagstuhl Seminar 18371 "Knowledge Graphs: New Directions for Knowledge Representation on the Semantic Web", where a group of experts from academia and industry discussed fundamental questions around these topics for a week in early September 2018, including the following: what are knowledge graphs? Which applications do we see emerging? Which open research questions still need to be addressed and which technology gaps still need to be closed?
  4. Wenige, L.; Ruhland, J.: Similarity-based knowledge graph queries for recommendation retrieval (2019) 0.01
    0.013142297 = product of:
      0.026284594 = sum of:
        0.008582841 = weight(_text_:information in 5864) [ClassicSimilarity], result of:
          0.008582841 = score(doc=5864,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.09697737 = fieldWeight in 5864, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5864)
        0.017701752 = product of:
          0.035403505 = sum of:
            0.035403505 = weight(_text_:organization in 5864) [ClassicSimilarity], result of:
              0.035403505 = score(doc=5864,freq=2.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.19695997 = fieldWeight in 5864, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5864)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Current retrieval and recommendation approaches rely on hard-wired data models. This hinders personalized customizations to meet information needs of users in a more flexible manner. Therefore, the paper investigates how similarity-based retrieval strategies can be combined with graph queries to enable users or system providers to explore repositories in the Linked Open Data (LOD) cloud more thoroughly. For this purpose, we developed novel content-based recommendation approaches. They rely on concept annotations of Simple Knowledge Organization System (SKOS) vocabularies and a SPARQL-based query language that facilitates advanced and personalized requests for openly available knowledge graphs. We have comprehensively evaluated the novel search strategies in several test cases and example application domains (i.e., travel search and multimedia retrieval). The results of the web-based online experiments showed that our approaches increase the recall and diversity of recommendations or at least provide a competitive alternative strategy of resource access when conventional methods do not provide helpful suggestions. The findings may be of use for Linked Data-enabled recommender systems (LDRS) as well as for semantic search engines that can consume LOD resources.
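
The paper's own query language is not shown here; purely as a hedged illustration of the general idea (SKOS-style subject overlap as a similarity signal), the sketch below asks a public endpoint for resources sharing dct:subject annotations with a seed item. The endpoint, seed URI and property choice are assumptions:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Illustration only: DBpedia endpoint, dct:subject as the concept annotation.
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setReturnFormat(JSON)
sparql.setQuery("""
    PREFIX dct: <http://purl.org/dc/terms/>
    SELECT ?candidate (COUNT(?subject) AS ?overlap) WHERE {
      <http://dbpedia.org/resource/Budapest> dct:subject ?subject .
      ?candidate dct:subject ?subject .
      FILTER (?candidate != <http://dbpedia.org/resource/Budapest>)
    }
    GROUP BY ?candidate
    ORDER BY DESC(?overlap)
    LIMIT 10
""")

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["candidate"]["value"], row["overlap"]["value"])
```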
  5. Frey, J.; Streitmatter, D.; Götz, F.; Hellmann, S.; Arndt, N.: DBpedia Archivo (2020) 0.01
    0.009681771 = product of:
      0.038727082 = sum of:
        0.038727082 = weight(_text_:standards in 53) [ClassicSimilarity], result of:
          0.038727082 = score(doc=53,freq=2.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.17234999 = fieldWeight in 53, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.02734375 = fieldNorm(doc=53)
      0.25 = coord(1/4)
    
    Content
    # Community action on all ontologies (quality, FAIRness, conformity) Archivo is extensible and allows contributions to give consumers a central place to encode their requirements. We envision fostering adherence to standards and strengthening incentives for publishers to build a better (FAIRer) web of ontologies. 1. SHACL (https://www.w3.org/TR/shacl/, co-edited by DBpedia's CTO D. Kontokostas) enables easy testing of ontologies. Archivo offers free SHACL continuous integration testing for ontologies. Anyone can implement their SHACL tests and add them to the SHACL library on Github. We believe that there are many synergies, i.e. SHACL tests for your ontology are helpful for others as well. 2. We are looking for ontology experts to join DBpedia and discuss further validation (e.g. stars) to increase FAIRness and quality of ontologies. We are forming a steering committee and also a PC for the upcoming Vocarnival at SEMANTiCS 2021. Please message hellmann@informatik.uni-leipzig.de if you would like to join. We would like to extend the Archivo platform with relevant visualisations, tests, editing aids, mapping management tools and quality checks.
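
As a minimal sketch of the kind of SHACL check mentioned above, using the pySHACL library: the ontology file and the shape (every owl:Class must carry an rdfs:label) are illustrative assumptions, not Archivo's actual test suite.

```python
from rdflib import Graph
from pyshacl import validate

ontology = Graph().parse("my_ontology.ttl", format="turtle")   # hypothetical ontology file

shapes = Graph().parse(data="""
    @prefix sh:   <http://www.w3.org/ns/shacl#> .
    @prefix owl:  <http://www.w3.org/2002/07/owl#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix ex:   <http://example.org/shapes#> .

    ex:ClassLabelShape a sh:NodeShape ;
        sh:targetClass owl:Class ;
        sh:property [ sh:path rdfs:label ; sh:minCount 1 ] .
""", format="turtle")

conforms, _report_graph, report_text = validate(ontology, shacl_graph=shapes)
print(conforms)        # False if some class lacks a label
print(report_text)     # human-readable validation report
```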
  6. Schmitz-Esser, W.; Sigel, A.: Introducing terminology-based ontologies : Papers and Materials presented by the authors at the workshop "Introducing Terminology-based Ontologies" (Poli/Schmitz-Esser/Sigel) at the 9th International Conference of the International Society for Knowledge Organization (ISKO), Vienna, Austria, July 6th, 2006 (2006) 0.01
    0.009198101 = product of:
      0.036792405 = sum of:
        0.036792405 = product of:
          0.07358481 = sum of:
            0.07358481 = weight(_text_:organization in 1285) [ClassicSimilarity], result of:
              0.07358481 = score(doc=1285,freq=6.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.40937364 = fieldWeight in 1285, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1285)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This work-in-progress communication contains the papers and materials presented by Winfried Schmitz-Esser and Alexander Sigel in the joint workshop (with Roberto Poli) "Introducing Terminology-based Ontologies" at the 9th International Conference of the International Society for Knowledge Organization (ISKO), Vienna, Austria, July 6th, 2006.
    Content
    Contents: 1. From traditional Knowledge Organization Systems (authority files, classifications, thesauri) towards ontologies on the web (Alexander Sigel) (Tutorial. Paper with slides interspersed) pp. 3-53 2. Introduction to Integrative Cross-Language Ontology (ICLO): Formalizing and interrelating textual knowledge to enable intelligent action and knowledge sharing (Winfried Schmitz-Esser) pp. 54-113 3. First Idea Sketch on Modelling ICLO with Topic Maps (Alexander Sigel) (Work in progress paper. Topic maps available from the author) pp. 114-130
  7. SKOS2OWL : Online tool for deriving OWL ontologies from SKOS categorization schemas (2007) 0.01
    0.008850876 = product of:
      0.035403505 = sum of:
        0.035403505 = product of:
          0.07080701 = sum of:
            0.07080701 = weight(_text_:organization in 4691) [ClassicSimilarity], result of:
              0.07080701 = score(doc=4691,freq=2.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.39391994 = fieldWeight in 4691, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4691)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    SKOS2OWL is an online tool that converts hierarchical classifications available in the W3C SKOS (Simple Knowledge Organization Systems) format into RDF-S or OWL ontologies. In many cases, the resulting ontologies can be used directly. If not, they can be refined using standard ontology engineering tools such as Protégé.
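
SKOS2OWL itself is an online tool; purely as an illustration of the kind of mapping involved, and under the simplifying assumption that every skos:Concept maps to one owl:Class and skos:broader to rdfs:subClassOf, such a conversion can be sketched with rdflib:

```python
from rdflib import Graph, RDF, RDFS, OWL
from rdflib.namespace import SKOS

skos_graph = Graph().parse("scheme.ttl", format="turtle")   # hypothetical SKOS input
onto = Graph()

# Naive mapping: concepts become classes, broader links become subclass axioms.
for concept in skos_graph.subjects(RDF.type, SKOS.Concept):
    onto.add((concept, RDF.type, OWL.Class))
    for label in skos_graph.objects(concept, SKOS.prefLabel):
        onto.add((concept, RDFS.label, label))
    for broader in skos_graph.objects(concept, SKOS.broader):
        onto.add((concept, RDFS.subClassOf, broader))

onto.serialize(destination="scheme_as_owl.ttl", format="turtle")
```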
  8. Panzer, M.: Towards the "webification" of controlled subject vocabulary : a case study involving the Dewey Decimal Classification (2007) 0.01
    0.00876192 = product of:
      0.03504768 = sum of:
        0.03504768 = product of:
          0.07009536 = sum of:
            0.07009536 = weight(_text_:organization in 538) [ClassicSimilarity], result of:
              0.07009536 = score(doc=538,freq=4.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.38996086 = fieldWeight in 538, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=538)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Content
    Presentation given during the event "Networked Knowledge Organization Systems and Services: The 6th European Networked Knowledge Organization Systems (NKOS) Workshop, Workshop at the 11th ECDL Conference, Budapest, Hungary, September 21st 2007".
  9. OWL Web Ontology Language Test Cases (2004) 0.01
    0.0068306234 = product of:
      0.027322493 = sum of:
        0.027322493 = product of:
          0.054644987 = sum of:
            0.054644987 = weight(_text_:22 in 4685) [ClassicSimilarity], result of:
              0.054644987 = score(doc=4685,freq=2.0), product of:
                0.17654699 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050415643 = queryNorm
                0.30952093 = fieldWeight in 4685, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4685)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    14. 8.2011 13:33:22
  10. Scheir, P.; Pammer, V.; Lindstaedt, S.N.: Information retrieval on the Semantic Web : does it exist? (2007) 0.01
    0.006717136 = product of:
      0.026868545 = sum of:
        0.026868545 = weight(_text_:information in 4329) [ClassicSimilarity], result of:
          0.026868545 = score(doc=4329,freq=10.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.3035872 = fieldWeight in 4329, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4329)
      0.25 = coord(1/4)
    
    Abstract
    Plenty of contemporary attempts to search exist that are associated with the area of Semantic Web. But which of them qualify as information retrieval for the Semantic Web? Do such approaches exist? To answer these questions we take a look at the nature of the Semantic Web and Semantic Desktop and at definitions for information and data retrieval. We survey current approaches referred to by their authors as information retrieval for the Semantic Web or that use Semantic Web technology for search.
    Source
    Lernen - Wissen - Adaption : workshop proceedings / LWA 2007, Halle, September 2007. Martin Luther University Halle-Wittenberg, Institute for Informatics, Databases and Information Systems. Hrsg.: Alexander Hinneburg
  11. Rindflesch, T.C.; Aronson, A.R.: Semantic processing in information retrieval (1993) 0.01
    0.006007989 = product of:
      0.024031956 = sum of:
        0.024031956 = weight(_text_:information in 4121) [ClassicSimilarity], result of:
          0.024031956 = score(doc=4121,freq=8.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.27153665 = fieldWeight in 4121, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4121)
      0.25 = coord(1/4)
    
    Abstract
    Intuition suggests that one way to enhance the information retrieval process would be the use of phrases to characterize the contents of text. A number of researchers, however, have noted that phrases alone do not improve retrieval effectiveness. In this paper we briefly review the use of phrases in information retrieval and then suggest extensions to this paradigm using semantic information. We claim that semantic processing, which can be viewed as expressing relations between the concepts represented by phrases, will in fact enhance retrieval effectiveness. The availability of the UMLS® domain model, which we exploit extensively, significantly contributes to the feasibility of this processing.
  12. Aizawa, A.; Kohlhase, M.: Mathematical information retrieval (2021) 0.01
    0.006007989 = product of:
      0.024031956 = sum of:
        0.024031956 = weight(_text_:information in 667) [ClassicSimilarity], result of:
          0.024031956 = score(doc=667,freq=8.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.27153665 = fieldWeight in 667, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=667)
      0.25 = coord(1/4)
    
    Abstract
    We present an overview of the NTCIR Math Tasks organized during NTCIR-10, 11, and 12. These tasks are primarily dedicated to techniques for searching mathematical content with formula expressions. In this chapter, we first summarize the task design and introduce test collections generated in the tasks. We also describe the features and main challenges of mathematical information retrieval systems and discuss future perspectives in the field.
    Series
    ¬The Information retrieval series, vol 43
    Source
    Evaluating information retrieval and access tasks. Eds.: Sakai, T., Oard, D., Kando, N. [https://doi.org/10.1007/978-981-15-5554-1_12]
  13. Zhang, L.; Liu, Q.L.; Zhang, J.; Wang, H.F.; Pan, Y.; Yu, Y.: Semplore: an IR approach to scalable hybrid query of Semantic Web data (2007) 0.01
    0.0056770155 = product of:
      0.022708062 = sum of:
        0.022708062 = weight(_text_:information in 231) [ClassicSimilarity], result of:
          0.022708062 = score(doc=231,freq=14.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.256578 = fieldWeight in 231, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=231)
      0.25 = coord(1/4)
    
    Abstract
    As an extension to the current Web, Semantic Web will not only contain structured data with machine understandable semantics but also textual information. While structured queries can be used to find information more precisely on the Semantic Web, keyword searches are still needed to help exploit textual information. It thus becomes very important that we can combine precise structured queries with imprecise keyword searches to have a hybrid query capability. In addition, due to the huge volume of information on the Semantic Web, the hybrid query must be processed in a very scalable way. In this paper, we define such a hybrid query capability that combines unary tree-shaped structured queries with keyword searches. We show how existing information retrieval (IR) index structures and functions can be reused to index semantic web data and its textual information, and how the hybrid query is evaluated on the index structure using IR engines in an efficient and scalable manner. We implemented this IR approach in an engine called Semplore. Comprehensive experiments on its performance show that it is a promising approach. It leads us to believe that it may be possible to evolve current web search engines to query and search the Semantic Web. Finally, we briefly describe how Semplore is used for searching Wikipedia and an IBM customer's product information.
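
A toy sketch of the hybrid idea described above (not Semplore's actual index layout): keywords and structured facts are both stored as posting sets in one inverted index, so a hybrid query reduces to an intersection.

```python
from collections import defaultdict

index = defaultdict(set)   # posting lists: "text:term" and "property:value" -> doc ids

def add_doc(doc_id, text, facts):
    for term in text.lower().split():
        index["text:" + term].add(doc_id)
    for prop, value in facts:                       # structured facts, e.g. ("type", "City")
        index[f"{prop}:{value}"].add(doc_id)

add_doc(1, "capital of Hungary on the Danube", [("type", "City"), ("country", "Hungary")])
add_doc(2, "a river crossing central Europe",  [("type", "River")])

def hybrid_query(keywords, facts):
    """Intersect keyword postings with structured-fact postings."""
    postings = [index["text:" + k] for k in keywords] + [index[f"{p}:{v}"] for p, v in facts]
    return set.intersection(*postings) if postings else set()

print(hybrid_query(["danube"], [("type", "City")]))   # -> {1}
```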
  14. Giunchiglia, F.; Zaihrayeu, I.; Farazi, F.: Converting classifications into OWL ontologies (2009) 0.01
    0.0053105257 = product of:
      0.021242103 = sum of:
        0.021242103 = product of:
          0.042484205 = sum of:
            0.042484205 = weight(_text_:organization in 4690) [ClassicSimilarity], result of:
              0.042484205 = score(doc=4690,freq=2.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.23635197 = fieldWeight in 4690, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4690)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Classification schemes, such as the DMoZ web directory, provide a convenient and intuitive way for humans to access classified contents. While being easy to be dealt with for humans, classification schemes remain hard to be reasoned about by automated software agents. Among other things, this hardness is conditioned by the ambiguous nature of the natural language used to describe classification categories. In this paper we describe how classification schemes can be converted into OWL ontologies, thus enabling reasoning on them by Semantic Web applications. The proposed solution is based on a two phase approach in which category names are first encoded in a concept language and then, together with the structure of the classification scheme, are converted into an OWL ontology. We demonstrate the practical applicability of our approach by showing how the results of reasoning on these OWL ontologies can help improve the organization and use of web directories.
  15. SKOS Simple Knowledge Organization System Primer (2009) 0.01
    0.0053105257 = product of:
      0.021242103 = sum of:
        0.021242103 = product of:
          0.042484205 = sum of:
            0.042484205 = weight(_text_:organization in 4795) [ClassicSimilarity], result of:
              0.042484205 = score(doc=4795,freq=2.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.23635197 = fieldWeight in 4795, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4795)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  16. Pepper, S.: ¬The TAO of topic maps : finding the way in the age of infoglut (2002) 0.01
    0.0052030715 = product of:
      0.020812286 = sum of:
        0.020812286 = weight(_text_:information in 4724) [ClassicSimilarity], result of:
          0.020812286 = score(doc=4724,freq=6.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.23515764 = fieldWeight in 4724, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4724)
      0.25 = coord(1/4)
    
    Abstract
    Topic maps are a new ISO standard for describing knowledge structures and associating them with information resources. As such they constitute an enabling technology for knowledge management. Dubbed "the GPS of the information universe", topic maps are also destined to provide powerful new ways of navigating large and interconnected corpora. While it is possible to represent immensely complex structures using topic maps, the basic concepts of the model - Topics, Associations, and Occurrences (TAO) - are easily grasped. This paper provides a non-technical introduction to these and other concepts (the IFS and BUTS of topic maps), relating them to things that are familiar to all of us from the realms of publishing and information management, and attempting to convey some idea of the uses to which topic maps will be put in the future.
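
A toy data-structure sketch of the TAO triad named above (topics, associations, occurrences); this is only a paraphrase for illustration, not the full ISO topic map data model:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Occurrence:
    resource: str               # link to an information resource, e.g. a URL
    kind: str = "mention"

@dataclass
class Topic:
    name: str
    occurrences: List[Occurrence] = field(default_factory=list)

@dataclass
class Association:
    kind: str                   # type of the relationship
    members: Tuple[Topic, ...]  # the topics it connects

puccini = Topic("Puccini", [Occurrence("https://en.wikipedia.org/wiki/Giacomo_Puccini")])
tosca = Topic("Tosca")
link = Association("composed-by", (tosca, puccini))
print(link.kind, [t.name for t in link.members])
```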
  17. Bittner, T.: ¬An introduction to formal ontology and how it can facilitate semantic interoperability (2005) 0.01
    0.005149705 = product of:
      0.02059882 = sum of:
        0.02059882 = weight(_text_:information in 3272) [ClassicSimilarity], result of:
          0.02059882 = score(doc=3272,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.23274569 = fieldWeight in 3272, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=3272)
      0.25 = coord(1/4)
    
    Abstract
    This course gives an introduction to formal ontology and how it can be used to facilitate semantic interoperability of (Geographic) Information Systems.
  18. Kara, S.: ¬An ontology-based retrieval system using semantic indexing (2012) 0.01
    0.005149705 = product of:
      0.02059882 = sum of:
        0.02059882 = weight(_text_:information in 3829) [ClassicSimilarity], result of:
          0.02059882 = score(doc=3829,freq=8.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.23274569 = fieldWeight in 3829, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3829)
      0.25 = coord(1/4)
    
    Abstract
    In this thesis, we present an ontology-based information extraction and retrieval system and its application to the soccer domain. In general, we deal with three issues in semantic search, namely, usability, scalability and retrieval performance. We propose a keyword-based semantic retrieval approach. The performance of the system is improved considerably using domain-specific information extraction, inference and rules. Scalability is achieved by adapting a semantic indexing approach. The system is implemented using the state-of-the-art technologies in the Semantic Web and its performance is evaluated against traditional systems as well as the query expansion methods. Furthermore, a detailed evaluation is provided to observe the performance gain due to domain-specific information extraction and inference. Finally, we show how we use semantic indexing to solve simple structural ambiguities.
    Source
    Information Systems. 37(2012) no. 4, S.294-305
  19. Tramullas, J.; Garrido-Picazo, P.; Sánchez-Casabón, A.I.: Use of Wikipedia categories on information retrieval research : a brief review (2020) 0.01
    0.005149705 = product of:
      0.02059882 = sum of:
        0.02059882 = weight(_text_:information in 5365) [ClassicSimilarity], result of:
          0.02059882 = score(doc=5365,freq=8.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.23274569 = fieldWeight in 5365, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5365)
      0.25 = coord(1/4)
    
    Abstract
    Wikipedia categories, a classification scheme built for organizing and describing Wikipedia articles, are being applied in computer science research. This paper adopts a systematic literature review approach, in order to identify different approaches and uses of Wikipedia categories in information retrieval research. Several types of work are identified, depending on the intrinsic study of the categories structure, or its use as a tool for the processing and analysis of documentary corpora other than Wikipedia. Information retrieval is identified as one of the major areas of use, in particular its application in the refinement and improvement of search expressions, and the construction of textual corpora. However, the set of available works shows that in many cases research approaches applied and results obtained can be integrated into a comprehensive and inclusive concept of information retrieval.
  20. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.01
    0.0051229675 = product of:
      0.02049187 = sum of:
        0.02049187 = product of:
          0.04098374 = sum of:
            0.04098374 = weight(_text_:22 in 4649) [ClassicSimilarity], result of:
              0.04098374 = score(doc=4649,freq=2.0), product of:
                0.17654699 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050415643 = queryNorm
                0.23214069 = fieldWeight in 4649, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4649)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    26.12.2011 13:40:22

Types

  • a 39
  • n 6
  • p 2
  • x 2
  • r 1
  • s 1