Search (329 results, page 1 of 17)

  • language_ss:"e"
  • theme_ss:"Wissensrepräsentation"
  • type_ss:"a"
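
  The relevance score shown after each title is a Lucene ClassicSimilarity (tf-idf) value. As a rough sketch, assuming the standard ClassicSimilarity formula, each matching query term contributes the product of a query weight and a field weight, and the document score is approximately the coordination factor (matching clauses / total clauses) times the sum of these contributions:

    weight(t, d) = (idf(t) × queryNorm) × (sqrt(tf(t, d)) × idf(t) × fieldNorm(d))

  For example, a term with idf = 3.0620887, queryNorm = 0.041294612, fieldNorm = 0.046875 and term frequency 2 contributes (3.0620887 × 0.041294612) × (1.4142135 × 3.0620887 × 0.046875) ≈ 0.1264 × 0.2030 ≈ 0.0257.
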
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.38
    Abstract
    In a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values that form a group of child concepts. We call these attributes facets: classification, for example, has facets such as application (e.g., face recognition), model (e.g., SVM, kNN), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, the faceted relations are parent-to-child links, whereas the hypernym relation is a multi-hop, ancestor-to-descendant link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations in a data science corpus, and we propose a hierarchy growth algorithm that infers parent-child links from these three types of relationships. It resolves conflicts by maintaining the acyclic structure of the hierarchy.
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf
    Source
    Graph-Based Methods for Natural Language Processing - proceedings of the Thirteenth Workshop (TextGraphs-13): November 4, 2019, Hong Kong : EMNLP-IJCNLP 2019. Ed.: Dmitry Ustalov
  2. Aitken, S.; Reid, S.: Evaluation of an ontology-based information retrieval tool (2000) 0.07
    Abstract
    This paper evaluates the use of an explicit domain ontology in an information retrieval tool. The evaluation compares the performance of ontology-enhanced retrieval with keyword retrieval for a fixed set of queries across several data sets. The robustness of the IR approach is assessed by comparing the performance of the tool on the original data set with that on previously unseen data.
    Content
    Contribution to: Workshop on the Applications of Ontologies and Problem-Solving Methods, European Conference on Artificial Intelligence 2000, Berlin. Eds.: Gómez-Pérez, A., Benjamins, V.R., Guarino, N., and Uschold, M.
  3. Giri, K.; Gokhale, P.: Developing a banking service ontology using Protégé, an open source software (2015) 0.06
    Abstract
    Computers have transformed from single, isolated devices into entry points to a worldwide network of information exchange. Consequently, support for the exchange of data, information, and knowledge is becoming the key issue in computer technology today. The increasing volume of data available on the Web makes information retrieval a tedious and difficult task. Researchers are now exploring the possibility of creating a semantic web, in which meaning is made explicit, allowing machines to process and integrate web resources intelligently. The vision of the semantic web introduces the next generation of the Web by establishing a layer of machine-understandable data. The success of the semantic web depends on the easy creation, integration and use of semantic data, which will in turn depend on web ontology. The faceted approach towards analyzing and representing knowledge given by S. R. Ranganathan would be useful in this regard, and ontology development in different fields is one area where this approach could be applied. This paper presents a case of developing an ontology for the field of banking.
    Source
    Annals of library and information studies. 62(2015) no.4, S.281-285
  4. Padmavathi, T.; Krishnamurthy, M.: Ontological representation of knowledge for developing information services in food science and technology (2012) 0.06
    Abstract
    The knowledge explosion in various fields during recent years has resulted in the creation of vast amounts of online scientific literature. Food Science & Technology (FST) is also an important subject domain where rapid developments are taking place due to diverse research and development activities. As a result, information storage and retrieval has become very complex, and current information retrieval systems (IRs) are being challenged in terms of both adequate precision and response time. To overcome these limitations, as well as to provide effective natural-language-based retrieval, a suitable knowledge engineering framework needs to be applied to represent, share and discover information. Semantic web technologies provide mechanisms for creating knowledge bases, ontologies and rules for handling data that promise to improve the quality of information retrieval. Ontologies are the backbone of such knowledge systems. This paper presents a framework for the semantic representation of a large repository of content in the domain of FST.
    Source
    Categories, contexts and relations in knowledge organization: Proceedings of the Twelfth International ISKO Conference 6-9 August 2012, Mysore, India. Eds.: Neelameghan, A. u. K.S. Raghavan
  5. Paralic, J.; Kostial, I.: Ontology-based information retrieval (2003) 0.06
    Abstract
    In this article, a new ontology-based approach to information retrieval (IR) is presented. The system is based on a domain knowledge representation schema in the form of an ontology. New resources registered within the system are linked to concepts from this ontology. In this way, resources may be retrieved based on these associations and not only on partial or exact term matching, as the use of the vector model presumes. In order to evaluate the quality of this retrieval mechanism, experiments to measure retrieval efficiency have been performed with the well-known Cystic Fibrosis collection of medical scientific papers. The ontology-based retrieval mechanism has been compared with traditional full-text search based on the vector IR model as well as with the Latent Semantic Indexing method.
  6. MacFarlane, A.; Missaoui, S.; Frankowska-Takhari, S.: On machine learning and knowledge organization in multimedia information retrieval (2020) 0.06
    Abstract
    Recent technological developments have increased the use of machine learning to solve many problems, including many in information retrieval. Multimedia information retrieval as a problem represents a significant challenge to machine learning as a technological solution, but some problems can still be addressed by using appropriate AI techniques. We review the technological developments and provide a perspective on the use of machine learning in conjunction with knowledge organization to address multimedia IR needs. The semantic gap in multimedia IR remains a significant problem in the field, and solutions to it are many years off. However, new technological developments allow the use of knowledge organization and machine learning in multimedia search systems and services. Specifically, we argue that improved detection of some classes of low-level features in images, music and video can be used in conjunction with knowledge organization to tag or label multimedia content for better retrieval performance. We provide an overview of the use of knowledge organization schemes in machine learning and make recommendations to information professionals on the use of this technology with knowledge organization techniques to solve multimedia IR problems. We introduce a five-step process model that extracts features from multimedia objects (Step 1) from both knowledge organization (Step 1a) and machine learning (Step 1b), merging them together (Step 2) to create an index of those multimedia objects (Step 3). We also cover further steps in creating an application to utilize the multimedia objects (Step 4) and in maintaining and updating the database of features on those objects (Step 5).
  7. Baião Salgado Silva, G.; Lima, G.Â. Borém de Oliveira: Using topic maps in establishing compatibility of semantically structured hypertext contents (2012) 0.06
    Abstract
    Considering the characteristics of hypertext systems and problems such as cognitive overload and the disorientation of users, this project studies subject hypertext documents that have undergone conceptual structuring using facets for content representation and improvement of information retrieval during navigation. The main objective was to assess the possibility of the application of topic map technology for automating the compatibilization process of these structures. For this purpose, two dissertations from the UFMG Information Science Post-Graduation Program were adopted as samples. Both dissertations had been duly analyzed and structured on the MHTX (Hypertextual Map) prototype database. The faceted structures of both dissertations, which had been represented in conceptual maps, were then converted into topic maps. It was then possible to use the merge property of the topic maps to promote the semantic interrelationship between the maps and, consequently, between the hypertextual information resources proper. The merge results were then analyzed in the light of theories dealing with the compatibilization of languages developed within the realm of information technology and librarianship from the 1960s on. The main goals accomplished were: (a) the detailed conceptualization of the merge process of the topic maps, considering the possible compatibilization levels and the applicability of this technology in the integration of faceted structures; and (b) the production of a detailed sequence of steps that may be used in the implementation of topic maps based on faceted structures.
    Date
    22. 2.2013 11:39:23
  8. Scheir, P.; Pammer, V.; Lindstaedt, S.N.: Information retrieval on the Semantic Web : does it exist? (2007) 0.06
    Abstract
    Plenty of contemporary search approaches exist that are associated with the area of the Semantic Web. But which of them qualify as information retrieval for the Semantic Web? Do such approaches exist at all? To answer these questions, we take a look at the nature of the Semantic Web and the Semantic Desktop and at definitions of information and data retrieval. We survey current approaches that are referred to by their authors as information retrieval for the Semantic Web or that use Semantic Web technology for search.
  9. Gladun, A.; Rogushina, J.: Development of domain thesaurus as a set of ontology concepts with use of semantic similarity and elements of combinatorial optimization (2021) 0.06
    Abstract
    We consider the use of ontological background knowledge in intelligent information systems and analyze ways of reducing it in line with the specifics of a particular user task. Such reduction is aimed at simplifying knowledge processing without loss of significant information. We propose methods for generating task thesauri, based on the domain ontology, that contain the subset of ontological concepts and relations that can be used in solving the task. Combinatorial optimization is used to minimize the task thesaurus. In this approach, semantic similarity estimates are used to determine the significance of a concept for the user task. Practical examples of applying optimized thesauri to semantic retrieval and competence analysis demonstrate the efficiency of the proposed approach.
  10. Teskey, F.N.: Enriched knowledge representation for information retrieval (1987) 0.06
    Abstract
    In this paper we identify the need for a new theory of information. An information model is developed which distinguishes between data, as directly observable facts; information, as structured collections of data; and knowledge, as methods of using information. The model is intended to support a wide range of information systems. In the paper we develop the use of the model for a semantic information retrieval system using the concept of semantic categories. The likely benefits of this are discussed, though as yet no detailed evaluation has been conducted.
    Source
    SIGIR'87: Proceedings of the 10th annual international ACM SIGIR conference on Research and development in information retrieval
  11. Thenmalar, S.; Geetha, T.V.: Enhanced ontology-based indexing and searching (2014) 0.05
    Abstract
    Purpose - The purpose of this paper is to improve concept-based search by incorporating structural ontological information such as concepts and relations. Generally, semantic-based information retrieval aims to identify relevant information based on the meanings of the query terms or on the context of the terms, and its performance is assessed through the standard measures of precision and recall. Higher precision means that the retrieved documents are (meaningfully) relevant, while lower recall means less coverage of the concepts. Design/methodology/approach - In this paper, the authors enhance the existing ontology-based indexing proposed by Kohler et al. by incorporating sibling information into the index. The index designed by Kohler et al. contains only super- and sub-concepts from the ontology. In addition, in our approach, we focus on two tasks, query expansion and ranking of the expanded queries, to improve the efficiency of the ontology-based search. These tasks make use of ontological concepts and the relations existing between those concepts so as to obtain semantically more relevant search results for a given query. Findings - The proposed ontology-based indexing technique is investigated by analysing the coverage of the concepts populated in the index. Here, we introduce a new measure, called the index enhancement measure, to estimate the coverage of the ontological concepts being indexed. We have evaluated the ontology-based search for the tourism domain with tourism documents and a tourism-specific ontology. Search results based on the use of the ontology with and without query expansion are compared to estimate the efficiency of the proposed query expansion task. The ranking is compared with the ORank system to evaluate the performance of our ontology-based search. From these analyses, the ontology-based search results show better recall when compared to other concept-based search systems. The mean average precision of the ontology-based search is found to be 0.79 and the recall 0.65; the ORank system has a mean average precision of 0.62 and a recall of 0.51, while the concept-based search has a mean average precision of 0.56 and a recall of 0.42. Practical implications - When a concept is not present in the domain-specific ontology, it cannot be indexed. When a given query term is not available in the ontology, term-based results are retrieved. Originality/value - In addition to super- and sub-concepts, we incorporate the concepts present at the same level (siblings) into the ontological index. The structural information from the ontology is used for query expansion. The ranking of the documents depends on the type of the query (single-concept queries, multiple-concept queries and concept-with-relation queries) and the ontological relations that exist in the query and the documents. With this ontological structural information, the search results show better coverage of concepts with respect to the query.
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 66(2014) no.6, S.678-696
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  12. Green, R.: See-also relationships in the Dewey Decimal Classification (2011) 0.05
    Abstract
    This paper investigates the semantics of topical, associative see-also relationships in schedule and table entries of the Dewey Decimal Classification (DDC) system. Based on the see-also relationships in a random sample of 100 classes containing one or more of these relationships, a semi-structured inventory of sources of see-also relationships is generated, of which the most important are lexical similarity, complementarity, facet difference, and relational configuration difference. The premise that see-also relationships based on lexical similarity may be language-specific is briefly examined. The paper concludes with recommendations on the continued use of see-also relationships in the DDC.
    Content
    Papers from the Third North American Symposium on Knowledge Organization, June 16-17, Toronto, Canada.
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  13. Rindflesch, T.C.; Aronson, A.R.: Semantic processing in information retrieval (1993) 0.05
    Abstract
    Intuition suggests that one way to enhance the information retrieval process would be the use of phrases to characterize the contents of text. A number of researchers, however, have noted that phrases alone do not improve retrieval effectiveness. In this paper we briefly review the use of phrases in information retrieval and then suggest extensions to this paradigm using semantic information. We claim that semantic processing, which can be viewed as expressing relations between the concepts represented by phrases, will in fact enhance retrieval effectiveness. The availability of the UMLS® domain model, which we exploit extensively, significantly contributes to the feasibility of this processing.
  14. Maheswari, J.U.; Karpagam, G.R.: ¬A conceptual framework for ontology based information retrieval (2010) 0.05
    Abstract
    Improving information retrieval by employing ontologies to overcome the limitations of syntactic search has been one of the field's aspirations since its emergence. This paper proposes a conceptual framework for ontology-based information retrieval. The framework consists of five phases, namely query parsing, word stemming, ontology matching, weight assignment, and ranking and information retrieval. In the first phase, the user query is parsed into a sequence of words. In the stemming phase, the parsed contents are curtailed to identify the significant words by ignoring superfluous terms such as "to", "is", "ed", "about" and the like. The objective of the stemming phase is to reduce feature descriptors to root words, which in turn increases efficiency; it cuts the time spent searching for superfluous terms, which may not significantly influence the effectiveness of the retrieval process. In the third phase, ontology matching is carried out by matching the parsed words with the relevant terms in the existing ontology. If the ontology does not exist, it is recommended to generate the required ontology. In the fourth phase, weights are assigned based on the distance between the stemmed words and the terms in the ontology, using an improved matchmaking algorithm. The weights range from 0 to 1 according to the level of distance in the ontology (superclass-subclass). Aggregate weights are calculated for all combinations of stemmed words. The combination with the highest score is ranked as the best, and the corresponding information is retrieved. The conceptual workflow is illustrated with an e-governance case study, an Academic Information System.
    Source
    International Journal of Engineering Science and Technology. 2(2010), no.10, S.5679-5688
  15. Fischer, D.H.: From thesauri towards ontologies? (1998) 0.05
    Abstract
    The ISO 2788 guidelines for monolingual thesauri contain a differentiation of "the hierarchical relationship" into "generic", "partitive", and "instance", which, for purposes of document retrieval, was deemed adequate. However, ontologies, designed as language inventories for a wider scope of knowledge representation, are based on all these and some more logical differentiations. Rereading the ISO 2788 standard and inspecting the published Cyc Upper Ontology, it is argued that the adoption of the document-retrieval definition of subsumption generally prevents the conception or use of a thesaurus as a substructure of an ontology of the new kind as constructed for AI applications. When a thesaurus is used for fact description and inference on fact descriptions, the instance-of relationship too should be reconsidered: It may also link concepts and metaconcepts, and then its distinction from subsumption is needed. The treatment of the instance-of relationship in thesauri, the Cyc Upper Ontology, and WordNet is described from this perspective
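    A minimal sketch, using rdflib, of the distinction the abstract draws between subsumption and the instance-of relationship, including an instance-of link from a concept to a metaconcept; the example namespace and class names are assumptions, not taken from ISO 2788, the Cyc Upper Ontology, or WordNet.

      from rdflib import Graph, Namespace, RDF, RDFS

      EX = Namespace("http://example.org/")
      g = Graph()

      # Subsumption (the generic hierarchical relationship): every Thesaurus is a ControlledVocabulary.
      g.add((EX.Thesaurus, RDFS.subClassOf, EX.ControlledVocabulary))
      g.add((EX.ControlledVocabulary, RDFS.subClassOf, EX.KnowledgeOrganizationSystem))

      # Instance-of linking an individual to a concept ...
      g.add((EX.AGROVOC, RDF.type, EX.Thesaurus))

      # ... and instance-of linking a concept to a metaconcept, the case in which
      # keeping it distinct from subsumption matters for inference on fact descriptions.
      g.add((EX.Thesaurus, RDF.type, EX.DocumentationLanguage))

      # Subsumption is transitive: walking it up from Thesaurus yields the class itself
      # and its superclasses, while the rdf:type links above do not join this chain.
      for cls in g.transitive_objects(EX.Thesaurus, RDFS.subClassOf):
          print(cls)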
    Source
    Structures and relations in knowledge organization: Proceedings of the 5th International ISKO-Conference, Lille, 25.-29.8.1998. Ed.: W. Mustafa el Hadi et al
  16. Kruk, S.R.; Kruk, E.; Stankiewicz, K.: Evaluation of semantic and social technologies for digital libraries (2009) 0.04
    0.04341149 = product of:
      0.08682298 = sum of:
        0.036299463 = weight(_text_:use in 3387) [ClassicSimilarity], result of:
          0.036299463 = score(doc=3387,freq=4.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.2870708 = fieldWeight in 3387, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.046875 = fieldNorm(doc=3387)
        0.018933605 = weight(_text_:of in 3387) [ClassicSimilarity], result of:
          0.018933605 = score(doc=3387,freq=16.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.2932045 = fieldWeight in 3387, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=3387)
        0.014805362 = product of:
          0.029610723 = sum of:
            0.029610723 = weight(_text_:on in 3387) [ClassicSimilarity], result of:
              0.029610723 = score(doc=3387,freq=10.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.32602316 = fieldWeight in 3387, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3387)
          0.5 = coord(1/2)
        0.016784549 = product of:
          0.033569098 = sum of:
            0.033569098 = weight(_text_:22 in 3387) [ClassicSimilarity], result of:
              0.033569098 = score(doc=3387,freq=2.0), product of:
                0.1446067 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041294612 = queryNorm
                0.23214069 = fieldWeight in 3387, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3387)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    Libraries are the tools we use to learn and to answer our questions. The quality of our work depends, among other things, on the quality of the tools we use. Recent research in digital libraries is focused, on the one hand, on improving the infrastructure of digital library management systems (DLMS), and, on the other, on improving the metadata models used to annotate the collections of objects maintained by a DLMS. The latter includes, among others, semantic web and social networking technologies, which are currently being introduced to the digital libraries domain. The expected outcome is that the overall quality of information discovery in digital libraries can be improved by employing social and semantic technologies. In this chapter we present the results of an evaluation of social and semantic end-user information discovery services for digital libraries.
    Date
    1. 8.2010 12:35:22
  17. Wenige, L.; Ruhland, J.: Similarity-based knowledge graph queries for recommendation retrieval (2019) 0.04
    0.04230194 = product of:
      0.08460388 = sum of:
        0.04174695 = weight(_text_:retrieval in 5864) [ClassicSimilarity], result of:
          0.04174695 = score(doc=5864,freq=8.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.33420905 = fieldWeight in 5864, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5864)
        0.021389665 = weight(_text_:use in 5864) [ClassicSimilarity], result of:
          0.021389665 = score(doc=5864,freq=2.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.1691581 = fieldWeight in 5864, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5864)
        0.013664153 = weight(_text_:of in 5864) [ClassicSimilarity], result of:
          0.013664153 = score(doc=5864,freq=12.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.21160212 = fieldWeight in 5864, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5864)
        0.007803111 = product of:
          0.015606222 = sum of:
            0.015606222 = weight(_text_:on in 5864) [ClassicSimilarity], result of:
              0.015606222 = score(doc=5864,freq=4.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.1718293 = fieldWeight in 5864, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5864)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    Current retrieval and recommendation approaches rely on hard-wired data models. This hinders personalized customizations to meet information needs of users in a more flexible manner. Therefore, the paper investigates how similarity-based retrieval strategies can be combined with graph queries to enable users or system providers to explore repositories in the Linked Open Data (LOD) cloud more thoroughly. For this purpose, we developed novel content-based recommendation approaches. They rely on concept annotations of Simple Knowledge Organization System (SKOS) vocabularies and a SPARQL-based query language that facilitates advanced and personalized requests for openly available knowledge graphs. We have comprehensively evaluated the novel search strategies in several test cases and example application domains (i.e., travel search and multimedia retrieval). The results of the web-based online experiments showed that our approaches increase the recall and diversity of recommendations or at least provide a competitive alternative strategy of resource access when conventional methods do not provide helpful suggestions. The findings may be of use for Linked Data-enabled recommender systems (LDRS) as well as for semantic search engines that can consume LOD resources.
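    A minimal sketch of the kind of SKOS-based similarity query the abstract describes, issued through rdflib over a locally parsed Linked Open Data dump; the file name, the dct:subject/skos:broader pattern, the seed URI and the overlap count are illustrative assumptions, not the authors' query language.

      from rdflib import Graph

      g = Graph()
      g.parse("lod_dump.ttl", format="turtle")  # hypothetical local copy of a LOD dataset

      # Find resources annotated with concepts that share a broader SKOS concept
      # with the concepts annotating a seed resource, i.e. a simple similarity query.
      QUERY = """
      PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
      PREFIX dct:  <http://purl.org/dc/terms/>

      SELECT ?candidate (COUNT(?shared) AS ?overlap) WHERE {
          <http://example.org/resource/seed> dct:subject ?seedConcept .
          ?seedConcept skos:broader ?shared .
          ?candidate dct:subject ?otherConcept .
          ?otherConcept skos:broader ?shared .
          FILTER (?candidate != <http://example.org/resource/seed>)
      }
      GROUP BY ?candidate
      ORDER BY DESC(?overlap)
      LIMIT 10
      """

      for row in g.query(QUERY):
          print(row.candidate, row.overlap)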
    Content
    Cf.: https://www.researchgate.net/publication/333358714_Similarity-based_knowledge_graph_queries_for_recommendation_retrieval. Cf. also: http://semantic-web-journal.net/content/similarity-based-knowledge-graph-queries-recommendation-retrieval-1.
  18. Gray, A.J.G.; Gray, N.; Hall, C.W.; Ounis, I.: Finding the right term : retrieving and exploring semantic concepts in astronomical vocabularies (2010) 0.04
    0.04210669 = product of:
      0.08421338 = sum of:
        0.020873476 = weight(_text_:retrieval in 4235) [ClassicSimilarity], result of:
          0.020873476 = score(doc=4235,freq=2.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.16710453 = fieldWeight in 4235, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4235)
        0.037047986 = weight(_text_:use in 4235) [ClassicSimilarity], result of:
          0.037047986 = score(doc=4235,freq=6.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.29299045 = fieldWeight in 4235, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4235)
        0.0167351 = weight(_text_:of in 4235) [ClassicSimilarity], result of:
          0.0167351 = score(doc=4235,freq=18.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.25915858 = fieldWeight in 4235, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4235)
        0.00955682 = product of:
          0.01911364 = sum of:
            0.01911364 = weight(_text_:on in 4235) [ClassicSimilarity], result of:
              0.01911364 = score(doc=4235,freq=6.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.21044704 = fieldWeight in 4235, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4235)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    Astronomy, like many domains, already has several sets of terminology in general use, referred to as controlled vocabularies. For example, the keywords for tagging journal articles, or the taxonomy of terms used to label image files. These existing vocabularies can be encoded into skos, a W3C proposed recommendation for representing vocabularies on the Semantic Web, so that computer systems can help users to search for and discover resources tagged with vocabulary concepts. However, this requires a search mechanism to go from a user-supplied string to a vocabulary concept. In this paper, we present our experiences in implementing the Vocabulary Explorer, a vocabulary search service based on the Terrier Information Retrieval Platform. We investigate the capabilities of existing document weighting models for identifying the correct vocabulary concept for a query. Due to the highly structured nature of a skos encoded vocabulary, we investigate the effects of term weighting (boosting the score of concepts that match on particular fields of a vocabulary concept), and query expansion. We found that the existing document weighting models provided very high quality results, but these could be improved further with the use of term weighting that makes use of the semantic evidence.
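    A minimal sketch of field-weighted matching over SKOS concept labels of the kind the abstract investigates; it uses the Whoosh library purely for illustration (the Vocabulary Explorer itself is built on the Terrier platform), and the field names and boost values are assumptions.

      import tempfile
      from whoosh.fields import Schema, TEXT, ID
      from whoosh.index import create_in
      from whoosh.qparser import MultifieldParser

      # Matches on prefLabel are boosted over altLabel and definition matches.
      schema = Schema(
          uri=ID(stored=True),
          prefLabel=TEXT(field_boost=3.0, stored=True),
          altLabel=TEXT(field_boost=2.0),
          definition=TEXT(field_boost=1.0),
      )

      ix = create_in(tempfile.mkdtemp(), schema)
      writer = ix.writer()
      writer.add_document(
          uri="http://example.org/concept/spiral-galaxy",
          prefLabel="spiral galaxy",
          altLabel="disc galaxy",
          definition="a galaxy with a flat rotating disc and spiral arms",
      )
      writer.commit()

      with ix.searcher() as searcher:
          query = MultifieldParser(["prefLabel", "altLabel", "definition"], ix.schema).parse("galaxy")
          for hit in searcher.search(query):
              print(hit["uri"], hit.score)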
  19. Köhler, J.; Philippi, S.; Specht, M.; Rüegg, A.: Ontology based text indexing and querying for the semantic web (2006) 0.04
    0.041463543 = product of:
      0.082927085 = sum of:
        0.029519552 = weight(_text_:retrieval in 3280) [ClassicSimilarity], result of:
          0.029519552 = score(doc=3280,freq=4.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.23632148 = fieldWeight in 3280, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3280)
        0.030249555 = weight(_text_:use in 3280) [ClassicSimilarity], result of:
          0.030249555 = score(doc=3280,freq=4.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.23922569 = fieldWeight in 3280, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3280)
        0.017640345 = weight(_text_:of in 3280) [ClassicSimilarity], result of:
          0.017640345 = score(doc=3280,freq=20.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.27317715 = fieldWeight in 3280, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3280)
        0.0055176322 = product of:
          0.0110352645 = sum of:
            0.0110352645 = weight(_text_:on in 3280) [ClassicSimilarity], result of:
              0.0110352645 = score(doc=3280,freq=2.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.121501654 = fieldWeight in 3280, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3280)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    This publication shows how the gap between the HTML-based internet and the RDF-based vision of the semantic web might be bridged by linking words in texts to concepts of ontologies. Most current search engines use indexes that are built at the syntactic level and return hits based on simple string comparisons. However, the indexes do not contain synonyms, cannot differentiate between homonyms ('mouse' as a pointing device vs. 'mouse' as an animal), and users receive different search results when they use different conjugation forms of the same word. In this publication, we present a system that uses ontologies and Natural Language Processing techniques to index texts, and thus supports word sense disambiguation and the retrieval of texts that contain equivalent words, by indexing them to concepts of ontologies. For this purpose, we developed fully automated methods for mapping equivalent concepts of imported RDF ontologies (for this prototype WordNet, SUMO and OpenCyc). These methods thus allow the seamless integration of domain-specific ontologies for concept-based information retrieval in different domains. To demonstrate the practical workability of this approach, a set of web pages containing synonyms and homonyms was indexed and can be queried via a search-engine-like query frontend. However, the ontology-based indexing approach can also be used for other data mining applications such as text clustering, relation mining, and searching free-text fields in biological databases. The ontology alignment methods and some of the text mining principles described in this publication are now incorporated into the ONDEX system, http://ondex.sourceforge.net/.
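    A minimal sketch of indexing words to WordNet concepts so that documents can be retrieved by concept rather than by string, using NLTK; the Lesk disambiguation step and the dictionary-based index are simplifications assumed for illustration, not the ONDEX pipeline described in the abstract.

      from collections import defaultdict
      from nltk import word_tokenize          # requires nltk with the 'punkt' data package
      from nltk.wsd import lesk               # requires the 'wordnet' data package

      def concept_index(docs):
          """Map WordNet synset names to the documents whose words express that concept."""
          index = defaultdict(set)
          for doc_id, text in docs.items():
              tokens = word_tokenize(text.lower())
              for token in tokens:
                  synset = lesk(tokens, token)  # naive word sense disambiguation per token
                  if synset is not None:
                      index[synset.name()].add(doc_id)
          return index

      docs = {
          "d1": "The mouse ran across the field.",
          "d2": "Click the left mouse button to select the text.",
      }
      # Both documents contain the string 'mouse'; ideally they end up under different synsets.
      for concept, ids in sorted(concept_index(docs).items()):
          print(concept, sorted(ids))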
  20. Jimeno-Yepes, A.; Berlanga Llavori, R.; Rebholz-Schuhmann, D.: Ontology refinement for improved information retrieval (2010) 0.04
    0.040320404 = product of:
      0.10752107 = sum of:
        0.058445733 = weight(_text_:retrieval in 4234) [ClassicSimilarity], result of:
          0.058445733 = score(doc=4234,freq=8.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.46789268 = fieldWeight in 4234, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4234)
        0.029945528 = weight(_text_:use in 4234) [ClassicSimilarity], result of:
          0.029945528 = score(doc=4234,freq=2.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.23682132 = fieldWeight in 4234, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4234)
        0.019129815 = weight(_text_:of in 4234) [ClassicSimilarity], result of:
          0.019129815 = score(doc=4234,freq=12.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.29624295 = fieldWeight in 4234, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4234)
      0.375 = coord(3/8)
    
    Abstract
    Ontologies are frequently used in information retrieval, their main applications being query expansion, the semantic indexing of documents, and the organization of search results. Ontologies provide lexical items, allow conceptual normalization, and provide different types of relations. However, how to optimize an ontology for information retrieval tasks is still unclear. In this paper, we use an ontology query model to analyze the usefulness of ontologies in effectively performing document searches. Moreover, we propose an algorithm to refine ontologies for information retrieval tasks, with preliminary positive results.
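    A minimal sketch of ontology-based query expansion, the first of the applications the abstract names; the tiny synonym/narrower-term ontology and the expansion rule are assumptions made for illustration, not the refinement algorithm proposed in the paper.

      # Tiny illustrative ontology: each concept carries synonyms and narrower terms.
      ONTOLOGY = {
          "neoplasm": {"synonyms": ["tumor", "tumour"], "narrower": ["carcinoma", "sarcoma"]},
          "therapy": {"synonyms": ["treatment"], "narrower": ["chemotherapy", "radiotherapy"]},
      }

      def expand_query(terms, use_narrower=True):
          """Expand query terms with synonyms (and optionally narrower terms) from the ontology."""
          expanded = []
          for term in terms:
              expanded.append(term)
              entry = ONTOLOGY.get(term)
              if entry:
                  expanded.extend(entry["synonyms"])
                  if use_narrower:
                      expanded.extend(entry["narrower"])
          return expanded

      print(expand_query(["neoplasm", "therapy"]))
      # ['neoplasm', 'tumor', 'tumour', 'carcinoma', 'sarcoma',
      #  'therapy', 'treatment', 'chemotherapy', 'radiotherapy']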
