Search (283 results, page 1 of 15)

  • theme_ss:"Wissensrepräsentation"
  1. Börner, K.: Atlas of knowledge : anyone can map (2015) 0.10
    0.09699771 = product of:
      0.19399542 = sum of:
        0.069634825 = weight(_text_:science in 3355) [ClassicSimilarity], result of:
          0.069634825 = score(doc=3355,freq=18.0), product of:
            0.1329271 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.050463587 = queryNorm
            0.52385724 = fieldWeight in 3355, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=3355)
        0.12436058 = sum of:
          0.0663457 = weight(_text_:network in 3355) [ClassicSimilarity], result of:
            0.0663457 = score(doc=3355,freq=2.0), product of:
              0.22473325 = queryWeight, product of:
                4.4533744 = idf(docFreq=1398, maxDocs=44218)
                0.050463587 = queryNorm
              0.29521978 = fieldWeight in 3355, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.4533744 = idf(docFreq=1398, maxDocs=44218)
                0.046875 = fieldNorm(doc=3355)
          0.058014885 = weight(_text_:22 in 3355) [ClassicSimilarity], result of:
            0.058014885 = score(doc=3355,freq=4.0), product of:
              0.17671488 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050463587 = queryNorm
              0.32829654 = fieldWeight in 3355, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3355)
      0.5 = coord(2/4)
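The explain tree above follows Lucene's ClassicSimilarity (TF-IDF) formula: each term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = √tf × idf × fieldNorm, and the summed term weights are scaled by the coordination factor coord(m/n) for the m of n query clauses that matched. A minimal sketch reproducing the first term's contribution from the values shown:

```python
import math

def classic_term_score(freq, idf, query_norm, field_norm):
    """One term's contribution under Lucene ClassicSimilarity:
    queryWeight (idf * queryNorm) times fieldWeight (sqrt(tf) * idf * fieldNorm)."""
    query_weight = idf * query_norm
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight

# Values taken from the explain tree for _text_:science in doc 3355.
score = classic_term_score(freq=18.0, idf=2.6341193,
                           query_norm=0.050463587, field_norm=0.046875)
# ~0.0696348, matching the first weight in the tree above.

# The final document score applies coord(2/4) = 0.5 to the summed weights:
total = 0.5 * (score + 0.12436058)
```

The same three inputs (idf, queryNorm, fieldNorm) recur in every subtree, which is why all terms in one document share the fieldNorm 0.046875.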
    
    Content
     One of a series of three publications influenced by the travelling exhibit Places & Spaces: Mapping Science, curated by the Cyberinfrastructure for Network Science Center at Indiana University. - Additional materials can be found at http://scimaps.org/atlas2. Extended by: Börner, Katy. Atlas of Science: Visualizing What We Know.
    Date
    22. 1.2017 16:54:03
    22. 1.2017 17:10:56
    LCSH
    Science / Atlases
    Science / Study and teaching / Graphic methods
    Communication in science / Data processing
    Subject
    Science / Atlases
    Science / Study and teaching / Graphic methods
    Communication in science / Data processing
  2. Meng, K.; Ba, Z.; Ma, Y.; Li, G.: ¬A network coupling approach to detecting hierarchical linkages between science and technology (2024) 0.09
    0.090424344 = product of:
      0.120565794 = sum of:
        0.046423215 = weight(_text_:science in 1205) [ClassicSimilarity], result of:
          0.046423215 = score(doc=1205,freq=8.0), product of:
            0.1329271 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.050463587 = queryNorm
            0.34923816 = fieldWeight in 1205, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=1205)
        0.027229078 = weight(_text_:research in 1205) [ClassicSimilarity], result of:
          0.027229078 = score(doc=1205,freq=2.0), product of:
            0.14397179 = queryWeight, product of:
              2.8529835 = idf(docFreq=6931, maxDocs=44218)
              0.050463587 = queryNorm
            0.18912788 = fieldWeight in 1205, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.8529835 = idf(docFreq=6931, maxDocs=44218)
              0.046875 = fieldNorm(doc=1205)
        0.046913497 = product of:
          0.093826994 = sum of:
            0.093826994 = weight(_text_:network in 1205) [ClassicSimilarity], result of:
              0.093826994 = score(doc=1205,freq=4.0), product of:
                0.22473325 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.050463587 = queryNorm
                0.41750383 = fieldWeight in 1205, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1205)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
     Detecting science-technology hierarchical linkages is beneficial for understanding deep interactions between science and technology (S&T). Previous studies have mainly focused on linear linkages between S&T but ignored their structural linkages. In this paper, we propose a network coupling approach to inspect hierarchical interactions of S&T by integrating their knowledge linkages and structural linkages. S&T knowledge networks are first enhanced with bidirectional encoder representations from transformers (BERT) knowledge alignment, and then their hierarchical structures are identified based on K-core decomposition. Hierarchical coupling preferences and strengths of the S&T networks over time are further calculated based on similarities of coupling nodes' degree distribution and similarities of coupling edges' weight distribution. Extensive experimental results indicate that our approach is feasible and robust in identifying the coupling hierarchy with superior performance compared to other isomorphism and dissimilarity algorithms. Our research extends the mindset of S&T linkage measurement by identifying patterns and paths of the interaction of S&T hierarchical knowledge.
    Source
     Journal of the Association for Information Science and Technology. 75(2024) no.2, S.167-187
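The K-core decomposition that Meng et al. use to expose hierarchical network structure can be illustrated with a short peeling routine (a generic sketch of the standard algorithm, not the authors' implementation): repeatedly remove the minimum-degree node, recording for each node the largest k at which it was peeled.

```python
from collections import defaultdict

def core_numbers(edges):
    """Assign each node its core number: the largest k such that the node
    belongs to a subgraph in which every node has degree >= k."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    degree = {n: len(nbrs) for n, nbrs in adj.items()}
    core, k = {}, 0
    while degree:
        n = min(degree, key=degree.get)  # peel the current minimum-degree node
        k = max(k, degree[n])            # the core level never decreases
        core[n] = k
        del degree[n]
        for m in adj[n]:                 # removing n demotes its neighbours
            if m in degree:
                degree[m] -= 1
    return core

# A triangle (nodes 1-3) with a pendant node 4: the triangle is the 2-core.
core = core_numbers([(1, 2), (2, 3), (1, 3), (3, 4)])
```

In the paper's setting the nodes would be knowledge elements of the S&T networks; higher core numbers mark the denser, more "central" layers of the hierarchy.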
  3. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.08
    0.07526165 = product of:
      0.100348875 = sum of:
        0.05343304 = product of:
          0.1602991 = sum of:
            0.1602991 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
              0.1602991 = score(doc=5820,freq=2.0), product of:
                0.42783085 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050463587 = queryNorm
                0.3746787 = fieldWeight in 5820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.33333334 = coord(1/3)
        0.015474406 = weight(_text_:science in 5820) [ClassicSimilarity], result of:
          0.015474406 = score(doc=5820,freq=2.0), product of:
            0.1329271 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.050463587 = queryNorm
            0.11641272 = fieldWeight in 5820, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
        0.03144143 = weight(_text_:research in 5820) [ClassicSimilarity], result of:
          0.03144143 = score(doc=5820,freq=6.0), product of:
            0.14397179 = queryWeight, product of:
              2.8529835 = idf(docFreq=6931, maxDocs=44218)
              0.050463587 = queryNorm
            0.21838607 = fieldWeight in 5820, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.8529835 = idf(docFreq=6931, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
      0.75 = coord(3/4)
    
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words is only a shallow text understanding; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge based text representations, which embed the external and carefully curated information from knowledge bases, and provide richer and structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
     This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word based and entity based representations together with their uncertainties considered. Lastly, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity graph representations for texts, and to rank documents using their structure representations. This dissertation overcomes the limitation of word based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards the new generation of intelligent, semantic, and structured information retrieval.
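The bag-of-entities idea described in the abstract (ranking performed in entity space rather than word space) can be reduced to a minimal sketch: represent each document by its entity annotations and score it by IDF-weighted overlap with the query's entities. This is an illustrative toy, not the dissertation's actual model; the entity annotations below are invented and would normally come from an entity linker.

```python
import math
from collections import Counter

def rank_bag_of_entities(query_entities, doc_annotations):
    """Rank documents by IDF-weighted overlap in entity space.
    doc_annotations maps doc_id -> list of entity annotations (toy data here)."""
    n_docs = len(doc_annotations)
    df = Counter()                       # entity document frequencies
    for ents in doc_annotations.values():
        df.update(set(ents))
    def idf(e):
        return math.log(1 + n_docs / df[e]) if df[e] else 0.0
    scores = {}
    for doc_id, ents in doc_annotations.items():
        bag = Counter(ents)              # the document's bag of entities
        scores[doc_id] = sum(bag[e] * idf(e) for e in query_entities)
    return sorted(scores, key=scores.get, reverse=True)

docs = {
    "d1": ["Carnegie_Mellon_University", "Information_retrieval", "Knowledge_base"],
    "d2": ["Information_retrieval", "Word_embedding"],
    "d3": ["Knowledge_base", "Entity_linking", "Information_retrieval"],
}
ranking = rank_bag_of_entities(["Knowledge_base", "Entity_linking"], docs)
```

Rare entities ("Entity_linking") contribute more than ubiquitous ones, mirroring how the word-space IDF above rewards rarer terms.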
    Content
     Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
    Imprint
    Pittsburgh, PA : Carnegie Mellon University, School of Computer Science, Language Technologies Institute
  4. Hocker, J.; Schindler, C.; Rittberger, M.: Participatory design for ontologies : a case study of an open science ontology for qualitative coding schemas (2020) 0.07
    0.07431042 = product of:
      0.09908056 = sum of:
        0.040941432 = weight(_text_:science in 179) [ClassicSimilarity], result of:
          0.040941432 = score(doc=179,freq=14.0), product of:
            0.1329271 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.050463587 = queryNorm
            0.30799913 = fieldWeight in 179, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.03125 = fieldNorm(doc=179)
        0.044464894 = weight(_text_:research in 179) [ClassicSimilarity], result of:
          0.044464894 = score(doc=179,freq=12.0), product of:
            0.14397179 = queryWeight, product of:
              2.8529835 = idf(docFreq=6931, maxDocs=44218)
              0.050463587 = queryNorm
            0.3088445 = fieldWeight in 179, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.8529835 = idf(docFreq=6931, maxDocs=44218)
              0.03125 = fieldNorm(doc=179)
        0.013674239 = product of:
          0.027348477 = sum of:
            0.027348477 = weight(_text_:22 in 179) [ClassicSimilarity], result of:
              0.027348477 = score(doc=179,freq=2.0), product of:
                0.17671488 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050463587 = queryNorm
                0.15476047 = fieldWeight in 179, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=179)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
     Purpose The open science movement calls for transparent and retraceable research processes. While infrastructures to support these practices in qualitative research are lacking, their design needs to consider different approaches and workflows. The paper is based on the definition of ontologies as shared conceptualizations of knowledge (Borst, 1999). The authors argue that participatory design is a good way to create these shared conceptualizations by giving domain experts and future users a voice in the design process via interviews, workshops and observations. Design/methodology/approach This paper presents a novel approach for creating ontologies in the field of open science using participatory design. As a case study, the creation of an ontology for qualitative coding schemas is presented. Coding schemas are an important result of qualitative research, and their reuse holds great potential for open science by making qualitative research more transparent and by enhancing the sharing of coding schemas and the teaching of qualitative methods. The participatory design process consisted of three parts: a requirement analysis using interviews and an observation, a design phase accompanied by interviews, and an evaluation phase based on user tests as well as interviews. Findings The research showed several positive outcomes of participatory design: higher commitment of users, mutual learning, high-quality feedback and better quality of the ontology. However, there are two obstacles in this approach: first, contradictory answers by the interviewees, which need to be balanced; second, this approach takes more time due to interview planning and analysis. Practical implications The long-term implication of the paper is to decentralize the design of open science infrastructures and to involve affected parties on several levels. Originality/value In ontology design, several methods exist that use user-centered design or participatory design with workshops. In this paper, the authors outline the potential of participatory design using mainly interviews to create an ontology for open science. The authors focus on close contact with researchers in order to build the ontology upon the experts' knowledge.
    Date
    20. 1.2015 18:30:22
    Footnote
     Contribution to a special issue: Showcasing Doctoral Research in Information Science.
  5. Innovations and advanced techniques in systems, computing sciences and software engineering (2008) 0.07
    0.071466416 = product of:
      0.09528856 = sum of:
        0.033503074 = weight(_text_:science in 4319) [ClassicSimilarity], result of:
          0.033503074 = score(doc=4319,freq=6.0), product of:
            0.1329271 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.050463587 = queryNorm
            0.25204095 = fieldWeight in 4319, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4319)
        0.0226909 = weight(_text_:research in 4319) [ClassicSimilarity], result of:
          0.0226909 = score(doc=4319,freq=2.0), product of:
            0.14397179 = queryWeight, product of:
              2.8529835 = idf(docFreq=6931, maxDocs=44218)
              0.050463587 = queryNorm
            0.15760657 = fieldWeight in 4319, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.8529835 = idf(docFreq=6931, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4319)
        0.039094582 = product of:
          0.078189164 = sum of:
            0.078189164 = weight(_text_:network in 4319) [ClassicSimilarity], result of:
              0.078189164 = score(doc=4319,freq=4.0), product of:
                0.22473325 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.050463587 = queryNorm
                0.34791988 = fieldWeight in 4319, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4319)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Innovations and Advanced Techniques in Systems, Computing Sciences and Software Engineering includes a set of rigorously reviewed world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of Computer Science, Software Engineering, Computer Engineering, and Systems Engineering and Sciences. Innovations and Advanced Techniques in Systems, Computing Sciences and Software Engineering includes selected papers form the conference proceedings of the International Conference on Systems, Computing Sciences and Software Engineering (SCSS 2007) which was part of the International Joint Conferences on Computer, Information and Systems Sciences and Engineering (CISSE 2007).
    LCSH
    Computer Science
    Computer network architectures
    Subject
    Computer Science
    Computer network architectures
  6. Gendt, M. van; Isaac, I.; Meij, L. van der; Schlobach, S.: Semantic Web techniques for multiple views on heterogeneous collections : a case study (2006) 0.06
    0.061673023 = product of:
      0.082230695 = sum of:
        0.023211608 = weight(_text_:science in 2418) [ClassicSimilarity], result of:
          0.023211608 = score(doc=2418,freq=2.0), product of:
            0.1329271 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.050463587 = queryNorm
            0.17461908 = fieldWeight in 2418, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=2418)
        0.03850773 = weight(_text_:research in 2418) [ClassicSimilarity], result of:
          0.03850773 = score(doc=2418,freq=4.0), product of:
            0.14397179 = queryWeight, product of:
              2.8529835 = idf(docFreq=6931, maxDocs=44218)
              0.050463587 = queryNorm
            0.2674672 = fieldWeight in 2418, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.8529835 = idf(docFreq=6931, maxDocs=44218)
              0.046875 = fieldNorm(doc=2418)
        0.020511357 = product of:
          0.041022714 = sum of:
            0.041022714 = weight(_text_:22 in 2418) [ClassicSimilarity], result of:
              0.041022714 = score(doc=2418,freq=2.0), product of:
                0.17671488 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050463587 = queryNorm
                0.23214069 = fieldWeight in 2418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2418)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Integrated digital access to multiple collections is a prominent issue for many Cultural Heritage institutions. The metadata describing diverse collections must be interoperable, which requires aligning the controlled vocabularies that are used to annotate objects from these collections. In this paper, we present an experiment where we match the vocabularies of two collections by applying the Knowledge Representation techniques established in recent Semantic Web research. We discuss the steps that are required for such matching, namely formalising the initial resources using Semantic Web languages, and running ontology mapping tools on the resulting representations. In addition, we present a prototype that enables the user to browse the two collections using the obtained alignment while still providing her with the original vocabulary structures.
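The vocabulary-alignment step the authors describe (matching the controlled vocabularies of two collections after formalising them in Semantic Web languages) can be illustrated with a toy label-based matcher: normalise preferred and alternative labels and propose an equivalence wherever labels coincide. Real ontology-mapping tools do far more (lexical similarity, structural evidence); the SKOS-like concept records below are invented for illustration.

```python
def align_vocabularies(vocab_a, vocab_b):
    """Propose concept equivalences between two SKOS-like vocabularies
    by exact match on normalised preferred/alternative labels."""
    def labels(concept):
        return {l.strip().lower()
                for l in [concept["prefLabel"], *concept.get("altLabels", [])]}
    # Index vocabulary B by normalised label for constant-time lookup.
    index = {}
    for cid, concept in vocab_b.items():
        for label in labels(concept):
            index.setdefault(label, set()).add(cid)
    matches = []
    for cid, concept in vocab_a.items():
        for label in labels(concept):
            for other in index.get(label, ()):
                matches.append((cid, other))
    return sorted(set(matches))

# Invented example records, in the spirit of two art-collection thesauri.
vocab_a = {"A1": {"prefLabel": "Woodcut", "altLabels": ["wood engraving"]}}
vocab_b = {"B7": {"prefLabel": "Wood engraving"},
           "B9": {"prefLabel": "Etching"}}
matches = align_vocabularies(vocab_a, vocab_b)
```

An alignment like this is exactly what the paper's browsing prototype consumes: it lets a query phrased in one collection's vocabulary retrieve objects annotated with the other's.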
    Series
    Lecture notes in computer science; vol.4172
    Source
    Research and advanced technology for digital libraries : 10th European conference, proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006
  7. Das, S.; Roy, S.: Faceted ontological model for brain tumour study (2016) 0.06
    0.057403285 = product of:
      0.07653771 = sum of:
        0.027355144 = weight(_text_:science in 2831) [ClassicSimilarity], result of:
          0.027355144 = score(doc=2831,freq=4.0), product of:
            0.1329271 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.050463587 = queryNorm
            0.20579056 = fieldWeight in 2831, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2831)
        0.032089777 = weight(_text_:research in 2831) [ClassicSimilarity], result of:
          0.032089777 = score(doc=2831,freq=4.0), product of:
            0.14397179 = queryWeight, product of:
              2.8529835 = idf(docFreq=6931, maxDocs=44218)
              0.050463587 = queryNorm
            0.22288933 = fieldWeight in 2831, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.8529835 = idf(docFreq=6931, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2831)
        0.017092798 = product of:
          0.034185596 = sum of:
            0.034185596 = weight(_text_:22 in 2831) [ClassicSimilarity], result of:
              0.034185596 = score(doc=2831,freq=2.0), product of:
                0.17671488 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050463587 = queryNorm
                0.19345059 = fieldWeight in 2831, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2831)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
     The purpose of this work is to develop an ontology-based framework for an information retrieval system that caters to specific user queries. To create such an ontology, information was obtained from a wide range of information sources involved in brain tumour study and research. The information thus obtained was compiled and analysed to provide a standard, reliable and relevant information base for our proposed system. Facet-based methodology has been used for ontology formalization for quite some time. Ontology formalization involves steps such as identification of the terminology, analysis, synthesis, standardization and ordering. The vast majority of ontologies being developed nowadays lack flexibility, which becomes a formidable constraint when it comes to interoperability. We found that a facet-based method provides a distinct guideline for the development of a robust and flexible model for the domain of brain tumours. Our attempt has been to bridge library and information science with computer science, which itself involved an experimental approach. We found the faceted approach to be enduring, as it supports properties such as navigation, exploration and faceted browsing. The computer-based brain tumour ontology supports the work of researchers gathering information on brain tumour research and allows users across the world to access new scientific information quickly, efficiently and intelligently.
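The faceted browsing property the abstract highlights can be sketched as a simple data structure: each concept is described by values along independent facets, and a browse operation filters on any combination of them. The facets and terms below are invented placeholders, not the paper's actual ontology.

```python
# Toy faceted records: each concept takes a value in each (assumed) facet.
concepts = [
    {"term": "glioma",       "tissue": "glial",     "grade": "II"},
    {"term": "glioblastoma", "tissue": "glial",     "grade": "IV"},
    {"term": "meningioma",   "tissue": "meningeal", "grade": "I"},
]

def faceted_browse(concepts, **filters):
    """Return the terms of all concepts matching every facet=value filter."""
    return [c["term"] for c in concepts
            if all(c.get(facet) == value for facet, value in filters.items())]

hits = faceted_browse(concepts, tissue="glial")
```

Because the facets are orthogonal, any subset of them can be combined at query time, which is what gives the faceted model the flexibility the authors contrast with rigid single-hierarchy ontologies.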
    Date
    12. 3.2016 13:21:22
  8. Almeida, M.B.: Revisiting ontologies : a necessary clarification (2013) 0.06
    0.056394402 = product of:
      0.112788804 = sum of:
        0.051902737 = weight(_text_:science in 1010) [ClassicSimilarity], result of:
          0.051902737 = score(doc=1010,freq=10.0), product of:
            0.1329271 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.050463587 = queryNorm
            0.39046016 = fieldWeight in 1010, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=1010)
        0.060886066 = weight(_text_:research in 1010) [ClassicSimilarity], result of:
          0.060886066 = score(doc=1010,freq=10.0), product of:
            0.14397179 = queryWeight, product of:
              2.8529835 = idf(docFreq=6931, maxDocs=44218)
              0.050463587 = queryNorm
            0.42290276 = fieldWeight in 1010, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.8529835 = idf(docFreq=6931, maxDocs=44218)
              0.046875 = fieldNorm(doc=1010)
      0.5 = coord(2/4)
    
    Abstract
    Looking for ontology in a search engine, one can find so many different approaches that it can be difficult to understand which field of research the subject belongs to and how it can be useful. The term ontology is employed within philosophy, computer science, and information science with different meanings. To take advantage of what ontology theories have to offer, one should understand what they address and where they come from. In information science, except for a few papers, there is no initiative toward clarifying what ontology really is and the connections that it fosters among different research fields. This article provides such a clarification. We begin by revisiting the meaning of the term in its original field, philosophy, to reach its current use in other research fields. We advocate that ontology is a genuine and relevant subject of research in information science. Finally, we conclude by offering our view of the opportunities for interdisciplinary research.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.8, S.1682-1693
  9. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.05
    0.05168058 = product of:
      0.10336116 = sum of:
        0.08014955 = product of:
          0.24044865 = sum of:
            0.24044865 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.24044865 = score(doc=400,freq=2.0), product of:
                0.42783085 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050463587 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
        0.023211608 = weight(_text_:science in 400) [ClassicSimilarity], result of:
          0.023211608 = score(doc=400,freq=2.0), product of:
            0.1329271 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.050463587 = queryNorm
            0.17461908 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
      0.5 = coord(2/4)
    
    Abstract
    On a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values being a group of child concepts. We call these attributes facets: classification has a few facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods heavily rely on hypernym detection, however, the faceted relations are parent-to-child links but the hypernym relation is a multi-hop, i.e., ancestor-to-descendent link with a specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendent relations from a data science corpus. And we propose a hierarchy growth algorithm to infer the parent-child links from the three types of relationships. It resolves conflicts by maintaining the acyclic structure of a hierarchy.
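The hierarchy-growth step described above (inferring parent-child links while keeping the result acyclic) can be sketched as incremental edge insertion with a reachability check: a candidate link is rejected whenever the child already reaches the parent. This is a generic sketch of cycle avoidance, not the authors' algorithm, and the candidate links below are invented.

```python
from collections import defaultdict

def grow_hierarchy(candidate_links):
    """Insert candidate (parent, child) links in order, skipping any link
    that would create a cycle (i.e. the child already reaches the parent)."""
    children = defaultdict(set)

    def reaches(a, b):
        # Depth-first search: is b reachable from a via parent->child links?
        stack, seen = [a], set()
        while stack:
            n = stack.pop()
            if n == b:
                return True
            if n in seen:
                continue
            seen.add(n)
            stack.extend(children[n])
        return False

    accepted = []
    for parent, child in candidate_links:
        if reaches(child, parent):      # this link would close a cycle: reject
            continue
        children[parent].add(child)
        accepted.append((parent, child))
    return accepted

links = [("classification", "svm"), ("svm", "kernel method"),
         ("kernel method", "classification")]   # the last link closes a cycle
kept = grow_hierarchy(links)
```

Keeping the structure acyclic is what makes the result a hierarchy rather than an arbitrary concept graph, which is the invariant the paper's growth algorithm maintains.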
    Content
     Cf.: https://aclanthology.org/D19-5317.pdf.
  10. Fonseca, F.: ¬The double role of ontologies in information science research (2007) 0.05
    0.049532443 = product of:
      0.09906489 = sum of:
        0.051902737 = weight(_text_:science in 277) [ClassicSimilarity], result of:
          0.051902737 = score(doc=277,freq=10.0), product of:
            0.1329271 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.050463587 = queryNorm
            0.39046016 = fieldWeight in 277, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=277)
        0.04716215 = weight(_text_:research in 277) [ClassicSimilarity], result of:
          0.04716215 = score(doc=277,freq=6.0), product of:
            0.14397179 = queryWeight, product of:
              2.8529835 = idf(docFreq=6931, maxDocs=44218)
              0.050463587 = queryNorm
            0.3275791 = fieldWeight in 277, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.8529835 = idf(docFreq=6931, maxDocs=44218)
              0.046875 = fieldNorm(doc=277)
      0.5 = coord(2/4)
    
    Abstract
    In philosophy, Ontology is the basic description of things in the world. In information science, an ontology refers to an engineering artifact, constituted by a specific vocabulary used to describe a certain reality. Ontologies have been proposed for validating both conceptual models and conceptual schemas; however, these roles are quite dissimilar. In this article, we show that ontologies can be better understood if we classify the different uses of the term as it appears in the literature. First, we explain Ontology (upper case O) as used in Philosophy. Then, we propose a differentiation between ontologies of information systems and ontologies for information systems. All three concepts have an important role in information science. We clarify the different meanings and uses of Ontology and ontologies through a comparison of research by Wand and Weber and by Guarino in ontology-driven information systems. The contributions of this article are twofold: (a) It provides a better understanding of what ontologies are, and (b) it explains the double role of ontologies in information science research.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.6, S.786-793
  11. Wen, B.; Horlings, E.; Zouwen, M. van der; Besselaar, P. van den: Mapping science through bibliometric triangulation : an experimental approach applied to water research (2017) 0.05
    Abstract
    The idea of constructing science maps based on bibliographic data has intrigued researchers for decades, and various techniques have been developed to map the structure of research disciplines. Most science mapping studies use a single method. However, as research fields have various properties, a valid map of a field should actually be composed of a set of maps derived from a series of investigations using different methods. That leads to the question of what can be learned from a combination (triangulation) of these different science maps. In this paper we propose a method for triangulation, using the example of water science. We combine three different mapping approaches: journal-journal citation relations (JJCR), shared author keywords (SAK), and title word-cited reference co-occurrence (TWRC). Our results demonstrate that triangulation of JJCR, SAK, and TWRC produces a more comprehensive picture than each method applied individually. The outcomes from the three different approaches can be associated with each other and systematically interpreted to provide insights into the complex multidisciplinary structure of the field of water research.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.3, S.724-738
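The triangulation described in the abstract above can be pictured as merging several independently derived similarity networks into one consensus map, where edges supported by all methods form the most robust core. The following is a minimal stdlib-only sketch of that idea; the topic names, edge weights, and mean-based weighting scheme are illustrative assumptions, not the authors' method:

```python
# Combine several bibliometric maps (edge-weight dicts) into a consensus map.
# Each map scores pairs of topics; the consensus records the mean weight and
# how many of the input methods support each edge.

def triangulate(*maps):
    """Merge edge-weight dicts; return {edge: (mean_weight, support_count)}."""
    edges = set().union(*maps)
    consensus = {}
    for edge in edges:
        weights = [m[edge] for m in maps if edge in m]
        consensus[edge] = (sum(weights) / len(weights), len(weights))
    return consensus

# Hypothetical edge weights standing in for the three mapping approaches
# (JJCR, SAK, TWRC) from the abstract.
jjcr = {("hydrology", "ecology"): 0.8, ("hydrology", "policy"): 0.3}
sak  = {("hydrology", "ecology"): 0.6}
twrc = {("hydrology", "ecology"): 0.7, ("ecology", "policy"): 0.4}

combined = triangulate(jjcr, sak, twrc)
# Edges backed by all three methods are the most robust part of the map.
core = {e: w for e, (w, n) in combined.items() if n == 3}
```

In this toy run only the hydrology-ecology edge is supported by all three maps, which is exactly the kind of cross-method agreement the triangulation argument relies on.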
  12. Sebastian, Y.: Literature-based discovery by learning heterogeneous bibliographic information networks (2017) 0.05
    Abstract
    Literature-based discovery (LBD) research aims at finding effective computational methods for predicting previously unknown connections between clusters of research papers from disparate research areas. Existing methods encompass two general approaches. The first approach searches for these unknown connections by examining the textual contents of research papers. In addition to textual features, the second approach incorporates structural features of the scientific literature, such as citation structures. These approaches, however, have not considered research papers' latent bibliographic metadata structures as important features that can be used for predicting previously unknown relationships between them. This thesis investigates a new graph-based LBD method that exploits the latent bibliographic metadata connections between pairs of research papers. The heterogeneous bibliographic information network is proposed as an efficient graph-based data structure for modeling the complex relationships between these metadata. In contrast to previous approaches, this method seamlessly combines textual and citation information in the form of path-based metadata features for predicting future co-citation links between research papers from disparate research fields. The results reported in this thesis provide evidence that the method is effective for reconstructing historical literature-based discovery hypotheses. This thesis also investigates the effects of semantic modeling and topic modeling on the performance of the proposed method. For semantic modeling, a general-purpose word sense disambiguation technique is proposed to reduce the lexical ambiguity in the titles and abstracts of research papers. The experimental results suggest that the reduced lexical ambiguity did not necessarily lead to better performance of the method. This thesis discusses some of the possible contributing factors to these results. Finally, topic modeling is used for learning the latent topical relations between research papers. The learned topic model is incorporated into the heterogeneous bibliographic information network graph and allows new predictive features to be learned. The results in this thesis suggest that topic modeling improves the performance of the proposed method by increasing the overall accuracy for predicting future co-citation links between disparate research papers.
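The path-based metadata features mentioned in the abstract above can be illustrated on a toy heterogeneous network: counting instances of meta-paths such as paper-author-paper or paper-keyword-paper between a pair of papers yields a feature vector a link-prediction classifier could consume. This is a simplified sketch over invented data, not the thesis author's implementation:

```python
# Toy heterogeneous bibliographic network: papers linked to metadata nodes.
# A meta-path feature for (a, b) counts shared intermediate nodes of one type,
# i.e. the number of paper-X-paper path instances between the two papers.

paper_authors = {"p1": {"smith", "chen"}, "p2": {"chen"}, "p3": {"lee"}}
paper_keywords = {"p1": {"ontology"}, "p2": {"ontology", "retrieval"}, "p3": {"retrieval"}}

def metapath_count(links, a, b):
    """Number of paper-X-paper path instances between papers a and b."""
    return len(links.get(a, set()) & links.get(b, set()))

def features(a, b):
    # One feature per meta-path type; a classifier for future co-citation
    # links would consume this vector.
    return [metapath_count(paper_authors, a, b),
            metapath_count(paper_keywords, a, b)]
```

Here p1 and p2 share both an author and a keyword, so they score higher on every meta-path feature than the unrelated pair p1 and p3.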
  13. Seidlmayer, E.: ¬An ontology of digital objects in philosophy : an approach for practical use in research (2018) 0.05
    Abstract
    The digitalization of research enables new scientific insights and methods, especially in the humanities. Nonetheless, electronic book editions, encyclopedias, mobile applications, and websites presenting research projects are not in broad use in academic philosophy. This stands in contrast to the large number of helpful tools that facilitate research and also open up new scientific subjects and approaches. A possible solution to this dilemma is the systematization and promotion of these tools in order to improve their accessibility and fully exploit the potential of digitalization for philosophy.
    Footnote
    Master's thesis in Library and Information Science, Fakultät für Informations- und Kommunikationswissenschaften, Technische Hochschule Köln. Amusingly, also indexed in Google Scholar under 'Eva, S.'.
  14. Jiang, Y.-C.; Li, H.: ¬The theoretical basis and basic principles of knowledge network construction in digital library (2023) 0.04
    Abstract
    Knowledge network construction (KNC) is the essence of dynamic knowledge architecture and is helpful for realizing ubiquitous knowledge services in digital libraries (DLs). The authors explore its theoretical foundations and basic rules in order to elucidate the basic principles of KNC in DLs. The results indicate that universal connection, the small-world phenomenon, relevance theory, and the unity and continuity of scientific development constitute the production tools, architectural aims, and scientific foundations of KNC in DLs. By analyzing the characteristics of KNC based on different types of knowledge linking, as well as the relationships between different forms of knowledge and the appropriate ways of linking them, the basic principle of KNC can be summarized as follows: let each form of knowledge linking show its particular strength, and let each form of knowledge manifestation serve its intended purpose in practice, so that subjective and objective knowledge networks are organically combined. This lays a solid theoretical foundation and provides an action guide for DLs constructing knowledge networks.
  15. Conde, A.; Larrañaga, M.; Arruarte, A.; Elorriaga, J.A.; Roth, D.: LiTeWi: a combined term extraction and entity linking method for eliciting educational ontologies from textbooks (2016) 0.04
    Abstract
    Major efforts have been conducted on ontology learning, that is, semiautomatic processes for the construction of domain ontologies from diverse sources of information. In the past few years, a research trend has focused on the construction of educational ontologies, that is, ontologies to be used for educational purposes. The identification of the terminology is crucial to build ontologies. Term extraction techniques allow the identification of the domain-related terms from electronic resources. This paper presents LiTeWi, a novel method that combines current unsupervised term extraction approaches for creating educational ontologies for technology-supported learning systems from electronic textbooks. LiTeWi uses Wikipedia as an additional information source. Wikipedia contains more than 30 million articles covering the terminology of nearly every domain in 288 languages, which makes it an appropriate generic corpus for term extraction. Furthermore, given that its content is available in several languages, it promotes both domain and language independence. LiTeWi is aimed at being used by teachers, who usually develop their didactic material from textbooks. To evaluate its performance, LiTeWi was tuned up using a textbook on object-oriented programming and then tested with two textbooks of different domains: astronomy and molecular biology.
    Date
    22. 1.2016 12:38:14
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.2, S.380-399
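The combination of term extraction with a Wikipedia check described in the LiTeWi abstract above can be sketched in a few lines: rank textbook terms by how much more frequent they are than in a generic reference corpus, and keep only candidates that match a Wikipedia article title. The scoring formula, token data, and title set below are illustrative assumptions, not the published method:

```python
from collections import Counter

def extract_terms(textbook_tokens, generic_counts, wiki_titles, top_n=3):
    """Rank domain-specific candidate terms from a textbook.

    Specificity is textbook frequency relative to a generic corpus
    (add-one smoothed); only terms matching a Wikipedia title survive.
    """
    counts = Counter(textbook_tokens)

    def specificity(term):
        return counts[term] / (1 + generic_counts.get(term, 0))

    candidates = [t for t in counts if t in wiki_titles]
    return sorted(candidates, key=specificity, reverse=True)[:top_n]

# Hypothetical mini-corpus from an object-oriented programming textbook.
textbook = ["class", "object", "inheritance", "object", "class", "the", "class"]
generic = {"the": 1000, "class": 5, "object": 4}   # generic reference counts
wiki = {"class", "object", "inheritance", "polymorphism"}

terms = extract_terms(textbook, generic, wiki)
```

Common words like "the" are filtered out both by the Wikipedia-title check and by their high generic-corpus frequency, which is the intuition behind using Wikipedia as a generic contrast corpus.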
  16. Kiren, T.: ¬A clustering based indexing technique of modularized ontologies for information retrieval (2017) 0.04
    Abstract
    Modular ontologies are built in a modular manner by combining modules from multiple relevant ontologies. Ontology heterogeneity also arises during modular ontology construction, because multiple ontologies are being dealt with in this process, so the ontologies need to be aligned before they can be combined. Existing approaches to ontology alignment compare all the concepts of each ontology to be aligned and are therefore not optimized in terms of time and search-space utilization. A new indexing technique based on modular ontology is proposed, together with an efficient ontology alignment technique that solves the heterogeneity problem during the construction of a modular ontology. Results are satisfactory: precision and recall improve by 8% and 10%, respectively. The values of Pearson's correlation coefficient for degree of similarity, time, search-space requirement, precision, and recall are close to 1, which shows that the results are significant. Further research can be carried out on using the modular-ontology-based indexing technique for multimedia information retrieval and biomedical information retrieval.
    Content
    Submitted to the Faculty of the Computer Science and Engineering Department of the University of Engineering and Technology Lahore in partial fulfillment of the requirements for the Degree of Doctor of Philosophy in Computer Science (2009 - 009-PhD-CS-04). Cf.: http://prr.hec.gov.pk/jspui/bitstream/123456789/8375/1/Taybah_Kiren_Computer_Science_HSR_2017_UET_Lahore_14.12.2017.pdf.
    Date
    20. 1.2015 18:30:22
    Imprint
    Lahore : University of Engineering and Technology / Department of Computer Science and Engineering
  17. Schmitz-Esser, W.: Language of general communication and concept compatibility (1996) 0.04
    Pages
    S.11-22
    Source
    Compatibility and integration of order systems: Research Seminar Proceedings of the TIP/ISKO Meeting, Warsaw, 13-15 September 1995
  18. Tudhope, D.; Hodge, G.: Terminology registries (2007) 0.04
    Date
    26.12.2011 13:22:07
    Source
    http://www.comp.glam.ac.uk/pages/research/hypermedia/nkos/nkos2007/programme.html
  19. Tramullas, J.; Garrido-Picazo, P.; Sánchez-Casabón, A.I.: Use of Wikipedia categories on information retrieval research : a brief review (2020) 0.04
    Abstract
    Wikipedia categories, a classification scheme built for organizing and describing Wikipedia articles, are being applied in computer science research. This paper adopts a systematic literature review approach in order to identify the different approaches to and uses of Wikipedia categories in information retrieval research. Several types of work are identified, depending on whether they study the category structure itself or use it as a tool for processing and analyzing documentary corpora other than Wikipedia. Information retrieval is identified as one of the major areas of use, in particular the refinement and improvement of search expressions and the construction of textual corpora. However, the available work shows that in many cases the research approaches applied and the results obtained can be integrated into a comprehensive and inclusive concept of information retrieval.
  20. Calegari, S.; Sanchez, E.: Object-fuzzy concept network : an enrichment of ontologies in semantic information retrieval (2008) 0.04
    Abstract
    This article shows how a fuzzy ontology-based approach can improve semantic document retrieval. After formally defining a fuzzy ontology and a fuzzy knowledge base, a new type of fuzzy relationship called (semantic) correlation, which links the concepts or entities in a fuzzy ontology, is discussed. These correlations, first assigned by experts, are updated after querying or when a document has been inserted into a database. Moreover, in order to define a dynamic knowledge of a domain adapting itself to the context, it is shown how to handle a tradeoff between the correct definition of an object, taken in the ontology structure, and the actual meaning assigned by individuals. The notion of a fuzzy concept network is extended to incorporate database objects, so that entities and documents can similarly be represented in the network. An information retrieval (IR) algorithm using an object-fuzzy concept network (O-FCN) is introduced and described. This algorithm allows us to derive a unique path among the entities involved in the query so as to obtain maximal semantic associations in the knowledge domain. Finally, the study has been validated by querying a database using fuzzy recall, fuzzy precision, and coefficient variant measures in the crisp and fuzzy cases.
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.13, S.2171-2185
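The relevance score attached to each result in this listing is produced by Lucene's ClassicSimilarity. A minimal re-computation of a single term's contribution, using the formula the search engine's explain output reports (queryWeight = idf x queryNorm; fieldWeight = sqrt(tf) x idf x fieldNorm), is shown below; the constants are taken from one entry's debug output, and this is a sketch of the scoring formula, not the search engine's code:

```python
import math

# Lucene ClassicSimilarity, per-term contribution to a document's score:
#   queryWeight = idf * queryNorm
#   fieldWeight = sqrt(freq) * idf * fieldNorm
#   contribution = queryWeight * fieldWeight
# (per-document coord factors then scale the summed contributions).

def term_score(freq, idf, query_norm, field_norm):
    query_weight = idf * query_norm
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight

# Constants as reported for the term "science" in one result:
# freq=12, idf=2.6341193, queryNorm=0.050463587, fieldNorm=0.0390625.
s = term_score(12, 2.6341193, 0.050463587, 0.0390625)
# s is approximately 0.0473805, the partial score shown for that term.
```

Note how fieldNorm (shorter fields score higher) and the square-rooted term frequency both damp raw counts, which is why entries with many distinct matching terms outrank entries that merely repeat one term.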

Languages

  • e 255
  • d 20
  • pt 4
  • f 1

Types

  • a 221
  • el 55
  • m 22
  • x 16
  • s 10
  • n 3
  • p 2
  • r 2
