Search (38 results, page 1 of 2)

  • theme_ss:"Semantisches Umfeld in Indexierung u. Retrieval"
  1. Zeng, M.L.; Gracy, K.F.; Zumer, M.: Using a semantic analysis tool to generate subject access points : a study using Panofsky's theory and two research samples (2014) 0.05
    0.050582923 = product of:
      0.101165846 = sum of:
        0.101165846 = sum of:
          0.059190683 = weight(_text_:theory in 1464) [ClassicSimilarity], result of:
            0.059190683 = score(doc=1464,freq=2.0), product of:
              0.21471956 = queryWeight, product of:
                4.1583924 = idf(docFreq=1878, maxDocs=44218)
                0.05163523 = queryNorm
              0.27566507 = fieldWeight in 1464, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.1583924 = idf(docFreq=1878, maxDocs=44218)
                0.046875 = fieldNorm(doc=1464)
          0.041975167 = weight(_text_:22 in 1464) [ClassicSimilarity], result of:
            0.041975167 = score(doc=1464,freq=2.0), product of:
              0.18081778 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05163523 = queryNorm
              0.23214069 = fieldWeight in 1464, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1464)
      0.5 = coord(1/2)
    
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
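    The score trees in this listing are Lucene ClassicSimilarity (TF-IDF) explanations. The arithmetic of one leaf can be reproduced directly; the sketch below recomputes the "theory" term weight from entry 1, assuming Lucene's classic formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)):

```python
import math

def tf(freq):
    # ClassicSimilarity term frequency: square root of the raw frequency
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    # ClassicSimilarity inverse document frequency
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    # score = queryWeight * fieldWeight, where
    #   queryWeight = idf * queryNorm
    #   fieldWeight = tf * idf * fieldNorm
    i = idf(doc_freq, max_docs)
    query_weight = i * query_norm
    field_weight = tf(freq) * i * field_norm
    return query_weight * field_weight

# Numbers taken from result 1, term "theory" (doc 1464):
s = term_score(freq=2.0, doc_freq=1878, max_docs=44218,
               query_norm=0.05163523, field_norm=0.046875)
print(s)  # ~0.059190683, matching the leaf in the explain tree above
```

    Multiplying queryWeight (0.2147) by fieldWeight (0.2757) reproduces the 0.059190683 leaf; the top-level 0.0506 then follows from summing both term weights and multiplying by coord(1/2).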
  2. Boyack, K.W.; Wylie, B.N.; Davidson, G.S.: Information Visualization, Human-Computer Interaction, and Cognitive Psychology : Domain Visualizations (2002) 0.02
    0.024734104 = product of:
      0.049468208 = sum of:
        0.049468208 = product of:
          0.098936416 = sum of:
            0.098936416 = weight(_text_:22 in 1352) [ClassicSimilarity], result of:
              0.098936416 = score(doc=1352,freq=4.0), product of:
                0.18081778 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05163523 = queryNorm
                0.54716086 = fieldWeight in 1352, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1352)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 2.2003 17:25:39
    22. 2.2003 18:17:40
  3. Smeaton, A.F.; Rijsbergen, C.J. van: The retrieval effects of query expansion on a feedback document retrieval system (1983) 0.02
    0.024485514 = product of:
      0.048971027 = sum of:
        0.048971027 = product of:
          0.097942054 = sum of:
            0.097942054 = weight(_text_:22 in 2134) [ClassicSimilarity], result of:
              0.097942054 = score(doc=2134,freq=2.0), product of:
                0.18081778 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05163523 = queryNorm
                0.5416616 = fieldWeight in 2134, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2134)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    30. 3.2001 13:32:22
  4. Weiermann, S.L.: Semantische Netze und Begriffsdeskription in der Wissensrepräsentation (2000) 0.02
    0.024414912 = product of:
      0.048829824 = sum of:
        0.048829824 = product of:
          0.09765965 = sum of:
            0.09765965 = weight(_text_:theory in 3001) [ClassicSimilarity], result of:
              0.09765965 = score(doc=3001,freq=4.0), product of:
                0.21471956 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.05163523 = queryNorm
                0.45482418 = fieldWeight in 3001, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3001)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    LCSH
    Information representation (Information theory)
    Subject
    Information representation (Information theory)
  5. Rekabsaz, N. et al.: Toward optimized multimodal concept indexing (2016) 0.02
    0.017489653 = product of:
      0.034979306 = sum of:
        0.034979306 = product of:
          0.06995861 = sum of:
            0.06995861 = weight(_text_:22 in 2751) [ClassicSimilarity], result of:
              0.06995861 = score(doc=2751,freq=2.0), product of:
                0.18081778 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05163523 = queryNorm
                0.38690117 = fieldWeight in 2751, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2751)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 2.2016 18:25:22
  6. Kozikowski, P. et al.: Support of part-whole relations in query answering (2016) 0.02
    0.017489653 = product of:
      0.034979306 = sum of:
        0.034979306 = product of:
          0.06995861 = sum of:
            0.06995861 = weight(_text_:22 in 2754) [ClassicSimilarity], result of:
              0.06995861 = score(doc=2754,freq=2.0), product of:
                0.18081778 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05163523 = queryNorm
                0.38690117 = fieldWeight in 2754, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2754)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 2.2016 18:25:22
  7. Marx, E. et al.: Exploring term networks for semantic search over RDF knowledge graphs (2016) 0.02
    0.017489653 = product of:
      0.034979306 = sum of:
        0.034979306 = product of:
          0.06995861 = sum of:
            0.06995861 = weight(_text_:22 in 3279) [ClassicSimilarity], result of:
              0.06995861 = score(doc=3279,freq=2.0), product of:
                0.18081778 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05163523 = queryNorm
                0.38690117 = fieldWeight in 3279, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3279)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  8. Kopácsi, S. et al.: Development of a classification server to support metadata harmonization in a long term preservation system (2016) 0.02
    0.017489653 = product of:
      0.034979306 = sum of:
        0.034979306 = product of:
          0.06995861 = sum of:
            0.06995861 = weight(_text_:22 in 3280) [ClassicSimilarity], result of:
              0.06995861 = score(doc=3280,freq=2.0), product of:
                0.18081778 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05163523 = queryNorm
                0.38690117 = fieldWeight in 3280, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3280)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  9. Darányi, S.; Wittek, P.: Demonstrating conceptual dynamics in an evolving text collection (2013) 0.02
    0.017439222 = product of:
      0.034878444 = sum of:
        0.034878444 = product of:
          0.06975689 = sum of:
            0.06975689 = weight(_text_:theory in 1137) [ClassicSimilarity], result of:
              0.06975689 = score(doc=1137,freq=4.0), product of:
                0.21471956 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.05163523 = queryNorm
                0.3248744 = fieldWeight in 1137, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1137)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Based on real-world user demands, we demonstrate how animated visualization of evolving text corpora displays the underlying dynamics of semantic content. To interpret the results, one needs a dynamic theory of word meaning. We suggest that conceptual dynamics as the interaction between kinds of intellectual and emotional content and language is key for such a theory. We demonstrate our method by two-way seriation, which is a popular technique to analyze groups of similar instances and their features as well as the connections between the groups themselves. The two-way seriated data may be visualized as a two-dimensional heat map or as a three-dimensional landscape in which color codes or height correspond to the values in the matrix. In this article, we focus on two-way seriation of sparse data in the Reuters-21578 test collection. To achieve a meaningful visualization, we introduce a compactly supported convolution kernel similar to filter kernels used in image reconstruction and geostatistics. This filter populates the high-dimensional sparse space with values that interpolate nearby elements and provides insight into the clustering structure. We also extend two-way seriation to deal with online updates of both the row and column spaces and, combined with the convolution kernel, demonstrate a three-dimensional visualization of dynamics.
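    Two-way seriation, as described in this abstract, reorders both the rows and the columns of a matrix so that similar instances and features cluster together along the diagonal. As a minimal illustration only (a simple barycenter heuristic on a toy matrix, not the authors' convolution-kernel method; all names are invented here):

```python
def _order(vectors):
    # sort indices by the barycenter (weighted mean position) of each vector
    def bary(v):
        w = sum(v)
        return sum(j * x for j, x in enumerate(v)) / w if w else 0.0
    return sorted(range(len(vectors)), key=lambda i: bary(vectors[i]))

def two_way_seriate(m, sweeps=10):
    # alternately reorder rows and columns toward their barycenters
    rows, cols = list(range(len(m))), list(range(len(m[0])))
    for _ in range(sweeps):
        sub = [[m[i][j] for j in cols] for i in rows]
        rows = [rows[i] for i in _order(sub)]
        sub = [[m[i][j] for j in cols] for i in rows]
        cols = [cols[j] for j in _order([list(col) for col in zip(*sub)])]
    return rows, cols

# An interleaved two-block matrix: seriation should separate the blocks.
m = [[1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1]]
r, c = two_way_seriate(m)
print([[m[i][j] for j in c] for i in r])  # block-diagonal after reordering
```

    The reordered matrix is exactly what a heat-map visualization of seriated data would display: dense blocks along the diagonal.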
  10. Nagao, M.: Knowledge and inference (1990) 0.02
    0.017439222 = product of:
      0.034878444 = sum of:
        0.034878444 = product of:
          0.06975689 = sum of:
            0.06975689 = weight(_text_:theory in 3304) [ClassicSimilarity], result of:
              0.06975689 = score(doc=3304,freq=4.0), product of:
                0.21471956 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.05163523 = queryNorm
                0.3248744 = fieldWeight in 3304, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3304)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    LCSH
    Knowledge, Theory of
    Subject
    Knowledge, Theory of
  11. Adhikari, A.; Dutta, B.; Dutta, A.; Mondal, D.; Singh, S.: An intrinsic information content-based semantic similarity measure considering the disjoint common subsumers of concepts of an ontology (2018) 0.02
    0.017439222 = product of:
      0.034878444 = sum of:
        0.034878444 = product of:
          0.06975689 = sum of:
            0.06975689 = weight(_text_:theory in 4372) [ClassicSimilarity], result of:
              0.06975689 = score(doc=4372,freq=4.0), product of:
                0.21471956 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.05163523 = queryNorm
                0.3248744 = fieldWeight in 4372, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4372)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Finding similarity between concepts based on semantics has become a new trend in many applications (e.g., biomedical informatics, natural language processing). Measuring the Semantic Similarity (SS) with higher accuracy is a challenging task. In this context, the Information Content (IC)-based SS measure has gained popularity over the others. The notion of IC evolves from the science of information theory. Information theory has very high potential to characterize the semantics of concepts. Designing an IC-based SS framework comprises (i) an IC calculator, and (ii) an SS calculator. In this article, we propose a generic intrinsic IC-based SS calculator. We also introduce here a new structural aspect of an ontology called DCS (Disjoint Common Subsumers) that plays a significant role in deciding the similarity between two concepts. We evaluated our proposed similarity calculator with the existing intrinsic IC-based similarity calculators, as well as corpora-dependent similarity calculators using several benchmark data sets. The experimental results show that the proposed similarity calculator produces a high correlation with human evaluation over the existing state-of-the-art IC-based similarity calculators.
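    Intrinsic IC, as used in this abstract, derives a concept's information content from the ontology's structure alone, with no corpus counts. A minimal sketch of one common formulation (a Seco-style intrinsic IC combined with Lin's similarity; the authors' DCS-based calculator is not reproduced here, and the toy numbers are illustrative):

```python
import math

def intrinsic_ic(n_hyponyms, n_concepts):
    # Seco-style intrinsic IC: leaf concepts (no hyponyms) are maximally
    # informative; the ontology root tends toward zero information
    return 1.0 - math.log(n_hyponyms + 1) / math.log(n_concepts)

def lin_similarity(ic_a, ic_b, ic_lcs):
    # Lin's measure: information shared via the least common subsumer,
    # relative to the concepts' own information content
    return 2.0 * ic_lcs / (ic_a + ic_b) if (ic_a + ic_b) else 0.0

# Toy ontology with 100 concepts: two sibling leaves under one subsumer
ic_leaf = intrinsic_ic(0, 100)      # = 1.0 (a leaf)
ic_subsumer = intrinsic_ic(9, 100)  # = 1 - log(10)/log(100) = 0.5
print(lin_similarity(ic_leaf, ic_leaf, ic_subsumer))  # 0.5
```

    The design point the abstract argues is that which subsumers enter this computation (their DCS notion) materially changes the resulting similarity.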
  12. Sacco, G.M.: Dynamic taxonomies and guided searches (2006) 0.02
    0.017313873 = product of:
      0.034627747 = sum of:
        0.034627747 = product of:
          0.06925549 = sum of:
            0.06925549 = weight(_text_:22 in 5295) [ClassicSimilarity], result of:
              0.06925549 = score(doc=5295,freq=4.0), product of:
                0.18081778 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05163523 = queryNorm
                0.38301262 = fieldWeight in 5295, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5295)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 17:56:22
  13. Nie, J.-Y.: Query expansion and query translation as logical inference (2003) 0.01
    0.014797671 = product of:
      0.029595342 = sum of:
        0.029595342 = product of:
          0.059190683 = sum of:
            0.059190683 = weight(_text_:theory in 1425) [ClassicSimilarity], result of:
              0.059190683 = score(doc=1425,freq=2.0), product of:
                0.21471956 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.05163523 = queryNorm
                0.27566507 = fieldWeight in 1425, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1425)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A number of studies have examined the problems of query expansion in monolingual Information Retrieval (IR), and query translation for cross-language IR. However, no link has been made between them. This article first shows that query translation is a special case of query expansion. There is also another set of studies on inferential IR. Again, there is no relationship established with query translation or query expansion. The second claim of this article is that logical inference is a general form that covers query expansion and query translation. This analysis provides a unified view of different subareas of IR. We further develop the inferential IR approach in two particular contexts: using fuzzy logic and probability theory. The evaluation formulas obtained are shown to strongly correspond to those used in other IR models. This indicates that inference is indeed the core of advanced IR.
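    The claim that query translation is a special case of query expansion can be made concrete: both replace an exact term match with an inference step through a weighted term relation. A minimal sketch (hedged: the relation weights, names, and the max/average combination are illustrative stand-ins, not the article's fuzzy-logic or probabilistic formulas):

```python
def inference_score(doc_terms, query_terms, relation):
    # score(d -> q): each query term is inferred from the best-matching
    # document term; an exact match has strength 1.0
    def strength(t, q):
        return 1.0 if t == q else relation.get((t, q), 0.0)
    per_term = [max((strength(t, q) for t in doc_terms), default=0.0)
                for q in query_terms]
    return sum(per_term) / len(per_term)

# One relation table serves both roles: a monolingual synonym pair
# (expansion) and a bilingual pair (translation).
rel = {("car", "automobile"): 0.9, ("voiture", "car"): 0.8}
print(inference_score(["car", "engine"], ["automobile"], rel))  # 0.9, expansion
print(inference_score(["voiture"], ["car"], rel))               # 0.8, translation
```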
  14. Johnson, J.D.: On contexts of information seeking (2003) 0.01
    0.014797671 = product of:
      0.029595342 = sum of:
        0.029595342 = product of:
          0.059190683 = sum of:
            0.059190683 = weight(_text_:theory in 1082) [ClassicSimilarity], result of:
              0.059190683 = score(doc=1082,freq=2.0), product of:
                0.21471956 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.05163523 = queryNorm
                0.27566507 = fieldWeight in 1082, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1082)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    While surprisingly little has been written about context at a meaningful level, context is central to most theoretical approaches to information seeking. In this essay I explore in more detail three senses of context. First, I look at context as equivalent to the situation in which a process is immersed. Second, I discuss contingency approaches that detail active ingredients of the situation that have specific, predictable effects. Third, I examine major frameworks for meaning systems. Then, I discuss how a deeper appreciation of context can enhance our understanding of the process of information seeking by examining two vastly different contexts in which it occurs: organizational and cancer-related, an exemplar of everyday life information seeking. This essay concludes with a discussion of the value that can be added to information seeking research and theory as a result of a deeper appreciation of context, particularly in terms of our current multi-contextual environment and individuals taking an active role in contextualizing.
  15. Landauer, T.K.; Foltz, P.W.; Laham, D.: ¬An introduction to Latent Semantic Analysis (1998) 0.01
    0.014797671 = product of:
      0.029595342 = sum of:
        0.029595342 = product of:
          0.059190683 = sum of:
            0.059190683 = weight(_text_:theory in 1162) [ClassicSimilarity], result of:
              0.059190683 = score(doc=1162,freq=2.0), product of:
                0.21471956 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.05163523 = queryNorm
                0.27566507 = fieldWeight in 1162, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1162)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Latent Semantic Analysis (LSA) is a theory and method for extracting and representing the contextual-usage meaning of words by statistical computations applied to a large corpus of text (Landauer and Dumais, 1997). The underlying idea is that the aggregate of all the word contexts in which a given word does and does not appear provides a set of mutual constraints that largely determines the similarity of meaning of words and sets of words to each other. The adequacy of LSA's reflection of human knowledge has been established in a variety of ways. For example, its scores overlap those of humans on standard vocabulary and subject matter tests; it mimics human word sorting and category judgments; it simulates word-word and passage-word lexical priming data; and as reported in 3 following articles in this issue, it accurately estimates passage coherence, learnability of passages by individual students, and the quality and quantity of knowledge contained in an essay.
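    Computationally, the aggregation of word contexts that LSA exploits is a truncated SVD of a term-document matrix. A minimal sketch (the toy matrix and vocabulary are invented for illustration; real LSA applies log-entropy weighting before the SVD, omitted here):

```python
import numpy as np

# Tiny term-document count matrix (rows = terms, cols = documents).
# Docs 0 and 1 share nautical vocabulary; doc 2 is about cooking.
terms = ["ship", "boat", "ocean", "recipe", "oven"]
X = np.array([[1, 0, 0],
              [0, 1, 0],
              [1, 1, 0],
              [0, 0, 2],
              [0, 0, 1]], dtype=float)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
docs = (np.diag(s[:k]) @ Vt[:k]).T   # documents in the k-dim latent space

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Related docs end up near cosine 1, unrelated docs near 0,
# even though docs 0 and 1 share no terms with doc 2 at all.
print(cos(docs[0], docs[1]), cos(docs[0], docs[2]))
```

    Note that docs 0 and 1 also share no term directly ("ship" vs. "boat"); their latent similarity comes entirely from co-occurring with "ocean", which is the mutual-constraint effect the abstract describes.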
  16. Gnoli, C.; Pusterla, L.; Bendiscioli, A.; Recinella, C.: Classification for collections mapping and query expansion (2016) 0.01
    0.014797671 = product of:
      0.029595342 = sum of:
        0.029595342 = product of:
          0.059190683 = sum of:
            0.059190683 = weight(_text_:theory in 3102) [ClassicSimilarity], result of:
              0.059190683 = score(doc=3102,freq=2.0), product of:
                0.21471956 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.05163523 = queryNorm
                0.27566507 = fieldWeight in 3102, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3102)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Proceedings of the 15th European Networked Knowledge Organization Systems Workshop (NKOS 2016) co-located with the 20th International Conference on Theory and Practice of Digital Libraries 2016 (TPDL 2016), Hannover, Germany, September 9, 2016. Ed. by Philipp Mayr et al. [http://ceur-ws.org/Vol-1676/=urn:nbn:de:0074-1676-5]
  17. Efthimiadis, E.N.: End-users' understanding of thesaural knowledge structures in interactive query expansion (1994) 0.01
    0.013991722 = product of:
      0.027983444 = sum of:
        0.027983444 = product of:
          0.055966888 = sum of:
            0.055966888 = weight(_text_:22 in 5693) [ClassicSimilarity], result of:
              0.055966888 = score(doc=5693,freq=2.0), product of:
                0.18081778 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05163523 = queryNorm
                0.30952093 = fieldWeight in 5693, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5693)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    30. 3.2001 13:35:22
  18. Ng, K.B.: Toward a theoretical framework for understanding the relationship between situated action and planned action models of behavior in information retrieval contexts : contributions from phenomenology (2002) 0.01
    0.012331394 = product of:
      0.024662787 = sum of:
        0.024662787 = product of:
          0.049325574 = sum of:
            0.049325574 = weight(_text_:theory in 2588) [ClassicSimilarity], result of:
              0.049325574 = score(doc=2588,freq=2.0), product of:
                0.21471956 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.05163523 = queryNorm
                0.2297209 = fieldWeight in 2588, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2588)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In human-computer interaction (HCI), a successful interaction sequence can take on its own momentum and drift away from what the user originally planned. However, this does not mean that planned actions play no important role in the overall performance. In this paper, the author constructs a line of argument to demonstrate that it is impossible to consider an action without an a priori plan, even according to the phenomenological position taken for granted by situated action theory. Based on the phenomenological analysis of problematic situations and typification, the author argues that, just like "situated-ness", the "planned-ness" of an action should also be understood in the context of the situation. Successful plans can be developed and executed for familiar contexts. The first part of the paper treats information seeking behavior as a special type of social action and applies Alfred Schutz's phenomenological sociology to understand the importance and necessity of a plan. The second part reports results of a quasi-experiment focusing on plan deviation within an information seeking context. It was found that when the searcher's situation changed from problematic to non-problematic, the degree of plan deviation decreased significantly. These results support the argument proposed in the first part of the paper.
  19. Baofu, P.: The future of information architecture : conceiving a better way to understand taxonomy, network, and intelligence (2008) 0.01
    0.012331394 = product of:
      0.024662787 = sum of:
        0.024662787 = product of:
          0.049325574 = sum of:
            0.049325574 = weight(_text_:theory in 2257) [ClassicSimilarity], result of:
              0.049325574 = score(doc=2257,freq=2.0), product of:
                0.21471956 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.05163523 = queryNorm
                0.2297209 = fieldWeight in 2257, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2257)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Future of Information Architecture examines issues surrounding why information is processed, stored and applied in the way that it has been, since time immemorial. Contrary to the conventional wisdom held by many scholars in human history, the recurrent debate on the explanation of the most basic categories of information (e.g. space, time, causation, quality, quantity) has been misconstrued, to the effect that there exist deeper categories and principles behind these categories of information, with enormous implications for our understanding of reality in general. To understand this, the book is organised into four main parts: Part I begins with the vital question concerning the role of information within the context of the larger theoretical debate in the literature. Part II provides a critical examination of the nature of data taxonomy from the main perspectives of culture, society, nature and the mind. Part III constructively investigates the world of information network from the main perspectives of culture, society, nature and the mind. Part IV proposes six main theses in the author's synthetic theory of information architecture, namely: (a) the first thesis on the simpleness-complicatedness principle, (b) the second thesis on the exactness-vagueness principle, (c) the third thesis on the slowness-quickness principle, (d) the fourth thesis on the order-chaos principle, (e) the fifth thesis on the symmetry-asymmetry principle, and (f) the sixth thesis on the post-human stage.
  20. Chebil, W.; Soualmia, L.F.; Omri, M.N.; Darmoni, S.F.: Indexing biomedical documents with a possibilistic network (2016) 0.01
    0.012331394 = product of:
      0.024662787 = sum of:
        0.024662787 = product of:
          0.049325574 = sum of:
            0.049325574 = weight(_text_:theory in 2854) [ClassicSimilarity], result of:
              0.049325574 = score(doc=2854,freq=2.0), product of:
                0.21471956 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.05163523 = queryNorm
                0.2297209 = fieldWeight in 2854, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2854)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this article, we propose a new approach for indexing biomedical documents based on a possibilistic network that carries out partial matching between documents and biomedical vocabulary. The main contribution of our approach is to deal with the imprecision and uncertainty of the indexing task using possibility theory. We enhance estimation of the similarity between a document and a given concept using the two measures of possibility and necessity. Possibility estimates the extent to which a document is not similar to the concept. The second measure can provide confirmation that the document is similar to the concept. Our contribution also reduces the limitation of partial matching. Although the latter allows extracting from the document other variants of terms than those in dictionaries, it also generates irrelevant information. Our objective is to filter the index using the knowledge provided by the Unified Medical Language System®. Experiments were carried out on different corpora, showing encouraging results (the improvement rate is +26.37% in terms of mean average precision when compared with the baseline).
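    The two measures named in this abstract come from possibility theory: Π(A) is the maximum plausibility over the outcomes in A, and N(A) = 1 - Π(not A) confirms A to the extent that every alternative outside it is implausible. A minimal sketch of just these two measures (the distribution and concept names are invented for illustration; the authors' possibilistic network is not reproduced):

```python
def possibility(event, pi):
    # Pi(A) = max of the possibility distribution over outcomes in A
    return max((pi[w] for w in event), default=0.0)

def necessity(event, pi):
    # N(A) = 1 - Pi(not A): an event is certain to the extent that
    # every outcome outside it is implausible
    complement = [w for w in pi if w not in event]
    return 1.0 - possibility(complement, pi)

# Toy distribution: how plausible is it that a document is about each concept?
pi = {"melanoma": 1.0, "naevus": 0.6, "burn": 0.2}
about_skin_lesions = ["melanoma", "naevus"]
print(possibility(about_skin_lesions, pi))  # 1.0: fully possible
print(necessity(about_skin_lesions, pi))    # 0.8: strongly confirmed
```

    The asymmetry is the point: possibility alone can only rule an assignment out (a low Π rejects it), while a high necessity positively confirms it, which is how the two measures divide the work in the abstract.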

Languages

  • e 32
  • d 5