Search (45 results, page 3 of 3)

  • language_ss:"e"
  • theme_ss:"Formale Begriffsanalyse"
  1. Carpineto, C.; Romano, G.: Order-theoretical ranking (2000) 0.00
    Abstract
    Current best-match ranking (BMR) systems perform well but cannot handle word mismatch between a query and a document. The best known alternative ranking method, hierarchical clustering-based ranking (HCR), seems to be more robust than BMR with respect to this problem, but it is hampered by theoretical and practical limitations. We present an approach to document ranking that explicitly addresses the word mismatch problem by exploiting interdocument similarity information in a novel way. Document ranking is seen as a query-document transformation driven by a conceptual representation of the whole document collection, into which the query is merged. Our approach is based on the theory of concept (or Galois) lattices, which, we argue, provides a powerful, well-founded, and computationally tractable framework to model the space in which documents and query are represented and to compute such a transformation. We compared information retrieval using concept lattice-based ranking (CLR) to BMR and HCR. The results showed that HCR was outperformed by CLR as well as BMR, and suggested that, of the two best methods, BMR achieved better performance than CLR on the whole document set, whereas CLR compared more favorably when only the first retrieved documents were used for evaluation. We also evaluated the three methods' specific ability to rank documents that did not match the query, in which case the superiority of CLR over BMR and HCR was apparent.
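    The concept (Galois) lattice underlying CLR can be sketched on a toy binary document-term context; the documents, terms, and data below are invented for illustration, not taken from the paper:

    ```python
    # Minimal sketch of formal concepts (extent/intent pairs) over a toy
    # binary document-term context; names and data are hypothetical.
    from itertools import combinations

    # context[doc] = set of terms occurring in that document
    context = {
        "d1": {"ranking", "lattice"},
        "d2": {"ranking", "query"},
        "d3": {"lattice", "query"},
    }
    terms = set().union(*context.values())

    def common_terms(docs):
        """Intent: terms shared by every document in `docs`."""
        return set(terms) if not docs else set.intersection(*(context[d] for d in docs))

    def docs_with(ts):
        """Extent: documents containing every term in `ts`."""
        return {d for d in context if ts <= context[d]}

    # A formal concept is a pair (extent, intent) closed under both maps;
    # enumerate them naively over all document subsets.
    concepts = set()
    for r in range(len(context) + 1):
        for docs in combinations(sorted(context), r):
            intent = common_terms(set(docs))
            extent = docs_with(intent)
            concepts.add((frozenset(extent), frozenset(intent)))

    for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
        print(sorted(extent), "<->", sorted(intent))
    ```

    Ordering these concepts by extent inclusion yields the lattice into which a query can be merged and ranked against neighboring documents.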
    Source
    Journal of the American Society for Information Science. 51(2000) no.7, S.587-601
    Type
    a
  2. Neuss, C.; Kent, R.E.: Conceptual analysis of resource meta-information (1995) 0.00
    Abstract
    With the continuously growing amount of Internet accessible information resources, locating relevant information in the WWW becomes increasingly difficult. Recent developments provide scalable mechanisms for maintaining indexes of network accessible information. In order to implement sophisticated retrieval engines, a means of automatic analysis and classification of document meta-information has to be found. Proposes the use of methods from the mathematical theory of concept analysis to analyze and interactively explore the information space defined by wide area resource discovery services.
    Source
    Computer networks and ISDN systems. 27(1995) no.6, S.973-984
    Type
    a
  3. Kumar, C.A.; Radvansky, M.; Annapurna, J.: Analysis of Vector Space Model, Latent Semantic Indexing and Formal Concept Analysis for information retrieval (2012) 0.00
    Abstract
    Latent Semantic Indexing (LSI), a variant of the classical Vector Space Model (VSM), is an Information Retrieval (IR) model that attempts to capture the latent semantic relationships between data items. Mathematical lattices, under the framework of Formal Concept Analysis (FCA), represent conceptual hierarchies in data and retrieve the information. However, both LSI and FCA use data represented in the form of matrices. The objective of this paper is to systematically analyze VSM, LSI and FCA for the task of IR using standard and real-life datasets.
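    The VSM/LSI contrast the paper analyzes can be sketched as follows; the tiny term-document matrix and query are invented for illustration:

    ```python
    # Hedged sketch: VSM cosine ranking vs. LSI ranking in a rank-k latent
    # space obtained by truncated SVD; all data here is hypothetical.
    import numpy as np

    # Rows = terms, columns = documents (toy counts).
    A = np.array([
        [2, 0, 1],   # term "lattice"
        [0, 2, 1],   # term "ranking"
        [1, 1, 0],   # term "query"
    ], dtype=float)
    q = np.array([1.0, 0.0, 1.0])  # query mentioning "lattice" and "query"

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    # VSM: score each document column directly against the query vector.
    vsm_scores = [cosine(q, A[:, j]) for j in range(A.shape[1])]

    # LSI: factor A and keep the top-k singular triplets, then compare the
    # folded-in query to the documents in the latent space.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = 2
    Uk, sk = U[:, :k], s[:k]
    docs_k = np.diag(sk) @ Vt[:k, :]   # documents in latent space
    q_k = Uk.T @ q                     # folded-in query
    lsi_scores = [cosine(q_k, docs_k[:, j]) for j in range(A.shape[1])]

    print("VSM:", np.round(vsm_scores, 3))
    print("LSI:", np.round(lsi_scores, 3))
    ```

    With k below the matrix rank, LSI can move documents that share no query term closer to the query, which is the word-mismatch effect both LSI and FCA-based methods target.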
    Source
    Cybernetics and information technologies. 12(2012) no.1, S.34-48
    Type
    a
  4. Eklund, P.W.: Logic-based networks : concept graphs and conceptual structures (2000) 0.00
    Abstract
    Logic-based networks are semantic networks that support reasoning capabilities. In this paper, knowledge processing within logic-based networks is viewed as three stages. The first stage involves the formation of concepts and relations: the basic primitives with which we wish to formulate knowledge. The second stage involves the formation of well-formed formulas that express knowledge about the primitive concepts and relations once isolated. The final stage involves efficiently processing the wffs to the desired end. Our research involves each of these steps as they relate to Sowa's conceptual structures and Wille's concept lattices. Formal Concept Analysis gives us a capability to perform concept formation via symbolic machine learning. Concept(ual) Graphs provide a means to describe relational properties between primitive concept and relation types. Finally, techniques from other areas of computer science are required to compute logic-based networks efficiently. This paper illustrates the three stages of knowledge processing in practical terms using examples from our research.
    Pages
    S.399-420
    Type
    a
  5. Sowa, J.F.: Knowledge representation : logical, philosophical, and computational foundations (2000) 0.00
    Pages
    XIV, 594 S.