Search (6 results, page 1 of 1)

  • author_ss:"Bruza, P.D."
  1. Huibers, T.W.C.; Bruza, P.D.: Situations, a general framework for studying information retrieval (1996) 0.02
    0.022235535 = product of:
      0.04447107 = sum of:
        0.04447107 = sum of:
          0.007030784 = weight(_text_:a in 6963) [ClassicSimilarity], result of:
            0.007030784 = score(doc=6963,freq=6.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.13239266 = fieldWeight in 6963, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=6963)
          0.037440285 = weight(_text_:22 in 6963) [ClassicSimilarity], result of:
            0.037440285 = score(doc=6963,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.23214069 = fieldWeight in 6963, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=6963)
      0.5 = coord(1/2)
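
    The breakdown above is Lucene's ClassicSimilarity explain output. A minimal sketch of how those figures compose, assuming Lucene's default TF-IDF formulas (the constants are copied verbatim from the tree for doc 6963):

```python
import math

def idf(doc_freq: int, max_docs: int) -> float:
    # Classic Lucene idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq: float, doc_freq: int, max_docs: int,
               query_norm: float, field_norm: float) -> float:
    tf = math.sqrt(freq)                       # tf(freq) = sqrt(termFreq)
    term_idf = idf(doc_freq, max_docs)
    query_weight = term_idf * query_norm       # queryWeight = idf * queryNorm
    field_weight = tf * term_idf * field_norm  # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight         # score = queryWeight * fieldWeight

score_a  = term_score(6.0, 37942, 44218, 0.046056706, 0.046875)  # ~0.00703
score_22 = term_score(2.0,  3622, 44218, 0.046056706, 0.046875)  # ~0.03744
print(0.5 * (score_a + score_22))  # coord(1/2) halves the sum: ~0.02224
```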
    
    Abstract
    Presents a framework for the theoretical comparison of information retrieval models based on how the models decide aboutness. The framework is based on concepts emerging from the field of situation theory. So-called infons and profons represent elementary information carriers, which can be manipulated by union and fusion operators. These operators allow relationships between information carriers to be established. Sets of infons form so-called situations, which are used to model the information borne by objects such as documents. Demonstrates how an arbitrary information retrieval model can be mapped into the framework via special functions defined for this purpose, depending on the model at hand. Two examples are given, based on the Boolean retrieval and coordination-level matching models. Starting from an axiomatization of aboutness, retrieval models can be compared according to the axioms by which they are governed.
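
    As a toy illustration of the mapping the abstract describes, and not the paper's situation-theoretic formalism, the two example models can be phrased as aboutness decisions over sets of elementary information carriers; the names and set representation below are assumptions for illustration only:

```python
Infon = str  # stand-in for an elementary information carrier

def boolean_about(situation: set[Infon], query: set[Infon]) -> bool:
    # Boolean AND retrieval: the situation must carry every query infon.
    return query <= situation

def coordination_level(situation: set[Infon], query: set[Infon]) -> int:
    # Coordination-level matching: rank by how many query infons are carried.
    return len(situation & query)

doc = {"retrieval", "situation", "framework"}
print(boolean_about(doc, {"retrieval", "situation"}))   # True
print(coordination_level(doc, {"retrieval", "logic"}))  # 1
```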
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
    Type
    a
  2. Song, D.; Bruza, P.D.: Towards context sensitive information inference (2003) 0.02
    0.021209672 = product of:
      0.042419344 = sum of:
        0.042419344 = sum of:
          0.011219106 = weight(_text_:a in 1428) [ClassicSimilarity], result of:
            0.011219106 = score(doc=1428,freq=22.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.21126054 = fieldWeight in 1428, product of:
                4.690416 = tf(freq=22.0), with freq of:
                  22.0 = termFreq=22.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1428)
          0.03120024 = weight(_text_:22 in 1428) [ClassicSimilarity], result of:
            0.03120024 = score(doc=1428,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.19345059 = fieldWeight in 1428, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1428)
      0.5 = coord(1/2)
    
    Abstract
    Humans can make hasty but generally robust judgements about what a text fragment is, or is not, about. Such judgements are termed information inference. This article furnishes an account of information inference from a psychologistic stance. By drawing on theories from nonclassical logic and applied cognition, an information inference mechanism is proposed that makes inferences via computations of information flow through an approximation of a conceptual space. Within a conceptual space, information is represented geometrically. In this article, geometric representations of words are realized as vectors in a high-dimensional semantic space, which is automatically constructed from a text corpus. Two approaches are presented for priming vector representations according to context. The first approach uses a concept combination heuristic to adjust the vector representation of a concept in the light of the representation of another concept. The second approach computes a prototypical concept on the basis of exemplar trace texts and moves it in the semantic space according to the context. Information inference is evaluated by measuring the effectiveness of query models derived by information flow computations. Results show that information flow contributes significantly to query model effectiveness, particularly with respect to precision. Moreover, retrieval effectiveness compares favorably with two probabilistic query models and another based on semantic association. More generally, this article can be seen as a contribution towards realizing operational systems that mimic text-based human reasoning.
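
    A minimal sketch of the geometric representation the abstract describes, assuming a HAL-style windowed co-occurrence construction; the combine heuristic below is an illustrative assumption, not the paper's concept combination or information-flow computation:

```python
from collections import Counter, defaultdict

def semantic_space(corpus, window=2):
    # Each word's vector is a Counter of windowed co-occurrence counts.
    vectors = defaultdict(Counter)
    for text in corpus:
        tokens = text.lower().split()
        for i, w in enumerate(tokens):
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if j != i:
                    vectors[w][tokens[j]] += 1
    return vectors

def combine(a: Counter, b: Counter) -> Counter:
    # Assumed heuristic: emphasize dimensions the two concepts share.
    out = Counter(a)
    for dim, weight in b.items():
        out[dim] += weight * (2 if dim in a else 1)
    return out

space = semantic_space(["information flow in a semantic space",
                        "semantic space models represent words geometrically"])
primed = combine(space["semantic"], space["space"])
print(primed.most_common(3))
```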
    Date
    22. 3.2003 19:35:46
    Type
    a
  3. Bruza, P.D.; Huibers, T.W.C.: A study of aboutness in information retrieval (1996) 0.00
    0.0029000505 = product of:
      0.005800101 = sum of:
        0.005800101 = product of:
          0.011600202 = sum of:
            0.011600202 = weight(_text_:a in 7705) [ClassicSimilarity], result of:
              0.011600202 = score(doc=7705,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.21843673 = fieldWeight in 7705, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7705)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Addresses the notion of aboutness in information retrieval. Shows how aboutness relates to relevance. Summarizes how aboutness is defined in information retrieval models. Analyzes a model-theoretic definition of aboutness in an abstract setting using so-called information fields. These allow properties of aboutness to be expressed independently of any given information retrieval model. Compares the Boolean and coordinate retrieval models. Employs preferential entailment and conditional probabilities to define aboutness between primitive information carriers. Highlights the nonmonotonic behaviour of aboutness under information composition. Analyzes a term aboutness definition drawn from a network-based probabilistic framework. Draws conclusions about the implied retrieval effectiveness.
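
    A toy illustration, assumed rather than taken from the paper, of why aboutness can behave nonmonotonically under information composition: if aboutness requires the query to make up a minimum share of the carried information, fusing in unrelated carriers can defeat a previously valid judgement.

```python
def about(situation: set[str], query: set[str], threshold: float = 0.5) -> bool:
    # Share-based aboutness rule (an assumption for illustration only).
    return len(situation & query) / len(situation) >= threshold

s = {"aboutness", "retrieval"}
q = {"aboutness", "retrieval"}
print(about(s, q))                       # True: s is wholly about q
print(about(s | {"x1", "x2", "x3"}, q))  # False after composing in noise
```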
    Footnote
    Contribution to a special issue on the application of artificial intelligence to information retrieval
    Type
    a
  4. Proper, H.A.; Bruza, P.D.: What is information discovery about? (1999) 0.00
    0.0028703054 = product of:
      0.005740611 = sum of:
        0.005740611 = product of:
          0.011481222 = sum of:
            0.011481222 = weight(_text_:a in 3912) [ClassicSimilarity], result of:
              0.011481222 = score(doc=3912,freq=16.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.2161963 = fieldWeight in 3912, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3912)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Internet has led to an increase in the quantity and diversity of information available for searching. Furthermore, users are bombarded by a constant barrage of electronic messages in the form of e-mail, faxes, etc. This has led to a plethora of search engines, 'intelligent' agents, etc., that aim to help users in their quest for relevant information, or to shield them against irrelevant information. All these systems aim to identify the potentially relevant information from among a large pool of available information. No unifying underlying theory for information discovery systems exists as yet. The aim of this article is to provide a logic-based framework for information discovery and to relate it to the traditional field of information retrieval. Furthermore, the often-ignored user receives special emphasis. In information discovery, a good understanding of a user's (sometimes hidden) needs and beliefs is essential. We develop a logic-based approach to express the mechanics of information discovery, while the pragmatics are based on an analysis of the underlying informational semantics of information carriers and the information needs of users.
    Type
    a
  5. Lau, R.Y.K.; Bruza, P.D.; Song, D.: Belief revision for adaptive information retrieval (2004) 0.00
    0.0020296127 = product of:
      0.0040592253 = sum of:
        0.0040592253 = product of:
          0.008118451 = sum of:
            0.008118451 = weight(_text_:a in 4077) [ClassicSimilarity], result of:
              0.008118451 = score(doc=4077,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15287387 = fieldWeight in 4077, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4077)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  6. Hoenkamp, E.; Bruza, P.D.; Song, D.; Huang, Q.: An effective approach to verbose queries using a limited dependencies language model (2009) 0.00
    0.001913537 = product of:
      0.003827074 = sum of:
        0.003827074 = product of:
          0.007654148 = sum of:
            0.007654148 = weight(_text_:a in 2122) [ClassicSimilarity], result of:
              0.007654148 = score(doc=2122,freq=16.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.14413087 = fieldWeight in 2122, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2122)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Intuitively, any 'bag of words' approach in IR should benefit from taking term dependencies into account. Unfortunately, for years the results of exploiting such dependencies have been mixed or inconclusive. To improve the situation, this paper shows how the natural language properties of the target documents can be used to transform and enrich the term dependencies into more useful statistics. This is done in three steps. The term co-occurrence statistics of queries and documents are each represented by a Markov chain. The paper proves that such a chain is ergodic, and therefore its asymptotic behavior is unique, stationary, and independent of the initial state. Next, the stationary distribution is taken to model queries and documents, rather than their initial distributions. Finally, ranking is achieved following the customary language modeling paradigm. The main contribution of this paper is to argue why the asymptotic behavior of the document model is a better representation than just the document's initial distribution. A secondary contribution is to investigate the practical application of this representation as queries become increasingly verbose. In the experiments (based on Lemur's search engine substrate), the default query model was replaced by the stable distribution of the query. Just modeling the query this way already resulted in significant improvements over a standard language model baseline. The results were on a par with, or better than, more sophisticated algorithms that use fine-tuned parameters or extensive training. Moreover, the more verbose the query, the more effective the approach seems to become.
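
    A minimal sketch of the step the abstract highlights: for an ergodic Markov chain, the stationary distribution is unique and independent of the initial state, so it can stand in for the initial query or document distribution. The transition matrix over three hypothetical terms below is an assumption, not from the paper.

```python
import numpy as np

# Row-stochastic transition matrix over three terms; all entries positive,
# so the chain is irreducible and aperiodic, hence ergodic.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

dist = np.full(3, 1.0 / 3.0)  # any initial distribution converges to the same limit
for _ in range(100):          # power iteration
    dist = dist @ P

print(dist)                   # the stationary distribution pi, satisfying pi = pi P
```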
    Type
    a