Search (5 results, page 1 of 1)

  • Filter: author_ss:"Lalmas, M."
  1. Crestani, F.; Dominich, S.; Lalmas, M.; Rijsbergen, C.J.K. van: Mathematical, logical, and formal methods in information retrieval : an introduction to the special issue (2003) 0.04
    0.039793458 = product of:
      0.11938037 = sum of:
        0.11938037 = sum of:
          0.085618846 = weight(_text_:van in 1451) [ClassicSimilarity], result of:
            0.085618846 = score(doc=1451,freq=2.0), product of:
              0.23160313 = queryWeight, product of:
                5.5765896 = idf(docFreq=454, maxDocs=44218)
                0.04153132 = queryNorm
              0.36967915 = fieldWeight in 1451, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.5765896 = idf(docFreq=454, maxDocs=44218)
                0.046875 = fieldNorm(doc=1451)
          0.033761524 = weight(_text_:22 in 1451) [ClassicSimilarity], result of:
            0.033761524 = score(doc=1451,freq=2.0), product of:
              0.14543562 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04153132 = queryNorm
              0.23214069 = fieldWeight in 1451, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1451)
      0.33333334 = coord(1/3)
    
    Date
    22. 3.2003 19:27:36
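    The score trees above are Lucene ClassicSimilarity explain output. As a reading aid, here is a minimal sketch, assuming the classic TF-IDF formulas (tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and a coord factor for the fraction of matching query clauses), that reproduces the arithmetic of the first explain tree; the function name and structure are illustrative, not Lucene's API.

```python
import math

# Minimal sketch of Lucene ClassicSimilarity scoring, assuming the
# classic TF-IDF formulas; the numbers are those printed for doc 1451.
def term_weight(freq, idf, query_norm, field_norm):
    tf = math.sqrt(freq)                  # 1.4142135 for freq=2.0
    query_weight = idf * query_norm       # e.g. 5.5765896 * 0.04153132
    field_weight = tf * idf * field_norm  # e.g. 1.4142135 * 5.5765896 * 0.046875
    return query_weight * field_weight

QUERY_NORM = 0.04153132
w_van = term_weight(2.0, 5.5765896, QUERY_NORM, 0.046875)  # ~0.085618846
w_22 = term_weight(2.0, 3.5018296, QUERY_NORM, 0.046875)   # ~0.033761524
score = (w_van + w_22) * (1.0 / 3.0)                       # coord(1/3)
print(score)                                               # ~0.039793458
```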
  2. Dominich, S.; Lalmas, M.; Rijsbergen, C.J.K. van: Special issue on model design, formulation and explanation in information retrieval using mathematics (2006) 0.03
    0.028539617 = product of:
      0.085618846 = sum of:
        0.085618846 = product of:
          0.17123769 = sum of:
            0.17123769 = weight(_text_:van in 110) [ClassicSimilarity], result of:
              0.17123769 = score(doc=110,freq=2.0), product of:
                0.23160313 = queryWeight, product of:
                  5.5765896 = idf(docFreq=454, maxDocs=44218)
                  0.04153132 = queryNorm
                0.7393583 = fieldWeight in 110, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.5765896 = idf(docFreq=454, maxDocs=44218)
                  0.09375 = fieldNorm(doc=110)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  3. Ruthven, I.; Lalmas, M.; Rijsbergen, K. van: Combining and selecting characteristics of information use (2002) 0.01
    0.013453705 = product of:
      0.040361114 = sum of:
        0.040361114 = product of:
          0.08072223 = sum of:
            0.08072223 = weight(_text_:van in 5208) [ClassicSimilarity], result of:
              0.08072223 = score(doc=5208,freq=4.0), product of:
                0.23160313 = queryWeight, product of:
                  5.5765896 = idf(docFreq=454, maxDocs=44218)
                  0.04153132 = queryNorm
                0.34853685 = fieldWeight in 5208, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.5765896 = idf(docFreq=454, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5208)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Ruthven, Lalmas, and van Rijsbergen use traditional term importance measures (inverse document frequency; noise, based upon in-document frequency; and term frequency), supplemented by two positional measures. Theme value is calculated from the differences between the expected and actual positions of words in a text, on the assumption that an even distribution indicates a term's association with a main topic. Context is based on a query term's distance from the nearest other query term, relative to the average expected distribution of all query terms in the document. They then define document characteristics: specificity, the sum of all idf values in a document divided by the total number of terms in the document; document complexity, measured by the document's average idf value; and the information-to-noise ratio (info-noise), the number of tokens after stopping and stemming divided by the number of tokens before these processes, which measures the proportion of useful to non-useful information in a document. Retrieval tests are then carried out using each characteristic alone, combinations of the characteristics, and relevance feedback to determine the best combination of characteristics. Both specificity and info-noise rank a file independently of the query terms, but if the presence of a query term is required, unique rankings are generated.
    Tested on five standard collections, the traditional characteristics outperformed the new ones, which did, however, outperform random retrieval. All possible combinations of characteristics were also tested, both with and without a set of scaling weights applied. Every characteristic can benefit from combination with another characteristic or set of characteristics, and performance as a single characteristic is a good indicator of performance in combination. Larger combinations tended to be more effective than smaller ones, and weighting increased the precision measures of middle-ranking combinations but decreased the ranking of poorer combinations. The best combinations vary for each collection, and in some collections they change when weighting is added. Finally, with all documents ranked by the all-characteristics combination, the authors take the top 30 documents and calculate the characteristic scores for each term in both the relevant and the non-relevant sets. Then, taking for each query term the characteristics whose average is higher for relevant than for non-relevant documents, the documents are re-ranked. This relevance feedback method of selecting characteristics can select a good set of characteristics for query terms.
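    A minimal sketch of the three document characteristics defined in the abstract (specificity, complexity, info-noise), under one plausible reading of those definitions; the tokenizer, toy stopword list, and helper names are illustrative assumptions, not the authors' implementation, and stemming is omitted.

```python
import math
import re

# Minimal sketch of the document characteristics from the abstract.
# Toy stopword list and tokenizer; stemming is omitted.
STOPWORDS = {"the", "of", "a", "and", "in", "to", "is"}

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def idf(term, docs):
    df = sum(1 for d in docs if term in d)
    return math.log(len(docs) / df) if df else 0.0

def characteristics(text, docs):
    tokens = tokenize(text)
    kept = [t for t in tokens if t not in STOPWORDS]  # after "stopping"
    return {
        # specificity: sum of idf values over the total terms in the document
        "specificity": sum(idf(t, docs) for t in tokens) / len(tokens),
        # complexity: the document's average idf value (over distinct terms here)
        "complexity": sum(idf(t, docs) for t in set(tokens)) / len(set(tokens)),
        # info-noise: tokens after stopping (and stemming) over tokens before
        "info_noise": len(kept) / len(tokens),
    }

corpus = [set(tokenize(t)) for t in (
    "the quick brown fox", "a lazy dog in the sun", "information retrieval models")]
print(characteristics("the quick brown fox and the lazy dog", corpus))
```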
  4. Rijsbergen, C.J. van; Lalmas, M.: Information calculus for information retrieval (1996) 0.01
    0.011891507 = product of:
      0.03567452 = sum of:
        0.03567452 = product of:
          0.07134904 = sum of:
            0.07134904 = weight(_text_:van in 4201) [ClassicSimilarity], result of:
              0.07134904 = score(doc=4201,freq=2.0), product of:
                0.23160313 = queryWeight, product of:
                  5.5765896 = idf(docFreq=454, maxDocs=44218)
                  0.04153132 = queryNorm
                0.30806595 = fieldWeight in 4201, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.5765896 = idf(docFreq=454, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4201)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  5. Ruthven, I.; Lalmas, M.; Rijsbergen, K. van: Incorporating user search behavior into relevance feedback (2003) 0.01
    0.011891507 = product of:
      0.03567452 = sum of:
        0.03567452 = product of:
          0.07134904 = sum of:
            0.07134904 = weight(_text_:van in 5169) [ClassicSimilarity], result of:
              0.07134904 = score(doc=5169,freq=2.0), product of:
                0.23160313 = queryWeight, product of:
                  5.5765896 = idf(docFreq=454, maxDocs=44218)
                  0.04153132 = queryNorm
                0.30806595 = fieldWeight in 5169, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.5765896 = idf(docFreq=454, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5169)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Ruthven, Lalmas, and van Rijsbergen rank and select terms for query expansion using information gathered on searchers' evaluation behavior. Using the TREC Financial Times and Los Angeles Times collections and search topics from TREC-6 placed in simulated work situations, six student subjects each performed three searches on an experimental system and three on a control system, with instructions to search by natural language expression in any way they found comfortable. Searching was analyzed for behavioral differences between the experimental and control situations, and for effectiveness and perceptions. In three experiments, paired t-tests were the analysis tool; the controls were a system with no relevance feedback, a system with standard ranking for automatic expansion, and a system with standard ranking for interactive expansion, while the experimental systems based their ranking on user information about temporal relevance and partial relevance. Two further experiments compared the use of user behavior (the number of documents assessed relevant and the similarity of the relevant documents) for choosing a query expansion technique against a non-selective technique, and examined the effect of giving the user knowledge of the process.
    When partial relevance data and time-of-assessment data were incorporated into term ranking, more relevant documents were recovered in fewer iterations; overall retrieval effectiveness, however, was not improved. The subjects nonetheless rated the suggested terms as more useful and used them more heavily, and explanations of what the feedback techniques were doing led to heavier use of the techniques.
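    A minimal sketch of behavior-aware term ranking for query expansion in the spirit of this abstract: terms from assessed documents are weighted by partial relevance and by time of assessment, with later judgments counting more. The weighting scheme, class, and function names are illustrative assumptions, not the paper's method.

```python
from dataclasses import dataclass

# Minimal sketch: rank candidate expansion terms using two behavioral
# signals from the abstract, partial (graded) relevance and time of
# assessment (recency). The scheme below is illustrative, not the paper's.
@dataclass
class Assessment:
    terms: set[str]    # terms of the assessed document
    relevance: float   # graded (partial) relevance in [0, 1]
    order: int         # position in the assessment sequence (1 = first)

def rank_expansion_terms(assessments, query_terms):
    n = max(a.order for a in assessments)
    scores = {}
    for a in assessments:
        recency = a.order / n  # later assessments weigh more
        for t in a.terms - set(query_terms):
            scores[t] = scores.get(t, 0.0) + a.relevance * recency
    return sorted(scores, key=scores.get, reverse=True)

judged = [
    Assessment({"markets", "shares", "bank"}, relevance=0.5, order=1),
    Assessment({"shares", "merger", "bank"}, relevance=1.0, order=2),
]
print(rank_expansion_terms(judged, ["bank"]))  # ['shares', 'merger', 'markets']
```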