Search (7 results, page 1 of 1)

  • author_ss:"Crestani, F."
  1. Bache, R.; Baillie, M.; Crestani, F.: Measuring the likelihood property of scoring functions in general retrieval models (2009) 0.03
    0.026878864 = product of:
      0.21503091 = sum of:
        0.21503091 = weight(_text_:property in 2860) [ClassicSimilarity], result of:
          0.21503091 = score(doc=2860,freq=6.0), product of:
            0.25336683 = queryWeight, product of:
              6.335595 = idf(docFreq=212, maxDocs=44218)
              0.039991006 = queryNorm
            0.848694 = fieldWeight in 2860, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              6.335595 = idf(docFreq=212, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2860)
      0.125 = coord(1/8)
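
    The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explanation: the final score is coord * queryWeight * fieldWeight, with queryWeight = idf * queryNorm and fieldWeight = sqrt(tf) * idf * fieldNorm, exactly as the tree reports. A minimal Python sketch that re-derives the numbers shown for this entry (all values copied from the tree; only the rounding in the printout is illustrative):

    import math

    # values copied from the explanation tree for doc 2860
    freq       = 6.0          # termFreq of "property" in the field
    idf        = 6.335595     # 1 + ln(maxDocs / (docFreq + 1)) = 1 + ln(44218 / 213)
    query_norm = 0.039991006
    field_norm = 0.0546875
    coord      = 1 / 8        # 1 of 8 query clauses matched

    tf           = math.sqrt(freq)              # 2.4494898
    query_weight = idf * query_norm             # 0.25336683
    field_weight = tf * idf * field_norm        # 0.848694
    clause_score = query_weight * field_weight  # 0.21503091
    final_score  = coord * clause_score
    print(round(final_score, 9))                # ~0.026878864, as displayed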
    
    Abstract
    Although retrieval systems based on probabilistic models will rank the objects (e.g., documents) being retrieved according to the probability of some matching criterion (e.g., relevance), they rarely yield an actual probability, and the scoring function is interpreted to be purely ordinal within a given retrieval task. In this brief communication, it is shown that some scoring functions possess the likelihood property, which means that the scoring function indicates the likelihood of matching when compared to other retrieval tasks, which is potentially more useful than pure ranking although it cannot be interpreted as an actual probability. This property can be detected by using two modified effectiveness measures: entire precision and entire recall.
  2. Simeoni, F.; Yakici, M.; Neely, S.; Crestani, F.: Metadata harvesting for content-based distributed information retrieval (2008) 0.02
    0.016157467 = product of:
      0.06462987 = sum of:
        0.04381429 = weight(_text_:network in 1336) [ClassicSimilarity], result of:
          0.04381429 = score(doc=1336,freq=2.0), product of:
            0.17809492 = queryWeight, product of:
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.039991006 = queryNorm
            0.2460165 = fieldWeight in 1336, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1336)
        0.02081558 = product of:
          0.04163116 = sum of:
            0.04163116 = weight(_text_:resources in 1336) [ClassicSimilarity], result of:
              0.04163116 = score(doc=1336,freq=4.0), product of:
                0.14598069 = queryWeight, product of:
                  3.650338 = idf(docFreq=3122, maxDocs=44218)
                  0.039991006 = queryNorm
                0.28518265 = fieldWeight in 1336, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.650338 = idf(docFreq=3122, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1336)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
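
    Here two of the eight query clauses match, so the tree sums the clause weights (the inner "resources" weight first being scaled by its own coord(1/2)) before applying coord(2/8). A small arithmetic check in Python, with the values copied from the tree above:

    network_clause   = 0.04381429          # weight(_text_:network in 1336)
    resources_clause = 0.04163116 * 0.5    # inner weight(_text_:resources) * coord(1/2)
    summed = network_clause + resources_clause   # 0.06462987
    print(summed * 2 / 8)                        # ~0.016157467, as displayed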
    
    Abstract
    We propose an approach to content-based Distributed Information Retrieval based on the periodic and incremental centralization of full-content indices of widely dispersed and autonomously managed document sources. Inspired by the success of the Open Archives Initiative's (OAI) Protocol for Metadata Harvesting, the approach occupies middle ground between content crawling and distributed retrieval. As in crawling, some data move toward the retrieval process, but it is statistics about the content rather than the content itself; this allows more efficient use of network resources and a wider scope of application. As in distributed retrieval, some processing is distributed along with the data, but it is indexing rather than retrieval; this reduces the costs of content provision while promoting the simplicity, effectiveness, and responsiveness of retrieval. Overall, we argue that the approach retains the good properties of centralized retrieval without renouncing cost-effective, large-scale resource pooling. We discuss the requirements associated with the approach and identify two strategies to deploy it on top of the OAI infrastructure. In particular, we define a minimal extension of the OAI protocol which supports the coordinated harvesting of full-content indices and descriptive metadata for content resources. Finally, we report on the implementation of a proof-of-concept prototype service for multimodel content-based retrieval of distributed file collections.
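
    The incremental harvesting loop described above sits on top of standard OAI-PMH; the authors' extension for harvesting full-content indices is not reproduced here. A minimal, hypothetical Python sketch of periodic incremental harvesting over plain OAI-PMH (the endpoint URL, the "from" date, and the downstream indexing step are placeholders):

    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"
    BASE_URL = "https://example.org/oai"   # hypothetical repository endpoint

    def harvest(base_url, since):
        """Yield records changed since `since` (YYYY-MM-DD), following resumption tokens."""
        params = {"verb": "ListRecords", "metadataPrefix": "oai_dc", "from": since}
        while True:
            with urllib.request.urlopen(base_url + "?" + urllib.parse.urlencode(params)) as resp:
                root = ET.fromstring(resp.read())
            for record in root.iter(OAI_NS + "record"):
                yield record                          # hand each record to the indexer
            token = root.find(".//" + OAI_NS + "resumptionToken")
            if token is None or not (token.text or "").strip():
                break                                 # this harvesting round is complete
            params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

    for record in harvest(BASE_URL, "2008-01-01"):
        pass   # e.g. merge the record's metadata (or index statistics) into the central index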
  3. Crestani, F.; Rijsbergen, C.J. van: Information retrieval by imaging (1996) 0.01
    0.01291517 = product of:
      0.05166068 = sum of:
        0.035405993 = weight(_text_:computer in 6967) [ClassicSimilarity], result of:
          0.035405993 = score(doc=6967,freq=2.0), product of:
            0.1461475 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.039991006 = queryNorm
            0.24226204 = fieldWeight in 6967, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=6967)
        0.016254688 = product of:
          0.032509375 = sum of:
            0.032509375 = weight(_text_:22 in 6967) [ClassicSimilarity], result of:
              0.032509375 = score(doc=6967,freq=2.0), product of:
                0.1400417 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.039991006 = queryNorm
                0.23214069 = fieldWeight in 6967, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6967)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  4. Crestani, F.; Mizzaro, S.; Scagnetto, I.: Mobile information retrieval (2017) 0.01
    0.00975786 = product of:
      0.07806288 = sum of:
        0.07806288 = weight(_text_:computer in 4469) [ClassicSimilarity], result of:
          0.07806288 = score(doc=4469,freq=14.0), product of:
            0.1461475 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.039991006 = queryNorm
            0.5341376 = fieldWeight in 4469, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4469)
      0.125 = coord(1/8)
    
    LCSH
    Computer science
    User interfaces (Computer systems)
    Text processing (Computer science)
    Series
    Springer briefs in computer science
    Subject
    Computer science
    User interfaces (Computer systems)
    Text processing (Computer science)
  5. Crestani, F.; Dominich, S.; Lalmas, M.; Rijsbergen, C.J.K. van: Mathematical, logical, and formal methods in information retrieval : an introduction to the special issue (2003) 0.00
    0.002031836 = product of:
      0.016254688 = sum of:
        0.016254688 = product of:
          0.032509375 = sum of:
            0.032509375 = weight(_text_:22 in 1451) [ClassicSimilarity], result of:
              0.032509375 = score(doc=1451,freq=2.0), product of:
                0.1400417 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.039991006 = queryNorm
                0.23214069 = fieldWeight in 1451, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1451)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    22. 3.2003 19:27:36
  6. Crestani, F.; Du, H.: Written versus spoken queries : a qualitative and quantitative comparative analysis (2006) 0.00
    0.002031836 = product of:
      0.016254688 = sum of:
        0.016254688 = product of:
          0.032509375 = sum of:
            0.032509375 = weight(_text_:22 in 5047) [ClassicSimilarity], result of:
              0.032509375 = score(doc=5047,freq=2.0), product of:
                0.1400417 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.039991006 = queryNorm
                0.23214069 = fieldWeight in 5047, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5047)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    5. 6.2006 11:22:23
  7. Crestani, F.; Wu, S.: Testing the cluster hypothesis in distributed information retrieval (2006) 0.00
    0.0018398546 = product of:
      0.014718837 = sum of:
        0.014718837 = product of:
          0.029437674 = sum of:
            0.029437674 = weight(_text_:resources in 984) [ClassicSimilarity], result of:
              0.029437674 = score(doc=984,freq=2.0), product of:
                0.14598069 = queryWeight, product of:
                  3.650338 = idf(docFreq=3122, maxDocs=44218)
                  0.039991006 = queryNorm
                0.20165458 = fieldWeight in 984, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.650338 = idf(docFreq=3122, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=984)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Abstract
    How to merge and organise query results retrieved from different resources is one of the key issues in distributed information retrieval. Some previous research and experiments suggest that cluster-based document browsing is more effective than a single merged list. Cluster-based presentation of retrieval results is based on the cluster hypothesis, which states that documents that cluster together have a similar relevance to a given query. However, while this hypothesis has been demonstrated to hold in classical information retrieval environments, it has never been fully tested in heterogeneous distributed information retrieval environments. Heterogeneous document representations, the presence of document duplicates, and disparate quality of retrieval results are major features of a heterogeneous distributed information retrieval environment that might disrupt the effectiveness of the cluster hypothesis. In this paper we report on an experimental investigation into the validity and effectiveness of the cluster hypothesis in highly heterogeneous distributed information retrieval environments. The results show that although clustering is affected by differing retrieval result representations and quality, the cluster hypothesis still holds, and that generating hierarchical clusters in highly heterogeneous distributed information retrieval environments is still a very effective way of presenting retrieval results to users.
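
    As an illustration of the kind of result organisation the experiments evaluate, below is a minimal, hypothetical Python sketch that groups a merged result list by hierarchical (agglomerative) clustering of snippet TF-IDF vectors; the snippets, distance threshold, and libraries are illustrative assumptions, not the authors' experimental configuration:

    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist
    from sklearn.feature_extraction.text import TfidfVectorizer

    # hypothetical merged result list from several distributed sources (snippets only)
    results = [
        "distributed information retrieval and result merging",
        "the cluster hypothesis and document clustering",
        "merging ranked lists from heterogeneous sources",
        "hierarchical clustering of retrieval results",
    ]

    # represent each retrieved item by a TF-IDF vector of its snippet
    X = TfidfVectorizer().fit_transform(results).toarray()

    # build a dendrogram from pairwise cosine distances (average linkage)
    Z = linkage(pdist(X, metric="cosine"), method="average")

    # cut the dendrogram: items closer than the threshold end up in the same cluster
    labels = fcluster(Z, t=0.8, criterion="distance")
    print(labels)   # one cluster id per result, used to group the presentation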