Search (15 results, page 1 of 1)

  • Active filter: author_ss:"Crestani, F."
  1. Crestani, F.; Rijsbergen, C.J. van: Information retrieval by imaging (1996) 0.03
    0.027413448 = product of:
      0.08224034 = sum of:
        0.08224034 = product of:
          0.123360515 = sum of:
            0.081883274 = weight(_text_:retrieval in 6967) [ClassicSimilarity], result of:
              0.081883274 = score(doc=6967,freq=14.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.5305404 = fieldWeight in 6967, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6967)
            0.04147724 = weight(_text_:22 in 6967) [ClassicSimilarity], result of:
              0.04147724 = score(doc=6967,freq=2.0), product of:
                0.17867287 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051022716 = queryNorm
                0.23214069 = fieldWeight in 6967, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6967)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
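    The explanation tree above is standard Lucene ClassicSimilarity (TF-IDF) output: each per-term weight is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = sqrt(termFreq) × idf × fieldNorm, and the coord factors scale the sum by the fraction of query clauses that matched. A minimal Python sketch, using only the numbers reported above for doc 6967, reproduces the final score up to float rounding:

```python
import math

# Reproduce Lucene ClassicSimilarity scoring for result 1 (doc 6967), using the
# idf, queryNorm, fieldNorm, and term frequencies reported in the explain tree.
def term_weight(freq, idf, query_norm, field_norm):
    query_weight = idf * query_norm                     # queryWeight = idf * queryNorm
    field_weight = math.sqrt(freq) * idf * field_norm   # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight

query_norm, field_norm = 0.051022716, 0.046875
w_retrieval = term_weight(freq=14.0, idf=3.024915, query_norm=query_norm, field_norm=field_norm)
w_22 = term_weight(freq=2.0, idf=3.5018296, query_norm=query_norm, field_norm=field_norm)

# coord(2/3) and coord(1/3) are the clause-coverage factors from the tree.
score = (w_retrieval + w_22) * (2.0 / 3.0) * (1.0 / 3.0)
print(score)  # ~0.0274134, matching the reported 0.027413448 up to float32 rounding
```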
    
    Abstract
    Explains briefly what constitutes the imaging process and how imaging can be used in information retrieval. Proposes an approach based on the concept that 'a term is a possible world', which enables the exploitation of term-to-term relationships estimated using an information-theoretic measure. Reports the results of an evaluation exercise comparing the performance of imaging retrieval, using possible-world semantics, with a benchmark, using the Cranfield 2 document collection to measure precision and recall. Initially the performance of imaging retrieval appeared to be better, but statistical analysis showed that the difference was not significant. The problem with imaging retrieval lies in the amount of computation that must be performed at run time, and a later experiment investigated the possibility of reducing this amount. Notes lines of further investigation.
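    Since the comparison with the benchmark on the Cranfield 2 collection is reported in terms of precision and recall, a short reminder of these two set-based measures may help; the retrieved and relevant sets below are hypothetical, for illustration only.

```python
def precision_recall(retrieved, relevant):
    """Set-based precision and recall over document identifiers."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical run: 10 documents retrieved, 8 judged relevant, 5 in common.
p, r = precision_recall(retrieved=range(1, 11), relevant=[2, 3, 5, 8, 9, 12, 15, 20])
print(p, r)  # 0.5 0.625
```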
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  2. Crestani, F.; Dominich, S.; Lalmas, M.; Rijsbergen, C.J.K. van: Mathematical, logical, and formal methods in information retrieval : an introduction to the special issue (2003) 0.03
    0.02606365 = product of:
      0.07819095 = sum of:
        0.07819095 = product of:
          0.11728642 = sum of:
            0.07580918 = weight(_text_:retrieval in 1451) [ClassicSimilarity], result of:
              0.07580918 = score(doc=1451,freq=12.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.49118498 = fieldWeight in 1451, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1451)
            0.04147724 = weight(_text_:22 in 1451) [ClassicSimilarity], result of:
              0.04147724 = score(doc=1451,freq=2.0), product of:
                0.17867287 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051022716 = queryNorm
                0.23214069 = fieldWeight in 1451, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1451)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Research on the use of mathematical, logical, and formal methods has been central to Information Retrieval research for a long time. Research in this area is important not only because it helps enhance retrieval effectiveness, but also because it helps clarify the underlying concepts of Information Retrieval. In this article we outline some of the major aspects of the subject and summarize the papers of this special issue with respect to how they relate to these aspects. We conclude by highlighting some directions for future research that are needed to better understand the formal characteristics of Information Retrieval.
    Date
    22. 3.2003 19:27:36
    Footnote
    Introduction to the contributions of a special issue: Mathematical, logical, and formal methods in information retrieval
  3. Crestani, F.; Du, H.: Written versus spoken queries : a qualitative and quantitative comparative analysis (2006) 0.02
    0.022972263 = product of:
      0.06891679 = sum of:
        0.06891679 = product of:
          0.10337518 = sum of:
            0.06189794 = weight(_text_:retrieval in 5047) [ClassicSimilarity], result of:
              0.06189794 = score(doc=5047,freq=8.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.40105087 = fieldWeight in 5047, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5047)
            0.04147724 = weight(_text_:22 in 5047) [ClassicSimilarity], result of:
              0.04147724 = score(doc=5047,freq=2.0), product of:
                0.17867287 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051022716 = queryNorm
                0.23214069 = fieldWeight in 5047, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5047)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
    The authors report on an experimental study of the differences between spoken and written queries. A set of written and spontaneous spoken queries is generated by users from written topics. These two sets of queries are compared in qualitative terms and in terms of their retrieval effectiveness. Written and spoken queries are compared in terms of length, duration, and part of speech. In addition, assuming perfect transcription of the spoken queries, written and spoken queries are compared in terms of their aptitude for describing relevant documents. The retrieval effectiveness of spoken and written queries is compared using three different information retrieval models. The results show that using speech to formulate one's information need provides a way to express it more naturally and encourages the formulation of longer queries. Despite that, longer spoken queries do not seem to significantly improve retrieval effectiveness compared with written queries.
    Date
    5. 6.2006 11:22:23
  4. Crestani, F.; Wu, S.: Testing the cluster hypothesis in distributed information retrieval (2006) 0.01
    0.009504271 = product of:
      0.028512811 = sum of:
        0.028512811 = product of:
          0.08553843 = sum of:
            0.08553843 = weight(_text_:retrieval in 984) [ClassicSimilarity], result of:
              0.08553843 = score(doc=984,freq=22.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.554223 = fieldWeight in 984, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=984)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    How to merge and organise query results retrieved from different resources is one of the key issues in distributed information retrieval. Some previous research and experiments suggest that cluster-based document browsing is more effective than a single merged list. Cluster-based presentation of retrieval results is based on the cluster hypothesis, which states that documents that cluster together have a similar relevance to a given query. However, while this hypothesis has been demonstrated to hold in classical information retrieval environments, it has never been fully tested in heterogeneous distributed information retrieval environments. Heterogeneous document representations, the presence of document duplicates, and the disparate quality of retrieval results are major features of a heterogeneous distributed information retrieval environment that might disrupt the effectiveness of the cluster hypothesis. In this paper we report on an experimental investigation into the validity and effectiveness of the cluster hypothesis in highly heterogeneous distributed information retrieval environments. The results show that although clustering is affected by differences in the representation and quality of retrieval results, the cluster hypothesis still holds, and that generating hierarchical clusters in highly heterogeneous distributed information retrieval environments remains a very effective way of presenting retrieval results to users.
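    The cluster hypothesis mentioned above can be probed with a simple check: documents relevant to a query should, on average, be more similar to one another than to non-relevant documents. The sketch below is only an illustration of that idea on toy vectors, not the authors' experimental protocol; all document vectors and identifiers are hypothetical.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def cluster_hypothesis_margin(doc_vectors, relevant_ids):
    """Mean relevant-to-relevant similarity minus mean relevant-to-non-relevant
    similarity; a positive margin supports the cluster hypothesis."""
    rel = [doc_vectors[i] for i in relevant_ids]
    nonrel = [v for i, v in doc_vectors.items() if i not in relevant_ids]
    intra = np.mean([cosine(a, b) for i, a in enumerate(rel) for b in rel[i + 1:]])
    inter = np.mean([cosine(a, b) for a in rel for b in nonrel])
    return intra - inter

# Hypothetical term-frequency vectors for four merged results.
docs = {
    "d1": np.array([3.0, 1.0, 0.0]),
    "d2": np.array([2.0, 2.0, 0.0]),
    "d3": np.array([0.0, 1.0, 4.0]),
    "d4": np.array([0.0, 0.0, 5.0]),
}
print(cluster_hypothesis_margin(docs, relevant_ids={"d1", "d2"}))  # positive here
```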
  5. Agosti, M.; Crestani, F.; Melucci, M.: On the use of information retrieval techniques for the automatic construction of hypertext (1997) 0.01
    0.008970889 = product of:
      0.026912667 = sum of:
        0.026912667 = product of:
          0.080738 = sum of:
            0.080738 = weight(_text_:retrieval in 150) [ClassicSimilarity], result of:
              0.080738 = score(doc=150,freq=10.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.5231199 = fieldWeight in 150, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=150)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Introduces what automatic authoring of a hypertext for information retrieval means. The most difficult part of the automatic construction of a hypertext is the creation of links connecting documents or document fragments that are related. Because of this, it seemed natural to many researchers to use information retrieval techniques for this purpose, since information retrieval has always dealt with the construction of relationships between mutually relevant objects. Presents a survey of some of the attempts toward the automatic construction of hypertexts for information retrieval. Identifies and compares the scope, advantages, and limitations of different approaches. Points out the main and most successful current lines of research.
  6. Simeoni, F.; Yakici, M.; Neely, S.; Crestani, F.: Metadata harvesting for content-based distributed information retrieval (2008) 0.01
    0.008596936 = product of:
      0.025790809 = sum of:
        0.025790809 = product of:
          0.077372424 = sum of:
            0.077372424 = weight(_text_:retrieval in 1336) [ClassicSimilarity], result of:
              0.077372424 = score(doc=1336,freq=18.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.50131357 = fieldWeight in 1336, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1336)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    We propose an approach to content-based Distributed Information Retrieval based on the periodic and incremental centralization of full-content indices of widely dispersed and autonomously managed document sources. Inspired by the success of the Open Archives Initiative's (OAI) Protocol for Metadata Harvesting, the approach occupies middle ground between content crawling and distributed retrieval. As in crawling, some data move toward the retrieval process, but it is statistics about the content rather than the content itself; this grants more efficient use of network resources and a wider scope of application. As in distributed retrieval, some processing is distributed along with the data, but it is indexing rather than retrieval; this reduces the costs of content provision while promoting the simplicity, effectiveness, and responsiveness of retrieval. Overall, we argue that the approach retains the good properties of centralized retrieval without giving up cost-effective, large-scale resource pooling. We discuss the requirements associated with the approach and identify two strategies to deploy it on top of the OAI infrastructure. In particular, we define a minimal extension of the OAI protocol which supports the coordinated harvesting of full-content indices and descriptive metadata for content resources. Finally, we report on the implementation of a proof-of-concept prototype service for multimodel content-based retrieval of distributed file collections.
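    For context, plain OAI-PMH harvesting, on which the proposed extension builds, amounts to issuing simple HTTP requests such as ListIdentifiers or ListRecords against a repository endpoint. The sketch below shows only that standard protocol usage, not the full-content-index extension defined in the paper; the endpoint URL is a placeholder and resumption-token paging is omitted.

```python
import urllib.request
import xml.etree.ElementTree as ET

OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"
BASE_URL = "https://example.org/oai"  # placeholder repository endpoint

def harvest_identifiers(base_url, metadata_prefix="oai_dc"):
    """Yield record identifiers via the standard OAI-PMH ListIdentifiers verb
    (single page only; a real harvester must follow resumptionToken)."""
    url = f"{base_url}?verb=ListIdentifiers&metadataPrefix={metadata_prefix}"
    with urllib.request.urlopen(url) as resp:
        tree = ET.parse(resp)
    for header in tree.iter(f"{OAI_NS}header"):
        yield header.findtext(f"{OAI_NS}identifier")

# for oai_id in harvest_identifiers(BASE_URL):
#     print(oai_id)
```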
  7. Crestani, F.; Ruthven, I.; Sanderson, M.; Rijsbergen, C.J. van: The troubles with using a logical model of IR on a large collection of documents : experimenting retrieval by logical imaging on TREC (1996) 0.01
    0.00810527 = product of:
      0.024315808 = sum of:
        0.024315808 = product of:
          0.07294742 = sum of:
            0.07294742 = weight(_text_:retrieval in 7522) [ClassicSimilarity], result of:
              0.07294742 = score(doc=7522,freq=4.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.47264296 = fieldWeight in 7522, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.078125 = fieldNorm(doc=7522)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
    The Fourth Text Retrieval Conference (TREC-4). Ed.: K. Harman
  8. Crestani, F.: Combination of similarity measures for effective spoken document retrieval (2003) 0.01
    0.008023808 = product of:
      0.024071421 = sum of:
        0.024071421 = product of:
          0.07221426 = sum of:
            0.07221426 = weight(_text_:retrieval in 4690) [ClassicSimilarity], result of:
              0.07221426 = score(doc=4690,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.46789268 = fieldWeight in 4690, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4690)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
  9. Bache, R.; Baillie, M.; Crestani, F.: Measuring the likelihood property of scoring functions in general retrieval models (2009) 0.01
    0.008023808 = product of:
      0.024071421 = sum of:
        0.024071421 = product of:
          0.07221426 = sum of:
            0.07221426 = weight(_text_:retrieval in 2860) [ClassicSimilarity], result of:
              0.07221426 = score(doc=2860,freq=8.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.46789268 = fieldWeight in 2860, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2860)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Although retrieval systems based on probabilistic models will rank the objects (e.g., documents) being retrieved according to the probability of some matching criterion (e.g., relevance), they rarely yield an actual probability, and the scoring function is interpreted to be purely ordinal within a given retrieval task. In this brief communication, it is shown that some scoring functions possess the likelihood property, which means that the scoring function indicates the likelihood of matching when compared to other retrieval tasks, which is potentially more useful than pure ranking although it cannot be interpreted as an actual probability. This property can be detected by using two modified effectiveness measures: entire precision and entire recall.
  10. Crestani, F.; Vegas, J.; Fuente, P. de la: A graphical user interface for the retrieval of hierarchically structured documents (2004) 0.01
    0.007689334 = product of:
      0.023068001 = sum of:
        0.023068001 = product of:
          0.069204 = sum of:
            0.069204 = weight(_text_:retrieval in 2555) [ClassicSimilarity], result of:
              0.069204 = score(doc=2555,freq=10.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.44838852 = fieldWeight in 2555, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2555)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Past research has proved that graphical user interfaces (GUIs) can significantly improve the effectiveness of the information access task. Our work is based on the consideration that structured document retrieval requires graphical user interfaces different from those used in standard information retrieval. In structured document retrieval, a GUI has to enable a user to query, browse retrieved documents, and provide query refinement and relevance feedback based not only on full documents but also on specific document parts in relation to the document structure. In this paper, we present a new GUI for structured document retrieval specifically designed for hierarchically structured documents. A user task-oriented evaluation has shown that the proposed interface provides the user with an intuitive and powerful set of tools for structured document searching, retrieved-list navigation, and search refinement.
  11. Crestani, F.; Mizzaro, S.; Scagnetto, I.: Mobile information retrieval (2017) 0.01
    0.00701937 = product of:
      0.021058109 = sum of:
        0.021058109 = product of:
          0.06317432 = sum of:
            0.06317432 = weight(_text_:retrieval in 4469) [ClassicSimilarity], result of:
              0.06317432 = score(doc=4469,freq=12.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.40932083 = fieldWeight in 4469, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4469)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    This book offers a helpful starting point in the scattered, rich, and complex body of literature on Mobile Information Retrieval (Mobile IR), reviewing more than 200 papers in nine chapters. Highlighting the most interesting and influential contributions that have appeared in recent years, it particularly focuses on both user interaction and techniques for the perception and use of context, which, taken together, shape much of today's research on Mobile IR. The book starts by addressing the differences between IR and Mobile IR, while also reviewing the foundations of Mobile IR research. It then examines the different kinds of documents, users, and information needs that can be found in Mobile IR, and which set it apart from standard IR. Next, it discusses the two important issues of user interfaces and context-awareness. In closing, it covers issues related to the evaluation of Mobile IR applications. Overall, the book offers a valuable tool, helping new and veteran researchers alike to navigate this exciting and highly dynamic area of research.
    LCSH
    Information storage and retrieval
    RSWK
    Information Retrieval
    Subject
    Information Retrieval
    Information storage and retrieval
  12. Crestani, F.; Rijsbergen, C.J. van: Information retrieval by logical imaging (1995) 0.01
    0.006948821 = product of:
      0.020846462 = sum of:
        0.020846462 = product of:
          0.062539384 = sum of:
            0.062539384 = weight(_text_:retrieval in 1759) [ClassicSimilarity], result of:
              0.062539384 = score(doc=1759,freq=6.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.40520695 = fieldWeight in 1759, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1759)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    The evaluation of an implication by imaging is a logical technique developed in the framework of modal logic. Its interpretation in the context of a 'possible worlds' semantics is very appealing for information retrieval. In 1989, van Rijsbergen suggested its use for solving one of the fundamental problems of logical models of information retrieval: the evaluation of the logical implication that a document is relevant to a query if it implies the query. Since then, others have tried to follow that suggestion, proposing models and applications, though without much success. Most of these approaches had as their basic assumption the consideration that 'a document is a possible world'. Proposes instead an approach based on a completely different assumption: 'a term is a possible world'. This approach enables the exploitation of term-term relationships, which are estimated using an information-theoretic measure.
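    To make the 'a term is a possible world' reading concrete: under imaging, each term transfers its prior probability to the most similar term that occurs in the document, and the document's score for a query is the probability mass that lands on query terms. The sketch below is an illustrative toy rendering of that kinematics, with hypothetical priors and term-term similarities; it is not the evaluated system.

```python
def retrieval_by_imaging(doc_terms, query_terms, prior, similarity):
    """Toy imaging score: move each term's prior probability onto its closest
    term occurring in the document, then sum the mass landing on query terms."""
    score = 0.0
    for t, p in prior.items():
        # t_d: t itself if it occurs in the document, else the most similar document term.
        t_d = t if t in doc_terms else max(doc_terms, key=lambda u: similarity[(t, u)])
        if t_d in query_terms:
            score += p
    return score

# Hypothetical vocabulary priors and term-term similarities.
prior = {"imaging": 0.2, "retrieval": 0.3, "logic": 0.2, "music": 0.3}
similarity = {
    ("logic", "imaging"): 0.9, ("logic", "retrieval"): 0.2,
    ("music", "imaging"): 0.1, ("music", "retrieval"): 0.3,
}
doc = {"imaging", "retrieval"}
query = {"imaging", "logic"}
print(retrieval_by_imaging(doc, query, prior, similarity))  # 0.4
```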
  13. Agosti, M.; Crestani, F.; Melucci, M.: Design and implementation of a tool for the automatic construction of hypertexts for information retrieval (1996) 0.01
    0.006948821 = product of:
      0.020846462 = sum of:
        0.020846462 = product of:
          0.062539384 = sum of:
            0.062539384 = weight(_text_:retrieval in 5571) [ClassicSimilarity], result of:
              0.062539384 = score(doc=5571,freq=6.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.40520695 = fieldWeight in 5571, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5571)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Describes the design and implementation of TACHIR, a tool for the automatic construction of hypertexts for information retrieval. Through an authoring methodology employing a set of well-known information retrieval techniques, TACHIR automatically builds a hypertext from a document collection. The structure of the hypertext reflects a three-level conceptual model which enables navigation among documents, index terms, and concepts using automatically determined links. The hypertext is implemented in HTML. It can be distributed across different sites and different machines over the Internet, and it can be navigated using WWW interfaces.
  14. Tombros, T.; Crestani, F.: Users' perception of relevance of spoken documents (2000) 0.00
    0.004011904 = product of:
      0.012035711 = sum of:
        0.012035711 = product of:
          0.03610713 = sum of:
            0.03610713 = weight(_text_:retrieval in 4996) [ClassicSimilarity], result of:
              0.03610713 = score(doc=4996,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.23394634 = fieldWeight in 4996, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4996)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    We present the results of a study of users' perception of the relevance of documents. The aim is to study experimentally how users' perception varies depending on the form in which retrieved documents are presented. Documents retrieved in response to a query are presented to users in a variety of ways, from full text to a machine-spoken, query-biased, automatically generated summary, and the difference in users' perception of relevance is studied. The experimental results suggest that the effectiveness of advanced multimedia Information Retrieval applications may be affected by the low level of users' perception of the relevance of retrieved documents.
  15. Keikha, M.; Crestani, F.; Carman, M.J.: Employing document dependency in blog search (2012) 0.00
    0.0028656456 = product of:
      0.008596936 = sum of:
        0.008596936 = product of:
          0.025790809 = sum of:
            0.025790809 = weight(_text_:retrieval in 4987) [ClassicSimilarity], result of:
              0.025790809 = score(doc=4987,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.16710453 = fieldWeight in 4987, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4987)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    The goal in blog search is to rank blogs according to their recurrent relevance to the topic of the query. State-of-the-art approaches view it as an expert search or resource selection problem. We investigate the effect of content-based similarity between posts on the performance of the retrieval system. We test two different approaches for smoothing (regularizing) relevance scores of posts based on their dependencies. In the first approach, we smooth term distributions describing posts by performing a random walk over a document-term graph in which similar posts are highly connected. In the second, we directly smooth scores for posts using a regularization framework that aims to minimize the discrepancy between scores for similar documents. We then extend these approaches to consider the time interval between the posts in smoothing the scores. The idea is that if two posts are temporally close, then they are good sources for smoothing each other's relevance scores. We compare these methods with the state-of-the-art approaches in blog search that employ Language Modeling-based resource selection algorithms and fusion-based methods for aggregating post relevance scores. We show performance gains over the baseline techniques which do not take advantage of the relation between posts for smoothing relevance estimates.
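    To make the regularization-based smoothing idea concrete, here is an illustrative sketch (not the authors' formulation): each post's score is iteratively pulled toward a similarity-weighted average of its neighbours' scores, with an exponential decay for posts that are far apart in time. All scores, similarities, and parameters below are hypothetical.

```python
import math

def smooth_scores(scores, similarity, timestamps, alpha=0.5, tau=3600.0, iterations=10):
    """Iteratively blend each post's initial score with a similarity- and
    time-weighted average of the other posts' current scores."""
    posts = list(scores)
    s = dict(scores)
    for _ in range(iterations):
        new_s = {}
        for p in posts:
            weight_sum, weighted_total = 0.0, 0.0
            for q in posts:
                if p == q:
                    continue
                # Temporally close, similar posts smooth each other more strongly.
                w = similarity[(p, q)] * math.exp(-abs(timestamps[p] - timestamps[q]) / tau)
                weight_sum += w
                weighted_total += w * s[q]
            neighbour_avg = weighted_total / weight_sum if weight_sum > 0 else s[p]
            new_s[p] = (1 - alpha) * scores[p] + alpha * neighbour_avg
        s = new_s
    return s

# Hypothetical input: two similar, temporally close posts and one unrelated post.
scores = {"p1": 0.9, "p2": 0.2, "p3": 0.5}
similarity = {(a, b): (0.8 if {a, b} == {"p1", "p2"} else 0.1)
              for a in scores for b in scores if a != b}
timestamps = {"p1": 0.0, "p2": 600.0, "p3": 86400.0}
print(smooth_scores(scores, similarity, timestamps))
```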