Search (2 results, page 1 of 1)

  • × author_ss:"Daoud, M."
  • × author_ss:"Huang, J.X."
  1. Daoud, M.; Huang, J.X.: Modeling geographic, temporal, and proximity contexts for improving geotemporal search (2013) 0.00
    0.0036685336 = product of:
      0.011005601 = sum of:
        0.011005601 = weight(_text_:a in 533) [ClassicSimilarity], result of:
          0.011005601 = score(doc=533,freq=22.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.21126054 = fieldWeight in 533, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=533)
      0.33333334 = coord(1/3)
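    The breakdown above is standard Lucene ClassicSimilarity (TF-IDF) explain output for the term "a" in document 533. As a minimal sketch in plain Python (not Lucene API calls), the arithmetic can be reproduced from the values shown:

        import math

        # Values copied from the explain tree above (doc 533, term "a").
        freq = 22.0                          # termFreq of "a" in the field
        doc_freq, max_docs = 37942, 44218    # used for the idf computation
        query_norm = 0.045180224
        field_norm = 0.0390625
        coord = 1.0 / 3.0                    # 1 of 3 query clauses matched

        # ClassicSimilarity: tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1))
        tf = math.sqrt(freq)                              # ~4.690416
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # ~1.153047
        query_weight = idf * query_norm                   # ~0.05209492
        field_weight = tf * idf * field_norm              # ~0.21126054
        score = query_weight * field_weight * coord
        print(round(score, 10))                           # ~0.0036685336, as shown above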
    
    Abstract
    Traditional information retrieval (IR) systems show significant limitations in returning relevant documents that satisfy the user's information needs. In particular, answering geographic and temporal queries is not straightforward, because the available geographic and temporal information is often unstructured. In this article, we propose a geotemporal search approach that models and exploits geographic and temporal query-context evidence, which reflects the implicit, multi-varying geographic and temporal intents behind the query. The geographic and temporal query contexts are modeled by extracting and ranking geographic and temporal keywords found in pseudo-relevance feedback (PRF) documents for a given query. Our geotemporal search approach exploits the geographic and temporal query contexts separately in a probabilistic ranking model and jointly in a proximity ranking model. Our hypothesis is that geographic and temporal expressions tend to co-occur within a document, and that the closer they are in the document, the more relevant the document is. The geographic, temporal, and proximity scores are then combined according to a linear combination formula. An extensive experimental evaluation conducted on a portion of the New York Times news collection and the TREC 2004 robust retrieval track collection shows that our geotemporal approach significantly outperforms a well-known baseline search and the best-known geotemporal search approaches in the domain. Finally, an in-depth analysis shows a positive correlation between geographic and temporal query sensitivity and retrieval performance. We also find that geotemporal distance generally has a positive impact on retrieval performance.
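    The abstract describes fusing geographic, temporal, and proximity evidence with a linear combination, where proximity rewards geographic and temporal expressions that co-occur closely in a document. A minimal sketch of such a fusion, with hypothetical weights and scoring functions rather than the paper's actual formulas:

        def proximity_score(geo_positions, time_positions):
            # Illustrative proximity evidence: decays with the smallest token
            # distance between any geographic and any temporal mention.
            if not geo_positions or not time_positions:
                return 0.0
            min_dist = min(abs(g - t) for g in geo_positions for t in time_positions)
            return 1.0 / (1.0 + min_dist)

        def geotemporal_score(geo_score, time_score, prox_score,
                              alpha=0.4, beta=0.4, gamma=0.2):
            # Linear combination of the three evidence scores; the weights
            # are placeholders, not values from the paper.
            return alpha * geo_score + beta * time_score + gamma * prox_score

        # Example: geographic mention at token 12, temporal mention at token 15.
        print(geotemporal_score(0.7, 0.5, proximity_score([12], [15])))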
    Type
    a
  2. Ayadi, H.; Torjmen-Khemakhem, M.; Daoud, M.; Huang, J.X.; Jemaa, M.B.: Mining correlations between medically dependent features and image retrieval models for query classification (2017) 0.00
    0.002212209 = product of:
      0.0066366266 = sum of:
        0.0066366266 = weight(_text_:a in 3607) [ClassicSimilarity], result of:
          0.0066366266 = score(doc=3607,freq=8.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.12739488 = fieldWeight in 3607, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3607)
      0.33333334 = coord(1/3)
    
    Abstract
    The abundance of medical resources has encouraged the development of systems that allow efficient searching of large medical image data sets. State-of-the-art image retrieval models fall into three categories: content-based (visual) models, textual models, and combined models. Content-based models use visual features to answer image queries, textual image retrieval models use word matching to answer textual queries, and combined image retrieval models use both textual and visual features to answer queries. Nevertheless, most previous work in this field has used the same image retrieval model regardless of the query type. In this article, we define a list of generic and specific medical query features and exploit them with an association rule mining technique to discover correlations between query features and image retrieval models. Based on these rules, we propose an associative classifier (NaiveClass) to find the most suitable retrieval model for a new textual query. We also propose a second associative classifier (SmartClass) to select the most appropriate default class for the query. Experiments are performed on Medical ImageCLEF queries from 2008 to 2012 to evaluate the impact of the proposed query features on classification performance. The results show that combining our proposed specific and generic query features is effective for query classification.
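    The associative classification step the abstract describes (match mined rules of the form "query features -> retrieval model", fall back to a default class otherwise) can be sketched roughly as follows; the feature names, rules, and confidences below are invented for illustration and are not the paper's NaiveClass/SmartClass implementation:

        from dataclasses import dataclass

        @dataclass
        class Rule:
            features: frozenset   # antecedent: medically dependent query features
            model: str            # consequent: "visual", "textual", or "combined"
            confidence: float

        def classify_query(query_features, rules, default_model="textual"):
            # Pick the consequent of the highest-confidence rule whose antecedent
            # is contained in the query's feature set; otherwise fall back to a
            # default class (the role the paper assigns to SmartClass).
            matching = [r for r in rules if r.features <= set(query_features)]
            if not matching:
                return default_model
            return max(matching, key=lambda r: r.confidence).model

        # Invented rules and query, for illustration only.
        rules = [
            Rule(frozenset({"modality_term", "anatomy_term"}), "combined", 0.82),
            Rule(frozenset({"image_dimension"}), "visual", 0.74),
        ]
        print(classify_query({"modality_term", "anatomy_term", "disease_term"}, rules))  # -> combined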
    Type
    a