Search (2 results, page 1 of 1)

  • author_ss:"Müller, H."
  • language_ss:"e"
  1. Seco de Herrera, A.G.; Schaer, R.; Müller, H.: Shangri-La : a medical case-based retrieval tool (2017) 0.00
    0.001821651 = product of:
      0.010929906 = sum of:
        0.010929906 = weight(_text_:in in 3924) [ClassicSimilarity], result of:
          0.010929906 = score(doc=3924,freq=12.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.18406484 = fieldWeight in 3924, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3924)
      0.16666667 = coord(1/6)
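    The explanation tree above is standard Lucene ClassicSimilarity (TF-IDF) explain output; the same formula underlies the score shown for the second result below. As a minimal, purely illustrative check (assuming Lucene's documented ClassicSimilarity formula; the snippet is not part of the indexed record), the displayed factors multiply out to the displayed score:

    import math

    # Factors copied from the explain tree for result 1 (doc 3924).
    idf = 1.3602545            # idf(docFreq=30841, maxDocs=44218)
    query_norm = 0.043654136
    freq = 12.0
    field_norm = 0.0390625
    coord = 1.0 / 6.0          # only 1 of 6 query clauses matched

    tf = math.sqrt(freq)                    # ClassicSimilarity tf = sqrt(termFreq)
    query_weight = idf * query_norm         # 0.059380736 = queryWeight
    field_weight = tf * idf * field_norm    # 0.18406484  = fieldWeight
    score = query_weight * field_weight * coord

    print(round(score, 9))                  # ~0.001821651, the score shown above
    # idf itself follows 1 + ln(maxDocs / (docFreq + 1)):
    print(1 + math.log(44218 / (30841 + 1)))  # ~1.3602545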
    
    Abstract
    Large amounts of medical visual data are produced in hospitals daily and made available continuously via publications in the scientific literature, representing medical knowledge. However, it is not always easy to find the desired information, and in clinical routine the time to fulfil an information need is often very limited. Information retrieval systems are a useful tool for providing access to the documents and images in the biomedical literature that relate to the information needs of medical professionals. Shangri-La is a medical retrieval system that can potentially help clinicians make decisions on difficult cases. It retrieves articles from the biomedical literature when queried with a case description and attached images. The system is based on a multimodal retrieval approach with a focus on integrating visual information connected to text. The approach includes a query-adaptive multimodal fusion criterion that analyses whether visual features are suitable to be fused with text for retrieval. Furthermore, image modality information is integrated in the retrieval step. The approach is evaluated using the ImageCLEFmed 2013 medical retrieval benchmark and can thus be compared to other approaches. Results show that the final approach outperforms the best multimodal approach submitted to ImageCLEFmed 2013.
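    The abstract describes the query-adaptive multimodal fusion criterion only at a high level. As a rough, hypothetical sketch of what such a late-fusion criterion could look like (the reliability estimate, threshold, weighting scheme and all names below are assumptions for illustration, not the authors' actual method):

    def fuse_scores(text_scores, visual_scores, visual_reliability,
                    threshold=0.5, alpha=0.7):
        # Illustrative query-adaptive late fusion (NOT the Shangri-La criterion).
        # text_scores / visual_scores: dicts mapping document id -> retrieval score.
        # visual_reliability: per-query estimate of how informative the visual
        # features are; below the threshold, the visual run is ignored entirely.
        if visual_reliability < threshold:
            return dict(text_scores)        # fall back to text-only retrieval
        fused = {}
        for doc_id in set(text_scores) | set(visual_scores):
            fused[doc_id] = (alpha * text_scores.get(doc_id, 0.0)
                             + (1 - alpha) * visual_scores.get(doc_id, 0.0))
        return fused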
    Footnote
    Contribution to a special issue on biomedical information retrieval.
  2. Müller, W.; Marchand-Maillet, S.; Müller, H.: Evaluating image browsers using structured annotation (2001) 0.00
    0.0014724231 = product of:
      0.008834538 = sum of:
        0.008834538 = weight(_text_:in in 6535) [ClassicSimilarity], result of:
          0.008834538 = score(doc=6535,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.14877784 = fieldWeight in 6535, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6535)
      0.16666667 = coord(1/6)
    
    Abstract
    In this article we address the problem of benchmarking image browsers. Image browsers are systems that help the user find an image from scratch, as opposed to query by example (QBE), where an example image is needed. The existence of different search paradigms makes it difficult to compare image browsers. Currently, the only admissible way of evaluation is to conduct large-scale user studies, which makes it difficult to use such an evaluation as a tool for improving browsing systems. As a solution, we propose an automatic image browser benchmark that uses structured text annotation of the image collection to simulate the user's needs. We apply such a benchmark to an example system.
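    The proposed benchmark is described only in outline here. As a rough, hypothetical sketch of the kind of simulation it suggests (the annotation format, the step-counting metric and all helper names are assumptions for illustration, not the authors' protocol):

    import random

    def simulated_browsing_steps(collection, target_id, choose_next, max_steps=100):
        # Count how many browsing iterations a simulated user needs to reach a
        # target image, given a browser strategy `choose_next`.
        # collection: dict image_id -> set of annotation keywords (the structured
        # text annotation standing in for the user's information need).
        target_keywords = collection[target_id]
        shown = []
        for step in range(1, max_steps + 1):
            candidate = choose_next(target_keywords, shown)
            shown.append(candidate)
            if candidate == target_id:
                return step                 # fewer steps = better browser
        return max_steps                    # target not reached within the budget

    # Baseline "browser" that simply shows random unseen images.
    def random_browser(collection):
        ids = list(collection)
        def choose(_keywords, shown):
            return random.choice([i for i in ids if i not in shown])
        return choose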