Search (2 results, page 1 of 1)

  • author_ss:"Antani, S."
  1. Apostolova, E.; You, D.; Xue, Z.; Antani, S.; Demner-Fushman, D.; Thoma, G.R.: Image retrieval from scientific publications : text and image content processing to separate multipanel figures (2013) 0.01
    0.005000397 = product of:
      0.03500278 = sum of:
        0.0050448296 = weight(_text_:information in 740) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=740,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 740, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=740)
        0.029957948 = weight(_text_:retrieval in 740) [ClassicSimilarity], result of:
          0.029957948 = score(doc=740,freq=8.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.33420905 = fieldWeight in 740, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=740)
      0.14285715 = coord(2/14)
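    As a quick check, the explain tree above is plain TF-IDF arithmetic from Lucene's ClassicSimilarity: each matching term contributes tf * idf * fieldNorm * (idf * queryNorm), the contributions are summed, and the sum is multiplied by the coordination factor because only 2 of the 14 query clauses matched. A minimal Python sketch using the constants shown above (the function and variable names are mine, not Lucene's):
      import math

      QUERY_NORM = 0.029633347   # queryNorm from the explain output
      FIELD_NORM = 0.0390625     # fieldNorm(doc=740)

      def term_score(freq, idf):
          tf = math.sqrt(freq)                    # tf(freq) = sqrt(termFreq)
          query_weight = idf * QUERY_NORM         # idf * queryNorm
          field_weight = tf * idf * FIELD_NORM    # tf * idf * fieldNorm
          return query_weight * field_weight

      information = term_score(freq=2.0, idf=1.7554779)  # ~0.0050448
      retrieval = term_score(freq=8.0, idf=3.024915)     # ~0.0299579
      print((information + retrieval) * 2 / 14)          # ~0.0050004, matching the 0.005000397 above up to rounding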
    
    Abstract
    Images contained in scientific publications are widely considered useful for educational and research purposes, and their accurate indexing is critical for efficient and effective retrieval. Such image retrieval is complicated by the fact that figures in the scientific literature often combine multiple individual subfigures (panels). Multipanel figures are in fact the predominant pattern in certain types of scientific publications. The goal of this work is to automatically segment multipanel figures, a necessary step for automatic semantic indexing and in the development of image retrieval systems targeting the scientific literature. We have developed a method that uses the image content as well as the associated figure caption to: (1) automatically detect panel boundaries; (2) detect panel labels in the images and convert them to text; and (3) detect the labels and textual descriptions of each panel within the captions. Our approach combines the output of image-content and text-based processing steps to split the multipanel figures into individual subfigures and assign to each subfigure its corresponding section of the caption. The developed system achieved precision of 81% and recall of 73% on the task of automatic segmentation of multipanel figures.
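    The caption-side step described above (detecting panel labels and their textual descriptions within the caption) can be illustrated with a small, purely hypothetical sketch; split_caption, its regular expression, and the sample caption are illustrations only, not the authors' system, which also draws on the image content:
      import re

      def split_caption(caption):
          """Return {panel label: description} for captions of the form '(A) ... (B) ...'."""
          parts = re.split(r"\(([A-Za-z])\)", caption)   # text, label, text, label, ...
          labels, texts = parts[1::2], parts[2::2]
          return {label: text.strip() for label, text in zip(labels, texts)}

      caption = "(A) Axial CT of the chest. (B) Coronal reconstruction of the same study."
      print(split_caption(caption))
      # {'A': 'Axial CT of the chest.', 'B': 'Coronal reconstruction of the same study.'}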
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.5, pp.893-908
  2. Zou, J.; Thoma, G.; Antani, S.: Unified deep neural network for segmentation and labeling of multipanel biomedical figures (2020) 0.00
    6.241359E-4 = product of:
      0.008737902 = sum of:
        0.008737902 = weight(_text_:information in 10) [ClassicSimilarity], result of:
          0.008737902 = score(doc=10,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.16796975 = fieldWeight in 10, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=10)
      0.071428575 = coord(1/14)
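    Here only the "information" clause matches (with freq = 6), so a single term contribution is scaled by coord(1/14); repeating the arithmetic from the sketch under hit no. 1:
      import math

      # tf * idf * fieldNorm, times idf * queryNorm, times coord(1/14)
      score = (math.sqrt(6.0) * 1.7554779 * 0.0390625) * (1.7554779 * 0.029633347) / 14
      print(score)   # ~6.24e-4, matching the 6.241359E-4 shown above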
    
    Abstract
    Recent efforts in biomedical visual question answering (VQA) research rely on combined information gathered from the image content and the surrounding text supporting the figure. Biomedical journals are a rich source of information for such multimodal content indexing. For multipanel figures in these journals, it is critical to develop automatic figure panel splitting and label recognition algorithms that associate individual panels with text metadata in the figure caption and the body of the article. Challenges in this task include large variations in figure panel layout, label location, size, and contrast with the background. In this work, we propose a deep convolutional neural network that splits the panels and recognizes the panel labels in a single step. Visual features are extracted from several layers at various depths of the backbone neural network and organized to form a feature pyramid. These features are fed into classification and regression networks to generate candidate panels and their labels. The candidates are merged into the final panel segmentation result through a beam search algorithm. We evaluated the proposed algorithm on the ImageCLEF data set and achieved better performance than the results reported in the literature. To investigate the proposed algorithm thoroughly, we also collected and annotated our own data set of 10,642 figures. The experiments, with training on 9,642 figures and evaluation on the remaining 1,000 figures, show that panel splitting and panel label recognition benefit each other when combined.
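    The "feature pyramid" mentioned above is a standard construction: feature maps from several backbone depths are projected to a common channel width, deeper (coarser) maps are added into shallower ones top-down, and the merged maps are smoothed before feeding the classification and regression heads. A generic PyTorch sketch follows; TinyFeaturePyramid, the channel widths, and the dummy inputs are assumptions for illustration, not the paper's actual network:
      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class TinyFeaturePyramid(nn.Module):
          def __init__(self, in_channels=(256, 512, 1024), out_channels=256):
              super().__init__()
              # 1x1 lateral convolutions project each backbone level to a common width.
              self.lateral = nn.ModuleList(nn.Conv2d(c, out_channels, 1) for c in in_channels)
              # 3x3 convolutions smooth the merged maps.
              self.smooth = nn.ModuleList(nn.Conv2d(out_channels, out_channels, 3, padding=1)
                                          for _ in in_channels)

          def forward(self, feats):
              # feats: backbone maps ordered shallow -> deep.
              laterals = [lat(f) for lat, f in zip(self.lateral, feats)]
              # Top-down pathway: upsample each deeper map and add it to the next shallower one.
              for i in range(len(laterals) - 1, 0, -1):
                  laterals[i - 1] = laterals[i - 1] + F.interpolate(
                      laterals[i], size=laterals[i - 1].shape[-2:], mode="nearest")
              return [s(x) for s, x in zip(self.smooth, laterals)]

      # Dummy feature maps for a single figure image.
      feats = [torch.randn(1, 256, 64, 64),
               torch.randn(1, 512, 32, 32),
               torch.randn(1, 1024, 16, 16)]
      pyramid = TinyFeaturePyramid()(feats)   # three 256-channel maps for the candidate heads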
    Source
    Journal of the Association for Information Science and Technology. 71(2020) no.11, pp.1327-1340