Search (6 results, page 1 of 1)

  • author_ss:"Rowe, N.C."
  • year_i:[1990 TO 2000}
  1. Rowe, N.C.; Guglielma, E.J.: Exploiting captions in retrieval of multimedia data (1993) 0.05
    0.051048357 = product of:
      0.102096714 = sum of:
        0.07242205 = weight(_text_:data in 5815) [ClassicSimilarity], result of:
          0.07242205 = score(doc=5815,freq=8.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.48910472 = fieldWeight in 5815, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5815)
        0.029674664 = product of:
          0.05934933 = sum of:
            0.05934933 = weight(_text_:processing in 5815) [ClassicSimilarity], result of:
              0.05934933 = score(doc=5815,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.3130829 = fieldWeight in 5815, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5815)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
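The explain tree above can be reproduced from the printed values. A minimal sketch in Python, assuming Lucene's ClassicSimilarity formulas: idf = 1 + ln(maxDocs/(docFreq+1)), tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, with nested clauses scaled by their coord factors:

```python
# Reproduces the ClassicSimilarity explain tree for result 1 (doc 5815)
# using the docFreq, queryNorm and fieldNorm values printed above.
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity inverse document frequency.
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    i = idf(doc_freq, max_docs)
    query_weight = i * query_norm                   # idf * queryNorm
    field_weight = math.sqrt(freq) * i * field_norm  # tf * idf * fieldNorm
    return query_weight * field_weight

QUERY_NORM = 0.046827413
FIELD_NORM = 0.0546875  # fieldNorm(doc=5815)

w_data = term_score(8.0, 5088, 44218, QUERY_NORM, FIELD_NORM)        # ≈ 0.07242205
w_processing = term_score(2.0, 2097, 44218, QUERY_NORM, FIELD_NORM)  # ≈ 0.05934933

# "processing" sits in a nested clause scaled by coord(1/2); the outer
# sum is then scaled by coord(2/4) = 0.5.
score = (w_data + w_processing * 0.5) * 0.5
print(score)  # ≈ 0.051048357, the final score of result 1
```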
    
    Abstract
    Descriptive natural language captions can help organize multimedia data. Describes the MARIE system, which interprets English queries directing the retrieval of media objects. It exploits previously interpreted and indexed English captions for the media objects. Filtering queries against descriptively complex captions before retrieving data can improve retrieval speed: media data are often bulky and time-consuming to retrieve and difficult to perform content analysis upon, so even small improvements to query precision can pay off. Handling the English of captions and queries about them does not require deep understanding, just a comprehensive type hierarchy for caption concepts. An important innovation of MARIE is supercaptions, which describe sets of captions and can minimize caption redundancy.
    Source
    Information processing and management. 29(1993) no.4, S.453-461
  2. Rowe, N.C.: Inferring depictions in natural-language captions for efficient access to picture data (1994) 0.05
    0.046197 = product of:
      0.092394 = sum of:
        0.06271934 = weight(_text_:data in 7296) [ClassicSimilarity], result of:
          0.06271934 = score(doc=7296,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.42357713 = fieldWeight in 7296, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7296)
        0.029674664 = product of:
          0.05934933 = sum of:
            0.05934933 = weight(_text_:processing in 7296) [ClassicSimilarity], result of:
              0.05934933 = score(doc=7296,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.3130829 = fieldWeight in 7296, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7296)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Multimedia data can require significant examination time to find desired features ('content analysis'). An alternative is using natural-language captions to describe the data, and matching captions to English queries. But it is hard to include everything in the caption of a complicated datum, so significant content analysis may still seem required. We discuss linguistic clues in captions, both syntactic and semantic, that can simplify or eliminate content analysis. We introduce the notion of content depiction and rules for depiction inference. Our approach is implemented in an expert system which demonstrated significant increases in recall in experiments.
    Source
    Information processing and management. 30(1994) no.3, S.379-388
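The depiction-inference idea in the abstract above can be illustrated with a couple of toy rules. Both rules, and the phrases they key on, are invented for illustration and are not the paper's actual rule set:

```python
# Two toy depiction-inference rules: from the wording of a caption,
# guess whether a mentioned noun is actually shown in the picture.
import re

def depicted(caption, noun):
    """Guess whether `noun` is shown in the picture `caption` describes."""
    caption = caption.lower()
    # Semantic clue: the vantage point in "taken from X" sits behind the
    # camera, so X is usually not depicted.
    if re.search(rf"taken from (the |a )?{noun}\b", caption):
        return False
    # Syntactic clue: a noun mentioned directly in the caption body is
    # normally depicted.
    return bool(re.search(rf"\b{noun}\b", caption))

caption = "Aircraft on the runway, taken from the tower"
```

Real rules would operate on parsed logical forms rather than raw strings, but the pattern is the same: cheap linguistic tests that avoid image content analysis.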
  3. Rowe, N.C.: Using local optimality criteria for efficient information retrieval with redundant information filters (1996) 0.04
    0.03764897 = product of:
      0.07529794 = sum of:
        0.04138403 = weight(_text_:data in 5594) [ClassicSimilarity], result of:
          0.04138403 = score(doc=5594,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2794884 = fieldWeight in 5594, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=5594)
        0.033913903 = product of:
          0.067827806 = sum of:
            0.067827806 = weight(_text_:processing in 5594) [ClassicSimilarity], result of:
              0.067827806 = score(doc=5594,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.35780904 = fieldWeight in 5594, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5594)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Discusses information filters, particularly redundant information filters, for reducing the number of possibilities before retrieval. Develops simple polynomial-time local criteria for optimal execution plans and shows that most forms of concurrency are suboptimal with information filters. The local optimality criteria find the global optimum with 15 or fewer filters. Applies these ideas to the retrieval of captioned data using natural language understanding, in which the natural language processing may cause a bottleneck if not well implemented.
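A sketch of the underlying optimization (this is the classic rank rule for ordering independent filters, not necessarily the paper's exact criteria): if filter i costs c_i per candidate and passes a fraction p_i of candidates, a sequential plan costs c_1 + p_1*c_2 + p_1*p_2*c_3 + ... per candidate, which is minimized by sorting on c_i / (1 - p_i), i.e. cheapest cost per eliminated candidate first:

```python
# Ordering independent information filters by cost per eliminated
# candidate (assumes each pass probability p_i < 1).

def expected_cost(plan):
    """plan: list of (cost, pass_probability) in execution order."""
    total, surviving = 0.0, 1.0
    for cost, p in plan:
        total += surviving * cost  # only survivors reach this filter
        surviving *= p
    return total

def optimal_order(filters):
    # Rank rule: sort by cost per eliminated candidate.
    return sorted(filters, key=lambda f: f[0] / (1.0 - f[1]))

filters = [(5.0, 0.9), (1.0, 0.5), (3.0, 0.2)]
plan = optimal_order(filters)  # → [(1.0, 0.5), (3.0, 0.2), (5.0, 0.9)]
```

Note that any concurrent execution of two filters pays both costs on every candidate, which is why concurrency is usually suboptimal here.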
  4. Guglielmo, E.J.; Rowe, N.C.: Natural-language retrieval of images based on descriptive captions (1996) 0.03
    0.03466491 = product of:
      0.06932982 = sum of:
        0.043894395 = weight(_text_:data in 6624) [ClassicSimilarity], result of:
          0.043894395 = score(doc=6624,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.29644224 = fieldWeight in 6624, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=6624)
        0.025435425 = product of:
          0.05087085 = sum of:
            0.05087085 = weight(_text_:processing in 6624) [ClassicSimilarity], result of:
              0.05087085 = score(doc=6624,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.26835677 = fieldWeight in 6624, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6624)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Describes a prototype intelligent information retrieval system that uses natural-language understanding to efficiently locate captioned data. Multimedia data generally require captions to explain their features and significance. Such descriptive captions often rely on long nominal compounds (strings of consecutive nouns), which create problems of ambiguous word sense. Presents a system in which captions and user queries are parsed and interpreted to produce a logical form, using a detailed theory of the meaning of nominal compounds. A fine-grain match can then compare the logical form of the query to the logical form of each caption. To improve system efficiency, the system performs a coarse-grain match with index files, using nouns and verbs extracted from the query. Experiments with randomly selected queries and captions from an existing image library show an increase of 30% in precision and 50% in recall over the keyphrase approach currently used. Processing times have a median of 7 seconds, as compared to 8 minutes for the existing system.
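The two-stage match described above can be sketched as follows (illustrative data and names; the paper's actual logical forms are far richer than the predicate tuples used here). The coarse grain uses an inverted index on nouns and verbs to narrow the candidates; the fine grain requires the query's logical form to appear in the caption's:

```python
# Coarse-grain index filter followed by a fine-grain logical-form match.
from collections import defaultdict

# Toy "logical forms": each caption is a set of predicate tuples.
captions = {
    1: {("aircraft",), ("runway",), ("on", "aircraft", "runway")},
    2: {("aircraft",), ("carrier",), ("on", "aircraft", "carrier")},
}

# Coarse-grain index: map each word in a predicate to the captions using it.
index = defaultdict(set)
for doc_id, preds in captions.items():
    for pred in preds:
        for word in pred:
            index[word].add(doc_id)

def retrieve(query_preds):
    words = {w for pred in query_preds for w in pred}
    # Coarse grain: candidates must contain every query word.
    candidates = set.intersection(*(index[w] for w in words)) if words else set()
    # Fine grain: the query's logical form must be subsumed by the caption's.
    return {d for d in candidates if query_preds <= captions[d]}

hits = retrieve({("on", "aircraft", "runway")})  # → {1}
```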
  5. Rowe, N.C.; Frew, B.: Automatic caption localization for photographs on World Wide Web pages (1998) 0.01
    0.014837332 = product of:
      0.05934933 = sum of:
        0.05934933 = product of:
          0.11869866 = sum of:
            0.11869866 = weight(_text_:processing in 2689) [ClassicSimilarity], result of:
              0.11869866 = score(doc=2689,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.6261658 = fieldWeight in 2689, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2689)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 34(1998) no.1, S.95-107
  6. Rowe, N.C.: Precise and efficient retrieval of captioned images : the MARIE project (1999) 0.01
    0.0063588563 = product of:
      0.025435425 = sum of:
        0.025435425 = product of:
          0.05087085 = sum of:
            0.05087085 = weight(_text_:processing in 847) [ClassicSimilarity], result of:
              0.05087085 = score(doc=847,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.26835677 = fieldWeight in 847, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046875 = fieldNorm(doc=847)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The MARIE project has explored knowledge-based information retrieval of captioned images of the kind found in picture libraries and on the Internet. It exploits the idea that images are easier to understand with context, especially descriptive text near them, but it also does image analysis. The MARIE approach has five parts: (1) find the images and captions; (2) parse and interpret the captions; (3) segment the images into regions of homogeneous characteristics and classify them; (4) correlate caption interpretation with image interpretation using the idea of focus; and (5) optimize query execution at run time. MARIE emphasizes domain-independent methods for portability at the expense of some performance, although some domain specification is still required. Experiments show MARIE prototypes are more accurate than simpler methods, although the task is very challenging and more work is needed. Its processing is illustrated in detail on part of an Internet World Wide Web page.
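The five stages listed above chain into a pipeline, sketched here as a skeleton (all function names and data are hypothetical stubs; each real stage is a substantial subsystem):

```python
# Skeleton of the five MARIE stages, wired end to end.
def find_images_and_captions(page):                 # (1) locate image + caption
    return [("photo1.jpg", "An aircraft parked on the runway.")]

def interpret_caption(caption):                     # (2) parse and interpret
    return {"caption": caption, "terms": caption.lower().rstrip(".").split()}

def segment_and_classify(image_file):               # (3) homogeneous regions
    return [{"region": "sky"}, {"region": "aircraft"}]

def correlate(interp, regions):                     # (4) focus: tie caption to regions
    depicted = [r for r in regions if r["region"] in interp["terms"]]
    return {"interp": interp, "depicted": depicted}

def run_query(term, records):                       # (5) query execution
    return [r for r in records if term in r["interp"]["terms"]]

records = [correlate(interpret_caption(c), segment_and_classify(f))
           for f, c in find_images_and_captions(None)]
```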