Search (3 results, page 1 of 1)

  • author_ss:"Chang, S.-F."
  1. Chang, S.-F.; Smith, J.R.; Meng, J.: Efficient techniques for feature-based image / video access and manipulations (1997) 0.03
    0.027656192 = product of:
      0.06914048 = sum of:
        0.009535614 = weight(_text_:a in 756) [ClassicSimilarity], result of:
          0.009535614 = score(doc=756,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.17835285 = fieldWeight in 756, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=756)
        0.05960487 = sum of:
          0.015628971 = weight(_text_:information in 756) [ClassicSimilarity], result of:
            0.015628971 = score(doc=756,freq=4.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.1920054 = fieldWeight in 756, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.0546875 = fieldNorm(doc=756)
          0.043975897 = weight(_text_:22 in 756) [ClassicSimilarity], result of:
            0.043975897 = score(doc=756,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.2708308 = fieldWeight in 756, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=756)
      0.4 = coord(2/5)
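The explain tree above can be reproduced by hand. In Lucene's ClassicSimilarity, tf is the square root of the raw term frequency, idf is 1 + ln(maxDocs / (docFreq + 1)), the field weight is tf × idf × fieldNorm, and multiplying by the query weight gives the per-term score. A minimal sketch using the values printed in the explain output for the term "a" in doc 756:

```python
import math

# Values copied from the explain tree for term "a" in doc 756
freq = 8.0
doc_freq, max_docs = 37942, 44218
field_norm = 0.0546875        # fieldNorm(doc=756)
query_weight = 0.053464882    # queryWeight from the tree

idf = 1 + math.log(max_docs / (doc_freq + 1))  # ~1.153047, as shown above
tf = math.sqrt(freq)                           # ClassicSimilarity: tf = sqrt(freq) ~ 2.828427
field_weight = tf * idf * field_norm           # ~0.17835285 in the explain output
score = field_weight * query_weight            # ~0.009535614 in the explain output

print(round(field_weight, 6), round(score, 8))
```

The same arithmetic applies to every weight(...) node in the trees below; only freq, idf, and fieldNorm change per term and document.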
    
    Abstract
    Describes two research projects aimed at studying the parallel issues of image and video indexing, information retrieval and manipulation: VisualSEEK, a content-based image query system and a Java-based WWW application supporting localised colour and spatial similarity retrieval; and CVEPS (Compressed Video Editing and Parsing System), which supports video manipulation with indexing support of individual frames from VisualSEEK and a new hierarchical video browsing and indexing system. In both media forms, these systems address the problem of heterogeneous unconstrained collections.
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : Illinois University at Urbana-Champaign, Department of Library and Information Science
    Type
    a
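The localised colour similarity retrieval mentioned in the abstract above can be illustrated with a plain histogram-intersection sketch. This is an illustrative stand-in, not VisualSEEK's actual distance function, and all names below are hypothetical:

```python
def histogram_intersection(h1, h2):
    """Similarity of two normalized colour histograms (1.0 = identical).

    Illustrative stand-in for region-based colour matching; VisualSEEK's
    actual distance functions are defined in the cited paper.
    """
    if len(h1) != len(h2):
        raise ValueError("histograms must have the same number of bins")
    return sum(min(a, b) for a, b in zip(h1, h2))

# Two toy 4-bin colour histograms, each summing to 1.0
region_query = [0.5, 0.3, 0.1, 0.1]
region_db = [0.4, 0.4, 0.1, 0.1]
print(round(histogram_intersection(region_query, region_db), 6))  # → 0.9
```

Localisation comes from computing such a similarity per image region rather than over the whole frame, which is what makes spatial queries possible.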
  2. Jörgensen, C.; Jaimes, A.; Benitez, A.B.; Chang, S.-F.: A conceptual framework and empirical research for classifying visual descriptors (2001) 0.01
    0.0067507178 = product of:
      0.016876794 = sum of:
        0.01129502 = weight(_text_:a in 6532) [ClassicSimilarity], result of:
          0.01129502 = score(doc=6532,freq=22.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.21126054 = fieldWeight in 6532, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6532)
        0.0055817757 = product of:
          0.011163551 = sum of:
            0.011163551 = weight(_text_:information in 6532) [ClassicSimilarity], result of:
              0.011163551 = score(doc=6532,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.13714671 = fieldWeight in 6532, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6532)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This article presents exploratory research evaluating a conceptual structure for the description of visual content of images. The structure, which was developed from empirical research in several fields (e.g., Computer Science, Psychology, Information Studies, etc.), classifies visual attributes into a "Pyramid" containing four syntactic levels (type/technique, global distribution, local structure, composition), and six semantic levels (generic, specific, and abstract levels of both object and scene, respectively). Various experiments are presented, which address the Pyramid's ability to achieve several tasks: (1) classification of terms describing image attributes generated in a formal and an informal description task, (2) classification of terms that result from a structured approach to indexing, and (3) guidance in the indexing process. Several descriptions, generated by naive users and indexers, are used in experiments that include two image collections: a random Web sample, and a set of news images. To test descriptions generated in a structured setting, an Image Indexing Template (developed independently over several years of this project by one of the authors) was also used. The experiments performed suggest that the Pyramid is conceptually robust (i.e., can accommodate a full range of attributes), and that it can be used to organize visual content for retrieval, to guide the indexing process, and to classify descriptions obtained manually and automatically.
    Source
    Journal of the American Society for Information Science and Technology. 52(2001) no.11, S.938-947
    Type
    a
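The Pyramid described in the abstract above groups descriptors into four syntactic and six semantic levels. A small sketch encoding the ten levels as a lookup; the level names come from the abstract, but the dictionary layout itself is illustrative, not part of the published framework:

```python
# The ten Pyramid levels named in the abstract, encoded as a simple lookup.
PYRAMID_LEVELS = {
    "syntactic": [
        "type/technique",
        "global distribution",
        "local structure",
        "composition",
    ],
    "semantic": [
        "generic object", "specific object", "abstract object",
        "generic scene", "specific scene", "abstract scene",
    ],
}

def classify(level_name):
    """Return whether a descriptor level is syntactic or semantic."""
    for kind, levels in PYRAMID_LEVELS.items():
        if level_name in levels:
            return kind
    raise KeyError(level_name)

print(classify("composition"), classify("generic scene"))  # syntactic semantic
```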
  3. Benitez, A.B.; Zhong, D.; Chang, S.-F.: Enabling MPEG-7 structural and semantic descriptions in retrieval applications (2007) 0.01
    0.0055105956 = product of:
      0.013776489 = sum of:
        0.007078358 = weight(_text_:a in 518) [ClassicSimilarity], result of:
          0.007078358 = score(doc=518,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.13239266 = fieldWeight in 518, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=518)
        0.0066981306 = product of:
          0.013396261 = sum of:
            0.013396261 = weight(_text_:information in 518) [ClassicSimilarity], result of:
              0.013396261 = score(doc=518,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16457605 = fieldWeight in 518, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=518)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The MPEG-7 standard supports the description of both the structure and the semantics of multimedia; however, the generation and consumption of MPEG-7 structural and semantic descriptions are outside the scope of the standard. This article presents two research prototype systems that demonstrate the generation and consumption of MPEG-7 structural and semantic descriptions in retrieval applications. The active system for MPEG-4 video object simulation (AMOS) is a video object segmentation and retrieval system that segments, tracks, and models objects in videos (e.g., person, car) as a set of regions with corresponding visual features and spatiotemporal relations. The region-based model provides an effective base for similarity retrieval of video objects. The second system, the Intelligent Multimedia Knowledge Application (IMKA), uses the novel MediaNet framework for representing semantic and perceptual information about the world using multimedia. MediaNet knowledge bases can be constructed automatically from annotated collections of multimedia data and used to enhance the retrieval of multimedia.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.9, S.1377-1380
    Type
    a
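The abstract above describes AMOS as modelling a video object (e.g., person, car) as a set of regions with visual features and spatial extents. A hypothetical sketch of such a region-based model; the class and field names here are illustrative, not the published data model:

```python
from dataclasses import dataclass, field

@dataclass
class Region:
    """One segmented region of a video object (illustrative fields only)."""
    region_id: int
    color_hist: list   # per-region visual feature, e.g. a colour histogram
    bbox: tuple        # (x, y, w, h) spatial extent within a frame

@dataclass
class VideoObject:
    """A video object as a set of regions, per the AMOS description above."""
    label: str                          # e.g. "person", "car"
    regions: list = field(default_factory=list)

    def add_region(self, region):
        self.regions.append(region)

obj = VideoObject(label="car")
obj.add_region(Region(1, [0.2, 0.8], (10, 20, 40, 30)))
print(obj.label, len(obj.regions))  # car 1
```

Similarity retrieval then compares objects region by region over these features, which is what makes the region-based model an effective base for matching.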