Search (7 results, page 1 of 1)

  • Filter: author_ss:"Turner, J.M."
  1. Turner, J.M.: Moving image indexing (2009) 0.02
    0.024130303 = product of:
      0.07239091 = sum of:
        0.07239091 = product of:
          0.14478181 = sum of:
            0.14478181 = weight(_text_:indexing in 3852) [ClassicSimilarity], result of:
              0.14478181 = score(doc=3852,freq=18.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.76126254 = fieldWeight in 3852, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3852)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
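The explain tree above is Lucene's ClassicSimilarity (TF-IDF) score breakdown for the top result. As a sketch, the listed factors can be multiplied back together to reproduce the final score; all constants below are copied from the listing, and the two coord factors are read as the matched-clause fractions shown:

```python
import math

# Reproducing the first result's score from the ClassicSimilarity
# explain tree. Constants are copied from the listing; single- vs
# double-precision rounding may differ in the last digits.
freq = 18.0              # termFreq of "indexing" in doc 3852
idf = 3.8278677          # idf(docFreq=2614, maxDocs=44218)
query_norm = 0.049684696
field_norm = 0.046875    # length normalization of the matched field

tf = math.sqrt(freq)                       # 4.2426405
query_weight = idf * query_norm            # 0.19018644
field_weight = tf * idf * field_norm       # 0.76126254
term_weight = query_weight * field_weight  # 0.14478181

# coord(1/2) and coord(1/3): fractions of query clauses that matched
score = term_weight * 0.5 * (1.0 / 3.0)
print(round(score, 7))  # → 0.0241303
```

The same structure explains every other result in the listing: only freq (and hence tf), field_norm, and the final rounding differ from entry to entry.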
    
    Abstract
    Several types of moving images are available now, and each has its own indexing needs. In addition, a number of levels of indexing are necessary, depending on the type of image, the type of collection, and the needs of users. In information science, work on providing indexing to the various levels has largely to do with finding ways to recycle text created for other purposes in the processes of production, in order to point to individual shots, sequences, scenes, or chapters. Such text recycling needs to happen automatically, through the application of algorithms developed for this purpose, since indexing at the various levels by humans is prohibitively expensive in most circumstances. Multilingual indexing is an issue in the context of retrieving images in a networked environment. Another issue is access to moving images using indexing approaches other than subject indexing. Tagging of images by users is prevalent in the networked environment, and a discussion of its usefulness is included. Finally, there is some speculation on what the future of moving image indexing might bring.
  2. Turner, J.M.; Mathieu, S.: Audio description text for indexing films (2007) 0.02
    0.018768014 = product of:
      0.05630404 = sum of:
        0.05630404 = product of:
          0.11260808 = sum of:
            0.11260808 = weight(_text_:indexing in 701) [ClassicSimilarity], result of:
              0.11260808 = score(doc=701,freq=8.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.5920931 = fieldWeight in 701, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=701)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Access to audiovisual materials should be as open and free as access to print-based materials. However, we have not yet achieved such a reality. Methods useful for organising print-based materials do not necessarily work well when applied to audiovisual and multimedia materials. In this project, we studied using audio description text and written descriptions to generate keywords for indexing moving images. We found that such sources are fruitful and helpful. In the second part of the study, we looked at the possibility of automatically translating keywords from audio description text into other languages to use them for indexing. Here again, the results are encouraging.
    Content
    Lecture delivered at: WORLD LIBRARY AND INFORMATION CONGRESS: 73RD IFLA GENERAL CONFERENCE AND COUNCIL, 19-23 August 2007, Durban, South Africa. - 157 - Classification and Indexing
  3. Turner, J.M.: Cross-language transfer of indexing concepts for storage and retrieval of moving images : preliminary results (1996) 0.02
    0.016253578 = product of:
      0.04876073 = sum of:
        0.04876073 = product of:
          0.09752146 = sum of:
            0.09752146 = weight(_text_:indexing in 7400) [ClassicSimilarity], result of:
              0.09752146 = score(doc=7400,freq=6.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.5127677 = fieldWeight in 7400, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7400)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    In previous research, participants who screened a videotape of stock footage from the National Film Board of Canada's stockshot collection were asked to assign terms in English that could be used for retrieval of each shot. The most popular terms were analyzed as potential indexing terms. In the current research, a French-language version of the research tapes was prepared using the same images, and the data collected were in French. The study compares the most popular terms identified in each of the two studies for each of the shots in order to determine the rate of correspondence between potential indexing terms in each language.
  4. Turner, J.M.: From ABC to http : the effervescent evolution of indexing for audiovisual materials (2010) 0.02
    0.016253578 = product of:
      0.04876073 = sum of:
        0.04876073 = product of:
          0.09752146 = sum of:
            0.09752146 = weight(_text_:indexing in 3570) [ClassicSimilarity], result of:
              0.09752146 = score(doc=3570,freq=6.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.5127677 = fieldWeight in 3570, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3570)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Indexing methods for audiovisual materials had not yet settled when the arrival of the World Wide Web upset any stability that existed in this area. New possibilities have now opened up for indexing digital audiovisual materials in a networked environment. This article traces some of the methods used for organizing collections of audiovisual materials, gives a general portrait of how various types of them are organized today, and, using indicators that have become manifest, speculates on some future developments in this area.
  5. Turner, J.M.; Colinet, E.: Using audio description for indexing moving images (2004) 0.01
    0.013931636 = product of:
      0.041794907 = sum of:
        0.041794907 = product of:
          0.083589815 = sum of:
            0.083589815 = weight(_text_:indexing in 3724) [ClassicSimilarity], result of:
              0.083589815 = score(doc=3724,freq=6.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.4395151 = fieldWeight in 3724, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3724)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper includes some of the results of a study that looks at three types of text for automatically deriving shot-level indexing to moving images. Audio description is a voice added to the sound track of moving pictures to provide information for the visually impaired. We analyse two one-hour parts of a television production broadcast as a mini-series in 1997. We compare our results with those of a previous study, which identifies some of the characteristics of audio description and the associated moving image. We found close correspondence among some aspects studied and much less correspondence for other aspects, but for reasons we are able to explain. In addition, in the process of conducting the current study we further developed our methodology and now feel that it is a mature method for analysing audio description text as a source for generating indexing to the associated moving image.
  6. Turner, J.M.: Comparing user-assigned terms with indexer-assigned terms for storage and retrieval of moving images : research results (1995) 0.01
    0.0080434345 = product of:
      0.024130303 = sum of:
        0.024130303 = product of:
          0.048260607 = sum of:
            0.048260607 = weight(_text_:indexing in 3831) [ClassicSimilarity], result of:
              0.048260607 = score(doc=3831,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.2537542 = fieldWeight in 3831, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3831)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Previous research on shot-level indexing of moving image documents identified the terms supplied most often by participants to describe a selection of shots from the National Film Board of Canada's stockshot collection. The most popular terms supplied by participants in the study were compared with the terms assigned by professional indexers for these shots in the source files. Records for some of the shots used in the original study came from the stockshot library's computer database, and the remaining records came from its older card file. Findings indicate agreement between the terms users think of when searching film and video shots and those indexers assign to them.
  7. Hudon, M.; Turner, J.M.; Devin, Y.: How many terms are enough? : stability and dynamism in vocabulary management for moving image collections (2000) 0.01
    0.0067028617 = product of:
      0.020108584 = sum of:
        0.020108584 = product of:
          0.04021717 = sum of:
            0.04021717 = weight(_text_:indexing in 117) [ClassicSimilarity], result of:
              0.04021717 = score(doc=117,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.21146181 = fieldWeight in 117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=117)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Most moving image collections have existed for less than a century, and as we enter the new millennium we observe that the organisation of these collections is still characterised by ad hoc practices. An important stream of research in this area focuses on high-level access to images using methods from library and information science, and using text to create information useful for retrieval. It has been established that common names for objects seen in the image are the key to retrieval in such collections. On a day-to-day basis, those responsible for collection management build indexing vocabularies, creating terms as necessary, and often structuring them loosely into a thesaurus. Discussions with moving image collection librarians have led us to believe that there may be an optimal number of common names a thesaurus for managing general collections of moving images should contain, and that the terms may even be the same from one thesaurus to the next. In this paper, we describe the methodology adopted for studying this question, and report preliminary results.