Search (45 results, page 1 of 3)

  • theme_ss:"Inhaltsanalyse"
  • year_i:[1990 TO 2000}
  1. Taylor, S.L.: Integrating natural language understanding with document structure analysis (1994) 0.02
    0.017406443 = product of:
      0.078329 = sum of:
        0.05872617 = weight(_text_:applications in 1794) [ClassicSimilarity], result of:
          0.05872617 = score(doc=1794,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.34048924 = fieldWeight in 1794, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1794)
        0.01960283 = weight(_text_:of in 1794) [ClassicSimilarity], result of:
          0.01960283 = score(doc=1794,freq=14.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.31997898 = fieldWeight in 1794, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1794)
      0.22222222 = coord(2/9)
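The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) debug output: each weight(_text_:term) node is the product of a queryWeight (idf × queryNorm) and a fieldWeight (√tf × idf × fieldNorm), and the document score multiplies the sum of the matching term weights by the coordination factor coord(2/9), i.e. 2 of 9 query clauses matched. As a minimal sketch (not the actual Lucene code), the numbers for result 1 (doc 1794) can be reproduced like this:

```python
import math

def classic_term_score(freq, idf, query_norm, field_norm):
    """One weight(_text_:term) node of a ClassicSimilarity explain tree:
    score = queryWeight * fieldWeight
          = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)
    """
    query_weight = idf * query_norm
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight

# Constants copied from the explain output for doc 1794 above.
QUERY_NORM = 0.03917671
FIELD_NORM = 0.0546875

w_applications = classic_term_score(2.0, 4.4025097, QUERY_NORM, FIELD_NORM)  # ~0.05872617
w_of = classic_term_score(14.0, 1.5637573, QUERY_NORM, FIELD_NORM)           # ~0.01960283

# coord(2/9): only 2 of the 9 query clauses matched this document.
doc_score = (2 / 9) * (w_applications + w_of)                                # ~0.017406443
```

The same recipe reproduces every other explain tree on this page; only freq, idf, and fieldNorm change from term to term and document to document.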
    
    Abstract
    Document understanding, the interpretation of a document from its image form, is a technology area which benefits greatly from the integration of natural language processing with image processing. Develops a prototype of an Intelligent Document Understanding System (IDUS) which employs several technologies: image processing, optical character recognition, document structure analysis and text understanding in a cooperative fashion. Discusses those areas of research during development of IDUS where it is found that the most benefit from the integration of natural language processing and image processing occurred: document structure analysis, OCR correction, and text analysis. Discusses 2 applications which are supported by IDUS: text retrieval and automatic generation of hypertext links
  2. Beghtol, C.: Stories : applications of narrative discourse analysis to issues in information storage and retrieval (1997) 0.02
    0.017406443 = product of:
      0.078329 = sum of:
        0.05872617 = weight(_text_:applications in 5844) [ClassicSimilarity], result of:
          0.05872617 = score(doc=5844,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.34048924 = fieldWeight in 5844, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5844)
        0.01960283 = weight(_text_:of in 5844) [ClassicSimilarity], result of:
          0.01960283 = score(doc=5844,freq=14.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.31997898 = fieldWeight in 5844, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5844)
      0.22222222 = coord(2/9)
    
    Abstract
    The arts, humanities, and social sciences commonly borrow concepts and methods from the sciences, but interdisciplinary borrowing seldom occurs in the opposite direction. Research on narrative discourse is relevant to problems of documentary storage and retrieval, for the arts and humanities in particular, but also for other broad areas of knowledge. This paper views the potential application of narrative discourse analysis to information storage and retrieval problems from 2 perspectives: 1) analysis and comparison of narrative documents in all disciplines may be simplified if fundamental categories that occur in narrative documents can be isolated; and 2) the possibility of subdividing the world of knowledge initially into narrative and non-narrative documents is explored with particular attention to Werlich's work on text types
  3. Krause, J.: Principles of content analysis for information retrieval systems : an overview (1996) 0.02
    0.016011083 = product of:
      0.07204988 = sum of:
        0.014818345 = weight(_text_:of in 5270) [ClassicSimilarity], result of:
          0.014818345 = score(doc=5270,freq=2.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.24188137 = fieldWeight in 5270, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.109375 = fieldNorm(doc=5270)
        0.057231534 = weight(_text_:systems in 5270) [ClassicSimilarity], result of:
          0.057231534 = score(doc=5270,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.47535738 = fieldWeight in 5270, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.109375 = fieldNorm(doc=5270)
      0.22222222 = coord(2/9)
    
  4. Jörgensen, C.: ¬The applicability of selected classification systems to image attributes (1996) 0.02
    0.015370399 = product of:
      0.069166794 = sum of:
        0.01960283 = weight(_text_:of in 5175) [ClassicSimilarity], result of:
          0.01960283 = score(doc=5175,freq=14.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.31997898 = fieldWeight in 5175, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5175)
        0.049563963 = weight(_text_:systems in 5175) [ClassicSimilarity], result of:
          0.049563963 = score(doc=5175,freq=6.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.41167158 = fieldWeight in 5175, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5175)
      0.22222222 = coord(2/9)
    
    Abstract
    Recent research investigated image attributes as reported by participants in describing, sorting, and searching tasks with images and defined 46 specific image attributes which were then organized into 12 major classes. Attributes were also grouped as being 'perceptual' (directly stimulated by visual percepts), 'interpretive' (requiring inference from visual percepts), and 'reactive' (cognitive and affective responses to the images). This research describes the coverage of two image indexing and classification systems and one general classification system in relation to the previous findings and analyzes the extent to which components of these systems are capable of describing the range of image attributes as revealed by the previous research
    Source
    Knowledge organization and change: Proceedings of the Fourth International ISKO Conference, 15-18 July 1996, Library of Congress, Washington, DC. Ed.: R. Green
  5. Beghtol, C.: ¬The classification of fiction : the development of a system based on theoretical principles (1994) 0.01
    0.013026112 = product of:
      0.058617502 = sum of:
        0.018148692 = weight(_text_:of in 3413) [ClassicSimilarity], result of:
          0.018148692 = score(doc=3413,freq=12.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.29624295 = fieldWeight in 3413, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3413)
        0.04046881 = weight(_text_:systems in 3413) [ClassicSimilarity], result of:
          0.04046881 = score(doc=3413,freq=4.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.33612844 = fieldWeight in 3413, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3413)
      0.22222222 = coord(2/9)
    
    Abstract
    The work is an adaptation of the author's dissertation and has the following chapters: (1) background and introduction; (2) a problem in classification theory; (3) previous fiction analysis theories and systems and 'The left hand of darkness'; (4) fiction warrant and critical warrant; (5) experimental fiction analysis system (EFAS); (6) application and evaluation of EFAS. Appendix 1 gives references to fiction analysis systems and appendix 2 lists EFAS coding sheets
    Footnote
    Rez. in: Knowledge organization 21(1994) no.3, S.165-167 (W. Bies); JASIS 46(1995) no.5, S.389-390 (E.G. Bierbaum); Canadian journal of information and library science 20(1995) nos.3/4, S.52-53 (L. Rees-Potter)
  6. Bednarek, M.: Intellectual access to pictorial information (1993) 0.01
    0.009684435 = product of:
      0.04357996 = sum of:
        0.019052157 = weight(_text_:of in 5631) [ClassicSimilarity], result of:
          0.019052157 = score(doc=5631,freq=18.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.3109903 = fieldWeight in 5631, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=5631)
        0.0245278 = weight(_text_:systems in 5631) [ClassicSimilarity], result of:
          0.0245278 = score(doc=5631,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.2037246 = fieldWeight in 5631, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=5631)
      0.22222222 = coord(2/9)
    
    Abstract
    Visual materials represent a significantly different type of communication from textual materials and therefore present distinct challenges for the process of retrieval, especially if by retrieval we mean intellectual access to the content of images. This paper outlines the special characteristics of visual materials, focusing on their potential complexity and subjectivity, and the methods used and explored for gaining access to visual materials as reported in the literature. It concludes that methods of access to visual materials are dominated by the relatively mature systems developed for textual materials and that access methods based on visual communication are still largely in the developmental or prototype stage. Although reported research on user requirements in the retrieval of visual information is noticeably lacking, the results of at least one study indicate that the visually based retrieval methods of structured and unstructured browsing seem to be preferred for visual materials and that effective retrieval methods are ultimately related to characteristics of the enquirer and the visual information sought
  7. Chen, H.; Ng, T.: ¬An algorithmic approach to concept exploration in a large knowledge network (automatic thesaurus consultation) : symbolic branch-and-bound search versus connectionist Hopfield Net Activation (1995) 0.01
    0.008907516 = product of:
      0.04008382 = sum of:
        0.015556021 = weight(_text_:of in 2203) [ClassicSimilarity], result of:
          0.015556021 = score(doc=2203,freq=12.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.25392252 = fieldWeight in 2203, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2203)
        0.0245278 = weight(_text_:systems in 2203) [ClassicSimilarity], result of:
          0.0245278 = score(doc=2203,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.2037246 = fieldWeight in 2203, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=2203)
      0.22222222 = coord(2/9)
    
    Abstract
    Presents a framework for knowledge discovery and concept exploration. In order to enhance the concept exploration capability of knowledge based systems and to alleviate the limitation of the manual browsing approach, develops 2 spreading activation based algorithms for concept exploration in large, heterogeneous networks of concepts (e.g. multiple thesauri). One algorithm, which is based on the symbolic AI paradigm, performs a conventional branch-and-bound search on a semantic net representation to identify other highly relevant concepts (a serial, optimal search process). The 2nd algorithm, which is based on the neural network approach, executes the Hopfield net parallel relaxation and convergence process to identify 'convergent' concepts for some initial queries (a parallel, heuristic search process). Tests these 2 algorithms on a large text-based knowledge network of about 13,000 nodes (terms) and 80,000 directed links in the area of computing technologies
    Source
    Journal of the American Society for Information Science. 46(1995) no.5, S.348-369
  8. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.01
    0.008481526 = product of:
      0.038166866 = sum of:
        0.016935252 = weight(_text_:of in 5830) [ClassicSimilarity], result of:
          0.016935252 = score(doc=5830,freq=8.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.27643585 = fieldWeight in 5830, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=5830)
        0.021231614 = product of:
          0.042463228 = sum of:
            0.042463228 = weight(_text_:22 in 5830) [ClassicSimilarity], result of:
              0.042463228 = score(doc=5830,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.30952093 = fieldWeight in 5830, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5830)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    This paper examines various issues that arise in establishing a theoretical basis for an experimental fiction analysis system. It analyzes the warrants of fiction and of works about fiction. From this analysis, it derives classificatory requirements for a fiction system. Classificatory techniques that may contribute to the specification of data elements in fiction are suggested
    Date
    5. 8.2006 13:22:08
  9. Weimer, K.H.: ¬The nexus of subject analysis and bibliographic description : the case of multipart videos (1996) 0.01
    0.0075302785 = product of:
      0.033886254 = sum of:
        0.017962547 = weight(_text_:of in 6525) [ClassicSimilarity], result of:
          0.017962547 = score(doc=6525,freq=16.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.2932045 = fieldWeight in 6525, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=6525)
        0.015923709 = product of:
          0.031847417 = sum of:
            0.031847417 = weight(_text_:22 in 6525) [ClassicSimilarity], result of:
              0.031847417 = score(doc=6525,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.23214069 = fieldWeight in 6525, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6525)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    Examines the goals of bibliographic control, subject analysis and their relationship for audiovisual materials in general and multipart videotape recordings in particular. Concludes that intellectual access to multipart works is not adequately provided for when these materials are catalogued in collective set records. An alternative is to catalogue the parts separately. This method increases intellectual access by providing more detailed descriptive notes and subject analysis. As evidenced by the large number of records in the national database for parts of multipart videos, cataloguers have made the intellectual content of multipart videos more accessible by cataloguing the parts separately rather than collectively. This reverses the traditional cataloguing process to begin with subject analysis, resulting in the intellectual content of these materials driving the bibliographic description. Suggests ways of determining when multipart videos are best catalogued as sets or separately
    Source
    Cataloging and classification quarterly. 22(1996) no.2, S.5-18
  10. Andersson, R.; Holst, E.: Indexes and other depictions of fictions : a new model for analysis empirically tested (1996) 0.01
    0.0074464604 = product of:
      0.033509072 = sum of:
        0.0089812735 = weight(_text_:of in 473) [ClassicSimilarity], result of:
          0.0089812735 = score(doc=473,freq=4.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.14660224 = fieldWeight in 473, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=473)
        0.0245278 = weight(_text_:systems in 473) [ClassicSimilarity], result of:
          0.0245278 = score(doc=473,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.2037246 = fieldWeight in 473, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=473)
      0.22222222 = coord(2/9)
    
    Abstract
    In this study descriptions of a novel by 100 users at 2 Swedish public libraries, Malmö and Mölndal, Mar-Apr 95, were compared to the index terms used for the novels at these libraries. Describes previous systems for fiction indexing, the 2 libraries, and the users interviewed. Compares the AMP system with their own model. The latter operates with terms under the headings phenomena, frame and author's intention. The similarities between the users' and indexers' descriptions were sufficiently close to make it possible to retrieve fiction in accordance with users' wishes in Mölndal, and would have been in Malmö, had more books been indexed with more terms. Sometimes the similarities were close enough for users to retrieve fiction on their own
  11. Wyllie, J.: Concept indexing : the world beyond the windows (1990) 0.00
    0.0028225419 = product of:
      0.025402877 = sum of:
        0.025402877 = weight(_text_:of in 2977) [ClassicSimilarity], result of:
          0.025402877 = score(doc=2977,freq=8.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.41465378 = fieldWeight in 2977, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.09375 = fieldNorm(doc=2977)
      0.11111111 = coord(1/9)
    
    Abstract
    This paper argues that the realisation of the electronic hypermedia of the future depends on integrating the technology of free text retrieval with the classification-based discipline of content analysis
  12. Schlapfer, K.: ¬The information content of images (1995) 0.00
    0.0028225419 = product of:
      0.025402877 = sum of:
        0.025402877 = weight(_text_:of in 521) [ClassicSimilarity], result of:
          0.025402877 = score(doc=521,freq=8.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.41465378 = fieldWeight in 521, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.09375 = fieldNorm(doc=521)
      0.11111111 = coord(1/9)
    
    Abstract
    Reviews the methods of calculating the information content of images, with particular reference to the information content of printed and photographic images; and printed and television images
  13. Farrow, J.: All in the mind : concept analysis in indexing (1995) 0.00
    0.002661118 = product of:
      0.023950063 = sum of:
        0.023950063 = weight(_text_:of in 2926) [ClassicSimilarity], result of:
          0.023950063 = score(doc=2926,freq=16.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.39093933 = fieldWeight in 2926, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=2926)
      0.11111111 = coord(1/9)
    
    Abstract
    The indexing process consists of the comprehension of the document to be indexed, followed by the production of a set of index terms. Differences between academic indexing and back-of-the-book indexing are discussed. Text comprehension is a branch of human information processing, and it is argued that the model of text comprehension and production developed by van Dijk and Kintsch can form the basis for a cognitive process model of indexing. Strategies for testing such a model are suggested
  14. Hjoerland, B.: ¬The concept of 'subject' in information science (1992) 0.00
    0.0026402464 = product of:
      0.023762217 = sum of:
        0.023762217 = weight(_text_:of in 2247) [ClassicSimilarity], result of:
          0.023762217 = score(doc=2247,freq=28.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.38787308 = fieldWeight in 2247, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2247)
      0.11111111 = coord(1/9)
    
    Abstract
    This article presents a theoretical investigation of the concept of 'subject' or 'subject matter' in library and information science. Most conceptions of 'subject' in the literature are not explicit but implicit. Various indexing and classification theories, including automatic indexing and citation indexing, have their own more or less implicit concepts of subject. This fact puts the emphasis on making the implicit theory of 'subject matter' explicit as the first step. ... The different conceptions of 'subject' can therefore be classified into epistemological positions, e.g. 'subjective idealism' (or the empiric/positivistic viewpoint), 'objective idealism' (the rationalistic viewpoint), 'pragmatism' and 'materialism/realism'. The third and final step is to propose a new theory of subject matter based on an explicit theory of knowledge. In this article this is done from the point of view of a realistic/materialistic epistemology. From this standpoint the subject of a document is defined as the epistemological potentials of that document
    Source
    Journal of documentation. 48(1992), S.172-200
  15. Roberts, C.W.; Popping, R.: Computer-supported content analysis : some recent developments (1993) 0.00
    0.0026297483 = product of:
      0.023667734 = sum of:
        0.023667734 = weight(_text_:of in 4236) [ClassicSimilarity], result of:
          0.023667734 = score(doc=4236,freq=10.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.38633084 = fieldWeight in 4236, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=4236)
      0.11111111 = coord(1/9)
    
    Abstract
    Presents an overview of some recent developments in the clause-based content analysis of linguistic data. Introduces network analysis of evaluative texts, for the analysis of cognitive maps, and linguistic content analysis. Focuses on the types of substantive inferences afforded by the three approaches
  16. Nohr, H.: ¬The training of librarians in content analysis : some thoughts on future necessities (1991) 0.00
    0.002489248 = product of:
      0.022403233 = sum of:
        0.022403233 = weight(_text_:of in 5149) [ClassicSimilarity], result of:
          0.022403233 = score(doc=5149,freq=14.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.36569026 = fieldWeight in 5149, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=5149)
      0.11111111 = coord(1/9)
    
    Abstract
    The training of librarians in content analysis undergoes influences resulting both from the realities existing in the various application fields and from technological innovations. The present contribution attempts to identify components of such training that are necessary for a future-oriented instruction, and it stresses the importance of furnishing a sound theoretical basis, especially in the light of technological developments. The purpose of the training is to provide the foundation for 'action competence' on the part of the students
  17. Todd, R.J.: Subject access: what's it all about? : some research findings (1993) 0.00
    0.002489248 = product of:
      0.022403233 = sum of:
        0.022403233 = weight(_text_:of in 8193) [ClassicSimilarity], result of:
          0.022403233 = score(doc=8193,freq=14.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.36569026 = fieldWeight in 8193, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=8193)
      0.11111111 = coord(1/9)
    
    Abstract
    Describes some findings of research conducted into activities related to the process of deciding subjects of documents which sought to identify the goals and intentions of indexers in determining subjects; specific strategies and prescriptions indexers actually use to determine subjects; and some of the variables which impact on the process of determining subjects
    Footnote
    Paper presented at the 10th National Cataloguing Conference on Subject to change: subject access and the role of the cataloguer, Fremantle, Western Australia, 4-6 Nov 93
  18. Berinstein, P.: Moving multimedia : the information value in images (1997) 0.00
    0.002489248 = product of:
      0.022403233 = sum of:
        0.022403233 = weight(_text_:of in 2489) [ClassicSimilarity], result of:
          0.022403233 = score(doc=2489,freq=14.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.36569026 = fieldWeight in 2489, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=2489)
      0.11111111 = coord(1/9)
    
    Abstract
    Considers the role of pictures in information communication, comparing the way they convey information with the way text does. Categorises the purposes of images as conveyors of information: the instructional image, the documentary image, the location image, the graphical representation of numbers, the concepts image, the image making the unseen visible, the image as a surrogate for an object or document, the decorative image, the image as a statement, the strong image and the emotional image. Gives examples of how the value of images is being recognised and of how they can be used well
  19. Molina, M.P.: Interdisciplinary approaches to the concept and practice of written documentary content analysis (WTDCA) (1994) 0.00
    0.002469724 = product of:
      0.022227516 = sum of:
        0.022227516 = weight(_text_:of in 6147) [ClassicSimilarity], result of:
          0.022227516 = score(doc=6147,freq=18.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.36282203 = fieldWeight in 6147, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6147)
      0.11111111 = coord(1/9)
    
    Abstract
    Content analysis, restricted within the limits of written textual documents (WTDCA), is a field which is greatly in need of extensive interdisciplinary research. This would clarify certain concepts, especially those concerned with 'text', as a new central nucleus of semiotic research, and 'content', or the informative power of text. The objective reality (syntax) of the written document should be, in the cognitive process that all content analysis entails, interpreted (semantically and pragmatically) in an intersubjective manner with regard to the context, the analyst's knowledge base and the documentary objectives. The contributions of sociolinguistics (textual), logic (formal) and psychology (cognitive) are fundamental to the conduct of these activities. The criteria used to validate the results obtained complete the necessary conceptual reference panorama
    Source
    Journal of documentation. 50(1994) no.2, S.111-133
  20. Green, R.: ¬The role of relational structures in indexing for the humanities (1997) 0.00
    0.0023403284 = product of:
      0.021062955 = sum of:
        0.021062955 = weight(_text_:of in 474) [ClassicSimilarity], result of:
          0.021062955 = score(doc=474,freq=22.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.34381276 = fieldWeight in 474, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=474)
      0.11111111 = coord(1/9)
    
    Abstract
    The paper is divided into 3 parts. The 1st develops a framework for evaluating the indexing needs of the humanities with reference to 4 sets of contrasts: user (need)-oriented vs. document-oriented indexing; subject indexing vs. attribute indexing; scientific writing vs. humanistic writing; and topical relevance vs. logical relevance vs. evidential relevance vs. aesthetic relevance. The indexing needs for the humanities range broadly across these contrasts. The 2nd part establishes the centrality of relationships to the communication of indexable matter and examines the advantages and disadvantages of means used for their expression in both natural languages and indexing languages. The use of relational structure, such as a frame, is shown to represent perhaps the best available option. The 3rd part illustrates where the use of relational structures in humanities indexing would help meet some of the needs previously identified. Although not a panacea, the adoption of frame-based indexing in the humanities might substantially improve the retrieval of its literature