Search (47 results, page 1 of 3)

  • theme_ss:"Inhaltsanalyse"
  • type_ss:"a"
  • year_i:[1990 TO 2000}
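The facets above are ordinary Solr filter queries; note that year_i:[1990 TO 2000} uses Solr's mixed-bracket range syntax (inclusive lower bound, exclusive upper bound), so the unmatched brace is not a typo. As a minimal sketch, assuming a stock Solr install at localhost:8983 with a hypothetical core named "catalog" (neither is stated on this page), the active filters map onto a request like the following, with debugQuery enabled to reproduce the per-hit scoring trees shown under each result:

```python
# Hedged sketch: endpoint and core name are assumptions, not taken from this page.
from urllib.parse import urlencode

params = [
    ("q", "*:*"),
    ("fq", 'theme_ss:"Inhaltsanalyse"'),  # theme facet
    ("fq", 'type_ss:"a"'),                # document-type facet
    ("fq", "year_i:[1990 TO 2000}"),      # 1990 <= year < 2000 (half-open range)
    ("rows", "20"),                       # 20 hits per page -> 47 results, 3 pages
    ("start", "0"),                       # offset 0 = page 1
    ("debugQuery", "true"),               # emits the "explain" scoring trees
]
print("http://localhost:8983/solr/catalog/select?" + urlencode(params))
```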
  1. Klüver, J.; Kier, R.: Rekonstruktion und Verstehen : ein Computer-Programm zur Interpretation sozialwissenschaftlicher Texte (1994) 0.01
    0.0057485774 = product of:
      0.057485774 = sum of:
        0.02648922 = weight(_text_:und in 6830) [ClassicSimilarity], result of:
          0.02648922 = score(doc=6830,freq=4.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.55409175 = fieldWeight in 6830, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.125 = fieldNorm(doc=6830)
        0.02648922 = weight(_text_:und in 6830) [ClassicSimilarity], result of:
          0.02648922 = score(doc=6830,freq=4.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.55409175 = fieldWeight in 6830, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.125 = fieldNorm(doc=6830)
        0.004507334 = weight(_text_:s in 6830) [ClassicSimilarity], result of:
          0.004507334 = score(doc=6830,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.19219826 = fieldWeight in 6830, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.125 = fieldNorm(doc=6830)
      0.1 = coord(3/30)
    
    Source
    Sprache und Datenverarbeitung. 18(1994) H.1, S.3-15
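The indented tree under each hit is Lucene explain output for ClassicSimilarity, the classic TF-IDF model. Per matching clause, score = queryWeight × fieldWeight, with queryWeight = idf × queryNorm and fieldWeight = tf(freq) × idf × fieldNorm, where tf(freq) = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq+1)); the clause scores are summed and scaled by coord(matching clauses / total clauses). The _text_:und clause appears twice, presumably because the same term matched in two query fields. A minimal sketch re-deriving result 1's score from the values in its tree (queryNorm is taken as given):

```python
import math

def tf(freq):                   # ClassicSimilarity term-frequency factor
    return math.sqrt(freq)

def idf(doc_freq, max_docs):    # ClassicSimilarity inverse document frequency
    return 1.0 + math.log(max_docs / (doc_freq + 1))

QUERY_NORM = 0.021569785        # copied from the tree above

def clause_score(freq, doc_freq, max_docs, field_norm):
    query_weight = idf(doc_freq, max_docs) * QUERY_NORM
    field_weight = tf(freq) * idf(doc_freq, max_docs) * field_norm
    return query_weight * field_weight

# doc 6830: two "und" clauses and one "s" clause matched, out of 30 query clauses
und = clause_score(freq=4, doc_freq=13101, max_docs=44218, field_norm=0.125)
s   = clause_score(freq=2, doc_freq=40523, max_docs=44218, field_norm=0.125)
total = (und + und + s) * (3 / 30)      # coord(3/30) = 0.1
print(und, s, total)                    # ~0.02648922 ~0.004507334 ~0.0057485774
```

The outer factor 0.1 is coord(3/30): only 3 of the 30 query clauses matched this document. Result 2 below matches 4 clauses, hence its coord(4/30) = 0.13333334.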
  2. Nohr, H.: Inhaltsanalyse (1999) 0.01
    0.00509651 = product of:
      0.03822382 = sum of:
        0.01622127 = weight(_text_:und in 3430) [ClassicSimilarity], result of:
          0.01622127 = score(doc=3430,freq=6.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.33931053 = fieldWeight in 3430, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=3430)
        0.003527615 = weight(_text_:in in 3430) [ClassicSimilarity], result of:
          0.003527615 = score(doc=3430,freq=2.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.120230645 = fieldWeight in 3430, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=3430)
        0.01622127 = weight(_text_:und in 3430) [ClassicSimilarity], result of:
          0.01622127 = score(doc=3430,freq=6.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.33931053 = fieldWeight in 3430, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=3430)
        0.002253667 = weight(_text_:s in 3430) [ClassicSimilarity], result of:
          0.002253667 = score(doc=3430,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.09609913 = fieldWeight in 3430, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=3430)
      0.13333334 = coord(4/30)
    
    Abstract
    Content analysis is the elementary sub-process of indexing documents. Despite this central position within subject-based document analysis, the process of content analysis still receives too little attention in theory and practice. The reason for this neglect lies in the supposedly subjective character of the comprehension process. To overcome this problem, the precise object of content analysis is first determined; from this, methodologically more advanced approaches and procedures for content analysis can be derived. Finally, some further tasks of content analysis, such as qualitative assessment, are discussed
    Source
    nfd Information - Wissenschaft und Praxis. 50(1999) H.2, S.69-78
  3. Hildebrandt, B.; Moratz, R.; Rickheit, G.; Sagerer, G.: Kognitive Modellierung von Sprach- und Bildverstehen (1996) 0.00
    0.003147656 = product of:
      0.03147656 = sum of:
        0.014048031 = weight(_text_:und in 7292) [ClassicSimilarity], result of:
          0.014048031 = score(doc=7292,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.29385152 = fieldWeight in 7292, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.09375 = fieldNorm(doc=7292)
        0.014048031 = weight(_text_:und in 7292) [ClassicSimilarity], result of:
          0.014048031 = score(doc=7292,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.29385152 = fieldWeight in 7292, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.09375 = fieldNorm(doc=7292)
        0.0033805002 = weight(_text_:s in 7292) [ClassicSimilarity], result of:
          0.0033805002 = score(doc=7292,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.14414869 = fieldWeight in 7292, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.09375 = fieldNorm(doc=7292)
      0.1 = coord(3/30)
    
    Pages
    S.2-13
  4. Chen, H.; Ng, T.: ¬An algorithmic approach to concept exploration in a large knowledge network (automatic thesaurus consultation) : symbolic branch-and-bound search versus connectionist Hopfield Net Activation (1995) 0.00
    0.0028039606 = product of:
      0.021029703 = sum of:
        0.0070240153 = weight(_text_:und in 2203) [ClassicSimilarity], result of:
          0.0070240153 = score(doc=2203,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.14692576 = fieldWeight in 2203, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=2203)
        0.0052914224 = weight(_text_:in in 2203) [ClassicSimilarity], result of:
          0.0052914224 = score(doc=2203,freq=8.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.18034597 = fieldWeight in 2203, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=2203)
        0.0070240153 = weight(_text_:und in 2203) [ClassicSimilarity], result of:
          0.0070240153 = score(doc=2203,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.14692576 = fieldWeight in 2203, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=2203)
        0.0016902501 = weight(_text_:s in 2203) [ClassicSimilarity], result of:
          0.0016902501 = score(doc=2203,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.072074346 = fieldWeight in 2203, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=2203)
      0.13333334 = coord(4/30)
    
    Abstract
    Presents a framework for knowledge discovery and concept exploration. In order to enhance the concept exploration capability of knowledge-based systems and to alleviate the limitation of the manual browsing approach, develops 2 spreading activation based algorithms for concept exploration in large, heterogeneous networks of concepts (e.g. multiple thesauri). One algorithm, which is based on the symbolic AI paradigm, performs a conventional branch-and-bound search on a semantic net representation to identify other highly relevant concepts (a serial, optimal search process). The 2nd algorithm, which is based on the neural network approach, executes the Hopfield net parallel relaxation and convergence process to identify 'convergent' concepts for some initial queries (a parallel, heuristic search process). Tests these 2 algorithms on a large text-based knowledge network of about 13,000 nodes (terms) and 80,000 directed links in the area of computing technologies
    Source
    Journal of the American Society for Information Science. 46(1995) no.5, S.348-369
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
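The two algorithms in the abstract above are easiest to picture as search policies over a weighted concept graph: branch-and-bound expands the graph serially and prunes by bound, while the Hopfield-net variant activates all nodes in parallel and relaxes to a stable state. The toy sketch below illustrates only the parallel-relaxation idea; the four-node network, weights, bias, and threshold are all invented for illustration and are not Chen and Ng's data or implementation:

```python
import math

# Invented thesaurus links between concepts (symmetric, weighted) -- toy data.
links = {
    ("thesaurus", "controlled vocabulary"): 0.9,
    ("thesaurus", "indexing"): 0.7,
    ("indexing", "content analysis"): 0.8,
    ("controlled vocabulary", "subject headings"): 0.6,
}
weights = {}
for (a, b), w in links.items():
    weights[(a, b)] = weights[(b, a)] = w
nodes = {n for pair in weights for n in pair}

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Activate the query concept, then relax all nodes in parallel until stable.
act = {n: 0.0 for n in nodes}
act["thesaurus"] = 1.0
for _ in range(100):
    new = {n: sigmoid(sum(weights.get((m, n), 0.0) * act[m] for m in nodes) - 0.5)
           for n in nodes}
    new["thesaurus"] = 1.0                       # clamp the query concept
    if max(abs(new[n] - act[n]) for n in nodes) < 1e-6:
        break                                    # converged
    act = new

# 'Convergent' concepts: those whose final activation clears a threshold.
print(sorted((round(a, 3), n) for n, a in act.items() if a > 0.5))
```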
  5. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.00
    0.0018932101 = product of:
      0.0189321 = sum of:
        0.004988801 = weight(_text_:in in 5830) [ClassicSimilarity], result of:
          0.004988801 = score(doc=5830,freq=4.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.17003182 = fieldWeight in 5830, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=5830)
        0.002253667 = weight(_text_:s in 5830) [ClassicSimilarity], result of:
          0.002253667 = score(doc=5830,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.09609913 = fieldWeight in 5830, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=5830)
        0.011689632 = product of:
          0.023379264 = sum of:
            0.023379264 = weight(_text_:22 in 5830) [ClassicSimilarity], result of:
              0.023379264 = score(doc=5830,freq=2.0), product of:
                0.07553371 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.021569785 = queryNorm
                0.30952093 = fieldWeight in 5830, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5830)
          0.5 = coord(1/2)
      0.1 = coord(3/30)
    
    Abstract
    This paper examines various issues that arise in establishing a theoretical basis for an experimental fiction analysis system. It analyzes the warrants of fiction and of works about fiction. From this analysis, it derives classificatory requirements for a fiction system. Classificatory techniques that may contribute to the specification of data elements in fiction are suggested
    Date
    5. 8.2006 13:22:08
    Pages
    S.39-48
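Result 5 also shows how explain trees nest: the _text_:22 clause is itself a product-of/sum-of block whose inner coord(1/2) halves its weight before it enters the outer sum. Re-deriving the total in the same style as the sketch under result 1, with the clause values copied verbatim from the tree:

```python
# Values copied from result 5's explain tree (doc 5830).
w_in = 0.004988801            # weight(_text_:in)
w_s  = 0.002253667            # weight(_text_:s)
w_22 = 0.023379264 * (1 / 2)  # weight(_text_:22) scaled by its inner coord(1/2)

total = (w_in + w_s + w_22) * (3 / 30)  # outer coord: 3 of 30 clauses matched
print(w_22, total)                      # ~0.011689632 ~0.0018932101
```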
  6. Weimer, K.H.: ¬The nexus of subject analysis and bibliographic description : the case of multipart videos (1996) 0.00
    0.0016373465 = product of:
      0.016373465 = sum of:
        0.005915991 = weight(_text_:in in 6525) [ClassicSimilarity], result of:
          0.005915991 = score(doc=6525,freq=10.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.20163295 = fieldWeight in 6525, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=6525)
        0.0016902501 = weight(_text_:s in 6525) [ClassicSimilarity], result of:
          0.0016902501 = score(doc=6525,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.072074346 = fieldWeight in 6525, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=6525)
        0.008767224 = product of:
          0.017534448 = sum of:
            0.017534448 = weight(_text_:22 in 6525) [ClassicSimilarity], result of:
              0.017534448 = score(doc=6525,freq=2.0), product of:
                0.07553371 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.021569785 = queryNorm
                0.23214069 = fieldWeight in 6525, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6525)
          0.5 = coord(1/2)
      0.1 = coord(3/30)
    
    Abstract
    Examines the goals of bibliographic control, subject analysis and their relationship for audiovisual materials in general and multipart videotape recordings in particular. Concludes that intellectual access to multipart works is not adequately provided for when these materials are catalogued in collective set records. An alternative is to catalogue the parts separately. This method increases intellectual access by providing more detailed descriptive notes and subject analysis. As evidenced by the large number of records in the national database for parts of multipart videos, cataloguers have made the intellectual content of multipart videos more accessible by cataloguing the parts separately rather than collectively. This reverses the traditional cataloguing process to begin with subject analysis, resulting in the intellectual content of these materials driving the bibliographic description. Suggests ways of determining when multipart videos are best catalogued as sets or separately
    Source
    Cataloging and classification quarterly. 22(1996) no.2, S.5-18
  7. Vieira, L.: Modèle d'analyse pour une classification du document iconographique (1999) 0.00
    0.0010244418 = product of:
      0.015366627 = sum of:
        0.012549544 = product of:
          0.03764863 = sum of:
            0.03764863 = weight(_text_:l in 6320) [ClassicSimilarity], result of:
              0.03764863 = score(doc=6320,freq=2.0), product of:
                0.0857324 = queryWeight, product of:
                  3.9746525 = idf(docFreq=2257, maxDocs=44218)
                  0.021569785 = queryNorm
                0.4391412 = fieldWeight in 6320, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9746525 = idf(docFreq=2257, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6320)
          0.33333334 = coord(1/3)
        0.0028170836 = weight(_text_:s in 6320) [ClassicSimilarity], result of:
          0.0028170836 = score(doc=6320,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.120123915 = fieldWeight in 6320, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.078125 = fieldNorm(doc=6320)
      0.06666667 = coord(2/30)
    
    Pages
    S.275-284
  8. Allen, B.; Reser, D.: Content analysis in library and information science research (1990) 0.00
    7.708376E-4 = product of:
      0.0115625635 = sum of:
        0.00705523 = weight(_text_:in in 7510) [ClassicSimilarity], result of:
          0.00705523 = score(doc=7510,freq=2.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.24046129 = fieldWeight in 7510, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.125 = fieldNorm(doc=7510)
        0.004507334 = weight(_text_:s in 7510) [ClassicSimilarity], result of:
          0.004507334 = score(doc=7510,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.19219826 = fieldWeight in 7510, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.125 = fieldNorm(doc=7510)
      0.06666667 = coord(2/30)
    
    Source
    Library and information science research. 12(1990) no.3, S.251-262
  9. Rowe, N.C.: Inferring depictions in natural-language captions for efficient access to picture data (1994) 0.00
    6.355139E-4 = product of:
      0.009532709 = sum of:
        0.00756075 = weight(_text_:in in 7296) [ClassicSimilarity], result of:
          0.00756075 = score(doc=7296,freq=12.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.2576908 = fieldWeight in 7296, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7296)
        0.0019719584 = weight(_text_:s in 7296) [ClassicSimilarity], result of:
          0.0019719584 = score(doc=7296,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.08408674 = fieldWeight in 7296, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7296)
      0.06666667 = coord(2/30)
    
    Abstract
    Multimedia data can require significant examination time to find desired features ('content analysis'). An alternative is using natural-language captions to describe the data, and matching captions to English queries. But it is hard to include everything in the caption of a complicated datum, so significant content analysis may still seem required. We discuss linguistic clues in captions, both syntactic and semantic, that can simplify or eliminate content analysis. We introduce the notion of content depiction and rules for depiction inference. Our approach is implemented in an expert system which demonstrated significant increases in recall in experiments
    Source
    Information processing and management. 30(1994) no.3, S.379-388
  10. Nohr, H.: ¬The training of librarians in content analysis : some thoughts on future necessities (1991) 0.00
    6.2059314E-4 = product of:
      0.009308897 = sum of:
        0.00705523 = weight(_text_:in in 5149) [ClassicSimilarity], result of:
          0.00705523 = score(doc=5149,freq=8.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.24046129 = fieldWeight in 5149, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=5149)
        0.002253667 = weight(_text_:s in 5149) [ClassicSimilarity], result of:
          0.002253667 = score(doc=5149,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.09609913 = fieldWeight in 5149, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=5149)
      0.06666667 = coord(2/30)
    
    Abstract
    The training of librarians in content analysis is shaped both by the realities of the various application fields and by technological innovations. The present contribution attempts to identify components of such training that are necessary for future-oriented instruction, and it stresses the importance of furnishing a sound theoretical basis, especially in the light of technological developments. The purpose of the training is to provide the foundation for 'action competence' on the part of the students
    Source
    International classification. 18(1991) no.3, S.153-157
  11. Naves, M.M.L.: Analise de assunto : concepcoes (1996) 0.00
    6.0353905E-4 = product of:
      0.009053085 = sum of:
        0.006236001 = weight(_text_:in in 607) [ClassicSimilarity], result of:
          0.006236001 = score(doc=607,freq=4.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.21253976 = fieldWeight in 607, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=607)
        0.0028170836 = weight(_text_:s in 607) [ClassicSimilarity], result of:
          0.0028170836 = score(doc=607,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.120123915 = fieldWeight in 607, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.078125 = fieldNorm(doc=607)
      0.06666667 = coord(2/30)
    
    Abstract
    Discusses subject analysis as an important stage in the indexing process and notes the confusion that can arise over the meaning of the term. Considers questions and difficulties about subject analysis and the concept of aboutness
    Source
    Revista de Biblioteconomia de Brasilia. 20(1996) no.2, S.215-226
  12. Beghtol, C.: Stories : applications of narrative discourse analysis to issues in information storage and retrieval (1997) 0.00
    5.9159653E-4 = product of:
      0.008873948 = sum of:
        0.006901989 = weight(_text_:in in 5844) [ClassicSimilarity], result of:
          0.006901989 = score(doc=5844,freq=10.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.23523843 = fieldWeight in 5844, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5844)
        0.0019719584 = weight(_text_:s in 5844) [ClassicSimilarity], result of:
          0.0019719584 = score(doc=5844,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.08408674 = fieldWeight in 5844, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5844)
      0.06666667 = coord(2/30)
    
    Abstract
    The arts, humanities, and social sciences commonly borrow concepts and methods from the sciences, but interdisciplinary borrowing seldom occurs in the opposite direction. Research on narrative discourse is relevant to problems of documentary storage and retrieval, for the arts and humanities in particular, but also for other broad areas of knowledge. This paper views the potential application of narrative discourse analysis to information storage and retrieval problems from 2 perspectives: 1) analysis and comparison of narrative documents in all disciplines may be simplified if fundamental categories that occur in narrative documents can be isolated; and 2) the possibility of subdividing the world of knowledge initially into narrative and non-narrative documents is explored with particular attention to Werlich's work on text types
    Source
    Knowledge organization. 24(1997) no.2, S.64-71
  13. Martindale, C.; McKenzie, D.: On the utility of content analysis in author attribution : 'The federalist' (1995) 0.00
    5.781282E-4 = product of:
      0.008671923 = sum of:
        0.0052914224 = weight(_text_:in in 822) [ClassicSimilarity], result of:
          0.0052914224 = score(doc=822,freq=2.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.18034597 = fieldWeight in 822, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.09375 = fieldNorm(doc=822)
        0.0033805002 = weight(_text_:s in 822) [ClassicSimilarity], result of:
          0.0033805002 = score(doc=822,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.14414869 = fieldWeight in 822, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.09375 = fieldNorm(doc=822)
      0.06666667 = coord(2/30)
    
    Source
    Computers and the humanities. 29(1995) no.4, S.259-270
  14. Smith, P.J.; Normore, L.F.; Denning, R.; Johnson, W.P.: Computerized tools to support document analysis (1994) 0.00
    5.781282E-4 = product of:
      0.008671923 = sum of:
        0.0052914224 = weight(_text_:in in 2990) [ClassicSimilarity], result of:
          0.0052914224 = score(doc=2990,freq=2.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.18034597 = fieldWeight in 2990, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.09375 = fieldNorm(doc=2990)
        0.0033805002 = weight(_text_:s in 2990) [ClassicSimilarity], result of:
          0.0033805002 = score(doc=2990,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.14414869 = fieldWeight in 2990, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.09375 = fieldNorm(doc=2990)
      0.06666667 = coord(2/30)
    
    Pages
    S.221-237
    Source
    Challenges in indexing electronic text and images. Ed.: R. Fidel et al
  15. Dooley, J.M.: Subject indexing in context : subject cataloging of MARC AMC format archival records (1992) 0.00
    5.575784E-4 = product of:
      0.008363675 = sum of:
        0.006110009 = weight(_text_:in in 2199) [ClassicSimilarity], result of:
          0.006110009 = score(doc=2199,freq=6.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.2082456 = fieldWeight in 2199, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=2199)
        0.002253667 = weight(_text_:s in 2199) [ClassicSimilarity], result of:
          0.002253667 = score(doc=2199,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.09609913 = fieldWeight in 2199, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=2199)
      0.06666667 = coord(2/30)
    
    Abstract
    Integration of archival materials catalogued in the USMARC AMC format into online catalogues has given a new urgency to the need for direct subject access. Offers a broad definition of the concepts to be considered under the subject access heading, including not only topical subjects but also proper names, forms of material, time periods, geographic places, occupations, and functions. It is both necessary and possible to provide more consistent subject access to archives and manuscripts than currently is being achieved. Describes current efforts that are under way in the profession to address this need
    Source
    American archivist. 55(1992), S.344-354
  16. Wilkinson, C.L.: Intellectual level as a search enhancement in the online environment : summation and implications (1990) 0.00
    5.575784E-4 = product of:
      0.008363675 = sum of:
        0.006110009 = weight(_text_:in in 479) [ClassicSimilarity], result of:
          0.006110009 = score(doc=479,freq=6.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.2082456 = fieldWeight in 479, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=479)
        0.002253667 = weight(_text_:s in 479) [ClassicSimilarity], result of:
          0.002253667 = score(doc=479,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.09609913 = fieldWeight in 479, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=479)
      0.06666667 = coord(2/30)
    
    Abstract
    This paper summarizes the papers presented by the members of the panel on "The Concept of Intellectual Level in Cataloging and Classification." The implications of adding intellectual level to the MARC record and creating intellectual level indexes in online catalogs are discussed. The conclusion is reached that providing intellectual level will not only be costly but may even be a disservice to library users.
    Source
    Cataloging and classification quarterly. 11(1990) no.1, S.89-97
  17. Svenonius, E.; McGarry, D.: Objectivity in evaluating subject heading assignment (1993) 0.00
    5.43019E-4 = product of:
      0.008145284 = sum of:
        0.0061733257 = weight(_text_:in in 5612) [ClassicSimilarity], result of:
          0.0061733257 = score(doc=5612,freq=8.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.21040362 = fieldWeight in 5612, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5612)
        0.0019719584 = weight(_text_:s in 5612) [ClassicSimilarity], result of:
          0.0019719584 = score(doc=5612,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.08408674 = fieldWeight in 5612, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5612)
      0.06666667 = coord(2/30)
    
    Abstract
    Recent papers have called attention to discrepancies in the assignment of LCSH. While philosophical arguments can be made that subject analysis is, if not a logical impossibility, at least point-of-view dependent, subject headings continue to be assigned and continue to be useful. The hypothesis advanced in the present project is that, to a considerable degree, there is a clear-cut right and wrong to LCSH subject heading assignment. To test the hypothesis, it was postulated that the assignment of a subject heading is correct if it is supported by textual warrant (at least 20% of the book being cataloged is on the topic) and is constructed in accordance with the LoC Subject Cataloging Manual: Subject Headings. A sample of 100 books on scientific subjects was used to test the hypothesis
    Source
    Cataloging and classification quarterly. 16(1993) no.2, S.5-40
  18. Solomon, P.: Access to fiction for children : a user-based assessment of options and opportunities (1997) 0.00
    5.43019E-4 = product of:
      0.008145284 = sum of:
        0.0061733257 = weight(_text_:in in 5845) [ClassicSimilarity], result of:
          0.0061733257 = score(doc=5845,freq=8.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.21040362 = fieldWeight in 5845, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5845)
        0.0019719584 = weight(_text_:s in 5845) [ClassicSimilarity], result of:
          0.0019719584 = score(doc=5845,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.08408674 = fieldWeight in 5845, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5845)
      0.06666667 = coord(2/30)
    
    Abstract
    Reports on a study of children's intentions, purposes, search terms, strategies, successes and breakdowns in accessing fiction. Data was gathered using naturalistic methods of persistent, intensive observation and questioning with children in several school library media centres in the USA, including 997 OPAC transactions. Analyzes the data and highlights aspects of the broader context of the system which may help in the development of mechanisms for electronic access
    Source
    Information services and use. 17(1997) nos.2/3, S.139-146
  19. Chu, C.M.; O'Brien, A.: Subject analysis : the critical first stage in indexing (1993) 0.00
    5.0708273E-4 = product of:
      0.007606241 = sum of:
        0.005915991 = weight(_text_:in in 6472) [ClassicSimilarity], result of:
          0.005915991 = score(doc=6472,freq=10.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.20163295 = fieldWeight in 6472, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=6472)
        0.0016902501 = weight(_text_:s in 6472) [ClassicSimilarity], result of:
          0.0016902501 = score(doc=6472,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.072074346 = fieldWeight in 6472, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=6472)
      0.06666667 = coord(2/30)
    
    Abstract
    Studies of indexing neglect the first stage of the process, that is, subject analysis. In this study, novice indexers were asked to analyse three short, popular journal articles; to express the general subject as well as the primary and secondary topics in natural language statements; to state what influenced the analysis and to comment on the ease or difficulty of this process. The factors which influenced the process were: the subject discipline concerned, factual vs. subjective nature of the text, complexity of the subject, clarity of text, possible support offered by bibliographic apparatus such as title, etc. The findings showed that with the social science and science texts, the general subject could be determined with ease, while this was more difficult with the humanities text. Clear evidence emerged of the importance of bibliographical apparatus in defining the general subject. There was varying difficulty in determining the primary and secondary topics
    Source
    Journal of information science. 19(1993), S.439-454
  20. Molina, M.P.: Interdisciplinary approaches to the concept and practice of written documentary content analysis (WTDCA) (1994) 0.00
    4.8788113E-4 = product of:
      0.0073182164 = sum of:
        0.0053462577 = weight(_text_:in in 6147) [ClassicSimilarity], result of:
          0.0053462577 = score(doc=6147,freq=6.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.1822149 = fieldWeight in 6147, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6147)
        0.0019719584 = weight(_text_:s in 6147) [ClassicSimilarity], result of:
          0.0019719584 = score(doc=6147,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.08408674 = fieldWeight in 6147, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6147)
      0.06666667 = coord(2/30)
    
    Abstract
    Content analysis, restricted within the limits of written textual documents (WTDCA), is a field which is greatly in need of extensive interdisciplinary research. This would clarify certain concepts, especially those concerned with 'text', as a new central nucleus of semiotic research, and 'content', or the informative power of text. The objective reality (syntax) of the written document should be, in the cognitive process that all content analysis entails, interpreted (semantically and pragmatically) in an intersubjective manner with regard to the context, the analyst's knowledge base and the documentary objectives. The contributions of sociolinguistics (textual), logic (formal) and psychology (cognitive) are fundamental to the conduct of these activities. The criteria used to validate the results obtained complete the necessary conceptual reference panorama
    Source
    Journal of documentation. 50(1994) no.2, S.111-133