Search (40 results, page 1 of 2)

  • year_i:[1990 TO 2000}
  • theme_ss:"Inhaltsanalyse"
  1. Nohr, H.: The training of librarians in content analysis : some thoughts on future necessities (1991) 0.02
    0.023284636 = product of:
      0.06985391 = sum of:
        0.014278769 = weight(_text_:in in 5149) [ClassicSimilarity], result of:
          0.014278769 = score(doc=5149,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.24046129 = fieldWeight in 5149, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=5149)
        0.05557514 = product of:
          0.11115028 = sum of:
            0.11115028 = weight(_text_:ausbildung in 5149) [ClassicSimilarity], result of:
              0.11115028 = score(doc=5149,freq=2.0), product of:
                0.23429902 = queryWeight, product of:
                  5.3671665 = idf(docFreq=560, maxDocs=44218)
                  0.043654136 = queryNorm
                0.47439498 = fieldWeight in 5149, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.3671665 = idf(docFreq=560, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5149)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
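    The breakdown above is Lucene ClassicSimilarity explain output: tf(freq) = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm; each leaf score is queryWeight * fieldWeight, and clause scores are summed and scaled by the coord factor (matching clauses / total clauses). A minimal Python sketch, with the values copied from the tree, that reproduces the numbers:

      import math

      def idf(doc_freq, max_docs):
          # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
          tf = math.sqrt(freq)                        # tf(freq) = sqrt(freq)
          w = idf(doc_freq, max_docs)
          query_weight = w * query_norm               # queryWeight
          field_weight = tf * w * field_norm          # fieldWeight
          return query_weight * field_weight

      QUERY_NORM, MAX_DOCS = 0.043654136, 44218

      s_in = term_score(8.0, 30841, MAX_DOCS, QUERY_NORM, 0.0625)
      s_ausb = 0.5 * term_score(2.0, 560, MAX_DOCS, QUERY_NORM, 0.0625)  # inner coord(1/2)
      total = (s_in + s_ausb) * (2.0 / 6.0)           # outer coord(2/6)

      print(s_in, s_ausb, total)
      # ~0.014278769  ~0.05557514  ~0.023284636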
    
    Abstract
    The training of librarians in content analysis is shaped both by the realities of the various fields of application and by technological innovations. The present contribution attempts to identify the components of such training that are necessary for future-oriented instruction, and it stresses the importance of a sound theoretical basis, especially in the light of technological developments. The purpose of the training is to provide the foundation for 'action competence' on the part of the students.
    Theme
    Ausbildung
  2. Langridge, D.W.: Inhaltsanalyse: Grundlagen und Methoden (1994) 0.02
    0.017322265 = product of:
      0.051966794 = sum of:
        0.010929906 = weight(_text_:in in 3923) [ClassicSimilarity], result of:
          0.010929906 = score(doc=3923,freq=12.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.18406484 = fieldWeight in 3923, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3923)
        0.04103689 = weight(_text_:und in 3923) [ClassicSimilarity], result of:
          0.04103689 = score(doc=3923,freq=24.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.42413816 = fieldWeight in 3923, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3923)
      0.33333334 = coord(2/6)
    
    Abstract
    One step in subject indexing has so far gone almost unnoticed in the library science literature, even though it is the starting point of that work and hence of fundamental importance: the analysis of the content of documents. Enabling clear statements about the content and nature of a document on the basis of sound criteria is the aim of Langridge's work, here available for the first time in German translation. Building on a foundation of philosophically derived forms of knowledge, Langridge uses a wealth of concrete examples to show ways - and wrong turns - of approaching the content of documents and reaching results that objective criteria make verifiable. He thus addresses both students of the field, by laying out the fundamental structures of knowledge, and experienced practitioners who want to make their work more effective and to place their decisions on a footing that objectifies personal judgement.
    Classification
    AN 95550 Allgemeines / Buch- und Bibliothekswesen, Informationswissenschaft / Informationswissenschaft / Informationspraxis / Sacherschließung / Verfahren
    AN 75000 Allgemeines / Buch- und Bibliothekswesen, Informationswissenschaft / Bibliothekswesen / Sacherschließung in Bibliotheken / Allgemeines
    AN 95100 Allgemeines / Buch- und Bibliothekswesen, Informationswissenschaft / Informationswissenschaft / Informationspraxis / Referieren, Klassifizieren, Indexieren
    Content
    Contents: Content analysis - aim and purpose / Forms of knowledge / Topics / Forms of documents / Condensation of content / Condensation of content in practical examples / Knowledge structures in ordering systems / In-depth analysis
  3. Nohr, H.: Inhaltsanalyse (1999) 0.01
    0.013322966 = product of:
      0.039968897 = sum of:
        0.0071393843 = weight(_text_:in in 3430) [ClassicSimilarity], result of:
          0.0071393843 = score(doc=3430,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.120230645 = fieldWeight in 3430, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=3430)
        0.032829512 = weight(_text_:und in 3430) [ClassicSimilarity], result of:
          0.032829512 = score(doc=3430,freq=6.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.33931053 = fieldWeight in 3430, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=3430)
      0.33333334 = coord(2/6)
    
    Abstract
    Content analysis is the elementary sub-process of document indexing. Despite this central position within subject-oriented document description, the process of content analysis still receives too little attention in theory and practice. The reason for this neglect lies in the supposedly subjective character of the comprehension process. To overcome this problem, the precise object of content analysis is first determined. From this, methodologically productive approaches and procedures for content analysis can be derived. Finally, some further tasks of content analysis, such as qualitative assessment, are discussed.
    Source
    nfd Information - Wissenschaft und Praxis. 50(1999) H.2, S.69-78
  4. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.01
    0.011251582 = product of:
      0.033754744 = sum of:
        0.010096614 = weight(_text_:in in 5830) [ClassicSimilarity], result of:
          0.010096614 = score(doc=5830,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.17003182 = fieldWeight in 5830, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=5830)
        0.02365813 = product of:
          0.04731626 = sum of:
            0.04731626 = weight(_text_:22 in 5830) [ClassicSimilarity], result of:
              0.04731626 = score(doc=5830,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.30952093 = fieldWeight in 5830, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5830)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    This paper examines various issues that arise in establishing a theoretical basis for an experimental fiction analysis system. It analyzes the warrants of fiction and of works about fiction. From this analysis, it derives classificatory requirements for a fiction system. Classificatory techniques that may contribute to the specification of data elements in fiction are suggested.
    Date
    5. 8.2006 13:22:08
  5. Weimer, K.H.: The nexus of subject analysis and bibliographic description : the case of multipart videos (1996) 0.01
    0.00990557 = product of:
      0.02971671 = sum of:
        0.011973113 = weight(_text_:in in 6525) [ClassicSimilarity], result of:
          0.011973113 = score(doc=6525,freq=10.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.20163295 = fieldWeight in 6525, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=6525)
        0.017743597 = product of:
          0.035487194 = sum of:
            0.035487194 = weight(_text_:22 in 6525) [ClassicSimilarity], result of:
              0.035487194 = score(doc=6525,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.23214069 = fieldWeight in 6525, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6525)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Examines the goals of bibliographic control, subject analysis and their relationship for audiovisual materials in general and multipart videotape recordings in particular. Concludes that intellectual access to multipart works is not adequately provided for when these materials are catalogued in collective set records. An alternative is to catalogue the parts separately. This method increases intellectual access by providing more detailed descriptive notes and subject analysis. As evidenced by the large number of records in the national database for parts of multipart videos, cataloguers have made the intellectual content of multipart videos more accessible by cataloguing the parts separately rather than collectively. This reverses the traditional cataloguing process: beginning with subject analysis results in the intellectual content of these materials driving the bibliographic description. Suggests ways of determining when multipart videos are best catalogued as sets or separately.
    Source
    Cataloging and classification quarterly. 22(1996) no.2, S.5-18
  6. Klüver, J.; Kier, R.: Rekonstruktion und Verstehen : ein Computer-Programm zur Interpretation sozialwissenschaftlicher Texte (1994) 0.01
    0.008935061 = product of:
      0.053610366 = sum of:
        0.053610366 = weight(_text_:und in 6830) [ClassicSimilarity], result of:
          0.053610366 = score(doc=6830,freq=4.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.55409175 = fieldWeight in 6830, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.125 = fieldNorm(doc=6830)
      0.16666667 = coord(1/6)
    
    Source
    Sprache und Datenverarbeitung. 18(1994) H.1, S.3-15
  7. Chen, H.; Ng, T.: An algorithmic approach to concept exploration in a large knowledge network (automatic thesaurus consultation) : symbolic branch-and-bound search versus connectionist Hopfield Net Activation (1995) 0.01
    0.008308224 = product of:
      0.024924671 = sum of:
        0.010709076 = weight(_text_:in in 2203) [ClassicSimilarity], result of:
          0.010709076 = score(doc=2203,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.18034597 = fieldWeight in 2203, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=2203)
        0.014215595 = weight(_text_:und in 2203) [ClassicSimilarity], result of:
          0.014215595 = score(doc=2203,freq=2.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.14692576 = fieldWeight in 2203, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=2203)
      0.33333334 = coord(2/6)
    
    Abstract
    Presents a framework for knowledge discovery and concept exploration. In order to enhance the concept exploration capability of knowledge based systems and to alleviate the limitation of the manual browsing approach, develops 2 spreading activation based algorithms for concept exploration in large, heterogeneous networks of concepts (e.g. multiple thesauri). One algorithm, which is based on the symbolic AI paradigm, performs a conventional branch-and-bound search on a semantic net representation to identify other highly relevant concepts (a serial, optimal search process). The 2nd algorithm, which is based on the neural network approach, executes the Hopfield net parallel relaxation and convergence process to identify 'convergent' concepts for some initial queries (a parallel, heuristic search process; see the sketch after this entry). Tests these 2 algorithms on a large text-based knowledge network of about 13,000 nodes (terms) and 80,000 directed links in the area of computing technologies.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
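    The second algorithm described in the abstract above iterates a Hopfield-style network of weighted term links until the activations settle. A minimal sketch of that idea on a hypothetical four-term toy network; the link weights, sigmoid gain and threshold are illustrative assumptions, not the authors' parameters:

      import math

      # hypothetical toy network of weighted, directed term links
      LINKS = {
          "thesaurus": {"controlled vocabulary": 0.9, "indexing": 0.6},
          "indexing": {"subject analysis": 0.8, "thesaurus": 0.5},
          "controlled vocabulary": {"subject analysis": 0.4},
          "subject analysis": {"indexing": 0.7},
      }
      TERMS = sorted({t for s in LINKS for t in (s, *LINKS[s])})

      def spread(seeds, theta=0.5, gain=4.0, max_iters=100, eps=1e-4):
          # seed terms are clamped at 1.0; every other term takes a sigmoid
          # of its weighted input, and the net iterates until convergence
          act = {t: float(t in seeds) for t in TERMS}
          for _ in range(max_iters):
              nxt = {}
              for t in TERMS:
                  net = sum(w * act[s] for s, out in LINKS.items()
                            for u, w in out.items() if u == t)
                  nxt[t] = 1.0 if t in seeds else 1.0 / (1.0 + math.exp(-gain * (net - theta)))
              if max(abs(nxt[t] - act[t]) for t in TERMS) < eps:
                  break
              act = nxt
          # the 'convergent' concepts: most strongly activated non-seed terms
          return sorted(((t, round(a, 3)) for t, a in act.items() if t not in seeds),
                        key=lambda p: -p[1])

      print(spread({"thesaurus"}))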
  8. Hildebrandt, B.; Moratz, R.; Rickheit, G.; Sagerer, G.: Kognitive Modellierung von Sprach- und Bildverstehen (1996) 0.00
    0.004738532 = product of:
      0.02843119 = sum of:
        0.02843119 = weight(_text_:und in 7292) [ClassicSimilarity], result of:
          0.02843119 = score(doc=7292,freq=2.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.29385152 = fieldWeight in 7292, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.09375 = fieldNorm(doc=7292)
      0.16666667 = coord(1/6)
    
  9. Mayring, P.: Qualitative Inhaltsanalyse : Grundlagen und Techniken (1990) 0.00
    0.003948777 = product of:
      0.02369266 = sum of:
        0.02369266 = weight(_text_:und in 34) [ClassicSimilarity], result of:
          0.02369266 = score(doc=34,freq=2.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.24487628 = fieldWeight in 34, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=34)
      0.16666667 = coord(1/6)
    
  10. Rowe, N.C.: Inferring depictions in natural-language captions for efficient access to picture data (1994) 0.00
    0.0025503114 = product of:
      0.015301868 = sum of:
        0.015301868 = weight(_text_:in in 7296) [ClassicSimilarity], result of:
          0.015301868 = score(doc=7296,freq=12.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.2576908 = fieldWeight in 7296, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7296)
      0.16666667 = coord(1/6)
    
    Abstract
    Multimedia data can require significant examination time to find desired features ('content analysis'). An alternative is using natural-language captions to describe the data, and matching captions to English queries. But it is hard to include everything in the caption of a complicated datum, so significant content analysis may still seem required. We discuss linguistic clues in captions, both syntactic and semantic, that can simplify or eliminate content analysis. We introduce the notion of content depiction and rules for depiction inference. Our approach is implemented in an expert system which demonstrated significant increases in recall in experiments.
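    A minimal sketch of the kind of linguistic clue the abstract describes, with hypothetical cue lists and a hypothetical caption (Rowe's actual rule set is far richer): a temporal cue such as 'before' signals that the phrase after it names inferable context rather than depicted content, so no image analysis is needed for it.

      STOPWORDS = {"a", "an", "the", "of", "on", "in", "at"}
      CONTEXT_CUES = {"before", "after", "during"}   # hypothetical cue list

      def depicted_terms(caption):
          # words up to a temporal cue count as depicted content; words
          # after it are treated as inferred context, not picture content
          depicted, context = set(), False
          for w in caption.lower().replace(",", " ").split():
              if w in CONTEXT_CUES:
                  context = True
              elif w not in STOPWORDS and not context:
                  depicted.add(w)
          return depicted

      print(depicted_terms("Aircraft on the runway before takeoff"))
      # {'aircraft', 'runway'} - 'takeoff' is inferred context, not depicted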
  11. Allen, B.; Reser, D.: Content analysis in library and information science research (1990) 0.00
    0.0023797948 = product of:
      0.014278769 = sum of:
        0.014278769 = weight(_text_:in in 7510) [ClassicSimilarity], result of:
          0.014278769 = score(doc=7510,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.24046129 = fieldWeight in 7510, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.125 = fieldNorm(doc=7510)
      0.16666667 = coord(1/6)
    
  12. Beghtol, C.: Stories : applications of narrative discourse analysis to issues in information storage and retrieval (1997) 0.00
    0.0023281053 = product of:
      0.013968632 = sum of:
        0.013968632 = weight(_text_:in in 5844) [ClassicSimilarity], result of:
          0.013968632 = score(doc=5844,freq=10.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.23523843 = fieldWeight in 5844, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5844)
      0.16666667 = coord(1/6)
    
    Abstract
    The arts, humanities, and social sciences commonly borrow concepts and methods from the sciences, but interdisciplinary borrowing seldom occurs in the opposite direction. Research on narrative discourse is relevant to problems of documentary storage and retrieval, for the arts and humanities in particular, but also for other broad areas of knowledge. This paper views the potential application of narrative discourse analysis to information storage and retrieval problems from 2 perspectives: 1) analysis and comparison of narrative documents in all disciplines may be simplified if fundamental categories that occur in narrative documents can be isolated; and 2) the possibility of subdividing the world of knowledge initially into narrative and non-narrative documents is explored with particular attention to Werlich's work on text types.
  13. Naves, M.M.L.: Analise de assunto : concepcoes (1996) 0.00
    0.0021034614 = product of:
      0.012620768 = sum of:
        0.012620768 = weight(_text_:in in 607) [ClassicSimilarity], result of:
          0.012620768 = score(doc=607,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.21253976 = fieldWeight in 607, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=607)
      0.16666667 = coord(1/6)
    
    Abstract
    Discusses subject analysis as an important stage in the indexing process and notes confusions that can arise over the meaning of the term. Considers questions and difficulties concerning subject analysis and the concept of aboutness.
  14. Svenonius, E.; McGarry, D.: Objectivity in evaluating subject heading assignment (1993) 0.00
    0.0020823204 = product of:
      0.012493922 = sum of:
        0.012493922 = weight(_text_:in in 5612) [ClassicSimilarity], result of:
          0.012493922 = score(doc=5612,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.21040362 = fieldWeight in 5612, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5612)
      0.16666667 = coord(1/6)
    
    Abstract
    Recent papers have called attention to discrepancies in the assignment of LCSH. While philosophical arguments can be made that subject analysis, if not a logical impossibility, is at least point-of-view dependent, subject headings continue to be assigned and continue to be useful. The hypothesis advanced in the present project is that to a considerable degree there is a clear-cut right and wrong to LCSH subject heading assignment. To test the hypothesis, it was postulated that the assignment of a subject heading is correct if it is supported by textual warrant (at least 20% of the book being cataloged is on the topic) and is constructed in accordance with the LoC Subject Cataloging Manual: Subject Headings. A sample of 100 books on scientific subjects was used to test the hypothesis.
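    The correctness criterion in the abstract reduces to a simple test; a minimal sketch, with illustrative names and numbers:

      def heading_supported(pages_on_topic, total_pages, follows_manual,
                            warrant=0.20):
          # textual warrant: at least 20% of the book is on the heading's
          # topic, and the heading follows the LoC Subject Cataloging Manual
          return (pages_on_topic / total_pages) >= warrant and follows_manual

      print(heading_supported(64, 250, True))   # True: 25.6% of the book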
  15. Hjoerland, B.: Subject representation and information seeking : contributions to a theory based on the theory of knowledge (1993) 0.00
    0.0020823204 = product of:
      0.012493922 = sum of:
        0.012493922 = weight(_text_:in in 7555) [ClassicSimilarity], result of:
          0.012493922 = score(doc=7555,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.21040362 = fieldWeight in 7555, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.109375 = fieldNorm(doc=7555)
      0.16666667 = coord(1/6)
    
    Footnote
    [Dissertation]. - Summary in: Knowledge organization 21(1994) no.2, S.94-98
  16. Solomon, P.: Access to fiction for children : a user-based assessment of options and opportunities (1997) 0.00
    0.0020823204 = product of:
      0.012493922 = sum of:
        0.012493922 = weight(_text_:in in 5845) [ClassicSimilarity], result of:
          0.012493922 = score(doc=5845,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.21040362 = fieldWeight in 5845, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5845)
      0.16666667 = coord(1/6)
    
    Abstract
    Reports on a study of children's intentions, purposes, search terms, strategies, successes and breakdowns in accessing fiction. Data was gathered using naturalistic methods of persistent, intensive observation and questioning with children in several school library media centres in the USA, including 997 OPAC transactions. Analyzes the data and highlights aspects of the broader context of the system which may help in the development of mechanisms for electronic access.
  17. Dooley, J.M.: Subject indexing in context : subject cataloging of MARC AMC format archival records (1992) 0.00
    0.0020609628 = product of:
      0.012365777 = sum of:
        0.012365777 = weight(_text_:in in 2199) [ClassicSimilarity], result of:
          0.012365777 = score(doc=2199,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.2082456 = fieldWeight in 2199, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=2199)
      0.16666667 = coord(1/6)
    
    Abstract
    Integration of archival materials catalogued in the USMARC AMC format into online catalogues has given a new urgency to the need for direct subject access. Offers a broad definition of the concepts to be considered under the subject access heading, including not only topical subjects but also proper names, forms of material, time periods, geographic places, occupations, and functions. It is both necessary and possible to provide more consistent subject access to archives and manuscripts than currently is being achieved. Describes current efforts that are under way in the profession to address this need.
  18. Wilkinson, C.L.: Intellectual level as a search enhancement in the online environment : summation and implications (1990) 0.00
    0.0020609628 = product of:
      0.012365777 = sum of:
        0.012365777 = weight(_text_:in in 479) [ClassicSimilarity], result of:
          0.012365777 = score(doc=479,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.2082456 = fieldWeight in 479, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=479)
      0.16666667 = coord(1/6)
    
    Abstract
    This paper summarizes the papers presented by the members of the panel on "The Concept of Intellectual Level in Cataloging and Classification." The implications of adding intellectual level to the MARC record and creating intellectual level indexes in online catalogs are discussed. The conclusion is reached that providing intellectual level will not only be costly but may even be a disservice to library users.
  19. Chu, C.M.; O'Brien, A.: Subject analysis : the critical first stage in indexing (1993) 0.00
    0.0019955188 = product of:
      0.011973113 = sum of:
        0.011973113 = weight(_text_:in in 6472) [ClassicSimilarity], result of:
          0.011973113 = score(doc=6472,freq=10.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.20163295 = fieldWeight in 6472, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=6472)
      0.16666667 = coord(1/6)
    
    Abstract
    Studies of indexing neglect the first stage of the process, that is, subject analysis. In this study, novice indexers were asked to analyse three short, popular journal articles; to express the general subject as well as the primary and secondary topics in natural language statements; to state what influenced the analysis and to comment on the ease or difficulty of this process. The factors which influenced the process were: the subject discipline concerned, factual vs. subjective nature of the text, complexity of the subject, clarity of text, and possible support offered by bibliographic apparatus such as title, etc. The findings showed that with the social science and science texts the general subject could be determined with ease, while this was more difficult with the humanities text. Clear evidence emerged of the importance of bibliographic apparatus in defining the general subject. There was varying difficulty in determining the primary and secondary topics.
  20. From information to knowledge : conceptual and content analysis by computer (1995) 0.00
    0.0019676082 = product of:
      0.011805649 = sum of:
        0.011805649 = weight(_text_:in in 5392) [ClassicSimilarity], result of:
          0.011805649 = score(doc=5392,freq=14.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.19881277 = fieldWeight in 5392, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5392)
      0.16666667 = coord(1/6)
    
    Content
    SCHMIDT, K.M.: Concepts - content - meaning: an introduction; DUCHASTEL, J. et al.: The SACAO project: using computation toward textual data analysis; PAQUIN, L.-C. u. L. DUPUY: An approach to expertise transfer: computer-assisted text analysis; HOGENRAAD, R., Y. BESTGEN u. J.-L. NYSTEN: Terrorist rhetoric: texture and architecture; MOHLER, P.P.: On the interaction between reading and computing: an interpretative approach to content analysis; LANCASHIRE, I.: Computer tools for cognitive stylistics; MERGENTHALER, E.: An outline of knowledge based text analysis; NAMENWIRTH, J.Z.: Ideography in computer-aided content analysis; WEBER, R.P. u. J.Z. Namenwirth: Content-analytic indicators: a self-critique; McKINNON, A.: Optimizing the aberrant frequency word technique; ROSATI, R.: Factor analysis in classical archaeology: export patterns of Attic pottery trade; PETRILLO, P.S.: Old and new worlds: ancient coinage and modern technology; DARANYI, S., S. MARJAI u.a.: Caryatids and the measurement of semiosis in architecture; ZARRI, G.P.: Intelligent information retrieval: an application in the field of historical biographical data; BOUCHARD, G., R. ROY u.a.: Computers and genealogy: from family reconstitution to population reconstruction; DEMÉLAS-BOHY, M.-D. u. M. RENAUD: Instability, networks and political parties: a political history expert system prototype; DARANYI, S., A. ABRANYI u. G. KOVACS: Knowledge extraction from ethnopoetic texts by multivariate statistical methods; FRAUTSCHI, R.L.: Measures of narrative voice in French prose fiction applied to textual samples from the enlightenment to the twentieth century; DANNENBERG, R. u.a.: A project in computer music: the musician's workbench
    Footnote
    Review in: Knowledge organization 23(1996) no.3, S.181-182 (O. Sechser)

Languages

  • e 34
  • d 6

Types

  • a 35
  • m 3
  • d 1
  • s 1