Search (41 results, page 1 of 3)

  • theme_ss:"Inhaltsanalyse"
  1. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.04
    0.035007086 = product of:
      0.07001417 = sum of:
        0.04620441 = weight(_text_:c in 5830) [ClassicSimilarity], result of:
          0.04620441 = score(doc=5830,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.3048872 = fieldWeight in 5830, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.0625 = fieldNorm(doc=5830)
        0.023809763 = product of:
          0.047619525 = sum of:
            0.047619525 = weight(_text_:22 in 5830) [ClassicSimilarity], result of:
              0.047619525 = score(doc=5830,freq=2.0), product of:
                0.15384912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043933928 = queryNorm
                0.30952093 = fieldWeight in 5830, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5830)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    5. 8.2006 13:22:08
  2. Schulzki-Haddouti, C.; Brückner, A.: Die Suche nach dem Sinn : Automatische Inhaltsanalyse nicht nur für Geheimdienste (2001) 0.03
    0.034932848 = product of:
      0.069865696 = sum of:
        0.05775551 = weight(_text_:c in 3133) [ClassicSimilarity], result of:
          0.05775551 = score(doc=3133,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.381109 = fieldWeight in 3133, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.078125 = fieldNorm(doc=3133)
        0.012110183 = product of:
          0.024220366 = sum of:
            0.024220366 = weight(_text_:der in 3133) [ClassicSimilarity], result of:
              0.024220366 = score(doc=3133,freq=2.0), product of:
                0.098138146 = queryWeight, product of:
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.043933928 = queryNorm
                0.2467987 = fieldWeight in 3133, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3133)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The intelligence services are facing a flood of information that they cannot cope with using conventional means. New capabilities, developed in software projects with BND involvement, are intended to remedy this deficit and make arbitrarily linked information accessible to analysis.
  3. Farrow, J.: All in the mind : concept analysis in indexing (1995) 0.03
    0.030190647 = product of:
      0.12076259 = sum of:
        0.12076259 = weight(_text_:van in 2926) [ClassicSimilarity], result of:
          0.12076259 = score(doc=2926,freq=2.0), product of:
            0.24500148 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.043933928 = queryNorm
            0.49290553 = fieldWeight in 2926, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.0625 = fieldNorm(doc=2926)
      0.25 = coord(1/4)
    
    Abstract
    The indexing process consists of the comprehension of the document to be indexed, followed by the production of a set of index terms. Differences between academic indexing and back-of-the-book indexing are discussed. Text comprehension is a branch of human information processing, and it is argued that the model of text comprehension and production developed by van Dijk and Kintsch can form the basis for a cognitive process model of indexing. Strategies for testing such a model are suggested.
  4. Knautz, K.; Dröge, E.; Finkelmeyer, S.; Guschauski, D.; Juchem, K.; Krzmyk, C.; Miskovic, D.; Schiefer, J.; Sen, E.; Verbina, J.; Werner, N.; Stock, W.G.: Indexieren von Emotionen bei Videos (2010) 0.03
    0.027602486 = product of:
      0.055204973 = sum of:
        0.03465331 = weight(_text_:c in 3637) [ClassicSimilarity], result of:
          0.03465331 = score(doc=3637,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.22866541 = fieldWeight in 3637, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.046875 = fieldNorm(doc=3637)
        0.020551663 = product of:
          0.041103326 = sum of:
            0.041103326 = weight(_text_:der in 3637) [ClassicSimilarity], result of:
              0.041103326 = score(doc=3637,freq=16.0), product of:
                0.098138146 = queryWeight, product of:
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.043933928 = queryNorm
                0.4188313 = fieldWeight in 3637, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3637)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This empirical study deals with emotions that are both depicted in and evoked by videos. Are users able to index such emotions consistently enough that their input can be used for emotional video retrieval? We work with a controlled vocabulary of nine emotions (love, joy, fun, surprise, longing, sadness, anger, disgust and fear), a slider for setting the intensity of each emotion, and a broad-folksonomy approach, i.e. we let different users tag the same videos. Test subjects were shown a total of 20 videos (edited clips from YouTube) whose emotions they were asked to index. We received input from 776 participants, amounting to 279,360 slider settings. The consistency of the user votes is very high; the tags lead to stable distributions of emotions for the individual videos. The final shape of these distributions is already reached with relatively few users (fewer than 100). In the sense of power tags, it is possible to separate the emotions that are central to a document (where present at all) and prepare them for emotional information retrieval (EmIR).
  5. Beghtol, C.: Bibliographic classification theory and text linguistics : aboutness, analysis, intertextuality and the cognitive act of classifying documents (1986) 0.02
    0.023102205 = product of:
      0.09240882 = sum of:
        0.09240882 = weight(_text_:c in 1346) [ClassicSimilarity], result of:
          0.09240882 = score(doc=1346,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.6097744 = fieldWeight in 1346, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.125 = fieldNorm(doc=1346)
      0.25 = coord(1/4)
    
  6. Hicks, C.; Rush, J.; Strong, S.: Content analysis (1977) 0.02
    0.023102205 = product of:
      0.09240882 = sum of:
        0.09240882 = weight(_text_:c in 7514) [ClassicSimilarity], result of:
          0.09240882 = score(doc=7514,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.6097744 = fieldWeight in 7514, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.125 = fieldNorm(doc=7514)
      0.25 = coord(1/4)
    
  7. Krause, J.: Principles of content analysis for information retrieval systems : an overview (1996) 0.02
    0.02021443 = product of:
      0.08085772 = sum of:
        0.08085772 = weight(_text_:c in 5270) [ClassicSimilarity], result of:
          0.08085772 = score(doc=5270,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.5335526 = fieldWeight in 5270, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.109375 = fieldNorm(doc=5270)
      0.25 = coord(1/4)
    
    Source
    Text analysis and computer. Ed.: C. Züll et al
  8. Martindale, C.; McKenzie, D.: On the utility of content analysis in author attribution : 'The federalist' (1995) 0.02
    0.017326655 = product of:
      0.06930662 = sum of:
        0.06930662 = weight(_text_:c in 822) [ClassicSimilarity], result of:
          0.06930662 = score(doc=822,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.45733082 = fieldWeight in 822, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.09375 = fieldNorm(doc=822)
      0.25 = coord(1/4)
    
  9. Hauff-Hartig, S.: Automatische Transkription von Videos : Fernsehen 3.0: Automatisierte Sentimentanalyse und Zusammenstellung von Kurzvideos mit hohem Aufregungslevel - KI-generierte Metadaten: Von der Technologiebeobachtung bis zum produktiven Einsatz (2021) 0.02
    0.016748954 = product of:
      0.066995814 = sum of:
        0.066995814 = sum of:
          0.019376293 = weight(_text_:der in 251) [ClassicSimilarity], result of:
            0.019376293 = score(doc=251,freq=2.0), product of:
              0.098138146 = queryWeight, product of:
                2.2337668 = idf(docFreq=12875, maxDocs=44218)
                0.043933928 = queryNorm
              0.19743896 = fieldWeight in 251, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.2337668 = idf(docFreq=12875, maxDocs=44218)
                0.0625 = fieldNorm(doc=251)
          0.047619525 = weight(_text_:22 in 251) [ClassicSimilarity], result of:
            0.047619525 = score(doc=251,freq=2.0), product of:
              0.15384912 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043933928 = queryNorm
              0.30952093 = fieldWeight in 251, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=251)
      0.25 = coord(1/4)
    
    Date
    22. 5.2021 12:43:05
  10. Jörgensen, C.: The applicability of selected classification systems to image attributes (1996) 0.01
    0.010107215 = product of:
      0.04042886 = sum of:
        0.04042886 = weight(_text_:c in 5175) [ClassicSimilarity], result of:
          0.04042886 = score(doc=5175,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.2667763 = fieldWeight in 5175, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5175)
      0.25 = coord(1/4)
    
  11. Beghtol, C.: The classification of fiction : the development of a system based on theoretical principles (1994) 0.01
    0.010107215 = product of:
      0.04042886 = sum of:
        0.04042886 = weight(_text_:c in 3413) [ClassicSimilarity], result of:
          0.04042886 = score(doc=3413,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.2667763 = fieldWeight in 3413, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3413)
      0.25 = coord(1/4)
    
  12. Beghtol, C.: Stories : applications of narrative discourse analysis to issues in information storage and retrieval (1997) 0.01
    0.010107215 = product of:
      0.04042886 = sum of:
        0.04042886 = weight(_text_:c in 5844) [ClassicSimilarity], result of:
          0.04042886 = score(doc=5844,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.2667763 = fieldWeight in 5844, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5844)
      0.25 = coord(1/4)
    
  13. Clavier, V.; Paganelli, C.: Including authorial stance in the indexing of scientific documents (2012) 0.01
    0.008663327 = product of:
      0.03465331 = sum of:
        0.03465331 = weight(_text_:c in 320) [ClassicSimilarity], result of:
          0.03465331 = score(doc=320,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.22866541 = fieldWeight in 320, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.046875 = fieldNorm(doc=320)
      0.25 = coord(1/4)
    
  14. Pejtersen, A.M.: Design of a classification scheme for fiction based on an analysis of actual user-librarian communication, and use of the scheme for control of librarians' search strategies (1980) 0.01
    0.0074405507 = product of:
      0.029762203 = sum of:
        0.029762203 = product of:
          0.059524406 = sum of:
            0.059524406 = weight(_text_:22 in 5835) [ClassicSimilarity], result of:
              0.059524406 = score(doc=5835,freq=2.0), product of:
                0.15384912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043933928 = queryNorm
                0.38690117 = fieldWeight in 5835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5835)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    5. 8.2006 13:22:44
  15. From information to knowledge : conceptual and content analysis by computer (1995) 0.01
    0.007219439 = product of:
      0.028877756 = sum of:
        0.028877756 = weight(_text_:c in 5392) [ClassicSimilarity], result of:
          0.028877756 = score(doc=5392,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.1905545 = fieldWeight in 5392, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5392)
      0.25 = coord(1/4)
    
    Content
    SCHMIDT, K.M.: Concepts - content - meaning: an introduction; DUCHASTEL, J. et al.: The SACAO project: using computation toward textual data analysis; PAQUIN, L.-C. u. L. DUPUY: An approach to expertise transfer: computer-assisted text analysis; HOGENRAAD, R., Y. BESTGEN u. J.-L. NYSTEN: Terrorist rhetoric: texture and architecture; MOHLER, P.P.: On the interaction between reading and computing: an interpretative approach to content analysis; LANCASHIRE, I.: Computer tools for cognitive stylistics; MERGENTHALER, E.: An outline of knowledge based text analysis; NAMENWIRTH, J.Z.: Ideography in computer-aided content analysis; WEBER, R.P. u. J.Z. Namenwirth: Content-analytic indicators: a self-critique; McKINNON, A.: Optimizing the aberrant frequency word technique; ROSATI, R.: Factor analysis in classical archaeology: export patterns of Attic pottery trade; PETRILLO, P.S.: Old and new worlds: ancient coinage and modern technology; DARANYI, S., S. MARJAI u.a.: Caryatids and the measurement of semiosis in architecture; ZARRI, G.P.: Intelligent information retrieval: an application in the field of historical biographical data; BOUCHARD, G., R. ROY u.a.: Computers and genealogy: from family reconstitution to population reconstruction; DEMÉLAS-BOHY, M.-D. u. M. RENAUD: Instability, networks and political parties: a political history expert system prototype; DARANYI, S., A. ABRANYI u. G. KOVACS: Knowledge extraction from ethnopoetic texts by multivariate statistical methods; FRAUTSCHI, R.L.: Measures of narrative voice in French prose fiction applied to textual samples from the enlightenment to the twentieth century; DANNENBERG, R. u.a.: A project in computer music: the musician's workbench
  16. Garcia Jiménez, A.; Valle Gastaminza, F. del: From thesauri to ontologies: a case study in a digital visual context (2004) 0.01
    0.007219439 = product of:
      0.028877756 = sum of:
        0.028877756 = weight(_text_:c in 2657) [ClassicSimilarity], result of:
          0.028877756 = score(doc=2657,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.1905545 = fieldWeight in 2657, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2657)
      0.25 = coord(1/4)
    
    Abstract
    In this paper a framework for the construction and organization of knowledge organization and representation languages in the context of digital photograph collections is presented. It analyses exigencies of photographs as documentary objects, as well as several models of indexing, different proposals of languages and a theoretical revision of ontologies in this research field, in relation to visual documents. In considering the photograph as an analysis object, it is appropriate to study all its attributes: features, components or properties of an object that can be represented in an information processing system. The attributes which are related to visual features include cognitive and affective answers and elements that describe spatial, semantic, symbolic or emotional features about a photograph. In any case, it is necessary to treat: a) morphological and material attributes (emulsion, state of preservation); b) biographical attributes (school or trend, publication or exhibition); c) attributes of content: what and how a photograph says something; d) relational attributes: visual documents establish relationships with other documents that can be analysed in order to understand them.
  17. Inskip, C.; MacFarlane, A.; Rafferty, P.: Meaning, communication, music : towards a revised communication model (2008) 0.01
    0.007219439 = product of:
      0.028877756 = sum of:
        0.028877756 = weight(_text_:c in 2347) [ClassicSimilarity], result of:
          0.028877756 = score(doc=2347,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.1905545 = fieldWeight in 2347, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2347)
      0.25 = coord(1/4)
    
  18. Nohr, H.: Inhaltsanalyse (1999) 0.01
    0.006850554 = product of:
      0.027402217 = sum of:
        0.027402217 = product of:
          0.054804433 = sum of:
            0.054804433 = weight(_text_:der in 3430) [ClassicSimilarity], result of:
              0.054804433 = score(doc=3430,freq=16.0), product of:
                0.098138146 = queryWeight, product of:
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.043933928 = queryNorm
                0.5584417 = fieldWeight in 3430, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3430)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Content analysis is the elementary sub-process of document indexing. Despite this central position within subject-oriented document description, the process of content analysis still receives too little attention in theory and practice. The reason for this neglect lies in the supposedly subjective character of the comprehension process. To overcome this problem, the precise object of content analysis is first determined. From this, methodologically more advanced approaches and procedures for content analysis can be derived. Finally, some further tasks of content analysis, such as qualitative evaluation, are discussed.
  19. Mochmann, E.: Inhaltsanalyse in den Sozialwissenschaften (1985) 0.01
    0.005932754 = product of:
      0.023731016 = sum of:
        0.023731016 = product of:
          0.04746203 = sum of:
            0.04746203 = weight(_text_:der in 2924) [ClassicSimilarity], result of:
              0.04746203 = score(doc=2924,freq=12.0), product of:
                0.098138146 = queryWeight, product of:
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.043933928 = queryNorm
                0.4836247 = fieldWeight in 2924, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2924)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Content analysis in the social sciences is an established method for obtaining data from communication content and for analysing it. Research designs can be framed by Lasswell's classic question "Who says what to whom, how and with what effect?" Since the mid-1960s, computer-assisted content analysis techniques have complemented the traditional coding of communication content by human coders. The basic principles of content description based on "content analysis dictionaries" (General Inquirer) and on statistical association methods (WORDS) are explained. Possibilities for observing societal developments on the basis of "text indicators" are illustrated with examples from the analysis of daily newspapers, small-group discussions and party programmes.
  20. Volpers, H.: Inhaltsanalyse (2013) 0.01
    0.005607093 = product of:
      0.022428373 = sum of:
        0.022428373 = product of:
          0.044856746 = sum of:
            0.044856746 = weight(_text_:der in 1018) [ClassicSimilarity], result of:
              0.044856746 = score(doc=1018,freq=14.0), product of:
                0.098138146 = queryWeight, product of:
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.043933928 = queryNorm
                0.4570776 = fieldWeight in 1018, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1018)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The term content analysis is defined differently depending on scholarly provenance or context of meaning: within library practice, capturing the content of a given document for indexing purposes is referred to as content analysis; philological text interpretations or linguistic text analyses are occasionally labelled content analyses, as is the interpretation of interview statements in psychology and qualitative social research. The present contribution refers explicitly to the social-science method of systematic content analysis. Even with this delimitation, however, sufficient definitional clarity has not yet been achieved, since a distinction between qualitative and quantitative procedures still has to be made.
    Source
    Handbuch Methoden der Bibliotheks- und Informationswissenschaft: Bibliotheks-, Benutzerforschung, Informationsanalyse. Hrsg.: K. Umlauf, S. Fühles-Ubach u. M.S. Seadle
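
Each hit above is followed by the Lucene "explain" tree that the search engine produced for it, breaking the relevance score down into ClassicSimilarity (TF-IDF) factors: tf is the square root of the within-document term frequency, idf is derived from docFreq and maxDocs, queryNorm is a normalisation constant shared by all clauses of the query, fieldNorm reflects field length, and coord scales the result by the fraction of query clauses that matched. As a reading aid, the following minimal Python sketch re-computes the breakdown shown for result no. 1; the constants are copied from that tree, the tf/idf formulas are the standard ClassicSimilarity definitions, and the printed values should match the tree up to rounding.

```python
import math

# Minimal sketch of Lucene's ClassicSimilarity (TF-IDF) scoring, re-computing the
# explain tree of result no. 1. All constants (freq, docFreq, maxDocs, queryNorm,
# fieldNorm, coord fractions) are copied from that tree.

MAX_DOCS = 44218
QUERY_NORM = 0.043933928  # query normalisation constant shared by all clauses


def idf(doc_freq: int) -> float:
    """ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))."""
    return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))


def term_score(freq: float, doc_freq: int, field_norm: float) -> float:
    """score = queryWeight * fieldWeight for one term in one document."""
    query_weight = idf(doc_freq) * QUERY_NORM                     # e.g. 0.15154591 for "_text_:c"
    field_weight = math.sqrt(freq) * idf(doc_freq) * field_norm   # e.g. 0.3048872
    return query_weight * field_weight


# "_text_:c"  : freq=2, docFreq=3817, fieldNorm=0.0625  ->  ~0.04620441
w_c = term_score(2.0, 3817, 0.0625)
# "_text_:22" : freq=2, docFreq=3622, fieldNorm=0.0625  ->  ~0.047619525,
# scaled by coord(1/2) inside its sub-query             ->  ~0.023809763
w_22 = term_score(2.0, 3622, 0.0625) * 0.5

# Document score: sum of the matching clauses, scaled by coord(2/4)
# because 2 of the 4 query clauses matched              ->  ~0.035007086
total = (w_c + w_22) * (2 / 4)
print(w_c, w_22, total)
```

The same arithmetic applies to every other explain tree on this page; only freq, docFreq, fieldNorm and the coord fractions change.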

Languages

  • e 21
  • d 20

Types

  • a 26
  • m 8
  • x 5
  • el 2
  • d 1
  • s 1
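
The filter chip at the top (theme_ss:"Inhaltsanalyse") and the Languages/Types counts above are typical of a faceted query against a Lucene/Solr index. Assuming a Solr backend (the _ss field suffix and the explain output point that way, but this is an assumption), a request reproducing this page might look roughly like the sketch below; the endpoint and all field names other than theme_ss are hypothetical.

```python
from urllib.parse import urlencode

# Hypothetical Solr-style request that would reproduce this page: filter on the
# theme facet, fetch the 20 hits of page 1, ask for the language/type facet
# counts, and enable the per-document score explanations shown under each hit.
# The endpoint and the facet field names other than theme_ss are assumptions.
params = {
    "q": "*:*",
    "fq": 'theme_ss:"Inhaltsanalyse"',           # the active filter chip at the top
    "rows": 20,                                  # hits per page (41 results, 3 pages)
    "start": 0,                                  # offset for page 1
    "facet": "true",
    "facet.field": ["language_ss", "type_ss"],   # assumed names of the facet fields
    "debugQuery": "true",                        # returns the "explain" score trees
}
print("/solr/select?" + urlencode(params, doseq=True))
```

The debugQuery parameter is what produces the per-document score explanations shown under each hit; the facet.field entries yield the counts listed under Languages and Types.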