Search (26 results, page 1 of 2)

  • Filter: theme_ss:"Inhaltsanalyse"
  1. White, M.D.; Marsh, E.E.: Content analysis : a flexible methodology (2006) 0.04
    0.039216578 = sum of:
      0.017988488 = product of:
        0.07195395 = sum of:
          0.07195395 = weight(_text_:authors in 5589) [ClassicSimilarity], result of:
            0.07195395 = score(doc=5589,freq=2.0), product of:
              0.23809293 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052226946 = queryNorm
              0.30220953 = fieldWeight in 5589, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=5589)
        0.25 = coord(1/4)
      0.02122809 = product of:
        0.04245618 = sum of:
          0.04245618 = weight(_text_:22 in 5589) [ClassicSimilarity], result of:
            0.04245618 = score(doc=5589,freq=2.0), product of:
              0.18288986 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052226946 = queryNorm
              0.23214069 = fieldWeight in 5589, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5589)
        0.5 = coord(1/2)
    
    Abstract
    Content analysis is a highly flexible research method that has been widely used in library and information science (LIS) studies with varying research goals and objectives. The research method is applied in qualitative, quantitative, and sometimes mixed modes of research frameworks and employs a wide range of analytical techniques to generate findings and put them into context. This article characterizes content analysis as a systematic, rigorous approach to analyzing documents obtained or generated in the course of research. It briefly describes the steps involved in content analysis, differentiates between quantitative and qualitative content analysis, and shows that content analysis serves the purposes of both quantitative research and qualitative research. The authors draw on selected LIS studies that have used content analysis to illustrate the concepts addressed in the article. The article also serves as a gateway to methodological books and articles that provide more detail about aspects of content analysis discussed only briefly in the article.
    Source
    Library trends. 55(2006) no.1, S.22-45
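The score breakdown shown for the first hit is standard Lucene ClassicSimilarity (TF-IDF) "explain" output. As a minimal sketch of how those numbers combine (the queryNorm and fieldNorm values are taken verbatim from the breakdown above; coord(n/m) down-weights documents matching only n of m query clauses):

```python
import math

def classic_similarity(freq, doc_freq, max_docs, query_norm, field_norm, coord=1.0):
    """Single-term TF-IDF score, mirroring Lucene's ClassicSimilarity.

    tf          = sqrt(term frequency in the field)
    idf         = 1 + ln(max_docs / (doc_freq + 1))
    queryWeight = idf * queryNorm
    fieldWeight = tf * idf * fieldNorm
    score       = coord * queryWeight * fieldWeight
    """
    tf = math.sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return coord * query_weight * field_weight

# The two term contributions for hit 1 (doc 5589), using the values
# printed in the explain tree:
authors_part = classic_similarity(2.0, 1258, 44218, 0.052226946, 0.046875, coord=0.25)
term22_part  = classic_similarity(2.0, 3622, 44218, 0.052226946, 0.046875, coord=0.5)
```

Summing the two contributions reproduces the 0.039216578 total shown for the first result; the idf formula above is Lucene's classic `1 + ln(numDocs/(docFreq+1))`, which matches the printed values (e.g. idf=4.558814 for docFreq=1258, maxDocs=44218).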
  2. Allen, B.; Reser, D.: Content analysis in library and information science research (1990) 0.03
    
  3. Hjoerland, B.: Subject representation and information seeking : contributions to a theory based on the theory of knowledge (1993) 0.03
    
  4. Hildebrandt, B.; Moratz, R.; Rickheit, G.; Sagerer, G.: Kognitive Modellierung von Sprach- und Bildverstehen (1996) 0.02
    
  5. Endres-Niggemeyer, B.: Content analysis : a special case of text compression (1989) 0.02
    
  6. Pejtersen, A.M.: Design of a classification scheme for fiction based on an analysis of actual user-librarian communication, and use of the scheme for control of librarians' search strategies (1980) 0.02
    
    Date
    5. 8.2006 13:22:44
  7. Pozzi de Sousa, B.; Ortega, C.D.: Aspects regarding the notion of subject in the context of different theoretical trends : teaching approaches in Brazil (2018) 0.01
    
  8. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.01
    
    Date
    5. 8.2006 13:22:08
  9. Hauff-Hartig, S.: Automatische Transkription von Videos : Fernsehen 3.0: Automatisierte Sentimentanalyse und Zusammenstellung von Kurzvideos mit hohem Aufregungslevel KI-generierte Metadaten: Von der Technologiebeobachtung bis zum produktiven Einsatz (2021) 0.01
    
    Date
    22. 5.2021 12:43:05
  10. Hjoerland, B.: Subject (of documents) (2016) 0.01
    
    Source
    ISKO Encyclopedia of Knowledge Organization, ed. by B. Hjoerland. [http://www.isko.org/cyclo/logical_division]
  11. Hjoerland, B.: Towards a theory of aboutness, subject, topicality, theme, domain, field, content ... and relevance (2001) 0.01
    
  12. Hjoerland, B.: Knowledge organization (KO) (2017) 0.01
    
  13. Raieli, R.: The semantic hole : enthusiasm and caution around multimedia information retrieval (2012) 0.01
    
    Date
    22. 1.2012 13:02:10
    Source
    Knowledge organization. 39(2012) no.1, S.13-22
  14. Hjoerland, B.: The concept of 'subject' in information science (1992) 0.01
    
  15. Greisdorf, H.; O'Connor, B.: Modelling what users see when they look at images : a cognitive viewpoint (2002) 0.01
    
  16. Weimer, K.H.: The nexus of subject analysis and bibliographic description : the case of multipart videos (1996) 0.01
    
    Source
    Cataloging and classification quarterly. 22(1996) no.2, S.5-18
  17. Chen, S.-J.; Lee, H.-L.: Art images and mental associations : a preliminary exploration (2014) 0.01
    
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  18. Shaw, R.: Information organization and the philosophy of history (2013) 0.01
    
    Abstract
    The philosophy of history can help articulate problems relevant to information organization. One such problem is "aboutness": How do texts relate to the world? In response to this problem, philosophers of history have developed theories of colligation describing how authors bind together phenomena under organizing concepts. Drawing on these ideas, I present a theory of subject analysis that avoids the problematic illusion of an independent "landscape" of subjects. This theory points to a broad vision of the future of information organization and some specific challenges to be met.
  19. Garcia Jiménez, A.; Valle Gastaminza, F. del: From thesauri to ontologies: a case study in a digital visual context (2004) 0.01
    
    Abstract
    In this paper a framework for the construction and organization of knowledge organization and representation languages in the context of digital photograph collections is presented. It analyses the exigencies of photographs as documentary objects, as well as several models of indexing, different proposals of languages and a theoretical revision of ontologies in this research field, in relation to visual documents. In considering the photograph as an analysis object, it is appropriate to study all its attributes: features, components or properties of an object that can be represented in an information processing system. The attributes which are related to visual features include cognitive and affective answers and elements that describe spatial, semantic, symbolic or emotional features of a photograph. In any case, it is necessary to treat: a) morphological and material attributes (emulsion, state of preservation); b) biographical attributes (school or trend, publication or exhibition); c) attributes of content: what and how a photograph says something; d) relational attributes: visual documents establish relationships with other documents that can be analysed in order to understand them.
  20. Wilson, M.J.; Wilson, M.L.: A comparison of techniques for measuring sensemaking and learning within participant-generated summaries (2013) 0.01
    
    Abstract
    While it is easy to identify whether someone has found a piece of information during a search task, it is much harder to measure how much someone has learned during the search process. Searchers who are learning often exhibit exploratory behaviors, and so current research is often focused on improving support for exploratory search. Consequently, we need effective measures of learning to demonstrate better support for exploratory search. Some approaches, such as quizzes, measure recall when learning from a fixed source of information. This research, however, focuses on techniques for measuring open-ended learning, which often involve analyzing handwritten summaries produced by participants after a task. There are two common techniques for analyzing such summaries: (a) counting facts and statements and (b) judging topic coverage. Both of these techniques, however, can be easily confounded by simple variables such as summary length. This article presents a new technique that measures depth of learning within written summaries based on Bloom's taxonomy (B.S. Bloom & M.D. Engelhart, 1956). This technique was generated using grounded theory and is designed to be less susceptible to such confounding variables. Together, these three categories of measure were compared by applying them to a large collection of written summaries produced in a task-based study, and our results provide insights into each of their strengths and weaknesses. Both fact-to-statement ratio and our own measure of depth of learning were effective while being less affected by confounding variables. Recommendations and clear areas of future work are provided to help continued research into supporting sensemaking and learning.