Search (43 results, page 1 of 3)

  • theme_ss:"Inhaltsanalyse"
  1. Pejtersen, A.M.: Design of a classification scheme for fiction based on an analysis of actual user-librarian communication, and use of the scheme for control of librarians' search strategies (1980) 0.07
    0.068657026 = product of:
      0.13731405 = sum of:
        0.13731405 = sum of:
          0.06648588 = weight(_text_:research in 5835) [ClassicSimilarity], result of:
            0.06648588 = score(doc=5835,freq=4.0), product of:
              0.1491455 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.05227703 = queryNorm
              0.44577867 = fieldWeight in 5835, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.078125 = fieldNorm(doc=5835)
          0.07082816 = weight(_text_:22 in 5835) [ClassicSimilarity], result of:
            0.07082816 = score(doc=5835,freq=2.0), product of:
              0.18306525 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05227703 = queryNorm
              0.38690117 = fieldWeight in 5835, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=5835)
      0.5 = coord(1/2)
    
    Date
    5. 8.2006 13:22:44
    Source
    Theory and application of information research. Proc. of the 2nd Int. Research Forum on Information Science, 3.-6.8.1977, Copenhagen. Ed.: O. Harbo u. L. Kajberg
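  The indented breakdowns attached to each hit read like Lucene "explain" output for a two-clause text query ("research" and "22") scored with the classic tf-idf similarity. As a rough cross-check, the following minimal Python sketch, assuming Lucene's ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm; the function and variable names are illustrative only), reproduces the 0.068657026 shown for this first result:

    import math

    def idf(doc_freq, max_docs):
        # Classic inverse document frequency, e.g. idf(6931, 44218) = 2.8529835
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
        term_idf = idf(doc_freq, max_docs)
        query_weight = term_idf * query_norm                     # e.g. 2.8529835 * 0.05227703 = 0.1491455
        field_weight = math.sqrt(freq) * term_idf * field_norm   # tf * idf * fieldNorm
        return query_weight * field_weight

    # Result 1 (doc 5835), fieldNorm 0.078125: "research" occurs 4 times, "22" twice
    research = term_score(4.0, 6931, 44218, 0.05227703, 0.078125)  # ~0.06648588
    term_22  = term_score(2.0, 3622, 44218, 0.05227703, 0.078125)  # ~0.07082816
    print(0.5 * (research + term_22))                              # coord(1/2) -> ~0.068657026

  Only freq, docFreq, and fieldNorm vary from hit to hit; queryNorm (0.05227703) and the two idf values are constant across the list.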
  2. White, M.D.; Marsh, E.E.: Content analysis : a flexible methodology (2006) 0.06
    0.058563553 = product of:
      0.117127106 = sum of:
        0.117127106 = sum of:
          0.074630216 = weight(_text_:research in 5589) [ClassicSimilarity], result of:
            0.074630216 = score(doc=5589,freq=14.0), product of:
              0.1491455 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.05227703 = queryNorm
              0.5003853 = fieldWeight in 5589, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.046875 = fieldNorm(doc=5589)
          0.042496894 = weight(_text_:22 in 5589) [ClassicSimilarity], result of:
            0.042496894 = score(doc=5589,freq=2.0), product of:
              0.18306525 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05227703 = queryNorm
              0.23214069 = fieldWeight in 5589, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5589)
      0.5 = coord(1/2)
    
    Abstract
    Content analysis is a highly flexible research method that has been widely used in library and information science (LIS) studies with varying research goals and objectives. The research method is applied in qualitative, quantitative, and sometimes mixed modes of research frameworks and employs a wide range of analytical techniques to generate findings and put them into context. This article characterizes content analysis as a systematic, rigorous approach to analyzing documents obtained or generated in the course of research. It briefly describes the steps involved in content analysis, differentiates between quantitative and qualitative content analysis, and shows that content analysis serves the purposes of both quantitative research and qualitative research. The authors draw on selected LIS studies that have used content analysis to illustrate the concepts addressed in the article. The article also serves as a gateway to methodological books and articles that provide more detail about aspects of content analysis discussed only briefly in the article.
    Source
    Library trends. 55(2006) no.1, S.22-45
  3. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.05
    0.054925613 = product of:
      0.109851226 = sum of:
        0.109851226 = sum of:
          0.053188704 = weight(_text_:research in 5830) [ClassicSimilarity], result of:
            0.053188704 = score(doc=5830,freq=4.0), product of:
              0.1491455 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.05227703 = queryNorm
              0.35662293 = fieldWeight in 5830, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.0625 = fieldNorm(doc=5830)
          0.056662526 = weight(_text_:22 in 5830) [ClassicSimilarity], result of:
            0.056662526 = score(doc=5830,freq=2.0), product of:
              0.18306525 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05227703 = queryNorm
              0.30952093 = fieldWeight in 5830, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=5830)
      0.5 = coord(1/2)
    
    Date
    5. 8.2006 13:22:08
    Source
    Classification research for knowledge representation and organization. Proc. 5th Int. Study Conf. on Classification Research, Toronto, Canada, 24.-28.6.1991. Ed. by N.J. Williamson u. M. Hudon
  4. Chen, S.-J.; Lee, H.-L.: Art images and mental associations : a preliminary exploration (2014) 0.04
    0.035352234 = product of:
      0.07070447 = sum of:
        0.07070447 = sum of:
          0.028207572 = weight(_text_:research in 1416) [ClassicSimilarity], result of:
            0.028207572 = score(doc=1416,freq=2.0), product of:
              0.1491455 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.05227703 = queryNorm
              0.18912788 = fieldWeight in 1416, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.046875 = fieldNorm(doc=1416)
          0.042496894 = weight(_text_:22 in 1416) [ClassicSimilarity], result of:
            0.042496894 = score(doc=1416,freq=2.0), product of:
              0.18306525 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05227703 = queryNorm
              0.23214069 = fieldWeight in 1416, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1416)
      0.5 = coord(1/2)
    
    Abstract
    This paper reports on the preliminary findings of a study that explores mental associations made by novices viewing art images. In a controlled environment, 20 Taiwanese college students responded to the question "What does the painting remind you of?" after viewing each digitized image of 15 oil paintings by a famous Taiwanese artist. Rather than focusing on the representation or interpretation of art, the study attempted to solicit information about how non-experts are stimulated by art. This paper reports on the analysis of participant responses to three of the images, and describes a 12-type taxonomy of associations that emerged from the analysis. While 9 of the types are derived and adapted from facets in the Art & Architecture Thesaurus, three new types - Artistic Influence Association, Reactive Association, and Prototype Association - are discovered. The conclusion briefly discusses both the significance of the findings and the implications for future research.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  5. Allen, B.; Reser, D.: Content analysis in library and information science research (1990) 0.03
    0.026594352 = product of:
      0.053188704 = sum of:
        0.053188704 = product of:
          0.10637741 = sum of:
            0.10637741 = weight(_text_:research in 7510) [ClassicSimilarity], result of:
              0.10637741 = score(doc=7510,freq=4.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.71324587 = fieldWeight in 7510, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.125 = fieldNorm(doc=7510)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Library and information science research. 12(1990) no.3, S.251-262
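  For hits that match only one of the two text clauses, such as this entry (only "research", freq=4, fieldNorm 0.125), the breakdown above applies coord(1/2) twice, once inside the nested clause and once at the top level, presumably because only one of the two terms matched. A self-contained arithmetic check under the same assumptions as the sketch after result 1:

    import math
    idf_research = 1.0 + math.log(44218 / (6931 + 1))      # 2.8529835
    query_weight = idf_research * 0.05227703                # 0.1491455 (queryWeight)
    field_weight = math.sqrt(4.0) * idf_research * 0.125    # 0.71324587 (fieldWeight)
    print(0.5 * 0.5 * query_weight * field_weight)          # two coord(1/2) factors -> ~0.026594352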
  6. Sauperl, A.: Subject determination during the cataloging process : the development of a system based on theoretical principles (2002) 0.02
    0.02472801 = product of:
      0.04945602 = sum of:
        0.04945602 = sum of:
          0.028207572 = weight(_text_:research in 2293) [ClassicSimilarity], result of:
            0.028207572 = score(doc=2293,freq=8.0), product of:
              0.1491455 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.05227703 = queryNorm
              0.18912788 = fieldWeight in 2293, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.0234375 = fieldNorm(doc=2293)
          0.021248447 = weight(_text_:22 in 2293) [ClassicSimilarity], result of:
            0.021248447 = score(doc=2293,freq=2.0), product of:
              0.18306525 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05227703 = queryNorm
              0.116070345 = fieldWeight in 2293, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=2293)
      0.5 = coord(1/2)
    
    Date
    27. 9.2005 14:22:19
    Footnote
    Rez. in: Knowledge organization 30(2003) no.2, S.114-115 (M. Hudon); "This most interesting contribution to the literature of subject cataloguing originates in the author's doctoral dissertation, prepared under the direction of Jerry Saye at the University of North Carolina at Chapel Hill. In seven highly readable chapters, Alenka Sauperl develops possible answers to her principal research question: How do cataloguers determine or identify the topic of a document and choose appropriate subject representations? Specific questions at the source of this research on a process which has not been a frequent object of study include: Where do cataloguers look for an overall sense of what a document is about? How do they get an overall sense of what a document is about, especially when they are not familiar with the discipline? Do they consider only one or several possible interpretations? How do they translate meanings into appropriate and valid class numbers and subject headings? Using a strictly qualitative methodology, Dr. Sauperl's research is a study of twelve cataloguers in real-life situations. The author insists on the holistic rather than purely theoretical understanding of the process she is targeting. Participants in the study were professional cataloguers with at least one year of experience in their current job at one of three large academic libraries in the Southeastern United States. All three libraries have a large central cataloguing department, and use OCLC sources and the same automated system; the context of cataloguing tasks is thus considered to be reasonably comparable. All participants were volunteers in this study, which combined two data-gathering techniques: the think-aloud method and time-line interviews. A model of the subject cataloguing process was first developed from observations of a group of six cataloguers who were asked to independently perform original cataloguing on three nonfiction, non-serial items selected from materials regularly assigned to them for processing. The model was then used for follow-up interviews. Each participant in the second group of cataloguers was invited to reflect on his/her work process for a recent challenging document they had catalogued. Results are presented in 12 stories describing as many personal approaches to subject cataloguing. From these stories a summarization is offered and a theoretical model of subject cataloguing is developed which, according to the author, represents a realistic approach to subject cataloguing. Stories alternate comments from the researcher and direct quotations from the observed or interviewed cataloguers. Not surprisingly, the participants' stories reveal similarities in the sequence and accomplishment of several tasks in the process of subject cataloguing. Sauperl's proposed model, described in Chapter 5, includes as main stages: 1) Examination of the book and subject identification; 2) Search for subject headings; 3) Classification. Chapter 6 is a hypothetical case study, using the proposed model to describe the various stages of cataloguing a hypothetical resource. ...
    This document will be particularly useful to subject cataloguing teachers and trainers who could use the model to design case descriptions and exercises. We believe it is an accurate description of the reality of subject cataloguing today. But now that we know how things are done, the next interesting question may be: Is that the best way? Is there a better, more efficient way to do things? We can only hope that Dr. Sauperl will soon provide her own view of methods and techniques that could improve the flow of work or address the cataloguers' concern as to the lack of feedback on their work. Her several excellent suggestions for further research in this area all build on bits and pieces of what is done already, and stay well away from what could be done by the various actors in the area, from the designers of controlled vocabularies and authority files to those who use these tools on a daily basis to index, classify, or search for information."
  7. Belkin, N.J.: ¬The problem of 'matching' in information retrieval (1980) 0.02
    0.019945765 = product of:
      0.03989153 = sum of:
        0.03989153 = product of:
          0.07978306 = sum of:
            0.07978306 = weight(_text_:research in 1329) [ClassicSimilarity], result of:
              0.07978306 = score(doc=1329,freq=4.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.5349344 = fieldWeight in 1329, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1329)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Theory and application of information research. Proc. of the 2nd Int. Research Forum on Information Science, 3.-6.8.1977, Copenhagen. Ed.: O. Harbo u. L. Kajberg
  8. Jörgensen, C.: ¬The applicability of selected classification systems to image attributes (1996) 0.01
    0.014249943 = product of:
      0.028499886 = sum of:
        0.028499886 = product of:
          0.056999773 = sum of:
            0.056999773 = weight(_text_:research in 5175) [ClassicSimilarity], result of:
              0.056999773 = score(doc=5175,freq=6.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.38217562 = fieldWeight in 5175, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5175)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Recent research investigated image attributes as reported by participants in describing, sorting, and searching tasks with images and defined 46 specific image attributes which were then organized into 12 major classes. Attributes were also grouped as being 'perceptual' (directly stimulated by visual percepts), 'interpretive' (requiring inference from visual percepts), and 'reactive' (cognitive and affective responses to the images). This research describes the coverage of two image indexing and classification systems and one general classification system in relation to the previous findings and analyzes the extent to which components of these systems are capable of describing the range of image attributes as revealed by the previous research
  9. Hauff-Hartig, S.: Automatische Transkription von Videos : Fernsehen 3.0: Automatisierte Sentimentanalyse und Zusammenstellung von Kurzvideos mit hohem Aufregungslevel KI-generierte Metadaten: Von der Technologiebeobachtung bis zum produktiven Einsatz (2021) 0.01
    0.0141656315 = product of:
      0.028331263 = sum of:
        0.028331263 = product of:
          0.056662526 = sum of:
            0.056662526 = weight(_text_:22 in 251) [ClassicSimilarity], result of:
              0.056662526 = score(doc=251,freq=2.0), product of:
                0.18306525 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05227703 = queryNorm
                0.30952093 = fieldWeight in 251, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=251)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 5.2021 12:43:05
  10. Todd, R.J.: Subject access: what's it all about? : some research findings (1993) 0.01
    0.013297176 = product of:
      0.026594352 = sum of:
        0.026594352 = product of:
          0.053188704 = sum of:
            0.053188704 = weight(_text_:research in 8193) [ClassicSimilarity], result of:
              0.053188704 = score(doc=8193,freq=4.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.35662293 = fieldWeight in 8193, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0625 = fieldNorm(doc=8193)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Describes some findings of research conducted into activities related to the process of deciding subjects of documents which sought to identify the goals and intentions of indexers in determining subjects; specific strategies and prescriptions indexers actually use to determine subjects; and some of the variables which impact on the process of determining subjects
  11. Raieli, R.: ¬The semantic hole : enthusiasm and caution around multimedia information retrieval (2012) 0.01
    0.012520768 = product of:
      0.025041535 = sum of:
        0.025041535 = product of:
          0.05008307 = sum of:
            0.05008307 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.05008307 = score(doc=4888,freq=4.0), product of:
                0.18306525 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05227703 = queryNorm
                0.27358043 = fieldWeight in 4888, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4888)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 1.2012 13:02:10
    Source
    Knowledge organization. 39(2012) no.1, S.13-22
  12. Marsh, E.E.; White, M.D.: ¬A taxonomy of relationships between images and text (2003) 0.01
    0.012214238 = product of:
      0.024428476 = sum of:
        0.024428476 = product of:
          0.04885695 = sum of:
            0.04885695 = weight(_text_:research in 4444) [ClassicSimilarity], result of:
              0.04885695 = score(doc=4444,freq=6.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.3275791 = fieldWeight in 4444, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4444)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The paper establishes a taxonomy of image-text relationships that reflects the ways that images and text interact. It is applicable to all subject areas and document types. The taxonomy was developed to answer the research question: how does an illustration relate to the text with which it is associated, or, what are the functions of illustration? Developed in a two-stage process - first, analysis of relevant research in children's literature, dictionary development, education, journalism, and library and information design and, second, subsequent application of the first version of the taxonomy to 954 image-text pairs in 45 Web pages (pages with educational content for children, online newspapers, and retail business pages) - the taxonomy identifies 49 relationships and groups them in three categories according to the closeness of the conceptual relationship between image and text. The paper uses qualitative content analysis to illustrate use of the taxonomy to analyze four image-text pairs in government publications and discusses the implications of the research for information retrieval and document design.
  13. Nahl-Jakobovits, D.; Jakobovits, L.A.: ¬A content analysis method for developing user-based objectives (1992) 0.01
    0.011753156 = product of:
      0.023506312 = sum of:
        0.023506312 = product of:
          0.047012623 = sum of:
            0.047012623 = weight(_text_:research in 3015) [ClassicSimilarity], result of:
              0.047012623 = score(doc=3015,freq=2.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.31521314 = fieldWeight in 3015, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3015)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Research strategies. 10(1992) no.1, S.4-16
  14. Pejtersen, A.M.: ¬A new approach to the classification of fiction (1982) 0.01
    0.011753156 = product of:
      0.023506312 = sum of:
        0.023506312 = product of:
          0.047012623 = sum of:
            0.047012623 = weight(_text_:research in 7240) [ClassicSimilarity], result of:
              0.047012623 = score(doc=7240,freq=2.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.31521314 = fieldWeight in 7240, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.078125 = fieldNorm(doc=7240)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Universal classification I: subject analysis and ordering systems. Proc. of the 4th Int. Study Conf. on Classification Research, Augsburg, 28.6.-2.7.1982. Ed. I. Dahlberg
  15. Molina, M.P.: Interdisciplinary approaches to the concept and practice of written documentary content analysis (WTDCA) (1994) 0.01
    0.01163503 = product of:
      0.02327006 = sum of:
        0.02327006 = product of:
          0.04654012 = sum of:
            0.04654012 = weight(_text_:research in 6147) [ClassicSimilarity], result of:
              0.04654012 = score(doc=6147,freq=4.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.31204507 = fieldWeight in 6147, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6147)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Content analysis, restricted within the limits of written textual documents (WTDCA), is a field which is greatly in need of extensive interdisciplinary research. This would clarify certain concepts, especially those concerned with 'text', as a new central nucleus of semiotic research, and 'content', or the informative power of text. The objective reality (syntax) of the written document should be, in the cognitive process that all content analysis entails, interpreted (semantically and pragmatically) in an intersubjective manner with regard to the context, the analyst's knowledge base and the documentary objectives. The contributions of sociolinguistics (textual), logic (formal) and psychology (cognitive) are fundamental to the conduct of these activities. The criteria used to validate the results obtained complete the necessary conceptual reference panorama
  16. Weimer, K.H.: ¬The nexus of subject analysis and bibliographic description : the case of multipart videos (1996) 0.01
    0.010624223 = product of:
      0.021248447 = sum of:
        0.021248447 = product of:
          0.042496894 = sum of:
            0.042496894 = weight(_text_:22 in 6525) [ClassicSimilarity], result of:
              0.042496894 = score(doc=6525,freq=2.0), product of:
                0.18306525 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05227703 = queryNorm
                0.23214069 = fieldWeight in 6525, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6525)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Cataloging and classification quarterly. 22(1996) no.2, S.5-18
  17. Wilson, M.J.; Wilson, M.L.: ¬A comparison of techniques for measuring sensemaking and learning within participant-generated summaries (2013) 0.01
    0.0101785315 = product of:
      0.020357063 = sum of:
        0.020357063 = product of:
          0.040714126 = sum of:
            0.040714126 = weight(_text_:research in 612) [ClassicSimilarity], result of:
              0.040714126 = score(doc=612,freq=6.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.2729826 = fieldWeight in 612, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=612)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    While it is easy to identify whether someone has found a piece of information during a search task, it is much harder to measure how much someone has learned during the search process. Searchers who are learning often exhibit exploratory behaviors, and so current research is often focused on improving support for exploratory search. Consequently, we need effective measures of learning to demonstrate better support for exploratory search. Some approaches, such as quizzes, measure recall when learning from a fixed source of information. This research, however, focuses on techniques for measuring open-ended learning, which often involve analyzing handwritten summaries produced by participants after a task. There are two common techniques for analyzing such summaries: (a) counting facts and statements and (b) judging topic coverage. Both of these techniques, however, can be easily confounded by simple variables such as summary length. This article presents a new technique that measures depth of learning within written summaries based on Bloom's taxonomy (B.S. Bloom & M.D. Engelhart, 1956). This technique was generated using grounded theory and is designed to be less susceptible to such confounding variables. Together, these three categories of measure were compared by applying them to a large collection of written summaries produced in a task-based study, and our results provide insights into each of their strengths and weaknesses. Both fact-to-statement ratio and our own measure of depth of learning were effective while being less affected by confounding variables. Recommendations and clear areas of future work are provided to help continued research into supporting sensemaking and learning.
  18. Tibbo, H.R.: Abstracting across the disciplines : a content analysis of abstracts for the natural sciences, the social sciences, and the humanities with implications for abstracting standards and online information retrieval (1992) 0.01
    0.009402524 = product of:
      0.018805047 = sum of:
        0.018805047 = product of:
          0.037610095 = sum of:
            0.037610095 = weight(_text_:research in 2536) [ClassicSimilarity], result of:
              0.037610095 = score(doc=2536,freq=2.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.2521705 = fieldWeight in 2536, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2536)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Library and information science research. 14(1992) no.1, S.31-56
  19. Todd, R.J.: Academic indexing : what's it all about? (1992) 0.01
    0.009402524 = product of:
      0.018805047 = sum of:
        0.018805047 = product of:
          0.037610095 = sum of:
            0.037610095 = weight(_text_:research in 3011) [ClassicSimilarity], result of:
              0.037610095 = score(doc=3011,freq=2.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.2521705 = fieldWeight in 3011, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3011)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    While the literature identifies some broad approaches to subject analysis, there is little supporting empirical evidence and few attempts to explicate any specifiable procedures. A productive step forward with indexing research would be to begin by examining how indexers actually undertake the process of subject analysis and to explore systematically factors that guide and influence this process. This would shed some light on a theory of subject analysis, clarify some of the central concepts of indexing, and provide an intelligent knowledge-base for effective academic indexing practice
  20. Pozzi de Sousa, B.; Ortega, C.D.: Aspects regarding the notion of subject in the context of different theoretical trends : teaching approaches in Brazil (2018) 0.01
    0.009402524 = product of:
      0.018805047 = sum of:
        0.018805047 = product of:
          0.037610095 = sum of:
            0.037610095 = weight(_text_:research in 4707) [ClassicSimilarity], result of:
              0.037610095 = score(doc=4707,freq=2.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.2521705 = fieldWeight in 4707, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4707)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Challenges and opportunities for knowledge organization in the digital age: proceedings of the Fifteenth International ISKO Conference, 9-11 July 2018, Porto, Portugal / organized by: International Society for Knowledge Organization (ISKO), ISKO Spain and Portugal Chapter, University of Porto - Faculty of Arts and Humanities, Research Centre in Communication, Information and Digital Culture (CIC.digital) - Porto. Eds.: F. Ribeiro u. M.E. Cerveira