Search (118 results, page 1 of 6)

  • Active filter: theme_ss:"Inhaltsanalyse"
  • Active filter: type_ss:"a"
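The two active filters above restrict the result set by facet field. Assuming a Solr backend (which the ClassicSimilarity "explain" breakdowns attached to each result below suggest), such facet filters are typically passed as filter queries alongside the main search terms. The sketch below is illustrative only and not taken from this system; the actual query string is not shown on this page and is left as a placeholder.

    from urllib.parse import urlencode

    params = [
        ("q", "..."),                          # the actual search terms are not shown on this page
        ("fq", 'theme_ss:"Inhaltsanalyse"'),   # active facet filter 1
        ("fq", 'type_ss:"a"'),                 # active facet filter 2
        ("rows", 20),                          # 20 records per page (118 results over 6 pages)
        ("start", 0),                          # offset 0 = page 1
    ]
    print(urlencode(params))                   # q=...&fq=theme_ss%3A%22Inhaltsanalyse%22&fq=type_ss%3A%22a%22&rows=20&start=0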
  1. White, M.D.; Marsh, E.E.: Content analysis : a flexible methodology (2006) 0.05
    0.054784253 = product of:
      0.08217638 = sum of:
        0.015692718 = weight(_text_:in in 5589) [ClassicSimilarity], result of:
          0.015692718 = score(doc=5589,freq=12.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.22087781 = fieldWeight in 5589, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=5589)
        0.06648366 = sum of:
          0.024024425 = weight(_text_:science in 5589) [ClassicSimilarity], result of:
            0.024024425 = score(doc=5589,freq=2.0), product of:
              0.1375819 = queryWeight, product of:
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.052230705 = queryNorm
              0.17461908 = fieldWeight in 5589, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.046875 = fieldNorm(doc=5589)
          0.042459235 = weight(_text_:22 in 5589) [ClassicSimilarity], result of:
            0.042459235 = score(doc=5589,freq=2.0), product of:
              0.18290302 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052230705 = queryNorm
              0.23214069 = fieldWeight in 5589, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5589)
      0.6666667 = coord(2/3)
    
    Abstract
    Content analysis is a highly flexible research method that has been widely used in library and information science (LIS) studies with varying research goals and objectives. The research method is applied in qualitative, quantitative, and sometimes mixed modes of research frameworks and employs a wide range of analytical techniques to generate findings and put them into context. This article characterizes content analysis as a systematic, rigorous approach to analyzing documents obtained or generated in the course of research. It briefly describes the steps involved in content analysis, differentiates between quantitative and qualitative content analysis, and shows that content analysis serves the purposes of both quantitative research and qualitative research. The authors draw on selected LIS studies that have used content analysis to illustrate the concepts addressed in the article. The article also serves as a gateway to methodological books and articles that provide more detail about aspects of content analysis discussed only briefly in the article.
    Source
    Library trends. 55(2006) no.1, S.22-45
  2. Raieli, R.: ¬The semantic hole : enthusiasm and caution around multimedia information retrieval (2012) 0.05
    0.052870713 = product of:
      0.079306066 = sum of:
        0.009247023 = weight(_text_:in in 4888) [ClassicSimilarity], result of:
          0.009247023 = score(doc=4888,freq=6.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.1301535 = fieldWeight in 4888, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4888)
        0.070059046 = sum of:
          0.020020355 = weight(_text_:science in 4888) [ClassicSimilarity], result of:
            0.020020355 = score(doc=4888,freq=2.0), product of:
              0.1375819 = queryWeight, product of:
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.052230705 = queryNorm
              0.1455159 = fieldWeight in 4888, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4888)
          0.050038688 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
            0.050038688 = score(doc=4888,freq=4.0), product of:
              0.18290302 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052230705 = queryNorm
              0.27358043 = fieldWeight in 4888, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4888)
      0.6666667 = coord(2/3)
    
    Abstract
    This paper centres on tools for the management of new digital documents, which are not only textual but also visual, video, audio, or multimedia in the full sense. One of its aims is to demonstrate that operating within generic Information Retrieval, through textual language alone, is limiting, and that broader criteria are needed, such as those of MultiMedia Information Retrieval (MMIR), under which every type of digital document can be analyzed and searched through the elements of language proper to its nature. MMIR is presented as the organic complex of the systems of Text Retrieval, Visual Retrieval, Video Retrieval, and Audio Retrieval, each of which takes an approach to information management that directly handles the concrete textual, visual, audio, or video content of the documents, here defined as content-based. In conclusion, the limits of this content-based, objective access to documents are underlined: the discrepancy known as the semantic gap is the one that occurs between semantic-interpretive access and content-based access. Finally, the integration of these conceptions is explained, bringing together the merits and advantages of each of the approaches and of the systems for access to information.
    Date
    22. 1.2012 13:02:10
    Footnote
    Refers to: Enser, P.G.B.: Visual image retrieval. In: Annual review of information science and technology. 42(2008), S.3-42.
    Source
    Knowledge organization. 39(2012) no.1, S.13-22
  3. Allen, B.; Reser, D.: Content analysis in library and information science research (1990) 0.04
    0.041589975 = product of:
      0.06238496 = sum of:
        0.017084066 = weight(_text_:in in 7510) [ClassicSimilarity], result of:
          0.017084066 = score(doc=7510,freq=2.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.24046129 = fieldWeight in 7510, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.125 = fieldNorm(doc=7510)
        0.045300893 = product of:
          0.09060179 = sum of:
            0.09060179 = weight(_text_:science in 7510) [ClassicSimilarity], result of:
              0.09060179 = score(doc=7510,freq=4.0), product of:
                0.1375819 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.052230705 = queryNorm
                0.6585298 = fieldWeight in 7510, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.125 = fieldNorm(doc=7510)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    Library and information science research. 12(1990) no.3, S.251-262
  4. Pejtersen, A.M.: Design of a classification scheme for fiction based on an analysis of actual user-librarian communication, and use of the scheme for control of librarians' search strategies (1980) 0.04
    0.036935367 = product of:
      0.1108061 = sum of:
        0.1108061 = sum of:
          0.04004071 = weight(_text_:science in 5835) [ClassicSimilarity], result of:
            0.04004071 = score(doc=5835,freq=2.0), product of:
              0.1375819 = queryWeight, product of:
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.052230705 = queryNorm
              0.2910318 = fieldWeight in 5835, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.078125 = fieldNorm(doc=5835)
          0.07076539 = weight(_text_:22 in 5835) [ClassicSimilarity], result of:
            0.07076539 = score(doc=5835,freq=2.0), product of:
              0.18290302 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052230705 = queryNorm
              0.38690117 = fieldWeight in 5835, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=5835)
      0.33333334 = coord(1/3)
    
    Date
    5. 8.2006 13:22:44
    Source
    Theory and application of information research. Proc. of the 2nd Int. Research Forum on Information Science, 3.-6.8.1977, Copenhagen. Ed.: O. Harbo u. L. Kajberg
  5. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.03
    0.026924279 = product of:
      0.040386416 = sum of:
        0.01208026 = weight(_text_:in in 5830) [ClassicSimilarity], result of:
          0.01208026 = score(doc=5830,freq=4.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.17003182 = fieldWeight in 5830, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=5830)
        0.028306156 = product of:
          0.056612313 = sum of:
            0.056612313 = weight(_text_:22 in 5830) [ClassicSimilarity], result of:
              0.056612313 = score(doc=5830,freq=2.0), product of:
                0.18290302 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052230705 = queryNorm
                0.30952093 = fieldWeight in 5830, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5830)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This paper examines various issues that arise in establishing a theoretical basis for an experimental fiction analysis system. It analyzes the warrants of fiction and of works about fiction. From this analysis, it derives classificatory requirements for a fiction system. Classificatory techniques that may contribute to the specification of data elements in fiction are suggested.
    Date
    5. 8.2006 13:22:08
  6. Hauff-Hartig, S.: Automatische Transkription von Videos : Fernsehen 3.0: Automatisierte Sentimentanalyse und Zusammenstellung von Kurzvideos mit hohem Aufregungslevel KI-generierte Metadaten: Von der Technologiebeobachtung bis zum produktiven Einsatz (2021) 0.02
    0.024565458 = product of:
      0.036848187 = sum of:
        0.008542033 = weight(_text_:in in 251) [ClassicSimilarity], result of:
          0.008542033 = score(doc=251,freq=2.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.120230645 = fieldWeight in 251, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=251)
        0.028306156 = product of:
          0.056612313 = sum of:
            0.056612313 = weight(_text_:22 in 251) [ClassicSimilarity], result of:
              0.056612313 = score(doc=251,freq=2.0), product of:
                0.18290302 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052230705 = queryNorm
                0.30952093 = fieldWeight in 251, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=251)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The third session, moderated by Michael Vielhaber of the Österreichischer Rundfunk (Austrian Broadcasting Corporation), introduced participants to forward-looking tools and concepts for AI-supported indexing of audio and video files. All four technologies presented are already proving themselves in their practical application environments.
    Date
    22. 5.2021 12:43:05
  7. Belkin, N.J.: ¬The problem of 'matching' in information retrieval (1980) 0.02
    0.024558317 = product of:
      0.036837474 = sum of:
        0.012813049 = weight(_text_:in in 1329) [ClassicSimilarity], result of:
          0.012813049 = score(doc=1329,freq=2.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.18034597 = fieldWeight in 1329, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.09375 = fieldNorm(doc=1329)
        0.024024425 = product of:
          0.04804885 = sum of:
            0.04804885 = weight(_text_:science in 1329) [ClassicSimilarity], result of:
              0.04804885 = score(doc=1329,freq=2.0), product of:
                0.1375819 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.052230705 = queryNorm
                0.34923816 = fieldWeight in 1329, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1329)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    Theory and application of information research. Proc. of the 2nd Int. Research Forum on Information Science, 3.-6.8.1977, Copenhagen. Ed.: O. Harbo u. L. Kajberg
  8. Weimer, K.H.: ¬The nexus of subject analysis and bibliographic description : the case of multipart videos (1996) 0.02
    0.023703363 = product of:
      0.035555042 = sum of:
        0.014325427 = weight(_text_:in in 6525) [ClassicSimilarity], result of:
          0.014325427 = score(doc=6525,freq=10.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.20163295 = fieldWeight in 6525, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=6525)
        0.021229617 = product of:
          0.042459235 = sum of:
            0.042459235 = weight(_text_:22 in 6525) [ClassicSimilarity], result of:
              0.042459235 = score(doc=6525,freq=2.0), product of:
                0.18290302 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052230705 = queryNorm
                0.23214069 = fieldWeight in 6525, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6525)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Examines the goals of bibliographic control, subject analysis and their relationship for audiovisual materials in general and multipart videotape recordings in particular. Concludes that intellectual access to multipart works is not adequately provided for when these materials are catalogued in collective set records. An alternative is to catalogue the parts separately. This method increases intellectual access by providing more detailed descriptive notes and subject analysis. As evidenced by the large number of records in the national database for parts of multipart videos, cataloguers have made the intellectual content of multipart videos more accessible by cataloguing the parts separately rather than collectively. This reverses the traditional cataloguing process, beginning instead with subject analysis, so that the intellectual content of these materials drives the bibliographic description. Suggests ways of determining when multipart videos are best catalogued as sets or separately.
    Source
    Cataloging and classification quarterly. 22(1996) no.2, S.5-18
  9. Chu, C.M.; O'Brien, A.: Subject analysis : the critical first stage in indexing (1993) 0.02
    0.023420796 = product of:
      0.035131194 = sum of:
        0.014325427 = weight(_text_:in in 6472) [ClassicSimilarity], result of:
          0.014325427 = score(doc=6472,freq=10.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.20163295 = fieldWeight in 6472, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=6472)
        0.020805765 = product of:
          0.04161153 = sum of:
            0.04161153 = weight(_text_:science in 6472) [ClassicSimilarity], result of:
              0.04161153 = score(doc=6472,freq=6.0), product of:
                0.1375819 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.052230705 = queryNorm
                0.30244917 = fieldWeight in 6472, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6472)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Studies of indexing neglect the first stage of the process, that is, subject analysis. In this study, novice indexers were asked to analyse three short, popular journal articles; to express the general subject as well as the primary and secondary topics in natural language statements; to state what influenced the analysis; and to comment on the ease or difficulty of this process. The factors which influenced the process were: the subject discipline concerned, the factual vs. subjective nature of the text, the complexity of the subject, the clarity of the text, and the possible support offered by bibliographic apparatus such as the title, etc. The findings showed that with the social science and science texts the general subject could be determined with ease, while this was more difficult with the humanities text. Clear evidence emerged of the importance of the bibliographical apparatus in defining the general subject. There was varying difficulty in determining the primary and secondary topics.
    Source
    Journal of information science. 19(1993), S.439-454
  10. Chen, S.-J.; Lee, H.-L.: Art images and mental associations : a preliminary exploration (2014) 0.02
    0.022695113 = product of:
      0.034042668 = sum of:
        0.012813049 = weight(_text_:in in 1416) [ClassicSimilarity], result of:
          0.012813049 = score(doc=1416,freq=8.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.18034597 = fieldWeight in 1416, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=1416)
        0.021229617 = product of:
          0.042459235 = sum of:
            0.042459235 = weight(_text_:22 in 1416) [ClassicSimilarity], result of:
              0.042459235 = score(doc=1416,freq=2.0), product of:
                0.18290302 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052230705 = queryNorm
                0.23214069 = fieldWeight in 1416, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1416)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This paper reports on the preliminary findings of a study that explores mental associations made by novices viewing art images. In a controlled environment, 20 Taiwanese college students responded to the question "What does the painting remind you of?" after viewing each digitized image of 15 oil paintings by a famous Taiwanese artist. Rather than focusing on the representation or interpretation of art, the study attempted to solicit information about how non-experts are stimulated by art. This paper reports on the analysis of participant responses to three of the images and describes a 12-type taxonomy of associations that emerged from the analysis. While 9 of the types are derived and adapted from facets in the Art & Architecture Thesaurus, three new types - Artistic Influence Association, Reactive Association, and Prototype Association - were discovered. The conclusion briefly discusses both the significance of the findings and the implications for future research.
    Series
    Advances in knowledge organization; vol. 14
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  11. Hjoerland, B.: Towards a theory of aboutness, subject, topicality, theme, domain, field, content ... and relevance (2001) 0.02
    0.021843314 = product of:
      0.03276497 = sum of:
        0.012945832 = weight(_text_:in in 6032) [ClassicSimilarity], result of:
          0.012945832 = score(doc=6032,freq=6.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.1822149 = fieldWeight in 6032, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6032)
        0.01981914 = product of:
          0.03963828 = sum of:
            0.03963828 = weight(_text_:science in 6032) [ClassicSimilarity], result of:
              0.03963828 = score(doc=6032,freq=4.0), product of:
                0.1375819 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.052230705 = queryNorm
                0.2881068 = fieldWeight in 6032, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6032)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Theories of aboutness and theories of subject analysis and of related concepts such as topicality are often isolated from each other in the literature of information science (IS) and related disciplines. In IS it is important to consider the nature and meaning of these concepts, which is closely related to theoretical and metatheoretical issues in information retrieval (IR). A theory of IR must specify which concepts should be regarded as synonymous concepts and explain how the meaning of the nonsynonymous concepts should be defined
    Source
    Journal of the American Society for Information Science and Technology. 52(2001) no.9, S.774-778
  12. Roberts, C.W.; Popping, R.: Computer-supported content analysis : some recent developments (1993) 0.02
    0.020465266 = product of:
      0.030697897 = sum of:
        0.010677542 = weight(_text_:in in 4236) [ClassicSimilarity], result of:
          0.010677542 = score(doc=4236,freq=2.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.15028831 = fieldWeight in 4236, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=4236)
        0.020020355 = product of:
          0.04004071 = sum of:
            0.04004071 = weight(_text_:science in 4236) [ClassicSimilarity], result of:
              0.04004071 = score(doc=4236,freq=2.0), product of:
                0.1375819 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.052230705 = queryNorm
                0.2910318 = fieldWeight in 4236, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4236)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Presents an overview of some recent developments in the clause-based content analysis of linguistic data. Introduces network analysis of evaluative texts, the analysis of cognitive maps, and linguistic content analysis. Focuses on the types of substantive inferences afforded by the three approaches.
    Source
    Social science computer review. 11(1993) no.3, S.283-291
  13. Endres-Niggemeyer, B.: Content analysis : a special case of text compression (1989) 0.02
    0.020465266 = product of:
      0.030697897 = sum of:
        0.010677542 = weight(_text_:in in 3549) [ClassicSimilarity], result of:
          0.010677542 = score(doc=3549,freq=2.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.15028831 = fieldWeight in 3549, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=3549)
        0.020020355 = product of:
          0.04004071 = sum of:
            0.04004071 = weight(_text_:science in 3549) [ClassicSimilarity], result of:
              0.04004071 = score(doc=3549,freq=2.0), product of:
                0.1375819 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.052230705 = queryNorm
                0.2910318 = fieldWeight in 3549, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3549)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Presents a theoretical model, based on the Flower/Hayes model of expository writing, of the process involved in content analysis for abstracting and indexing.
    Imprint
    Amsterdam : Elsevier Science Publishers
  14. Shaw, R.: Information organization and the philosophy of history (2013) 0.02
    0.02025958 = product of:
      0.030389369 = sum of:
        0.010570227 = weight(_text_:in in 946) [ClassicSimilarity], result of:
          0.010570227 = score(doc=946,freq=4.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.14877784 = fieldWeight in 946, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=946)
        0.01981914 = product of:
          0.03963828 = sum of:
            0.03963828 = weight(_text_:science in 946) [ClassicSimilarity], result of:
              0.03963828 = score(doc=946,freq=4.0), product of:
                0.1375819 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.052230705 = queryNorm
                0.2881068 = fieldWeight in 946, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=946)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The philosophy of history can help articulate problems relevant to information organization. One such problem is "aboutness": How do texts relate to the world? In response to this problem, philosophers of history have developed theories of colligation describing how authors bind together phenomena under organizing concepts. Drawing on these ideas, I present a theory of subject analysis that avoids the problematic illusion of an independent "landscape" of subjects. This theory points to a broad vision of the future of information organization and some specific challenges to be met.
    Series
    Advances in information science
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.6, S.1092-1103
  15. Fremery, W. De; Buckland, M.K.: Context, relevance, and labor (2022) 0.02
    0.019910641 = product of:
      0.02986596 = sum of:
        0.009060195 = weight(_text_:in in 4240) [ClassicSimilarity], result of:
          0.009060195 = score(doc=4240,freq=4.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.12752387 = fieldWeight in 4240, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=4240)
        0.020805765 = product of:
          0.04161153 = sum of:
            0.04161153 = weight(_text_:science in 4240) [ClassicSimilarity], result of:
              0.04161153 = score(doc=4240,freq=6.0), product of:
                0.1375819 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.052230705 = queryNorm
                0.30244917 = fieldWeight in 4240, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4240)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Since information science concerns the transmission of records, it concerns context. The transmission of documents ensures their arrival in new contexts. Documents and their copies are spread across times and places. The amount of labor required to discover and retrieve relevant documents is also shaped by context. Thus, any serious consideration of communication and of information technologies quickly leads to a concern with context, relevance, and labor. Information scientists have developed many theories of context, relevance, and labor, but not a framework for organizing them and describing their relationships with one another. We propose that the words context and relevance can be used to articulate a useful framework for considering the diversity of approaches to context and relevance in information science, as well as their relations with each other and with labor.
    Source
    Journal of the Association for Information Science and Technology. 73(2022) no.9, S.1268-1278
  16. Hjoerland, B.: ¬The concept of 'subject' in information science (1992) 0.02
    0.019867256 = product of:
      0.029800884 = sum of:
        0.012813049 = weight(_text_:in in 2247) [ClassicSimilarity], result of:
          0.012813049 = score(doc=2247,freq=8.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.18034597 = fieldWeight in 2247, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=2247)
        0.016987834 = product of:
          0.03397567 = sum of:
            0.03397567 = weight(_text_:science in 2247) [ClassicSimilarity], result of:
              0.03397567 = score(doc=2247,freq=4.0), product of:
                0.1375819 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.052230705 = queryNorm
                0.24694869 = fieldWeight in 2247, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2247)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This article presents a theoretical investigation of the concept of 'subject' or 'subject matter' in library and information science. Most conceptions of 'subject' in the literature are not explicit but implicit. Various indexing and classification theories, including automatic indexing and citation indexing, have their own more or less implicit concepts of subject. This fact puts the emphasis on making the implicit theories of 'subject matter' explicit as the first step. ... The different conceptions of 'subject' can therefore be classified into epistemological positions, e.g. 'subjective idealism' (or the empiric/positivistic viewpoint), 'objective idealism' (the rationalistic viewpoint), 'pragmatism' and 'materialism/realism'. The third and final step is to propose a new theory of subject matter based on an explicit theory of knowledge. In this article this is done from the point of view of a realistic/materialistic epistemology. From this standpoint the subject of a document is defined as the epistemological potentials of that document.
  17. Weinberg, B.H.: Why indexing fails the researcher (1988) 0.02
    0.018854395 = product of:
      0.02828159 = sum of:
        0.014125061 = weight(_text_:in in 703) [ClassicSimilarity], result of:
          0.014125061 = score(doc=703,freq=14.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.19881277 = fieldWeight in 703, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=703)
        0.014156529 = product of:
          0.028313057 = sum of:
            0.028313057 = weight(_text_:science in 703) [ClassicSimilarity], result of:
              0.028313057 = score(doc=703,freq=4.0), product of:
                0.1375819 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.052230705 = queryNorm
                0.20579056 = fieldWeight in 703, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=703)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    It is a truism in information science that indexing is associated with 'aboutness', and that index terms that accurately represent what a document is about will serve the needs of the user/searcher well. It is contended in this paper that indexing which is limited to the representation of aboutness serves the novice in a discipline adequately, but does not serve the scholar or researcher, who is concerned with highly specific aspects of or points-of-view on a subject. The linguistic analogs of 'aboutness' and 'aspects' are 'topic' and 'comment' respectively. Serial indexing services deal with topics at varying levels of specificity, but neglect comment almost entirely. This may explain the underutilization of secondary information services by scholars, as has been repeatedly demonstrated in user studies. It may also account for the incomplete lists of bibliographic references in many research papers. Natural language searching of full-text databases does not solve this problem, because the aspect of a topic of interest to researchers is often inexpressible in concrete terms. The thesis is illustrated with examples of indexing failures in research projects the author has conducted on a range of linguistic and library-information science topics. Finally, the question of whether indexing can be improved to meet the needs of researchers is examined.
  18. Tibbo, H.R.: Abstracting across the disciplines : a content analysis of abstracts for the natural sciences, the social sciences, and the humanities with implications for abstracting standards and online information retrieval (1992) 0.02
    0.01873103 = product of:
      0.028096544 = sum of:
        0.01208026 = weight(_text_:in in 2536) [ClassicSimilarity], result of:
          0.01208026 = score(doc=2536,freq=4.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.17003182 = fieldWeight in 2536, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=2536)
        0.016016284 = product of:
          0.032032568 = sum of:
            0.032032568 = weight(_text_:science in 2536) [ClassicSimilarity], result of:
              0.032032568 = score(doc=2536,freq=2.0), product of:
                0.1375819 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.052230705 = queryNorm
                0.23282544 = fieldWeight in 2536, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2536)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Reports on a comparison of the "content categories" listed in the ANSI/ISO abstracting standards to actual content found in abstracts from the sciences, social sciences, and the humanities. The preliminary findings question the fundamental concept underlying these standards, namely, that any one set of standards and generalized instructions can describe and elicit the optimal configuration for abstracts from all subject areas
    Source
    Library and information science research. 14(1992) no.1, S.31-56
  19. Bertrand-Gastaldy, S.B.: Convergent theories : using a multidisciplinary approach to explain indexing results (1995) 0.02
    0.018722842 = product of:
      0.028084261 = sum of:
        0.011096427 = weight(_text_:in in 3832) [ClassicSimilarity], result of:
          0.011096427 = score(doc=3832,freq=6.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.1561842 = fieldWeight in 3832, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3832)
        0.016987834 = product of:
          0.03397567 = sum of:
            0.03397567 = weight(_text_:science in 3832) [ClassicSimilarity], result of:
              0.03397567 = score(doc=3832,freq=4.0), product of:
                0.1375819 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.052230705 = queryNorm
                0.24694869 = fieldWeight in 3832, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3832)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In order to explain how indexers choose their keywords and how their results can differ from one another, focuses on certain properties of the terms rather than on the terms themselves. Bases the study on 4 premises borrowed from research in semiotics, cognitive science, discourse analysis and reading theories. Reports on the methodology used, and some of the findings obtained by comparing properties of indexing terms with the content of titles and abstracts of 844 bibliographic records extracted from a database on environment. Characterizes some tendencies of the special reading which indexing constitutes as a series of properties of the selected or rejected works, and explains the differences among several indexers by the properties toward which they are inclined.
    Source
    Forging new partnerships in information: converging technologies. Proceedings of the 58th Annual Meeting of the American Society for Information Science, ASIS'95, Chicago, IL, 9-12 October 1995. Ed.: T. Kinney
  20. Sauperl, A.: Catalogers' common ground and shared knowledge (2004) 0.02
    0.018155862 = product of:
      0.027233792 = sum of:
        0.013077264 = weight(_text_:in in 2069) [ClassicSimilarity], result of:
          0.013077264 = score(doc=2069,freq=12.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.18406484 = fieldWeight in 2069, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2069)
        0.014156529 = product of:
          0.028313057 = sum of:
            0.028313057 = weight(_text_:science in 2069) [ClassicSimilarity], result of:
              0.028313057 = score(doc=2069,freq=4.0), product of:
                0.1375819 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.052230705 = queryNorm
                0.20579056 = fieldWeight in 2069, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2069)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The problem of multiple interpretations of meaning in the indexing process has been mostly avoided by information scientists. Among the few who have addressed this question are Clare Beghtol and Jens Erik Mai. Their findings and findings of other researchers in the area of information science, social psychology, and psycholinguistics indicate that the source of the problem might lie in the background and culture of each indexer or cataloger. Are the catalogers aware of the problem? A general model of the indexing process was developed from observations and interviews of 12 catalogers in three American academic libraries. The model is illustrated with a hypothetical cataloger's process. The study with catalogers revealed that catalogers are aware of the author's, the user's, and their own meaning, but do not try to accommodate them all. On the other hand, they make every effort to build common ground with catalog users by studying documents related to the document being cataloged, and by considering catalog records and subject headings related to the subject identified in the document being cataloged. They try to build common ground with other catalogers by using cataloging tools and by inferring unstated rules of cataloging from examples in the catalogs.
    Source
    Journal of the American Society for Information Science and Technology. 55(2004) no.1, S.55-63
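The nested score breakdowns attached to each result above are Lucene "explain" output for TF-IDF ranking with ClassicSimilarity: tf = sqrt(termFreq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, a per-term score of queryWeight * fieldWeight, and a final coord factor for the fraction of query clauses matched. As a sanity check, the short Python sketch below re-derives the listed score of result 1 (doc 5589) from the factors shown in its breakdown. The queryNorm value is copied from the listing, since it depends on query terms not shown on this page, and the helper names are ours, not part of the search system.

    import math

    MAX_DOCS = 44218          # maxDocs from the breakdowns above
    QUERY_NORM = 0.052230705  # queryNorm as listed (derived from the full query, which is not shown)

    def idf(doc_freq):
        # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

    def term_score(freq, doc_freq, field_norm):
        # score = queryWeight * fieldWeight = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)
        query_weight = idf(doc_freq) * QUERY_NORM
        field_weight = math.sqrt(freq) * idf(doc_freq) * field_norm
        return query_weight * field_weight

    # Result 1 (doc 5589): freq, docFreq and fieldNorm are taken from its breakdown.
    s_in      = term_score(12.0, 30841, 0.046875)  # ~0.015692718 (_text_:in)
    s_science = term_score(2.0,  8627,  0.046875)  # ~0.024024425 (_text_:science)
    s_22      = term_score(2.0,  3622,  0.046875)  # ~0.042459235 (_text_:22)

    # coord(2/3): two of the three top-level query clauses matched this document.
    print((s_in + (s_science + s_22)) * 2.0 / 3.0)  # ~0.054784253, the listed document score

The other breakdowns follow the same arithmetic, differing only in their freq, docFreq, fieldNorm, and coord factors.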

Languages

  • English (e): 108
  • German (d): 9
  • Dutch (nl): 1