Search (149 results, page 1 of 8)

  • theme_ss:"Inhaltsanalyse"
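  The facet above and the score breakdowns under each hit follow Solr/Lucene conventions: theme_ss looks like a multivalued string dynamic field, and the per-document trees below are explain() output of the kind Solr attaches in debug mode. As a minimal sketch, the request behind such a page might look as follows; the host, core name, and parameter values are assumptions, not taken from this page:

      from urllib.parse import urlencode

      # Hypothetical Solr endpoint and core -- the actual host of this
      # database is not shown on the page. debugQuery=true asks Solr to
      # attach the per-hit score explanations reproduced below.
      params = urlencode({
          "q": 'theme_ss:"Inhaltsanalyse"',
          "rows": 20,           # 20 hits per page (149 results, 8 pages)
          "start": 0,           # offset 0 = page 1
          "debugQuery": "true",
      })
      print(f"http://localhost:8983/solr/literature/select?{params}")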
  1. Chen, S.-J.; Lee, H.-L.: Art images and mental associations : a preliminary exploration (2014) 0.03
    0.032469407 = product of:
      0.06493881 = sum of:
        0.06493881 = sum of:
          0.009017859 = weight(_text_:a in 1416) [ClassicSimilarity], result of:
            0.009017859 = score(doc=1416,freq=10.0), product of:
              0.052761257 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.045758117 = queryNorm
              0.1709182 = fieldWeight in 1416, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=1416)
          0.018723397 = weight(_text_:h in 1416) [ClassicSimilarity], result of:
            0.018723397 = score(doc=1416,freq=2.0), product of:
              0.113683715 = queryWeight, product of:
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.045758117 = queryNorm
              0.16469726 = fieldWeight in 1416, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.046875 = fieldNorm(doc=1416)
          0.03719756 = weight(_text_:22 in 1416) [ClassicSimilarity], result of:
            0.03719756 = score(doc=1416,freq=2.0), product of:
              0.16023713 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045758117 = queryNorm
              0.23214069 = fieldWeight in 1416, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1416)
      0.5 = coord(1/2)
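    To make the tree above concrete: the scores follow Lucene's ClassicSimilarity (TF-IDF), where tf = sqrt(termFreq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf * queryNorm, and fieldWeight = tf * idf * fieldNorm; each leaf score is queryWeight * fieldWeight, leaves are summed, and coord() factors down-weight documents that match only part of the query. A small sketch reproducing the weight of _text_:a for doc 1416 from the numbers above (queryNorm is taken as given, since it depends on the full query, which is not shown):

        import math

        # Figures copied from the explain tree for doc 1416, term "_text_:a".
        freq       = 10.0         # termFreq of "a" in the field
        doc_freq   = 37942        # documents containing "a"
        max_docs   = 44218        # documents in the index
        query_norm = 0.045758117  # taken as given (derived from the full query)
        field_norm = 0.046875     # length normalization stored at index time

        tf  = math.sqrt(freq)                            # 3.1622777
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 1.153047

        query_weight = idf * query_norm                  # 0.052761257
        field_weight = tf * idf * field_norm             # 0.1709182
        leaf_score   = query_weight * field_weight       # 0.009017859

        print(f"tf={tf:.7f}  idf={idf:.6f}  leaf score={leaf_score:.9f}")

    The same arithmetic carries through every tree on this page. In entry 2, for instance, the two leaf scores sum to 0.07150159, are scaled by coord(2/3) = 0.6666667 because only two of three query terms match the document, and then by coord(1/2), giving the listed 0.023833863.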
    
    Abstract
    This paper reports on the preliminary findings of a study that explores mental associations made by novices viewing art images. In a controlled environment, 20 Taiwanese college students responded to the question "What does the painting remind you of?" after viewing each digitized image of 15 oil paintings by a famous Taiwanese artist. Rather than focusing on the representation or interpretation of art, the study attempted to solicit information about how non-experts are stimulated by art. This paper reports on the analysis of participant responses to three of the images, and describes a 12-type taxonomy of associations that emerged from the analysis. While 9 of the types are derived and adapted from facets in the Art & Architecture Thesaurus, three new types - Artistic Influence Association, Reactive Association, and Prototype Association - were identified. The conclusion briefly discusses both the significance of the findings and the implications for future research.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
    Type
    a
  2. Pejtersen, A.M.: Design of a classification scheme for fiction based on an analysis of actual user-librarian communication, and use of the scheme for control of librarians' search strategies (1980) 0.02
    0.023833863 = product of:
      0.047667727 = sum of:
        0.047667727 = product of:
          0.07150159 = sum of:
            0.0095056575 = weight(_text_:a in 5835) [ClassicSimilarity], result of:
              0.0095056575 = score(doc=5835,freq=4.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.18016359 = fieldWeight in 5835, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5835)
            0.061995935 = weight(_text_:22 in 5835) [ClassicSimilarity], result of:
              0.061995935 = score(doc=5835,freq=2.0), product of:
                0.16023713 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045758117 = queryNorm
                0.38690117 = fieldWeight in 5835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5835)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    5. 8.2006 13:22:44
    Type
    a
  3. Nagel, H.: From image sequences towards conceptual descriptions (1988) 0.02
    0.020227827 = product of:
      0.040455654 = sum of:
        0.040455654 = product of:
          0.06068348 = sum of:
            0.010754423 = weight(_text_:a in 4084) [ClassicSimilarity], result of:
              0.010754423 = score(doc=4084,freq=2.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.20383182 = fieldWeight in 4084, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=4084)
            0.04992906 = weight(_text_:h in 4084) [ClassicSimilarity], result of:
              0.04992906 = score(doc=4084,freq=2.0), product of:
                0.113683715 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045758117 = queryNorm
                0.4391927 = fieldWeight in 4084, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.125 = fieldNorm(doc=4084)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Type
    a
  4. Klüver, J.; Kier, R.: Rekonstruktion und Verstehen : ein Computer-Programm zur Interpretation sozialwissenschaftlicher Texte (1994) 0.02
    0.020227827 = product of:
      0.040455654 = sum of:
        0.040455654 = product of:
          0.06068348 = sum of:
            0.010754423 = weight(_text_:a in 6830) [ClassicSimilarity], result of:
              0.010754423 = score(doc=6830,freq=2.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.20383182 = fieldWeight in 6830, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=6830)
            0.04992906 = weight(_text_:h in 6830) [ClassicSimilarity], result of:
              0.04992906 = score(doc=6830,freq=2.0), product of:
                0.113683715 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045758117 = queryNorm
                0.4391927 = fieldWeight in 6830, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.125 = fieldNorm(doc=6830)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Source
    Sprache und Datenverarbeitung. 18(1994) H.1, S.3-15
    Type
    a
  5. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.02
    0.020117057 = product of:
      0.040234115 = sum of:
        0.040234115 = product of:
          0.06035117 = sum of:
            0.010754423 = weight(_text_:a in 5830) [ClassicSimilarity], result of:
              0.010754423 = score(doc=5830,freq=8.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.20383182 = fieldWeight in 5830, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5830)
            0.049596746 = weight(_text_:22 in 5830) [ClassicSimilarity], result of:
              0.049596746 = score(doc=5830,freq=2.0), product of:
                0.16023713 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045758117 = queryNorm
                0.30952093 = fieldWeight in 5830, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5830)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    This paper examines various issues that arise in establishing a theoretical basis for an experimental fiction analysis system. It analyzes the warrants of fiction and of works about fiction. From this analysis, it derives classificatory requirements for a fiction system. Classificatory techniques that may contribute to the specification of data elements in fiction are suggested.
    Date
    5. 8.2006 13:22:08
    Type
    a
  6. Hauff-Hartig, S.: Automatische Transkription von Videos : Fernsehen 3.0: Automatisierte Sentimentanalyse und Zusammenstellung von Kurzvideos mit hohem Aufregungslevel KI-generierte Metadaten: Von der Technologiebeobachtung bis zum produktiven Einsatz (2021) 0.02
    0.018324653 = product of:
      0.036649305 = sum of:
        0.036649305 = product of:
          0.054973956 = sum of:
            0.0053772116 = weight(_text_:a in 251) [ClassicSimilarity], result of:
              0.0053772116 = score(doc=251,freq=2.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.10191591 = fieldWeight in 251, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=251)
            0.049596746 = weight(_text_:22 in 251) [ClassicSimilarity], result of:
              0.049596746 = score(doc=251,freq=2.0), product of:
                0.16023713 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045758117 = queryNorm
                0.30952093 = fieldWeight in 251, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=251)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    22. 5.2021 12:43:05
    Type
    a
  7. Raieli, R.: The semantic hole : enthusiasm and caution around multimedia information retrieval (2012) 0.02
    0.015732834 = product of:
      0.03146567 = sum of:
        0.03146567 = product of:
          0.0471985 = sum of:
            0.0033607571 = weight(_text_:a in 4888) [ClassicSimilarity], result of:
              0.0033607571 = score(doc=4888,freq=2.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.06369744 = fieldWeight in 4888, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4888)
            0.043837745 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.043837745 = score(doc=4888,freq=4.0), product of:
                0.16023713 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045758117 = queryNorm
                0.27358043 = fieldWeight in 4888, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4888)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    22. 1.2012 13:02:10
    Source
    Knowledge organization. 39(2012) no.1, S.13-22
    Type
    a
  8. White, M.D.; Marsh, E.E.: Content analysis : a flexible methodology (2006) 0.02
    0.015692044 = product of:
      0.031384088 = sum of:
        0.031384088 = product of:
          0.04707613 = sum of:
            0.009878568 = weight(_text_:a in 5589) [ClassicSimilarity], result of:
              0.009878568 = score(doc=5589,freq=12.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.18723148 = fieldWeight in 5589, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5589)
            0.03719756 = weight(_text_:22 in 5589) [ClassicSimilarity], result of:
              0.03719756 = score(doc=5589,freq=2.0), product of:
                0.16023713 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045758117 = queryNorm
                0.23214069 = fieldWeight in 5589, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5589)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    Content analysis is a highly flexible research method that has been widely used in library and information science (LIS) studies with varying research goals and objectives. The research method is applied in qualitative, quantitative, and sometimes mixed modes of research frameworks and employs a wide range of analytical techniques to generate findings and put them into context. This article characterizes content analysis as a systematic, rigorous approach to analyzing documents obtained or generated in the course of research. It briefly describes the steps involved in content analysis, differentiates between quantitative and qualitative content analysis, and shows that content analysis serves the purposes of both quantitative research and qualitative research. The authors draw on selected LIS studies that have used content analysis to illustrate the concepts addressed in the article. The article also serves as a gateway to methodological books and articles that provide more detail about aspects of content analysis discussed only briefly in the article.
    Source
    Library trends. 55(2006) no.1, S.22-45
    Type
    a
  9. Weimer, K.H.: The nexus of subject analysis and bibliographic description : the case of multipart videos (1996) 0.01
    0.01374349 = product of:
      0.02748698 = sum of:
        0.02748698 = product of:
          0.04123047 = sum of:
            0.004032909 = weight(_text_:a in 6525) [ClassicSimilarity], result of:
              0.004032909 = score(doc=6525,freq=2.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.07643694 = fieldWeight in 6525, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6525)
            0.03719756 = weight(_text_:22 in 6525) [ClassicSimilarity], result of:
              0.03719756 = score(doc=6525,freq=2.0), product of:
                0.16023713 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045758117 = queryNorm
                0.23214069 = fieldWeight in 6525, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6525)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Source
    Cataloging and classification quarterly. 22(1996) no.2, S.5-18
    Type
    a
  10. Schulzki-Haddouti, C.; Brückner, A.: Die Suche nach dem Sinn : Automatische Inhaltsanalyse nicht nur für Geheimdienste (2001) 0.01
    0.013570441 = product of:
      0.027140882 = sum of:
        0.027140882 = product of:
          0.04071132 = sum of:
            0.0095056575 = weight(_text_:a in 3133) [ClassicSimilarity], result of:
              0.0095056575 = score(doc=3133,freq=4.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.18016359 = fieldWeight in 3133, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3133)
            0.031205663 = weight(_text_:h in 3133) [ClassicSimilarity], result of:
              0.031205663 = score(doc=3133,freq=2.0), product of:
                0.113683715 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045758117 = queryNorm
                0.27449545 = fieldWeight in 3133, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3133)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Source
    c't. 2001, H.6, S.316-324
    Type
    a
  11. Nohr, H.: Inhaltsanalyse (1999) 0.01
    0.013560796 = product of:
      0.027121592 = sum of:
        0.027121592 = product of:
          0.040682387 = sum of:
            0.0053772116 = weight(_text_:a in 3430) [ClassicSimilarity], result of:
              0.0053772116 = score(doc=3430,freq=2.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.10191591 = fieldWeight in 3430, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3430)
            0.035305176 = weight(_text_:h in 3430) [ClassicSimilarity], result of:
              0.035305176 = score(doc=3430,freq=4.0), product of:
                0.113683715 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045758117 = queryNorm
                0.31055614 = fieldWeight in 3430, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3430)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Source
    nfd Information - Wissenschaft und Praxis. 50(1999) H.2, S.69-78
    Type
    a
  12. Nohr, H.: The training of librarians in content analysis : some thoughts on future necessities (1991) 0.01
    0.011426046 = product of:
      0.022852091 = sum of:
        0.022852091 = product of:
          0.034278136 = sum of:
            0.009313605 = weight(_text_:a in 5149) [ClassicSimilarity], result of:
              0.009313605 = score(doc=5149,freq=6.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.17652355 = fieldWeight in 5149, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5149)
            0.02496453 = weight(_text_:h in 5149) [ClassicSimilarity], result of:
              0.02496453 = score(doc=5149,freq=2.0), product of:
                0.113683715 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045758117 = queryNorm
                0.21959636 = fieldWeight in 5149, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5149)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    The training of librarians in content analysis is influenced both by the realities of the various fields of application and by technological innovation. The present contribution attempts to identify the components of such training that are necessary for future-oriented instruction, and it stresses the importance of furnishing a sound theoretical basis, especially in the light of technological developments. The purpose of the training is to provide the foundation for 'action competence' on the part of the students.
    Type
    a
  13. Chen, H.: An analysis of image queries in the field of art history (2001) 0.01
    0.010418028 = product of:
      0.020836055 = sum of:
        0.020836055 = product of:
          0.031254083 = sum of:
            0.009410121 = weight(_text_:a in 5187) [ClassicSimilarity], result of:
              0.009410121 = score(doc=5187,freq=8.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.17835285 = fieldWeight in 5187, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5187)
            0.021843962 = weight(_text_:h in 5187) [ClassicSimilarity], result of:
              0.021843962 = score(doc=5187,freq=2.0), product of:
                0.113683715 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045758117 = queryNorm
                0.19214681 = fieldWeight in 5187, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5187)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    Chen arranged with an art history instructor to require the use of 20 medieval art images in papers received from 29 students. Participants completed a self-administered presearch and postsearch questionnaire, and were interviewed after questionnaire analysis, in order to collect both the keywords and phrases they planned to use and those they actually used. Three MLIS student reviewers then mapped the queries to Enser and McGregor's four categories, Jorgensen's 12 classes, and Fidel's 12-feature data and object poles, rating the degree of match on a seven-point scale (1 = not at all, 7 = exact). The reviewers gave the highest scores to Enser and McGregor's categories. Modifications to both the Enser and McGregor scheme and the Jorgensen scheme are suggested.
    Type
    a
  14. Chen, H.; Ng, T.: An algorithmic approach to concept exploration in a large knowledge network (automatic thesaurus consultation) : symbolic branch-and-bound search versus connectionist Hopfield Net Activation (1995) 0.01
    0.010043396 = product of:
      0.020086791 = sum of:
        0.020086791 = product of:
          0.030130185 = sum of:
            0.011406789 = weight(_text_:a in 2203) [ClassicSimilarity], result of:
              0.011406789 = score(doc=2203,freq=16.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.2161963 = fieldWeight in 2203, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2203)
            0.018723397 = weight(_text_:h in 2203) [ClassicSimilarity], result of:
              0.018723397 = score(doc=2203,freq=2.0), product of:
                0.113683715 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045758117 = queryNorm
                0.16469726 = fieldWeight in 2203, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2203)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    Presents a framework for knowledge discovery and concept exploration. In order to enhance the concept exploration capability of knowledge-based systems and to alleviate the limitations of the manual browsing approach, develops 2 spreading-activation-based algorithms for concept exploration in large, heterogeneous networks of concepts (e.g. multiple thesauri). One algorithm, which is based on the symbolic AI paradigm, performs a conventional branch-and-bound search on a semantic net representation to identify other highly relevant concepts (a serial, optimal search process). The 2nd algorithm, which is based on the neural network approach, executes the Hopfield net parallel relaxation and convergence process to identify 'convergent' concepts for some initial queries (a parallel, heuristic search process). Tests these 2 algorithms on a large text-based knowledge network of about 13,000 nodes (terms) and 80,000 directed links in the area of computing technologies.
    Type
    a
  15. Saif, H.; He, Y.; Fernandez, M.; Alani, H.: Contextual semantics for sentiment analysis of Twitter (2016) 0.01
    0.00929558 = product of:
      0.01859116 = sum of:
        0.01859116 = product of:
          0.027886739 = sum of:
            0.0058210026 = weight(_text_:a in 2667) [ClassicSimilarity], result of:
              0.0058210026 = score(doc=2667,freq=6.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.11032722 = fieldWeight in 2667, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2667)
            0.022065736 = weight(_text_:h in 2667) [ClassicSimilarity], result of:
              0.022065736 = score(doc=2667,freq=4.0), product of:
                0.113683715 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045758117 = queryNorm
                0.1940976 = fieldWeight in 2667, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2667)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    Sentiment analysis on Twitter has attracted much attention recently due to its wide applications in both commercial and public sectors. In this paper we present SentiCircles, a lexicon-based approach to sentiment analysis on Twitter. Unlike typical lexicon-based approaches, which assign fixed, static prior sentiment polarities to words regardless of their context, SentiCircles takes into account the co-occurrence patterns of words in different contexts in tweets to capture their semantics and update their pre-assigned strength and polarity in sentiment lexicons accordingly. Our approach allows for the detection of sentiment at both the entity level and the tweet level. We evaluate our proposed approach on three Twitter datasets using three different sentiment lexicons to derive word prior sentiments. Results show that our approach significantly outperforms the baselines in accuracy and F-measure for entity-level subjectivity (neutral vs. polar) and polarity (positive vs. negative) detection. For tweet-level sentiment detection, our approach performs better than the state-of-the-art SentiStrength by 4-5% in accuracy on two datasets, but falls marginally behind by 1% in F-measure on the third dataset.
    Type
    a
  16. Greisdorf, H.; O'Connor, B.: Modelling what users see when they look at images : a cognitive viewpoint (2002) 0.01
    0.009247085 = product of:
      0.01849417 = sum of:
        0.01849417 = product of:
          0.027741255 = sum of:
            0.009017859 = weight(_text_:a in 4471) [ClassicSimilarity], result of:
              0.009017859 = score(doc=4471,freq=10.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.1709182 = fieldWeight in 4471, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4471)
            0.018723397 = weight(_text_:h in 4471) [ClassicSimilarity], result of:
              0.018723397 = score(doc=4471,freq=2.0), product of:
                0.113683715 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045758117 = queryNorm
                0.16469726 = fieldWeight in 4471, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4471)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    Analysis of user viewing and query-matching behavior furnishes additional evidence that the relevance of retrieved images for system users may arise from descriptions of objects and content-based elements that are not evident, or not even present, in the image. This investigation looks at how users assign pre-determined query terms to retrieved images, as well as at a post-retrieval process in which engagement with the image leads to users' cognitive assessments of meaningful terms. Additionally, affective/emotion-based query terms appear to be an important descriptive category for image retrieval. A system for capturing (eliciting) human interpretations derived from cognitive engagement with viewed images could further enhance the efficiency of image retrieval systems stemming from traditional indexing methods and technology-based content extraction algorithms. An approach to such a system is posited.
    Type
    a
  17. Sauperl, A.: Subject determination during the cataloging process : the development of a system based on theoretical principles (2002) 0.01
    0.009205546 = product of:
      0.018411092 = sum of:
        0.018411092 = product of:
          0.027616639 = sum of:
            0.009017859 = weight(_text_:a in 2293) [ClassicSimilarity], result of:
              0.009017859 = score(doc=2293,freq=40.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.1709182 = fieldWeight in 2293, product of:
                  6.3245554 = tf(freq=40.0), with freq of:
                    40.0 = termFreq=40.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=2293)
            0.01859878 = weight(_text_:22 in 2293) [ClassicSimilarity], result of:
              0.01859878 = score(doc=2293,freq=2.0), product of:
                0.16023713 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045758117 = queryNorm
                0.116070345 = fieldWeight in 2293, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=2293)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    27. 9.2005 14:22:19
    Footnote
    Rez. in: Knowledge organization 30(2003) no.2, S.114-115 (M. Hudon); "This most interesting contribution to the literature of subject cataloguing originates in the author's doctoral dissertation, prepared under the direction of Jerry Saye at the University of North Carolina at Chapel Hill. In seven highly readable chapters, Alenka Sauperl develops possible answers to her principal research question: How do cataloguers determine or identify the topic of a document and choose appropriate subject representations? Specific questions at the source of this research on a process which has not been a frequent object of study include: Where do cataloguers look for an overall sense of what a document is about? How do they get an overall sense of what a document is about, especially when they are not familiar with the discipline? Do they consider only one or several possible interpretations? How do they translate meanings into appropriate and valid class numbers and subject headings? Using a strictly qualitative methodology, Dr. Sauperl's research is a study of twelve cataloguers in real-life situations. The author insists on a holistic rather than purely theoretical understanding of the process she is targeting. Participants in the study were professional cataloguers with at least one year's experience in their current job at one of three large academic libraries in the Southeastern United States. All three libraries have a large central cataloguing department and use OCLC sources and the same automated system; the context of cataloguing tasks is thus considered to be reasonably comparable. All participants were volunteers in this study, which combined two data-gathering techniques: the think-aloud method and time-line interviews. A model of the subject cataloguing process was first developed from observations of a group of six cataloguers who were asked to independently perform original cataloguing on three nonfiction, non-serial items selected from materials regularly assigned to them for processing. The model was then used for follow-up interviews. Each participant in the second group of cataloguers was invited to reflect on his/her work process for a recent challenging document they had catalogued. Results are presented in 12 stories describing as many personal approaches to subject cataloguing. From these stories a summarization is offered and a theoretical model of subject cataloguing is developed which, according to the author, represents a realistic approach to subject cataloguing. Stories alternate comments from the researcher and direct quotations from the observed or interviewed cataloguers. Not surprisingly, the participants' stories reveal similarities in the sequence and accomplishment of several tasks in the process of subject cataloguing. Sauperl's proposed model, described in Chapter 5, includes as main stages: 1) Examination of the book and subject identification; 2) Search for subject headings; 3) Classification. Chapter 6 is a hypothetical case study, using the proposed model to describe the various stages of cataloguing a hypothetical resource. ...
    This document will be particularly useful to subject cataloguing teachers and trainers, who could use the model to design case descriptions and exercises. We believe it is an accurate description of the reality of subject cataloguing today. But now that we know how things are done, the next interesting question may be: Is that the best way? Is there a better, more efficient way to do things? We can only hope that Dr. Sauperl will soon provide her own view of methods and techniques that could improve the flow of work or address the cataloguers' concern as to the lack of feedback on their work. Her several excellent suggestions for further research in this area all build on bits and pieces of what is done already, and stay well away from what could be done by the various actors in the area, from the designers of controlled vocabularies and authority files to those who use these tools on a daily basis to index, classify, or search for information."
  18. Volpers, H.: Inhaltsanalyse (2013) 0.01
    0.008849675 = product of:
      0.01769935 = sum of:
        0.01769935 = product of:
          0.026549023 = sum of:
            0.0047050603 = weight(_text_:a in 1018) [ClassicSimilarity], result of:
              0.0047050603 = score(doc=1018,freq=2.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.089176424 = fieldWeight in 1018, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1018)
            0.021843962 = weight(_text_:h in 1018) [ClassicSimilarity], result of:
              0.021843962 = score(doc=1018,freq=2.0), product of:
                0.113683715 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045758117 = queryNorm
                0.19214681 = fieldWeight in 1018, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1018)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Type
    a
  19. Andersen, J.; Christensen, F.S.: Wittgenstein and indexing theory (2001) 0.01
    0.008743494 = product of:
      0.017486988 = sum of:
        0.017486988 = product of:
          0.02623048 = sum of:
            0.010627648 = weight(_text_:a in 1590) [ClassicSimilarity], result of:
              0.010627648 = score(doc=1590,freq=20.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.20142901 = fieldWeight in 1590, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1590)
            0.015602832 = weight(_text_:h in 1590) [ClassicSimilarity], result of:
              0.015602832 = score(doc=1590,freq=2.0), product of:
                0.113683715 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045758117 = queryNorm
                0.13724773 = fieldWeight in 1590, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1590)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    The paper considers indexing an activity that deals with linguistic entities. It rests on the assumption that a theory of indexing should be based on a philosophy of language, because indexing is concerned with the linguistic representation of meaning. The paper consists of four sections: it begins with some basic considerations on the nature of indexing and the requirements for such a theory; this is followed by a short review of the use of Wittgenstein's philosophy in the LIS literature; next is an analysis of Wittgenstein's work Philosophical Investigations; finally, we deduce a theory of indexing from this philosophy. Considering an indexing theory a theory of meaning entails that, for the purpose of retrieval, indexing is a representation of meaning. Therefore, an indexing theory is concerned with how words are used in their linguistic context. Furthermore, the indexing process is a communicative process containing an interpretative element. Through the philosophy of the later Wittgenstein, it is shown that language and meaning are publicly constituted entities. Since they form the basis of indexing, a theory hereof must take into account that no single actor can define the meaning of documents. Rather, this is decided by the social, historical and linguistic context in which the document is produced, distributed and exchanged. Indexing must clarify and reflect these contexts.
    Source
    Advances in classification research, vol.10: proceedings of the 10th ASIS SIG/CR Classification Research Workshop. Ed.: Albrechtsen, H. u. J.E. Mai
    Type
    a
  20. Miene, A.; Hermes, T.; Ioannidis, G.: Wie kommt das Bild in die Datenbank? : Inhaltsbasierte Analyse von Bildern und Videos (2002) 0.01
    0.008142265 = product of:
      0.01628453 = sum of:
        0.01628453 = product of:
          0.024426792 = sum of:
            0.0057033943 = weight(_text_:a in 213) [ClassicSimilarity], result of:
              0.0057033943 = score(doc=213,freq=4.0), product of:
                0.052761257 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045758117 = queryNorm
                0.10809815 = fieldWeight in 213, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=213)
            0.018723397 = weight(_text_:h in 213) [ClassicSimilarity], result of:
              0.018723397 = score(doc=213,freq=2.0), product of:
                0.113683715 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045758117 = queryNorm
                0.16469726 = fieldWeight in 213, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.046875 = fieldNorm(doc=213)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Source
    Information - Wissenschaft und Praxis. 53(2002) H.1, S.15-21
    Type
    a

Languages

  • e 131
  • d 16
  • f 1
  • nl 1

Types

  • a 139
  • m 4
  • x 4
  • el 3
  • d 1
  • s 1