Search (102 results, page 1 of 6)

  • theme_ss:"Inhaltsanalyse"
  1. Caldera-Serrano, J.: Thematic description of audio-visual information on television (2010) 0.01
    0.011604575 = product of:
      0.054154683 = sum of:
        0.008695048 = weight(_text_:information in 3953) [ClassicSimilarity], result of:
          0.008695048 = score(doc=3953,freq=6.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.20156369 = fieldWeight in 3953, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3953)
        0.02581711 = weight(_text_:retrieval in 3953) [ClassicSimilarity], result of:
          0.02581711 = score(doc=3953,freq=6.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.34732026 = fieldWeight in 3953, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=3953)
        0.019642524 = product of:
          0.058927573 = sum of:
            0.058927573 = weight(_text_:2010 in 3953) [ClassicSimilarity], result of:
              0.058927573 = score(doc=3953,freq=5.0), product of:
                0.117538005 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.024573348 = queryNorm
                0.5013491 = fieldWeight in 3953, product of:
                  2.236068 = tf(freq=5.0), with freq of:
                    5.0 = termFreq=5.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3953)
          0.33333334 = coord(1/3)
      0.21428572 = coord(3/14)
    
    Abstract
    Purpose - This paper endeavours to show the possibilities for thematic description of audio-visual documents for television, with the aim of promoting and facilitating information retrieval. Design/methodology/approach - To achieve these goals, the different database fields, and the way in which they are organised for indexing and thematic element description, are shown, analysed and used as examples. Some of the database fields are extracted from an analytical study of the documentary system of television in Spain; others are being tested in a university television setting in which indexing experiments are carried out. Findings - Not all television information systems use thematic description; nevertheless, some television channels do describe both image and sound thematically, applying thesauri. Moreover, it is also possible to access sequences using full-text retrieval. Originality/value - Carrying out the documentary task with the described techniques promotes thematic indexing and hence thematic retrieval, which is without doubt one of the aspects most demanded by television journalists (along with people's names). This conceptualisation translates into the adaptation of databases to new indexing methods.
    Source
    Aslib proceedings. 62(2010) no.2, S.202-209
    Year
    2010
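
    The score breakdowns in this listing follow Lucene's ClassicSimilarity explain format. As a minimal sketch (the function names are mine, not from the source), the first term weight of result 1 can be reproduced from the printed factors:

```python
import math

def idf(doc_freq, max_docs):
    """ClassicSimilarity inverse document frequency:
    idf = 1 + ln(maxDocs / (docFreq + 1))."""
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, idf_value, query_norm, field_norm):
    """One term's contribution, as in the explain tree:
    score = queryWeight * fieldWeight, where
    queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm."""
    tf = math.sqrt(freq)                        # tf(freq) = sqrt(freq)
    query_weight = idf_value * query_norm
    field_weight = tf * idf_value * field_norm
    return query_weight * field_weight

# weight(_text_:information in 3953): freq=6, idf(docFreq=20772, maxDocs=44218),
# queryNorm=0.024573348, fieldNorm=0.046875  ->  0.008695048
info = term_score(6.0, idf(20772, 44218), 0.024573348, 0.046875)

# The entry score combines the matching term weights with a coordination
# factor for 3 of 14 query terms matched:
# 0.011604575 = (sum of the three weights) * coord(3/14)
total = (info + 0.02581711 + 0.019642524) * (3 / 14)
```

    The same four factors (tf, idf, queryNorm, fieldNorm) recur in every explain tree below; only the frequencies and field norms change per document.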
  2. Raieli, R.: ¬The semantic hole : enthusiasm and caution around multimedia information retrieval (2012) 0.01
    0.0114058275 = product of:
      0.053227194 = sum of:
        0.010247213 = weight(_text_:information in 4888) [ClassicSimilarity], result of:
          0.010247213 = score(doc=4888,freq=12.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.23754507 = fieldWeight in 4888, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4888)
        0.03513263 = weight(_text_:retrieval in 4888) [ClassicSimilarity], result of:
          0.03513263 = score(doc=4888,freq=16.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.47264296 = fieldWeight in 4888, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4888)
        0.007847352 = product of:
          0.023542056 = sum of:
            0.023542056 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.023542056 = score(doc=4888,freq=4.0), product of:
                0.08605168 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.024573348 = queryNorm
                0.27358043 = fieldWeight in 4888, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4888)
          0.33333334 = coord(1/3)
      0.21428572 = coord(3/14)
    
    Abstract
    This paper centres on the tools for the management of new digital documents, which are not only textual but also visual, audio, video or multimedia in the full sense. Among its aims is to demonstrate that operating within the terms of generic Information Retrieval through textual language alone is limiting: it is instead necessary to consider ampler criteria, such as those of MultiMedia Information Retrieval, under which every type of digital document can be analyzed and searched through the elements of language proper to its nature. MMIR is presented as the organic complex of the systems of Text Retrieval, Visual Retrieval, Video Retrieval and Audio Retrieval, each of which takes an approach to information management that directly handles the concrete textual, visual, audio or video content of the documents, here defined as content-based. In conclusion, the limits of this content-based objective access to documents are underlined. The discrepancy known as the semantic gap is that which occurs between semantic-interpretive access and content-based access. Finally, the integration of these conceptions is explained, gathering and composing the merits and advantages of each of the approaches and systems of access to information.
    Date
    22. 1.2012 13:02:10
    Footnote
    With reference to: Enser, P.G.B.: Visual image retrieval. In: Annual review of information science and technology. 42(2008), S.3-42.
    Source
    Knowledge organization. 39(2012) no.1, S.13-22
  3. Nohr, H.: Inhaltsanalyse (1999) 0.01
    0.009930287 = product of:
      0.069512 = sum of:
        0.06281855 = weight(_text_:indexierung in 3430) [ClassicSimilarity], result of:
          0.06281855 = score(doc=3430,freq=2.0), product of:
            0.13215348 = queryWeight, product of:
              5.377919 = idf(docFreq=554, maxDocs=44218)
              0.024573348 = queryNorm
            0.47534537 = fieldWeight in 3430, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.377919 = idf(docFreq=554, maxDocs=44218)
              0.0625 = fieldNorm(doc=3430)
        0.006693451 = weight(_text_:information in 3430) [ClassicSimilarity], result of:
          0.006693451 = score(doc=3430,freq=2.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.1551638 = fieldWeight in 3430, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=3430)
      0.14285715 = coord(2/14)
    
    Abstract
    Content analysis is the elementary sub-process of document indexing. Despite this central position within subject-oriented document description, the process of content analysis still receives too little attention in theory and practice. The reason for this neglect lies in the supposedly subjective character of the comprehension process. To overcome this problem, the precise object of content analysis is first determined. From this, methodologically further-reaching approaches to and procedures of content analysis can be derived. Finally, some further tasks of content analysis, such as qualitative evaluation, are discussed.
    Source
    nfd Information - Wissenschaft und Praxis. 50(1999) H.2, S.69-78
  4. Yoon, J.W.: Utilizing quantitative users' reactions to represent affective meanings of an image (2010) 0.01
    0.009014237 = product of:
      0.042066436 = sum of:
        0.004183407 = weight(_text_:information in 3584) [ClassicSimilarity], result of:
          0.004183407 = score(doc=3584,freq=2.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.09697737 = fieldWeight in 3584, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3584)
        0.021514257 = weight(_text_:retrieval in 3584) [ClassicSimilarity], result of:
          0.021514257 = score(doc=3584,freq=6.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.28943354 = fieldWeight in 3584, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3584)
        0.016368773 = product of:
          0.049106315 = sum of:
            0.049106315 = weight(_text_:2010 in 3584) [ClassicSimilarity], result of:
              0.049106315 = score(doc=3584,freq=5.0), product of:
                0.117538005 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.024573348 = queryNorm
                0.41779095 = fieldWeight in 3584, product of:
                  2.236068 = tf(freq=5.0), with freq of:
                    5.0 = termFreq=5.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3584)
          0.33333334 = coord(1/3)
      0.21428572 = coord(3/14)
    
    Abstract
    Emotional meaning is critical for users to retrieve relevant images. However, because emotional meanings are subject to the individual viewer's interpretation, they are considered difficult to implement when designing image retrieval systems. With the intent of making an image's emotional messages more readily accessible, this study aims to test a new approach designed to enhance the accessibility of emotional meanings during the image search process. This approach utilizes image searchers' emotional reactions, which are quantitatively measured. Two broadly used quantitative measurements for emotional reactions, the Semantic Differential (SD) and the Self-Assessment Manikin (SAM), were selected as tools for gathering users' reactions. Emotional representations obtained from these two tools were compared with three image perception tasks: searching, describing, and sorting. A survey questionnaire with a set of 12 images, tagged with basic emotions, was administered to 58 participants. Results demonstrated that the SAM represents basic emotions on 2-dimensional plots (pleasure and arousal dimensions), and this representation consistently corresponded to the three image perception tasks. This study provided experimental evidence that quantitatively measured user reactions can be a useful complementary element of current image retrieval/indexing systems. Integrating users' reactions obtained from the SAM into image browsing systems would reduce the efforts of human indexers as well as improve the effectiveness of image retrieval systems.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.7, S.1345-1359
    Year
    2010
  5. Knautz, K.; Dröge, E.; Finkelmeyer, S.; Guschauski, D.; Juchem, K.; Krzmyk, C.; Miskovic, D.; Schiefer, J.; Sen, E.; Verbina, J.; Werner, N.; Stock, W.G.: Indexieren von Emotionen bei Videos (2010) 0.01
    0.0089244675 = product of:
      0.041647516 = sum of:
        0.007099477 = weight(_text_:information in 3637) [ClassicSimilarity], result of:
          0.007099477 = score(doc=3637,freq=4.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.16457605 = fieldWeight in 3637, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3637)
        0.014905514 = weight(_text_:retrieval in 3637) [ClassicSimilarity], result of:
          0.014905514 = score(doc=3637,freq=2.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.20052543 = fieldWeight in 3637, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=3637)
        0.019642524 = product of:
          0.058927573 = sum of:
            0.058927573 = weight(_text_:2010 in 3637) [ClassicSimilarity], result of:
              0.058927573 = score(doc=3637,freq=5.0), product of:
                0.117538005 = queryWeight, product of:
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.024573348 = queryNorm
                0.5013491 = fieldWeight in 3637, product of:
                  2.236068 = tf(freq=5.0), with freq of:
                    5.0 = termFreq=5.0
                  4.7831497 = idf(docFreq=1005, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3637)
          0.33333334 = coord(1/3)
      0.21428572 = coord(3/14)
    
    Abstract
    This empirical study concerns emotions in videos, both those depicted and those felt. Are users able to index such emotions consistently enough that their input can be used for emotional video retrieval? We work with a controlled vocabulary of nine emotions (love, happiness, fun, surprise, longing, sadness, anger, disgust and fear), a slider for setting the intensity of each emotion, and the approach of the broad folksonomy, i.e. different users tag the same videos. Test subjects were presented with a total of 20 videos (edited films from YouTube) whose emotions they were asked to index. We received input from 776 participants, corresponding to 279,360 slider settings. The consistency of the user votes is very high; the tags lead to stable distributions of emotions for the individual videos. The final form of the distributions is already reached with relatively few users (fewer than 100). In the sense of power tags, it is possible to separate the emotions central to a given document (where present at all) and to prepare them for emotional information retrieval (EmIR).
    Source
    Information - Wissenschaft und Praxis. 61(2010) H.4, S.221-236
    Year
    2010
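
    The setup described in the abstract above (a nine-emotion controlled vocabulary, an intensity slider, and a broad folksonomy in which many users tag the same video) amounts to aggregating slider votes into per-video emotion distributions and then separating the dominant "power tags". A minimal sketch, in which the 0-10 slider scale, the function names and the cutoff are my assumptions rather than the authors' implementation:

```python
from collections import defaultdict

# The nine controlled emotions of the study (translated from the German)
EMOTIONS = ("love", "happiness", "fun", "surprise", "longing",
            "sadness", "anger", "disgust", "fear")

def aggregate(votes):
    """Mean slider intensity per emotion for one video.

    votes: one dict per user mapping emotion -> intensity (an assumed
    0-10 slider scale); emotions a user left untouched count as 0."""
    sums = defaultdict(float)
    for vote in votes:
        for emotion in EMOTIONS:
            sums[emotion] += vote.get(emotion, 0.0)
    n = len(votes)
    return {e: sums[e] / n for e in EMOTIONS}

def power_tags(distribution, cutoff=5.0):
    """Emotions central to the video: mean intensity above an assumed cutoff,
    strongest first."""
    return sorted((e for e, v in distribution.items() if v >= cutoff),
                  key=lambda e: -distribution[e])
```

    With enough voters, the study reports that these distributions stabilize at well under 100 users, which is what makes the broad-folksonomy approach usable for retrieval.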
  6. Marsh, E.E.; White, M.D.: ¬A taxonomy of relationships between images and text (2003) 0.01
    0.008433137 = product of:
      0.039354637 = sum of:
        0.017349645 = weight(_text_:web in 4444) [ClassicSimilarity], result of:
          0.017349645 = score(doc=4444,freq=2.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.21634221 = fieldWeight in 4444, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4444)
        0.007099477 = weight(_text_:information in 4444) [ClassicSimilarity], result of:
          0.007099477 = score(doc=4444,freq=4.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.16457605 = fieldWeight in 4444, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4444)
        0.014905514 = weight(_text_:retrieval in 4444) [ClassicSimilarity], result of:
          0.014905514 = score(doc=4444,freq=2.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.20052543 = fieldWeight in 4444, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=4444)
      0.21428572 = coord(3/14)
    
    Abstract
    The paper establishes a taxonomy of image-text relationships that reflects the ways that images and text interact. It is applicable to all subject areas and document types. The taxonomy was developed to answer the research question: how does an illustration relate to the text with which it is associated, or, what are the functions of illustration? Developed in a two-stage process - first, analysis of relevant research in children's literature, dictionary development, education, journalism, and library and information design and, second, subsequent application of the first version of the taxonomy to 954 image-text pairs in 45 Web pages (pages with educational content for children, online newspapers, and retail business pages) - the taxonomy identifies 49 relationships and groups them in three categories according to the closeness of the conceptual relationship between image and text. The paper uses qualitative content analysis to illustrate use of the taxonomy to analyze four image-text pairs in government publications and discusses the implications of the research for information retrieval and document design.
  7. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.01
    0.007595515 = product of:
      0.035445735 = sum of:
        0.006693451 = weight(_text_:information in 5830) [ClassicSimilarity], result of:
          0.006693451 = score(doc=5830,freq=2.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.1551638 = fieldWeight in 5830, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=5830)
        0.01987402 = weight(_text_:retrieval in 5830) [ClassicSimilarity], result of:
          0.01987402 = score(doc=5830,freq=2.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.26736724 = fieldWeight in 5830, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=5830)
        0.0088782655 = product of:
          0.026634796 = sum of:
            0.026634796 = weight(_text_:22 in 5830) [ClassicSimilarity], result of:
              0.026634796 = score(doc=5830,freq=2.0), product of:
                0.08605168 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.024573348 = queryNorm
                0.30952093 = fieldWeight in 5830, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5830)
          0.33333334 = coord(1/3)
      0.21428572 = coord(3/14)
    
    Date
    5. 8.2006 13:22:08
  8. Belkin, N.J.: ¬The problem of 'matching' in information retrieval (1980) 0.01
    0.0067430185 = product of:
      0.047201127 = sum of:
        0.017390097 = weight(_text_:information in 1329) [ClassicSimilarity], result of:
          0.017390097 = score(doc=1329,freq=6.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.40312737 = fieldWeight in 1329, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=1329)
        0.029811028 = weight(_text_:retrieval in 1329) [ClassicSimilarity], result of:
          0.029811028 = score(doc=1329,freq=2.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.40105087 = fieldWeight in 1329, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=1329)
      0.14285715 = coord(2/14)
    
    Source
    Theory and application of information research. Proc. of the 2nd Int. Research Forum on Information Science, 3.-6.8.1977, Copenhagen. Ed.: O. Harbo u. L. Kajberg
  9. Krause, J.: Principles of content analysis for information retrieval systems : an overview (1996) 0.01
    0.006641868 = product of:
      0.046493076 = sum of:
        0.01171354 = weight(_text_:information in 5270) [ClassicSimilarity], result of:
          0.01171354 = score(doc=5270,freq=2.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.27153665 = fieldWeight in 5270, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=5270)
        0.034779534 = weight(_text_:retrieval in 5270) [ClassicSimilarity], result of:
          0.034779534 = score(doc=5270,freq=2.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.46789268 = fieldWeight in 5270, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.109375 = fieldNorm(doc=5270)
      0.14285715 = coord(2/14)
    
  10. Piekara, F.H.: Wie idiosynkratisch ist Wissen? : Individuelle Unterschiede im Assoziieren und bei der Anlage und Nutzung von Informationssystemen (1988) 0.01
    0.0063162465 = product of:
      0.044213723 = sum of:
        0.006693451 = weight(_text_:information in 2537) [ClassicSimilarity], result of:
          0.006693451 = score(doc=2537,freq=2.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.1551638 = fieldWeight in 2537, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=2537)
        0.03752027 = weight(_text_:frankfurt in 2537) [ClassicSimilarity], result of:
          0.03752027 = score(doc=2537,freq=2.0), product of:
            0.10213336 = queryWeight, product of:
              4.1562657 = idf(docFreq=1882, maxDocs=44218)
              0.024573348 = queryNorm
            0.36736545 = fieldWeight in 2537, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1562657 = idf(docFreq=1882, maxDocs=44218)
              0.0625 = fieldNorm(doc=2537)
      0.14285715 = coord(2/14)
    
    Imprint
    Frankfurt : Lang
    Theme
    Information
  11. Rosso, M.A.: User-based identification of Web genres (2008) 0.01
    0.0060622552 = product of:
      0.042435784 = sum of:
        0.038252376 = weight(_text_:web in 1863) [ClassicSimilarity], result of:
          0.038252376 = score(doc=1863,freq=14.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.47698978 = fieldWeight in 1863, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1863)
        0.004183407 = weight(_text_:information in 1863) [ClassicSimilarity], result of:
          0.004183407 = score(doc=1863,freq=2.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.09697737 = fieldWeight in 1863, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1863)
      0.14285715 = coord(2/14)
    
    Abstract
    This research explores the use of genre as a document descriptor in order to improve the effectiveness of Web searching. A major issue to be resolved is the identification of what document categories should be used as genres. As genre is a kind of folk typology, document categories must enjoy widespread recognition by their intended user groups in order to qualify as genres. Three user studies were conducted to develop a genre palette and show that it is recognizable to users. (Palette is a term used to denote a classification, attributable to Karlgren, Bretan, Dewe, Hallberg, and Wolkert, 1998.) To simplify the users' classification task, it was decided to focus on Web pages from the edu domain. The first study was a survey of user terminology for Web pages. Three participants separated 100 Web page printouts into stacks according to genre, assigning names and definitions to each genre. The second study aimed to refine the resulting set of 48 (often conceptually and lexically similar) genre names and definitions into a smaller palette of user-preferred terminology. Ten participants classified the same 100 Web pages. A set of five principles for creating a genre palette from individuals' sortings was developed, and the list of 48 was trimmed to 18 genres. The third study aimed to show that users would agree on the genres of Web pages when choosing from the genre palette. In an online experiment in which 257 participants categorized a new set of 55 pages using the 18 genres, on average, over 70% agreed on the genre of each page. Suggestions for improving the genre palette and future directions for the work are discussed.
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.7, S.1053-1072
  12. Rorissa, A.; Iyer, H.: Theories of cognition and image categorization : what category labels reveal about basic level theory (2008) 0.01
    0.0060153836 = product of:
      0.042107683 = sum of:
        0.012296655 = weight(_text_:information in 1958) [ClassicSimilarity], result of:
          0.012296655 = score(doc=1958,freq=12.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.2850541 = fieldWeight in 1958, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1958)
        0.029811028 = weight(_text_:retrieval in 1958) [ClassicSimilarity], result of:
          0.029811028 = score(doc=1958,freq=8.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.40105087 = fieldWeight in 1958, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=1958)
      0.14285715 = coord(2/14)
    
    Abstract
    Information search and retrieval interactions usually involve information content in the form of document collections, information retrieval systems and interfaces, and the user. To fully understand information search and retrieval interactions between users' cognitive space and the information space, researchers need to turn to cognitive models and theories. In this article, the authors use one of these theories, the basic level theory. Use of the basic level theory to understand human categorization is both appropriate and essential to user-centered design of taxonomies, ontologies, browsing interfaces, and other indexing tools and systems. Analyses of data from two studies involving free sorting of 100 images by 105 participants were conducted. The types of categories formed and category labels were examined. Results of the analyses indicate that image category labels generally belong to levels superordinate to the basic level, and are generic and interpretive. Implications for research on theories of cognition and categorization, and for the design of image indexing, retrieval and browsing systems, are discussed.
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.9, S.1383-1392
  13. Thelwall, M.; Buckley, K.; Paltoglou, G.: Sentiment strength detection for the social web (2012) 0.01
    0.0059044356 = product of:
      0.04133105 = sum of:
        0.03541482 = weight(_text_:web in 4972) [ClassicSimilarity], result of:
          0.03541482 = score(doc=4972,freq=12.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.4416067 = fieldWeight in 4972, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4972)
        0.005916231 = weight(_text_:information in 4972) [ClassicSimilarity], result of:
          0.005916231 = score(doc=4972,freq=4.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.13714671 = fieldWeight in 4972, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4972)
      0.14285715 = coord(2/14)
    
    Abstract
    Sentiment analysis is concerned with the automatic extraction of sentiment-related information from text. Although most sentiment analysis addresses commercial tasks, such as extracting opinions from product reviews, there is increasing interest in the affective dimension of the social web, and Twitter in particular. Most sentiment analysis algorithms are not ideally suited to this task because they exploit indirect indicators of sentiment that can reflect genre or topic instead. Hence, such algorithms used to process social web texts can identify spurious sentiment patterns caused by topics rather than affective phenomena. This article assesses an improved version of the algorithm SentiStrength for sentiment strength detection across the social web that primarily uses direct indications of sentiment. The results from six diverse social web data sets (MySpace, Twitter, YouTube, Digg, Runners World, BBC Forums) indicate that SentiStrength 2 is successful in the sense of performing better than a baseline approach for all data sets in both supervised and unsupervised cases. SentiStrength is not always better than machine-learning approaches that exploit indirect indicators of sentiment, however, and is particularly weaker for positive sentiment in news-related discussions. Overall, the results suggest that, even unsupervised, SentiStrength is robust enough to be applied to a wide variety of different social web contexts.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.1, S.163-173
  14. Pejtersen, A.M.: Implications of users' value perception for the design of knowledge based bibliographic retrieval systems (1985) 0.01
    0.005693029 = product of:
      0.039851204 = sum of:
        0.010040177 = weight(_text_:information in 2088) [ClassicSimilarity], result of:
          0.010040177 = score(doc=2088,freq=2.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.23274569 = fieldWeight in 2088, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=2088)
        0.029811028 = weight(_text_:retrieval in 2088) [ClassicSimilarity], result of:
          0.029811028 = score(doc=2088,freq=2.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.40105087 = fieldWeight in 2088, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=2088)
      0.14285715 = coord(2/14)
    
    Source
    2nd Symposium on Empirical Foundations of Information and Software Science, 3.-5.10.84, Atlanta
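    The per-entry scores in this listing are Lucene "explain" trees produced by the classic TF-IDF similarity. As a rough illustration, the tree shown for entry 14 above composes as sketched below; the function is a simplified reconstruction using the constants from the listing, not Lucene's actual implementation:

    ```python
    import math

    # Simplified sketch of Lucene ClassicSimilarity scoring, reconstructed
    # from the "explain" tree for entry 14 (Pejtersen 1985). Constants are
    # copied from the listing; the formula is the standard TF-IDF one:
    #   score(term) = queryWeight * fieldWeight
    #               = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)

    def term_score(freq, idf, query_norm, field_norm):
        query_weight = idf * query_norm                     # e.g. 0.04313797 for "information"
        field_weight = math.sqrt(freq) * idf * field_norm   # tf(freq) = sqrt(freq)
        return query_weight * field_weight

    QUERY_NORM = 0.024573348  # same queryNorm for every term in this listing

    info_score = term_score(2.0, 1.7554779, QUERY_NORM, 0.09375)  # "information"
    retr_score = term_score(2.0, 3.024915, QUERY_NORM, 0.09375)   # "retrieval"

    # coord(2/14): only 2 of the 14 query clauses matched this document
    total = (info_score + retr_score) * (2 / 14)
    print(total)  # close to the listed score of 0.005693029
    ```

    The same composition (tf x idf x fieldNorm per term, summed, then scaled by the coord factor) reproduces every score tree on this page, up to rounding of the displayed constants.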
  15. Morehead, D.R.; Pejtersen, A.M.; Rouse, W.B.: ¬The value of information and computer-aided information seeking : problem formulation and application to fiction retrieval (1984) 0.01
    0.005562706 = product of:
      0.03893894 = sum of:
        0.014346098 = weight(_text_:information in 5828) [ClassicSimilarity], result of:
          0.014346098 = score(doc=5828,freq=12.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.3325631 = fieldWeight in 5828, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5828)
        0.024592843 = weight(_text_:retrieval in 5828) [ClassicSimilarity], result of:
          0.024592843 = score(doc=5828,freq=4.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.33085006 = fieldWeight in 5828, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5828)
      0.14285715 = coord(2/14)
    
    Abstract
    Issues concerning the formulation and application of a model of how humans value information are examined. Formulation of a value function is based on research from modelling, value assessment, human information seeking behavior, and human decision making. The proposed function is incorporated into a computer-based fiction retrieval system and evaluated using data from nine searches. Evaluation is based on the ability of an individual's value function to discriminate among novels selected, rejected, and not considered. The results are discussed in terms of both formulation and utilization of a value function as well as the implications for extending the proposed formulation to other information seeking environments
    Source
    Information processing and management. 20(1984), S.583-601
  16. Bednarek, M.: Intellectual access to pictorial information (1993) 0.01
    0.0055008684 = product of:
      0.038506076 = sum of:
        0.008695048 = weight(_text_:information in 5631) [ClassicSimilarity], result of:
          0.008695048 = score(doc=5631,freq=6.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.20156369 = fieldWeight in 5631, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5631)
        0.029811028 = weight(_text_:retrieval in 5631) [ClassicSimilarity], result of:
          0.029811028 = score(doc=5631,freq=8.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.40105087 = fieldWeight in 5631, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=5631)
      0.14285715 = coord(2/14)
    
    Abstract
    Visual materials represent a significantly different type of communication from textual materials and therefore present distinct challenges for the process of retrieval, especially if by retrieval we mean intellectual access to the content of images. This paper outlines the special characteristics of visual materials, focusing on their potential complexity and subjectivity, and the methods used and explored for gaining access to visual materials as reported in the literature. It concludes that methods of access to visual materials are dominated by the relatively mature systems developed for textual materials and that access methods based on visual communication are still largely in the developmental or prototype stage. Although reported research on user requirements in the retrieval of visual information is noticeably lacking, the results of at least one study indicate that the visually based retrieval methods of structured and unstructured browsing seem to be preferred for visual materials and that effective retrieval methods are ultimately related to characteristics of the enquirer and the visual information sought
  17. Hidderley, R.; Rafferty, P.: Democratic indexing : an approach to the retrieval of fiction (1997) 0.01
    0.005486098 = product of:
      0.038402684 = sum of:
        0.008282723 = weight(_text_:information in 1783) [ClassicSimilarity], result of:
          0.008282723 = score(doc=1783,freq=4.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.1920054 = fieldWeight in 1783, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1783)
        0.03011996 = weight(_text_:retrieval in 1783) [ClassicSimilarity], result of:
          0.03011996 = score(doc=1783,freq=6.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.40520695 = fieldWeight in 1783, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1783)
      0.14285715 = coord(2/14)
    
    Abstract
    Examines how an analytical framework for describing the contents of images may be extended to deal with time-based materials such as film and music. A levels-of-meaning table was developed and used as an indexing template for image retrieval purposes. Develops a concept of democratic indexing focused on user interpretation. Describes the approach to image or pictorial information retrieval and extends the approach in relation to fiction
    Source
    Information services and use. 17(1997) nos.2/3, S.101-109
  18. Beghtol, C.: Stories : applications of narrative discourse analysis to issues in information storage and retrieval (1997) 0.01
    0.005486098 = product of:
      0.038402684 = sum of:
        0.008282723 = weight(_text_:information in 5844) [ClassicSimilarity], result of:
          0.008282723 = score(doc=5844,freq=4.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.1920054 = fieldWeight in 5844, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5844)
        0.03011996 = weight(_text_:retrieval in 5844) [ClassicSimilarity], result of:
          0.03011996 = score(doc=5844,freq=6.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.40520695 = fieldWeight in 5844, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5844)
      0.14285715 = coord(2/14)
    
    Abstract
    The arts, humanities, and social sciences commonly borrow concepts and methods from the sciences, but interdisciplinary borrowing seldom occurs in the opposite direction. Research on narrative discourse is relevant to problems of documentary storage and retrieval, for the arts and humanities in particular, but also for other broad areas of knowledge. This paper views the potential application of narrative discourse analysis to information storage and retrieval problems from 2 perspectives: 1) analysis and comparison of narrative documents in all disciplines may be simplified if fundamental categories that occur in narrative documents can be isolated; and 2) the possibility of subdividing the world of knowledge initially into narrative and non-narrative documents is explored with particular attention to Werlich's work on text types
  19. Computergestützte Inhaltsanalyse in der empirischen Sozialforschung (1983) 0.01
    0.005360039 = product of:
      0.07504054 = sum of:
        0.07504054 = weight(_text_:frankfurt in 1877) [ClassicSimilarity], result of:
          0.07504054 = score(doc=1877,freq=2.0), product of:
            0.10213336 = queryWeight, product of:
              4.1562657 = idf(docFreq=1882, maxDocs=44218)
              0.024573348 = queryNorm
            0.7347309 = fieldWeight in 1877, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1562657 = idf(docFreq=1882, maxDocs=44218)
              0.125 = fieldNorm(doc=1877)
      0.071428575 = coord(1/14)
    
    Imprint
    Frankfurt : Campus
  20. Bertola, F.; Patti, V.: Ontology-based affective models to organize artworks in the social semantic web (2016) 0.00
    0.004422613 = product of:
      0.03095829 = sum of:
        0.025042059 = weight(_text_:web in 2669) [ClassicSimilarity], result of:
          0.025042059 = score(doc=2669,freq=6.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.3122631 = fieldWeight in 2669, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2669)
        0.005916231 = weight(_text_:information in 2669) [ClassicSimilarity], result of:
          0.005916231 = score(doc=2669,freq=4.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.13714671 = fieldWeight in 2669, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2669)
      0.14285715 = coord(2/14)
    
    Abstract
    In this paper, we focus on applying sentiment analysis to resources from online art collections, exploiting, as an information source, the tags that visitors leave as textual traces when commenting on artworks on social platforms. We present a framework where methods and tools from a set of disciplines, ranging from the Semantic and Social Web to Natural Language Processing, provide the building blocks for creating a semantic social space in which to organize artworks according to an ontology of emotions. The ontology is inspired by Plutchik's circumplex model, a well-founded psychological model of human emotions. Users can be involved in the creation of the emotional space through a graphical interactive interface. The development of such a semantic space enables new ways of accessing and exploring art collections. The affective categorization model and the emotion detection output are encoded in W3C ontology languages. This gives us the twofold advantage of enabling tractable reasoning on detected emotions and related artworks, and of fostering the interoperability and integration of tools developed in the Semantic Web and Linked Data community. The proposal has been evaluated against a real-world case study, a dataset of tagged multimedia artworks from the ArsMeteo Italian online collection, and validated through a user study.
    Source
    Information processing and management. 52(2016) no.1, S.139-162

Languages

  • e 89
  • d 13

Types

  • a 91
  • m 5
  • x 3
  • d 2
  • el 2
  • s 1