Search (96 results, page 1 of 5)

  • theme_ss:"Inhaltsanalyse"
  1. Renouf, A.: Making sense of text : automated approaches to meaning extraction (1993) 0.04
    0.035069317 = product of:
      0.21041588 = sum of:
        0.023125017 = weight(_text_:information in 7111) [ClassicSimilarity], result of:
          0.023125017 = score(doc=7111,freq=4.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.3840108 = fieldWeight in 7111, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=7111)
        0.18729086 = weight(_text_:extraction in 7111) [ClassicSimilarity], result of:
          0.18729086 = score(doc=7111,freq=2.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.9189739 = fieldWeight in 7111, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.109375 = fieldNorm(doc=7111)
      0.16666667 = coord(2/12)
    
    Imprint
    Oxford : Learned Information
    Source
    Online information 93: 17th International Online Meeting Proceedings, London, 7.-9.12.1993. Ed. by D.I. Raitt et al
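    The explain trees in these results are Lucene ClassicSimilarity (tf-idf) output, and their arithmetic can be reproduced directly. Below is a minimal Python sketch, assuming Lucene's documented formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq+1)); queryNorm and the fieldNorm values are copied from the explain output above rather than recomputed, since they depend on the full query and on index-time field statistics.

      import math

      def tf(freq):                     # ClassicSimilarity: tf = sqrt(freq)
          return math.sqrt(freq)

      def idf(doc_freq, max_docs):      # idf = 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      QUERY_NORM = 0.03430388           # copied from the explain output above

      def leaf(freq, doc_freq, field_norm, max_docs=44218):
          term_idf = idf(doc_freq, max_docs)
          query_weight = term_idf * QUERY_NORM             # idf * queryNorm
          field_weight = tf(freq) * term_idf * field_norm  # tf * idf * fieldNorm
          return query_weight * field_weight

      w_info = leaf(freq=4.0, doc_freq=20772, field_norm=0.109375)  # ~0.023125017
      w_extr = leaf(freq=2.0, doc_freq=315,   field_norm=0.109375)  # ~0.18729086
      score = (w_info + w_extr) * 2 / 12                            # coord(2/12)
      print(score)                                                  # ~0.035069317

    The 0.04 shown next to the title is this value rounded to two decimals, and coord(2/12) reflects that two of the twelve query clauses matched the document.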
  2. Thelwall, M.; Buckley, K.; Paltoglou, G.: Sentiment strength detection for the social web (2012) 0.03
    0.031146718 = product of:
      0.12458687 = sum of:
        0.049438346 = weight(_text_:web in 4972) [ClassicSimilarity], result of:
          0.049438346 = score(doc=4972,freq=12.0), product of:
            0.111951075 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03430388 = queryNorm
            0.4416067 = fieldWeight in 4972, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4972)
        0.008258934 = weight(_text_:information in 4972) [ClassicSimilarity], result of:
          0.008258934 = score(doc=4972,freq=4.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.13714671 = fieldWeight in 4972, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4972)
        0.06688959 = weight(_text_:extraction in 4972) [ClassicSimilarity], result of:
          0.06688959 = score(doc=4972,freq=2.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.32820496 = fieldWeight in 4972, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4972)
      0.25 = coord(3/12)
    
    Abstract
    Sentiment analysis is concerned with the automatic extraction of sentiment-related information from text. Although most sentiment analysis addresses commercial tasks, such as extracting opinions from product reviews, there is increasing interest in the affective dimension of the social web, and Twitter in particular. Most sentiment analysis algorithms are not ideally suited to this task because they exploit indirect indicators of sentiment that can reflect genre or topic instead. Hence, such algorithms used to process social web texts can identify spurious sentiment patterns caused by topics rather than affective phenomena. This article assesses an improved version of the algorithm SentiStrength for sentiment strength detection across the social web that primarily uses direct indications of sentiment. The results from six diverse social web data sets (MySpace, Twitter, YouTube, Digg, Runners World, BBC Forums) indicate that SentiStrength 2 is successful in the sense of performing better than a baseline approach for all data sets in both supervised and unsupervised cases. SentiStrength is not always better than machine-learning approaches that exploit indirect indicators of sentiment, however, and is particularly weaker for positive sentiment in news-related discussions. Overall, the results suggest that, even unsupervised, SentiStrength is robust enough to be applied to a wide variety of different social web contexts.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.1, S.163-173
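    As an illustration of the "direct indications of sentiment" idea in the abstract above, a toy lexicon-based strength scorer is sketched below in Python. The word list and the booster handling are invented for the example; SentiStrength's actual lexicon and heuristics (negation, emoticons, spelling correction) are far richer.

      # Hypothetical mini-lexicon; real sentiment lexica hold thousands of terms.
      LEXICON = {"love": 3, "good": 2, "hate": -4, "bad": -2}
      BOOSTERS = {"very": 1, "really": 1}   # words that strengthen the next term

      def strength(text):
          # Returns a SentiStrength-style pair: positive 1..5, negative -1..-5.
          pos, neg, boost = 1, -1, 0
          for word in text.lower().split():
              if word in BOOSTERS:
                  boost += BOOSTERS[word]
                  continue
              s = LEXICON.get(word, 0)
              if s > 0:
                  pos = max(pos, min(5, s + boost))
              elif s < 0:
                  neg = min(neg, max(-5, s - boost))
              boost = 0
          return pos, neg

      print(strength("I really love this but the ending was bad"))  # (4, -2)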
  3. From information to knowledge : conceptual and content analysis by computer (1995) 0.02
    0.023486678 = product of:
      0.09394671 = sum of:
        0.008258934 = weight(_text_:information in 5392) [ClassicSimilarity], result of:
          0.008258934 = score(doc=5392,freq=4.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.13714671 = fieldWeight in 5392, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5392)
        0.06688959 = weight(_text_:extraction in 5392) [ClassicSimilarity], result of:
          0.06688959 = score(doc=5392,freq=2.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.32820496 = fieldWeight in 5392, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5392)
        0.018798191 = weight(_text_:system in 5392) [ClassicSimilarity], result of:
          0.018798191 = score(doc=5392,freq=2.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.17398985 = fieldWeight in 5392, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5392)
      0.25 = coord(3/12)
    
    Content
    SCHMIDT, K.M.: Concepts - content - meaning: an introduction; DUCHASTEL, J. et al.: The SACAO project: using computation toward textual data analysis; PAQUIN, L.-C. and L. DUPUY: An approach to expertise transfer: computer-assisted text analysis; HOGENRAAD, R., Y. BESTGEN and J.-L. NYSTEN: Terrorist rhetoric: texture and architecture; MOHLER, P.P.: On the interaction between reading and computing: an interpretative approach to content analysis; LANCASHIRE, I.: Computer tools for cognitive stylistics; MERGENTHALER, E.: An outline of knowledge based text analysis; NAMENWIRTH, J.Z.: Ideography in computer-aided content analysis; WEBER, R.P. and J.Z. NAMENWIRTH: Content-analytic indicators: a self-critique; McKINNON, A.: Optimizing the aberrant frequency word technique; ROSATI, R.: Factor analysis in classical archaeology: export patterns of Attic pottery trade; PETRILLO, P.S.: Old and new worlds: ancient coinage and modern technology; DARANYI, S., S. MARJAI et al.: Caryatids and the measurement of semiosis in architecture; ZARRI, G.P.: Intelligent information retrieval: an application in the field of historical biographical data; BOUCHARD, G., R. ROY et al.: Computers and genealogy: from family reconstitution to population reconstruction; DEMÉLAS-BOHY, M.-D. and M. RENAUD: Instability, networks and political parties: a political history expert system prototype; DARANYI, S., A. ABRANYI and G. KOVACS: Knowledge extraction from ethnopoetic texts by multivariate statistical methods; FRAUTSCHI, R.L.: Measures of narrative voice in French prose fiction applied to textual samples from the enlightenment to the twentieth century; DANNENBERG, R. et al.: A project in computer music: the musician's workbench
  4. Short, M.: Text mining and subject analysis for fiction; or, using machine learning and information extraction to assign subject headings to dime novels (2019) 0.02
    0.023435093 = product of:
      0.14061056 = sum of:
        0.008175928 = weight(_text_:information in 5481) [ClassicSimilarity], result of:
          0.008175928 = score(doc=5481,freq=2.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.13576832 = fieldWeight in 5481, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5481)
        0.13243464 = weight(_text_:extraction in 5481) [ClassicSimilarity], result of:
          0.13243464 = score(doc=5481,freq=4.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.6498127 = fieldWeight in 5481, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5481)
      0.16666667 = coord(2/12)
    
    Abstract
    This article describes multiple experiments in text mining at Northern Illinois University that were undertaken to improve the efficiency and accuracy of cataloging. It focuses narrowly on subject analysis of dime novels, a format of inexpensive fiction that was popular in the United States between 1860 and 1915. NIU holds more than 55,000 dime novels in its collections, which it is in the process of comprehensively digitizing. Classification, keyword extraction, named-entity recognition, clustering, and topic modeling are discussed as means of assigning subject headings to improve their discoverability by researchers and to increase the productivity of digitization workflows.
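    To make the keyword-extraction step concrete, here is a hedged Python sketch: rank each digitized text's terms by TF-IDF against the rest of the corpus and surface the top terms as candidate subject keywords for a cataloguer to review. The corpus strings, the feature cap, and the ten-term cut-off are illustrative, not details taken from the article.

      from sklearn.feature_extraction.text import TfidfVectorizer

      texts = ["full text of the first dime novel", "full text of the second dime novel"]
      titles = ["novel 1", "novel 2"]

      vectorizer = TfidfVectorizer(stop_words="english", max_features=5000)
      matrix = vectorizer.fit_transform(texts)
      terms = vectorizer.get_feature_names_out()

      for i, title in enumerate(titles):
          row = matrix.getrow(i).toarray().ravel()
          top = row.argsort()[::-1][:10]     # ten highest-scoring terms
          print(title, [terms[j] for j in top if row[j] > 0])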
  5. Greisdorf, H.; O'Connor, B.: Modelling what users see when they look at images : a cognitive viewpoint (2002) 0.02
    0.019889804 = product of:
      0.11933882 = sum of:
        0.08026751 = weight(_text_:extraction in 4471) [ClassicSimilarity], result of:
          0.08026751 = score(doc=4471,freq=2.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.39384598 = fieldWeight in 4471, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.046875 = fieldNorm(doc=4471)
        0.039071307 = weight(_text_:system in 4471) [ClassicSimilarity], result of:
          0.039071307 = score(doc=4471,freq=6.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.36163113 = fieldWeight in 4471, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=4471)
      0.16666667 = coord(2/12)
    
    Abstract
    Analysis of user viewing and query-matching behavior furnishes additional evidence that the relevance of retrieved images for system users may arise from descriptions of objects and content-based elements that are not evident or not even present in the image. This investigation looks at how users assign pre-determined query terms to retrieved images, as well as at a post-retrieval process of image engagement that elicits users' cognitive assessments of meaningful terms. Additionally, affective/emotion-based query terms appear to be an important descriptive category for image retrieval. A system for capturing (eliciting) human interpretations derived from cognitive engagements with viewed images could further enhance the efficiency of image retrieval systems stemming from traditional indexing methods and technology-based content extraction algorithms. An approach to such a system is posited.
  6. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.02
    0.017617544 = product of:
      0.07047018 = sum of:
        0.009343918 = weight(_text_:information in 5830) [ClassicSimilarity], result of:
          0.009343918 = score(doc=5830,freq=2.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.1551638 = fieldWeight in 5830, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=5830)
        0.04253545 = weight(_text_:system in 5830) [ClassicSimilarity], result of:
          0.04253545 = score(doc=5830,freq=4.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.3936941 = fieldWeight in 5830, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=5830)
        0.01859081 = product of:
          0.03718162 = sum of:
            0.03718162 = weight(_text_:22 in 5830) [ClassicSimilarity], result of:
              0.03718162 = score(doc=5830,freq=2.0), product of:
                0.120126344 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03430388 = queryNorm
                0.30952093 = fieldWeight in 5830, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5830)
          0.5 = coord(1/2)
      0.25 = coord(3/12)
    
    Abstract
    This paper examines various issues that arise in establishing a theoretical basis for an experimental fiction analysis system. It analyzes the warrants of fiction and of works about fiction. From this analysis, it derives classificatory requirements for a fiction system. Classificatory techniques that may contribute to the specification of data elements in fiction are suggested.
    Date
    5. 8.2006 13:22:08
  7. Rosso, M.A.: User-based identification of Web genres (2008) 0.01
    0.009873245 = product of:
      0.059239466 = sum of:
        0.053399518 = weight(_text_:web in 1863) [ClassicSimilarity], result of:
          0.053399518 = score(doc=1863,freq=14.0), product of:
            0.111951075 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03430388 = queryNorm
            0.47698978 = fieldWeight in 1863, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1863)
        0.0058399485 = weight(_text_:information in 1863) [ClassicSimilarity], result of:
          0.0058399485 = score(doc=1863,freq=2.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.09697737 = fieldWeight in 1863, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1863)
      0.16666667 = coord(2/12)
    
    Abstract
    This research explores the use of genre as a document descriptor in order to improve the effectiveness of Web searching. A major issue to be resolved is the identification of what document categories should be used as genres. As genre is a kind of folk typology, document categories must enjoy widespread recognition by their intended user groups in order to qualify as genres. Three user studies were conducted to develop a genre palette and show that it is recognizable to users. (Palette is a term used to denote a classification, attributable to Karlgren, Bretan, Dewe, Hallberg, and Wolkert, 1998.) To simplify the users' classification task, it was decided to focus on Web pages from the edu domain. The first study was a survey of user terminology for Web pages. Three participants separated 100 Web page printouts into stacks according to genre, assigning names and definitions to each genre. The second study aimed to refine the resulting set of 48 (often conceptually and lexically similar) genre names and definitions into a smaller palette of user-preferred terminology. Ten participants classified the same 100 Web pages. A set of five principles for creating a genre palette from individuals' sortings was developed, and the list of 48 was trimmed to 18 genres. The third study aimed to show that users would agree on the genres of Web pages when choosing from the genre palette. In an online experiment in which 257 participants categorized a new set of 55 pages using the 18 genres, on average, over 70% agreed on the genre of each page. Suggestions for improving the genre palette and future directions for the work are discussed.
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.7, S.1053-1072
  8. Pejtersen, A.M.: Design of a computer-aided user-system dialogue based on an analysis of users' search behaviour (1984) 0.01
    0.009855256 = product of:
      0.059131537 = sum of:
        0.014015877 = weight(_text_:information in 1044) [ClassicSimilarity], result of:
          0.014015877 = score(doc=1044,freq=2.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.23274569 = fieldWeight in 1044, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=1044)
        0.04511566 = weight(_text_:system in 1044) [ClassicSimilarity], result of:
          0.04511566 = score(doc=1044,freq=2.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.41757566 = fieldWeight in 1044, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.09375 = fieldNorm(doc=1044)
      0.16666667 = coord(2/12)
    
    Source
    Social science information studies. 4(1984), S.167-183
  9. Schulzki-Haddouti, C.; Brückner, A.: Die Suche nach dem Sinn : Automatische Inhaltsanalyse nicht nur für Geheimdienste [The search for meaning: automatic content analysis not only for intelligence services] (2001) 0.01
    0.0078838635 = product of:
      0.094606355 = sum of:
        0.094606355 = weight(_text_:suche in 3133) [ClassicSimilarity], result of:
          0.094606355 = score(doc=3133,freq=2.0), product of:
            0.17138755 = queryWeight, product of:
              4.996156 = idf(docFreq=812, maxDocs=44218)
              0.03430388 = queryNorm
            0.5520025 = fieldWeight in 3133, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.996156 = idf(docFreq=812, maxDocs=44218)
              0.078125 = fieldNorm(doc=3133)
      0.083333336 = coord(1/12)
    
  10. Morehead, D.R.; Pejtersen, A.M.; Rouse, W.B.: The value of information and computer-aided information seeking : problem formulation and application to fiction retrieval (1984) 0.01
    0.007724053 = product of:
      0.046344317 = sum of:
        0.020026851 = weight(_text_:information in 5828) [ClassicSimilarity], result of:
          0.020026851 = score(doc=5828,freq=12.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.3325631 = fieldWeight in 5828, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5828)
        0.026317468 = weight(_text_:system in 5828) [ClassicSimilarity], result of:
          0.026317468 = score(doc=5828,freq=2.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.2435858 = fieldWeight in 5828, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5828)
      0.16666667 = coord(2/12)
    
    Abstract
    Issues concerning the formulation and application of a model of how humans value information are examined. Formulation of a value function is based on research from modelling, value assessment, human information seeking behavior, and human decision making. The proposed function is incorporated into a computer-based fiction retrieval system and evaluated using data from nine searches. Evaluation is based on the ability of an individual's value function to discriminate among novels selected, rejected, and not considered. The results are discussed in terms of both formulation and utilization of a value function as well as the implications for extending the proposed formulation to other information seeking environments
    Source
    Information processing and management. 20(1984), S.583-601
  11. Beghtol, C.: The classification of fiction : the development of a system based on theoretical principles (1994) 0.01
    0.0075657414 = product of:
      0.045394447 = sum of:
        0.008175928 = weight(_text_:information in 3413) [ClassicSimilarity], result of:
          0.008175928 = score(doc=3413,freq=2.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.13576832 = fieldWeight in 3413, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3413)
        0.03721852 = weight(_text_:system in 3413) [ClassicSimilarity], result of:
          0.03721852 = score(doc=3413,freq=4.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.34448233 = fieldWeight in 3413, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3413)
      0.16666667 = coord(2/12)
    
    Abstract
    The work is an adaptation of the author's dissertation and has the following chapters: (1) background and introduction; (2) a problem in classification theory; (3) previous fiction analysis theories and systems and 'The left hand of darkness'; (4) fiction warrant and critical warrant; (5) experimental fiction analysis system (EFAS); (6) application and evaluation of EFAS. Appendix 1 gives references to fiction analysis systems and appendix 2 lists EFAS coding sheets
    Footnote
    Review in: Knowledge organization 21(1994) no.3, S.165-167 (W. Bies); JASIS 46(1995) no.5, S.389-390 (E.G. Bierbaum); Canadian journal of information and library science 20(1995) nos.3/4, S.52-53 (L. Rees-Potter)
  12. Bertola, F.; Patti, V.: Ontology-based affective models to organize artworks in the social semantic web (2016) 0.01
    0.0072028544 = product of:
      0.043217126 = sum of:
        0.03495819 = weight(_text_:web in 2669) [ClassicSimilarity], result of:
          0.03495819 = score(doc=2669,freq=6.0), product of:
            0.111951075 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03430388 = queryNorm
            0.3122631 = fieldWeight in 2669, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2669)
        0.008258934 = weight(_text_:information in 2669) [ClassicSimilarity], result of:
          0.008258934 = score(doc=2669,freq=4.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.13714671 = fieldWeight in 2669, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2669)
      0.16666667 = coord(2/12)
    
    Abstract
    In this paper, we focus on applying sentiment analysis to resources from online art collections, exploiting as an information source the tags, intended as textual traces, that visitors leave to comment on artworks on social platforms. We present a framework where methods and tools from a set of disciplines, ranging from the Semantic and Social Web to Natural Language Processing, provide us the building blocks for creating a semantic social space to organize artworks according to an ontology of emotions. The ontology is inspired by Plutchik's circumplex model, a well-founded psychological model of human emotions. Users can be involved in the creation of the emotional space through a graphical interactive interface. The development of such a semantic space enables new ways of accessing and exploring art collections. The affective categorization model and the emotion detection output are encoded into W3C ontology languages. This gives us the twofold advantage of enabling tractable reasoning on detected emotions and related artworks, and of fostering the interoperability and integration of tools developed in the Semantic Web and Linked Data community. The proposal has been evaluated against a real-world case study, a dataset of tagged multimedia artworks from the ArsMeteo Italian online collection, and validated through a user study.
    Source
    Information processing and management. 52(2016) no.1, S.139-162
  13. Pejtersen, A.M.: Design of a classification scheme for fiction based on an analysis of actual user-librarian communication, and use of the scheme for control of librarians' search strategies (1980) 0.01
    0.0066260635 = product of:
      0.03975638 = sum of:
        0.016517868 = weight(_text_:information in 5835) [ClassicSimilarity], result of:
          0.016517868 = score(doc=5835,freq=4.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.27429342 = fieldWeight in 5835, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=5835)
        0.023238512 = product of:
          0.046477024 = sum of:
            0.046477024 = weight(_text_:22 in 5835) [ClassicSimilarity], result of:
              0.046477024 = score(doc=5835,freq=2.0), product of:
                0.120126344 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03430388 = queryNorm
                0.38690117 = fieldWeight in 5835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5835)
          0.5 = coord(1/2)
      0.16666667 = coord(2/12)
    
    Date
    5. 8.2006 13:22:44
    Source
    Theory and application of information research. Proc. of the 2nd Int. Research Forum on Information Science, 3.-6.8.1977, Copenhagen. Ed.: O. Harbo and L. Kajberg
  14. Sauperl, A.: Subject determination during the cataloging process : the development of a system based on theoretical principles (2002) 0.01
    0.006606579 = product of:
      0.026426315 = sum of:
        0.0035039692 = weight(_text_:information in 2293) [ClassicSimilarity], result of:
          0.0035039692 = score(doc=2293,freq=2.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.058186423 = fieldWeight in 2293, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2293)
        0.015950793 = weight(_text_:system in 2293) [ClassicSimilarity], result of:
          0.015950793 = score(doc=2293,freq=4.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.14763528 = fieldWeight in 2293, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2293)
        0.006971553 = product of:
          0.013943106 = sum of:
            0.013943106 = weight(_text_:22 in 2293) [ClassicSimilarity], result of:
              0.013943106 = score(doc=2293,freq=2.0), product of:
                0.120126344 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03430388 = queryNorm
                0.116070345 = fieldWeight in 2293, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=2293)
          0.5 = coord(1/2)
      0.25 = coord(3/12)
    
    Date
    27. 9.2005 14:22:19
    Footnote
    Review in: Knowledge organization 30(2003) no.2, S.114-115 (M. Hudon); "This most interesting contribution to the literature of subject cataloguing originates in the author's doctoral dissertation, prepared under the direction of Jerry Saye at the University of North Carolina at Chapel Hill. In seven highly readable chapters, Alenka Sauperl develops possible answers to her principal research question: How do cataloguers determine or identify the topic of a document and choose appropriate subject representations? Specific questions at the source of this research, on a process which has not been a frequent object of study, include: Where do cataloguers look for an overall sense of what a document is about? How do they get an overall sense of what a document is about, especially when they are not familiar with the discipline? Do they consider only one or several possible interpretations? How do they translate meanings into appropriate and valid class numbers and subject headings? Using a strictly qualitative methodology, Dr. Sauperl's research is a study of twelve cataloguers in real-life situations. The author insists on a holistic rather than purely theoretical understanding of the process she is targeting. Participants in the study were professional cataloguers, with at least one year's experience in their current job at one of three large academic libraries in the Southeastern United States. All three libraries have a large central cataloguing department, and use OCLC sources and the same automated system; the context of cataloguing tasks is thus considered to be reasonably comparable. All participants were volunteers in this study, which combined two data-gathering techniques: the think-aloud method and time-line interviews. A model of the subject cataloguing process was first developed from observations of a group of six cataloguers who were asked to independently perform original cataloguing on three nonfiction, non-serial items selected from materials regularly assigned to them for processing. The model was then used for follow-up interviews. Each participant in the second group of cataloguers was invited to reflect on his/her work process for a recent challenging document they had catalogued. Results are presented in 12 stories describing as many personal approaches to subject cataloguing. From these stories a summarization is offered and a theoretical model of subject cataloguing is developed which, according to the author, represents a realistic approach to subject cataloguing. Stories alternate comments from the researcher and direct quotations from the observed or interviewed cataloguers. Not surprisingly, the participants' stories reveal similarities in the sequence and accomplishment of several tasks in the process of subject cataloguing. Sauperl's proposed model, described in Chapter 5, includes as main stages: 1) Examination of the book and subject identification; 2) Search for subject headings; 3) Classification. Chapter 6 is a hypothetical case study, using the proposed model to describe the various stages of cataloguing a hypothetical resource. ...
    This document will be particularly useful to subject cataloguing teachers and trainers who could use the model to design case descriptions and exercises. We believe it is an accurate description of the reality of subject cataloguing today. But now that we know how things are done, the next interesting question may be: Is that the best way? Is there a better, more efficient, way to do things? We can only hope that Dr. Sauperl will soon provide her own view of methods and techniques that could improve the flow of work or address the cataloguers' concern as to the lack of feedback on their work. Her several excellent suggestions for further research in this area all build on bits and pieces of what is done already, and stay well away from what could be done by the various actors in the area, from the designers of controlled vocabularies and authority files to those who use these tools on a daily basis to index, classify, or search for information."
  15. Caldera-Serrano, J.: Thematic description of audio-visual information on television (2010) 0.01
    0.0057826564 = product of:
      0.03469594 = sum of:
        0.012138106 = weight(_text_:information in 3953) [ClassicSimilarity], result of:
          0.012138106 = score(doc=3953,freq=6.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.20156369 = fieldWeight in 3953, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3953)
        0.02255783 = weight(_text_:system in 3953) [ClassicSimilarity], result of:
          0.02255783 = score(doc=3953,freq=2.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.20878783 = fieldWeight in 3953, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=3953)
      0.16666667 = coord(2/12)
    
    Abstract
    Purpose - This paper endeavours to show the possibilities for thematic description of audio-visual documents for television with the aim of promoting and facilitating information retrieval. Design/methodology/approach - To achieve these goals, different database fields are shown, analysed, and used as examples, as well as the way in which they are organised for indexing and thematic element description. Some of the database fields are extracted from an analytical study of the documentary system of television in Spain. Others are being tested in a university television service where indexing experiments are carried out. Findings - Not all television information systems use thematic descriptions; nevertheless, some television channels do use thematic descriptions of both image and sound, applying thesauri. Moreover, it is possible to access sequences using full text retrieval as well. Originality/value - The development of the documentary task, applying the described techniques, promotes thematic indexing and hence thematic retrieval, which is without doubt one of the aspects most demanded by television journalists (along with people's names). This conceptualisation translates into the adaptation of databases to new indexing methods.
  16. Rowe, N.C.: Inferring depictions in natural-language captions for efficient access to picture data (1994) 0.01
    0.005748899 = product of:
      0.034493394 = sum of:
        0.008175928 = weight(_text_:information in 7296) [ClassicSimilarity], result of:
          0.008175928 = score(doc=7296,freq=2.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.13576832 = fieldWeight in 7296, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7296)
        0.026317468 = weight(_text_:system in 7296) [ClassicSimilarity], result of:
          0.026317468 = score(doc=7296,freq=2.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.2435858 = fieldWeight in 7296, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7296)
      0.16666667 = coord(2/12)
    
    Abstract
    Multimedia data can require significant examination time to find desired features ('content analysis'). An alternative is using natural-language captions to describe the data, and matching captions to English queries. But it is hard to include everything in the caption of a complicated datum, so significant content analysis may still seem required. We discuss linguistic clues in captions, both syntactic and semantic, that can simplify or eliminate content analysis. We introduce the notion of content depiction and rules for depiction inference. Our approach is implemented in an expert system which demonstrated significant increases in recall in experiments.
    Source
    Information processing and management. 30(1994) no.3, S.379-388
  17. Solomon, P.: Access to fiction for children : a user-based assessment of options and opportunities (1997) 0.01
    0.005748899 = product of:
      0.034493394 = sum of:
        0.008175928 = weight(_text_:information in 5845) [ClassicSimilarity], result of:
          0.008175928 = score(doc=5845,freq=2.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.13576832 = fieldWeight in 5845, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5845)
        0.026317468 = weight(_text_:system in 5845) [ClassicSimilarity], result of:
          0.026317468 = score(doc=5845,freq=2.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.2435858 = fieldWeight in 5845, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5845)
      0.16666667 = coord(2/12)
    
    Abstract
    Reports on a study of children's intentions, purposes, search terms, strategies, successes and breakdowns in accessing fiction. Data was gathered using naturalistic methods of persistent, intensive observation and questioning with children in several school library media centres in the USA, including 997 OPAC transactions. Analyzes the data and highlights aspects of the broader context of the system which may help in development of mechanisms for electronic access
    Source
    Information services and use. 17(1997) nos.2/3, S.139-146
  18. Marsh, E.E.; White, M.D.: ¬A taxonomy of relationships between images and text (2003) 0.01
    0.005688411 = product of:
      0.034130465 = sum of:
        0.024219744 = weight(_text_:web in 4444) [ClassicSimilarity], result of:
          0.024219744 = score(doc=4444,freq=2.0), product of:
            0.111951075 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03430388 = queryNorm
            0.21634221 = fieldWeight in 4444, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4444)
        0.009910721 = weight(_text_:information in 4444) [ClassicSimilarity], result of:
          0.009910721 = score(doc=4444,freq=4.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.16457605 = fieldWeight in 4444, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4444)
      0.16666667 = coord(2/12)
    
    Abstract
    The paper establishes a taxonomy of image-text relationships that reflects the ways that images and text interact. It is applicable to all subject areas and document types. The taxonomy was developed to answer the research question: how does an illustration relate to the text with which it is associated, or, what are the functions of illustration? Developed in a two-stage process - first, analysis of relevant research in children's literature, dictionary development, education, journalism, and library and information design and, second, subsequent application of the first version of the taxonomy to 954 image-text pairs in 45 Web pages (pages with educational content for children, online newspapers, and retail business pages) - the taxonomy identifies 49 relationships and groups them in three categories according to the closeness of the conceptual relationship between image and text. The paper uses qualitative content analysis to illustrate use of the taxonomy to analyze four image-text pairs in government publications and discusses the implications of the research for information retrieval and document design.
  19. Allen, R.B.; Wu, Y.: Metrics for the scope of a collection (2005) 0.01
    0.005204614 = product of:
      0.031227682 = sum of:
        0.024219744 = weight(_text_:web in 4570) [ClassicSimilarity], result of:
          0.024219744 = score(doc=4570,freq=2.0), product of:
            0.111951075 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03430388 = queryNorm
            0.21634221 = fieldWeight in 4570, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4570)
        0.0070079383 = weight(_text_:information in 4570) [ClassicSimilarity], result of:
          0.0070079383 = score(doc=4570,freq=2.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.116372846 = fieldWeight in 4570, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4570)
      0.16666667 = coord(2/12)
    
    Abstract
    Some collections cover many topics, while others are narrowly focused on a limited number of topics. We introduce the concept of the "scope" of a collection of documents and we compare two ways of measuring it. These measures are based on the distances between documents. The first uses the overlap of words between pairs of documents. The second measure uses a novel method that calculates the semantic relatedness of pairs of words from the documents. Those values are combined to obtain an overall distance between the documents. The main validation for the measures compared Web pages categorized by Yahoo. Sets of pages sampled from broad categories were determined to have a higher scope than sets derived from subcategories. The measure was significant and confirmed the expected difference in scope. Finally, we discuss other measures related to scope.
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.12, S.1243-1249
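    The first ("word overlap") measure lends itself to a short sketch: compute a pairwise distance from vocabulary overlap and take the scope of a set as the mean pairwise distance. Jaccard distance and plain averaging are assumptions made for illustration; the paper's exact weighting may differ.

      from itertools import combinations

      def jaccard_distance(a, b):
          wa, wb = set(a.lower().split()), set(b.lower().split())
          return 1.0 - len(wa & wb) / len(wa | wb)

      def scope(docs):
          # Mean pairwise distance over all document pairs in the collection.
          pairs = list(combinations(docs, 2))
          return sum(jaccard_distance(a, b) for a, b in pairs) / len(pairs)

      broad = ["stars and galaxies", "tax law basics", "sourdough baking"]
      narrow = ["sourdough starter care", "sourdough baking times", "sourdough flour choice"]
      print(scope(broad) > scope(narrow))   # a broad set scores higher: True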
  20. Raieli, R.: The semantic hole : enthusiasm and caution around multimedia information retrieval (2012) 0.01
    0.005122834 = product of:
      0.030737001 = sum of:
        0.014304894 = weight(_text_:information in 4888) [ClassicSimilarity], result of:
          0.014304894 = score(doc=4888,freq=12.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.23754507 = fieldWeight in 4888, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4888)
        0.016432108 = product of:
          0.032864217 = sum of:
            0.032864217 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.032864217 = score(doc=4888,freq=4.0), product of:
                0.120126344 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03430388 = queryNorm
                0.27358043 = fieldWeight in 4888, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4888)
          0.5 = coord(1/2)
      0.16666667 = coord(2/12)
    
    Abstract
    This paper centres on the tools for the management of new digital documents, which are not only textual, but also visual-video, audio or multimedia in the full sense. Among the aims is to demonstrate that operating within the terms of generic Information Retrieval through textual language only is limiting, and that it is instead necessary to consider broader criteria, such as those of MultiMedia Information Retrieval, according to which every type of digital document can be analyzed and searched by the elements of language proper to its nature. MMIR is presented as the organic complex of the systems of Text Retrieval, Visual Retrieval, Video Retrieval, and Audio Retrieval, each of which has an approach to information management that handles the concrete textual, visual, audio, or video content of the documents directly, here defined as content-based. In conclusion, the limits of this content-based objective access to documents are underlined. The discrepancy known as the semantic gap is that which occurs between semantic-interpretive access and content-based access. Finally, the integration of these conceptions is explained, gathering and composing the merits and the advantages of each of the approaches and of the systems of access to information.
    Date
    22. 1.2012 13:02:10
    Footnote
    With reference to: Enser, P.G.B.: Visual image retrieval. In: Annual review of information science and technology. 42(2008), S.3-42.
    Source
    Knowledge organization. 39(2012) no.1, S.13-22

Languages

  • e 87
  • d 9

Types

  • a 88
  • m 4
  • d 2
  • el 2
  • s 1
  • x 1