Search (35 results, page 1 of 2)

  • theme_ss:"Inhaltsanalyse"
  1. Raieli, R.: The semantic hole : enthusiasm and caution around multimedia information retrieval (2012) 0.05
    Abstract
     This paper centres on the tools for the management of new digital documents, which are not only textual, but also visual-video, audio or multimedia in the full sense. Among the aims is to demonstrate that operating within the terms of generic Information Retrieval through textual language only is limiting, and that it is instead necessary to consider broader criteria, such as those of MultiMedia Information Retrieval, according to which every type of digital document can be analyzed and searched through the elements of language proper to its nature. MMIR is presented as the organic complex of the systems of Text Retrieval, Visual Retrieval, Video Retrieval, and Audio Retrieval, each of which has an approach to information management that handles the concrete textual, visual, audio, or video content of the documents directly, here defined as content-based. In conclusion, the limits of this content-based objective access to documents are underlined. The discrepancy known as the semantic gap is that which occurs between semantic-interpretive access and content-based access. Finally, the integration of these conceptions is explained, gathering and composing the merits and the advantages of each of the approaches and of the systems of access to information.
    Date
    22. 1.2012 13:02:10
    Source
    Knowledge organization. 39(2012) no.1, S.13-22
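The relevance value shown beside each result is a Lucene ClassicSimilarity (tf-idf) score. A minimal sketch of how a single term's contribution is computed, using the tf, idf and norm figures the engine reports for the term "management" in this first result (the helper function is an illustration, not actual Lucene code):

```python
import math

def classic_term_score(freq, doc_freq, max_docs, field_norm, query_norm):
    """One term's contribution under Lucene ClassicSimilarity:
    score = queryWeight * fieldWeight, where queryWeight = idf * queryNorm
    and fieldWeight = tf * idf * fieldNorm."""
    tf = math.sqrt(freq)                             # 4 occurrences -> 2.0
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # ~3.37 for this term
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

# Figures the engine reports for _text_:management in result 1:
score = classic_term_score(freq=4.0, doc_freq=4130, max_docs=44218,
                           field_norm=0.0390625, query_norm=0.051362853)
# score is roughly 0.0456
```

Lucene then sums the per-term contributions and multiplies by a coord() factor for the fraction of query terms matched, which is how values like this combine into the per-document totals shown.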
  2. Rowe, N.C.: Inferring depictions in natural-language captions for efficient access to picture data (1994) 0.04
    Abstract
     Multimedia data can require significant examination time to find desired features ('content analysis'). An alternative is using natural-language captions to describe the data, and matching captions to English queries. But it is hard to include everything in the caption of a complicated datum, so significant content analysis may still seem required. We discuss linguistic clues in captions, both syntactic and semantic, that can simplify or eliminate content analysis. We introduce the notion of content depiction and rules for depiction inference. Our approach is implemented in an expert system that demonstrated significant increases in recall in experiments.
    Source
    Information processing and management. 30(1994) no.3, S.379-388
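As a rough illustration of the kind of caption-based depiction inference the abstract describes (the rules and vocabulary below are invented assumptions, not taken from the paper's actual rule set):

```python
# Toy depiction inference over natural-language captions: simple lexical
# rules decide which nouns in a caption denote objects actually shown in
# the picture. Rules and vocabulary are illustrative assumptions only.

# Nouns introduced by these prepositions name a setting typically depicted
# along with the subject (e.g. "ship at pier" shows the pier too).
DEPICTIVE_PREPOSITIONS = {"at", "on", "in", "near", "beside"}

def infer_depictions(caption_words):
    """Return the set of words inferred to be depicted in the image."""
    depicted = set()
    if caption_words:
        depicted.add(caption_words[0])  # assume the first noun is the subject
    for i, word in enumerate(caption_words[:-1]):
        if word in DEPICTIVE_PREPOSITIONS:
            depicted.add(caption_words[i + 1])
        # nouns after temporal markers like "before" are mentioned, not shown
    return depicted

caption = "ship at pier before repainting".split()
# infer_depictions(caption) yields {"ship", "pier"}; "repainting" is excluded
```

The point of such rules is the one the abstract makes: linguistic clues alone can often decide what a picture shows, avoiding costly content analysis of the image itself.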
  3. Morehead, D.R.; Pejtersen, A.M.; Rouse, W.B.: ¬The value of information and computer-aided information seeking : problem formulation and application to fiction retrieval (1984) 0.04
    Abstract
    Issues concerning the formulation and application of a model of how humans value information are examined. Formulation of a value function is based on research from modelling, value assessment, human information seeking behavior, and human decision making. The proposed function is incorporated into a computer-based fiction retrieval system and evaluated using data from nine searches. Evaluation is based on the ability of an individual's value function to discriminate among novels selected, rejected, and not considered. The results are discussed in terms of both formulation and utilization of a value function as well as the implications for extending the proposed formulation to other information seeking environments
    Source
    Information processing and management. 20(1984), S.583-601
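The paper's actual value function is not reproduced here; as a hedged sketch, the idea of a value function that discriminates selected from rejected novels can be illustrated as a weighted match against a user's stated criteria (all attribute names and weights below are invented):

```python
# Toy information-value function for fiction retrieval: score each novel
# by a weighted sum over the user criteria its attributes satisfy, so
# selected titles should score higher than rejected ones.
def value(novel, weights):
    """Weighted sum of the user criteria matched by the novel's attributes."""
    return sum(w for attr, w in weights.items() if attr in novel["attributes"])

# Hypothetical user profile: criterion -> importance weight.
weights = {"historical": 0.5, "strong-plot": 0.3, "scandinavian": 0.2}

selected = {"title": "A", "attributes": {"historical", "strong-plot"}}
rejected = {"title": "B", "attributes": {"romance"}}
# value(selected, weights) exceeds value(rejected, weights)
```

Evaluation in the paper's spirit would then check whether such scores separate the novels a reader selected from those rejected or never considered.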
  4. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.04
    Abstract
     This paper examines various issues that arise in establishing a theoretical basis for an experimental fiction analysis system. It analyzes the warrants of fiction and of works about fiction. From this analysis, it derives classificatory requirements for a fiction system. Classificatory techniques that may contribute to the specification of data elements in fiction are suggested.
    Date
    5. 8.2006 13:22:08
  5. Mai, J.-E.: Analysis in indexing : document and domain centered approaches (2005) 0.02
    Source
    Information processing and management. 41(2005) no.3, S.599-611
  6. Sauperl, A.: Subject determination during the cataloging process : the development of a system based on theoretical principles (2002) 0.01
    Date
    27. 9.2005 14:22:19
    Footnote
     Rez. in: Knowledge organization 30(2003) no.2, S.114-115 (M. Hudon); "This most interesting contribution to the literature of subject cataloguing originates in the author's doctoral dissertation, prepared under the direction of Jerry Saye at the University of North Carolina at Chapel Hill. In seven highly readable chapters, Alenka Sauperl develops possible answers to her principal research question: How do cataloguers determine or identify the topic of a document and choose appropriate subject representations? Specific questions at the source of this research on a process which has not been a frequent object of study include: Where do cataloguers look for an overall sense of what a document is about? How do they get an overall sense of what a document is about, especially when they are not familiar with the discipline? Do they consider only one or several possible interpretations? How do they translate meanings into appropriate and valid class numbers and subject headings? Using a strictly qualitative methodology, Dr. Sauperl's research is a study of twelve cataloguers in real-life situations. The author insists on a holistic rather than purely theoretical understanding of the process she is targeting. Participants in the study were professional cataloguers with at least one year's experience in their current job at one of three large academic libraries in the Southeastern United States. All three libraries have a large central cataloguing department, and use OCLC sources and the same automated system; the context of cataloguing tasks is thus considered to be reasonably comparable. All participants were volunteers in this study, which combined two data-gathering techniques: the think-aloud method and time-line interviews. 
A model of the subject cataloguing process was first developed from observations of a group of six cataloguers who were asked to independently perform original cataloguing on three nonfiction, non-serial items selected from materials regularly assigned to them for processing. The model was then used for follow-up interviews. Each participant in the second group of cataloguers was invited to reflect on his/her work process for a recent challenging document they had catalogued. Results are presented in 12 stories describing as many personal approaches to subject cataloguing. From these stories a summarization is offered, and a theoretical model of subject cataloguing is developed which, according to the author, represents a realistic approach to subject cataloguing. Stories alternate comments from the researcher and direct quotations from the observed or interviewed cataloguers. Not surprisingly, the participants' stories reveal similarities in the sequence and accomplishment of several tasks in the process of subject cataloguing. Sauperl's proposed model, described in Chapter 5, includes as main stages: 1) Examination of the book and subject identification; 2) Search for subject headings; 3) Classification. Chapter 6 is a hypothetical case study, using the proposed model to describe the various stages of cataloguing a hypothetical resource. ...
  7. Merrill, W.S.: Code for classifiers : principles governing the consistent placing of books in a system of classification (1969) 0.01
  8. Amac, T.: Linguistic context analysis : a new approach to communication evaluation (1997) 0.01
    Abstract
     Argues that the integration of computational psycholinguistics can improve corporate communication and thus become a new strategic tool. An electronic dictionary was created of basic, neutral and negative connotations for nouns, verbs and adjectives appearing in press releases and other communication media, which can be updated with client-specific words. The focus on negative messages has the objective of detecting who, why and how publics are criticized, to learn from the vocabulary of opinion leaders and to improve issues management proactively. Suggests a new form of analysis called 'computational linguistic context analysis' (CLCA), which analyzes nominal groups of negative words rather than performing content analysis in the traditional way. Concludes that CLCA can be used to analyze large quantities of press cuttings about a company and could, theoretically, be used to analyze the structure, language and style of a particular journalist to whom it is planned to send a press release or article.
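The dictionary-based connotation analysis summarized above can be sketched as a toy tagger; the lexicon entries and the example sentence here are invented for illustration, not taken from the paper:

```python
# Toy sketch of dictionary-based connotation analysis: tag each word of a
# press-release sentence with a connotation from a small lexicon, then
# measure the share of negative vocabulary. Lexicon entries are invented.
CONNOTATIONS = {
    "failure": "negative",
    "delay": "negative",
    "growth": "basic",
    "launch": "neutral",
}

def tag_connotations(words):
    """Return (word, connotation) pairs; unknown words default to 'neutral'."""
    return [(w, CONNOTATIONS.get(w.lower(), "neutral")) for w in words]

def negative_share(words):
    """Fraction of words carrying a negative connotation."""
    if not words:
        return 0.0
    return sum(1 for _, c in tag_connotations(words) if c == "negative") / len(words)

sentence = "Analysts blame the delay and failure on management".split()
# negative_share(sentence) is 0.25 (2 negative words out of 8)
```

A CLCA-style analysis in the paper's sense would go further, grouping the negative words into nominal groups to see who is criticized and why, rather than just counting them.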
  9. Pejtersen, A.M.: Design of a classification scheme for fiction based on an analysis of actual user-librarian communication, and use of the scheme for control of librarians' search strategies (1980) 0.01
    Date
    5. 8.2006 13:22:44
  10. Nahotko, M.: Genre groups in knowledge organization (2016) 0.01
    Abstract
    The article is an introduction to the development of Andersen's concept of textual tools used in knowledge organization (KO) in light of the theory of genres and activity systems. In particular, the question is based on the concepts of genre connectivity and genre group, in addition to previously established concepts such as genre hierarchy, set, system, and repertoire. Five genre groups used in KO are described. The analysis of groups, systems, and selected genres used in KO is provided, based on the method proposed by Yates and Orlikowski. The aim is to show the genre system as a part of the activity system, and thus as a framework for KO.
  11. Austin, J.; Pejtersen, A.M.: Fiction retrieval: experimental design and evaluation of a search system based on user's value criteria. Pt.1 (1983) 0.01
  12. Pejtersen, A.M.: Design of a computer-aided user-system dialogue based on an analysis of users' search behaviour (1984) 0.01
  13. Rorissa, A.: User-generated descriptions of individual images versus labels of groups of images : a comparison using basic level theory (2008) 0.01
    Source
    Information processing and management. 44(2008) no.5, S.1741-1753
  14. Saif, H.; He, Y.; Fernandez, M.; Alani, H.: Contextual semantics for sentiment analysis of Twitter (2016) 0.01
    Source
    Information processing and management. 52(2016) no.1, S.5-19
  15. Bertola, F.; Patti, V.: Ontology-based affective models to organize artworks in the social semantic web (2016) 0.01
    Source
    Information processing and management. 52(2016) no.1, S.139-162
  16. Greisdorf, H.; O'Connor, B.: Modelling what users see when they look at images : a cognitive viewpoint (2002) 0.01
    Abstract
     Analysis of user viewing and query-matching behavior furnishes additional evidence that the relevance of retrieved images for system users may arise from descriptions of objects and content-based elements that are not evident, or not even present, in the image. This investigation looks at how users assign pre-determined query terms to retrieved images, and at a post-retrieval process in which engagement with an image leads to users' cognitive assessments of meaningful terms. Additionally, affective/emotion-based query terms appear to be an important descriptive category for image retrieval. A system for capturing (eliciting) human interpretations derived from cognitive engagements with viewed images could further enhance the efficiency of image retrieval systems stemming from traditional indexing methods and technology-based content extraction algorithms. An approach to such a system is posited.
  17. Beghtol, C.: The classification of fiction : the development of a system based on theoretical principles (1994) 0.01
    Abstract
    The work is an adaptation of the author's dissertation and has the following chapters: (1) background and introduction; (2) a problem in classification theory; (3) previous fiction analysis theories and systems and 'The left hand of darkness'; (4) fiction warrant and critical warrant; (5) experimental fiction analysis system (EFAS); (6) application and evaluation of EFAS. Appendix 1 gives references to fiction analysis systems and appendix 2 lists EFAS coding sheets
  18. Hauff-Hartig, S.: Automatische Transkription von Videos : Fernsehen 3.0: Automatisierte Sentimentanalyse und Zusammenstellung von Kurzvideos mit hohem Aufregungslevel KI-generierte Metadaten: Von der Technologiebeobachtung bis zum produktiven Einsatz (2021) 0.01
    Date
    22. 5.2021 12:43:05
  19. Xie, H.; Li, X.; Wang, T.; Lau, R.Y.K.; Wong, T.-L.; Chen, L.; Wang, F.L.; Li, Q.: Incorporating sentiment into tag-based user profiles and resource profiles for personalized search in folksonomy (2016) 0.01
    Source
    Information processing and management. 52(2016) no.1, S.61-72
  20. Weimer, K.H.: The nexus of subject analysis and bibliographic description : the case of multipart videos (1996) 0.01
    Source
    Cataloging and classification quarterly. 22(1996) no.2, S.5-18