Search (29 results, page 1 of 2)

  • theme_ss:"Inhaltsanalyse"
  1. Mayring, P.: Qualitative Inhaltsanalyse : Grundlagen und Techniken (1990) 0.01
    0.014466062 = product of:
      0.14466062 = sum of:
        0.14466062 = weight(_text_:kommunikation in 34) [ClassicSimilarity], result of:
          0.14466062 = score(doc=34,freq=6.0), product of:
            0.14706601 = queryWeight, product of:
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.028611459 = queryNorm
            0.9836441 = fieldWeight in 34, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.078125 = fieldNorm(doc=34)
      0.1 = coord(1/10)
    
    Abstract
    "Inhaltsanalyse will: Kommunikation analysieren, fixierte Kommunikation analysieren, dabei systematisch vorgehen, das heißt regelgeleitet vorgehen, das heißt auch theoriegeleitet vorgehen, mit dem Ziel, Rückschlüsse auf bestimmte Aspekte der Kommunikation zu ziehen" (S.11)
  2. Rosso, M.A.: User-based identification of Web genres (2008) 0.00
    0.0044538346 = product of:
      0.044538345 = sum of:
        0.044538345 = weight(_text_:web in 1863) [ClassicSimilarity], result of:
          0.044538345 = score(doc=1863,freq=14.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.47698978 = fieldWeight in 1863, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1863)
      0.1 = coord(1/10)
    
    Abstract
    This research explores the use of genre as a document descriptor in order to improve the effectiveness of Web searching. A major issue to be resolved is the identification of what document categories should be used as genres. As genre is a kind of folk typology, document categories must enjoy widespread recognition by their intended user groups in order to qualify as genres. Three user studies were conducted to develop a genre palette and show that it is recognizable to users. (Palette is a term used to denote a classification, attributable to Karlgren, Bretan, Dewe, Hallberg, and Wolkert, 1998.) To simplify the users' classification task, it was decided to focus on Web pages from the edu domain. The first study was a survey of user terminology for Web pages. Three participants separated 100 Web page printouts into stacks according to genre, assigning names and definitions to each genre. The second study aimed to refine the resulting set of 48 (often conceptually and lexically similar) genre names and definitions into a smaller palette of user-preferred terminology. Ten participants classified the same 100 Web pages. A set of five principles for creating a genre palette from individuals' sortings was developed, and the list of 48 was trimmed to 18 genres. The third study aimed to show that users would agree on the genres of Web pages when choosing from the genre palette. In an online experiment in which 257 participants categorized a new set of 55 pages using the 18 genres, on average, over 70% agreed on the genre of each page. Suggestions for improving the genre palette and future directions for the work are discussed.
  3. Thelwall, M.; Buckley, K.; Paltoglou, G.: Sentiment strength detection for the social web (2012) 0.00
    0.0041234493 = product of:
      0.041234493 = sum of:
        0.041234493 = weight(_text_:web in 4972) [ClassicSimilarity], result of:
          0.041234493 = score(doc=4972,freq=12.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.4416067 = fieldWeight in 4972, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4972)
      0.1 = coord(1/10)
    
    Abstract
    Sentiment analysis is concerned with the automatic extraction of sentiment-related information from text. Although most sentiment analysis addresses commercial tasks, such as extracting opinions from product reviews, there is increasing interest in the affective dimension of the social web, and Twitter in particular. Most sentiment analysis algorithms are not ideally suited to this task because they exploit indirect indicators of sentiment that can reflect genre or topic instead. Hence, such algorithms used to process social web texts can identify spurious sentiment patterns caused by topics rather than affective phenomena. This article assesses an improved version of the algorithm SentiStrength for sentiment strength detection across the social web that primarily uses direct indications of sentiment. The results from six diverse social web data sets (MySpace, Twitter, YouTube, Digg, Runners World, BBC Forums) indicate that SentiStrength 2 is successful in the sense of performing better than a baseline approach for all data sets in both supervised and unsupervised cases. SentiStrength is not always better than machine-learning approaches that exploit indirect indicators of sentiment, however, and is particularly weaker for positive sentiment in news-related discussions. Overall, the results suggest that, even unsupervised, SentiStrength is robust enough to be applied to a wide variety of different social web contexts.
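
     SentiStrength reports two values per text: a positive strength from 1 to 5 and a negative strength from -1 to -5, derived from direct indications of sentiment. The toy sketch below illustrates only that dual-scale, lexicon-driven idea, with an invented mini-lexicon; the real algorithm additionally handles boosters, negation, emoticons, and spelling variants.

       # Toy dual-scale sentiment strength in the spirit of SentiStrength.
       # The mini-lexicon is invented for illustration.
       LEXICON = {"love": 3, "great": 2, "happy": 2,
                  "hate": -4, "awful": -3, "sad": -2}

       def sentiment_strength(text):
           words = text.lower().split()
           hits = [LEXICON[w] for w in words if w in LEXICON]
           pos = max([1] + [h for h in hits if h > 0])   # strongest positive cue
           neg = min([-1] + [h for h in hits if h < 0])  # strongest negative cue
           return pos, neg

       print(sentiment_strength("I love this but the ending was awful"))  # (3, -3)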
  4. Bertola, F.; Patti, V.: Ontology-based affective models to organize artworks in the social semantic web (2016) 0.00
    0.0029157193 = product of:
      0.029157192 = sum of:
        0.029157192 = weight(_text_:web in 2669) [ClassicSimilarity], result of:
          0.029157192 = score(doc=2669,freq=6.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.3122631 = fieldWeight in 2669, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2669)
      0.1 = coord(1/10)
    
    Abstract
In this paper, we focus on applying sentiment analysis to resources from online art collections, exploiting as an information source the tags, understood as textual traces, that visitors leave when commenting on artworks on social platforms. We present a framework in which methods and tools from a set of disciplines, ranging from the Semantic and Social Web to Natural Language Processing, provide the building blocks for creating a semantic social space that organizes artworks according to an ontology of emotions. The ontology is inspired by Plutchik's circumplex model, a well-founded psychological model of human emotions. Users can be involved in the creation of the emotional space through a graphical interactive interface. The development of such a semantic space enables new ways of accessing and exploring art collections. The affective categorization model and the emotion detection output are encoded in W3C ontology languages. This gives us the twofold advantage of enabling tractable reasoning on detected emotions and related artworks, and of fostering the interoperability and integration of tools developed in the Semantic Web and Linked Data community. The proposal has been evaluated against a real-world case study, a dataset of tagged multimedia artworks from the ArsMeteo Italian online collection, and validated through a user study.
5. Laffal, J.: A concept analysis of Jonathan Swift's 'Tale of a tub' and 'Gulliver's travels' (1995) 0.00
    0.0022127612 = product of:
      0.022127612 = sum of:
        0.022127612 = product of:
          0.06638283 = sum of:
            0.06638283 = weight(_text_:29 in 6362) [ClassicSimilarity], result of:
              0.06638283 = score(doc=6362,freq=4.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.6595664 = fieldWeight in 6362, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6362)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Date
    8. 3.1997 10:05:29
    Source
    Computers and the humanities. 29(1995) no.5, S.339-361
  6. Martindale, C.; McKenzie, D.: On the utility of content analysis in author attribution : 'The federalist' (1995) 0.00
    0.0022127612 = product of:
      0.022127612 = sum of:
        0.022127612 = product of:
          0.06638283 = sum of:
            0.06638283 = weight(_text_:29 in 822) [ClassicSimilarity], result of:
              0.06638283 = score(doc=822,freq=4.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.6595664 = fieldWeight in 822, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=822)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Date
    8. 3.1997 10:05:29
    Source
    Computers and the humanities. 29(1995) no.4, S.259-270
  7. Gardin, J.C.: Document analysis and linguistic theory (1973) 0.00
    0.002086211 = product of:
      0.02086211 = sum of:
        0.02086211 = product of:
          0.06258633 = sum of:
            0.06258633 = weight(_text_:29 in 2387) [ClassicSimilarity], result of:
              0.06258633 = score(doc=2387,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.6218451 = fieldWeight in 2387, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.125 = fieldNorm(doc=2387)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Source
    Journal of documentation. 29(1973) no.2, S.137-168
8. Marsh, E.E.; White, M.D.: A taxonomy of relationships between images and text (2003) 0.00
    0.0020200694 = product of:
      0.020200694 = sum of:
        0.020200694 = weight(_text_:web in 4444) [ClassicSimilarity], result of:
          0.020200694 = score(doc=4444,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.21634221 = fieldWeight in 4444, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4444)
      0.1 = coord(1/10)
    
    Abstract
    The paper establishes a taxonomy of image-text relationships that reflects the ways that images and text interact. It is applicable to all subject areas and document types. The taxonomy was developed to answer the research question: how does an illustration relate to the text with which it is associated, or, what are the functions of illustration? Developed in a two-stage process - first, analysis of relevant research in children's literature, dictionary development, education, journalism, and library and information design and, second, subsequent application of the first version of the taxonomy to 954 image-text pairs in 45 Web pages (pages with educational content for children, online newspapers, and retail business pages) - the taxonomy identifies 49 relationships and groups them in three categories according to the closeness of the conceptual relationship between image and text. The paper uses qualitative content analysis to illustrate use of the taxonomy to analyze four image-text pairs in government publications and discusses the implications of the research for information retrieval and document design.
  9. Allen, R.B.; Wu, Y.: Metrics for the scope of a collection (2005) 0.00
    0.0020200694 = product of:
      0.020200694 = sum of:
        0.020200694 = weight(_text_:web in 4570) [ClassicSimilarity], result of:
          0.020200694 = score(doc=4570,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.21634221 = fieldWeight in 4570, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4570)
      0.1 = coord(1/10)
    
    Abstract
Some collections cover many topics, while others are narrowly focused on a limited number of topics. We introduce the concept of the "scope" of a collection of documents and compare two ways of measuring it. These measures are based on the distances between documents. The first uses the overlap of words between pairs of documents. The second uses a novel method that calculates the semantic relatedness of pairs of words from the documents; those values are combined to obtain an overall distance between the documents. The main validation of the measures compared Web pages categorized by Yahoo. Sets of pages sampled from broad categories were determined to have a higher scope than sets derived from subcategories. The measure was significant and confirmed the expected difference in scope. Finally, we discuss other measures related to scope.
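
     The first measure rests on word overlap between document pairs. A toy sketch of that idea under stated assumptions (Jaccard distance over word sets, scope as the mean pairwise distance; the paper's exact formulas may differ):

       from itertools import combinations

       def word_overlap_distance(a, b):
           # 1 - |intersection| / |union| over the two documents' word sets.
           wa, wb = set(a.lower().split()), set(b.lower().split())
           return 1.0 - len(wa & wb) / len(wa | wb)

       def scope(docs):
           # Mean pairwise distance: higher values = broader collection.
           pairs = list(combinations(docs, 2))
           return sum(word_overlap_distance(a, b) for a, b in pairs) / len(pairs)

       broad  = ["stock markets fell", "new vaccine approved", "team wins final"]
       narrow = ["stock markets fell", "stock markets rose", "stock markets flat"]
       print(scope(broad), scope(narrow))   # the broad set scores higher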
  10. Xie, H.; Li, X.; Wang, T.; Lau, R.Y.K.; Wong, T.-L.; Chen, L.; Wang, F.L.; Li, Q.: Incorporating sentiment into tag-based user profiles and resource profiles for personalized search in folksonomy (2016) 0.00
    0.0013467129 = product of:
      0.013467129 = sum of:
        0.013467129 = weight(_text_:web in 2671) [ClassicSimilarity], result of:
          0.013467129 = score(doc=2671,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.14422815 = fieldWeight in 2671, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=2671)
      0.1 = coord(1/10)
    
    Abstract
In recent years, there has been a rapid growth of user-generated data in collaborative tagging (a.k.a. folksonomy-based) systems due to the prevalence of Web 2.0 communities. To effectively assist users in finding their desired resources, it is critical to understand user behaviors and preferences. Tag-based profile techniques, which model users and resources by a vector of relevant tags, are widely employed in folksonomy-based systems. This is mainly because personalized search and recommendations can be facilitated by measuring relevance between user profiles and resource profiles. However, conventional measurements neglect the sentiment aspect of user-generated tags. In fact, tags can be very emotional and subjective, as users usually express their perceptions and feelings about the resources through tags. Therefore, it is necessary to take sentiment relevance into account in these measurements. In this paper, we present a novel generic framework, SenticRank, which incorporates various kinds of sentiment information into personalized search based on user profiles and resource profiles. In this framework, content-based sentiment ranking and collaborative sentiment ranking methods are proposed to obtain sentiment-based personalized ranking. To the best of our knowledge, this is the first work to integrate sentiment information into personalized tag-based search in collaborative tagging systems. Moreover, we compare the proposed sentiment-based personalized search with baselines in experiments whose results verify the effectiveness of the proposed framework. In addition, we study the influence of popular sentiment dictionaries and find SenticNet to be the most effective knowledge base for boosting the performance of personalized search in folksonomy.
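
     Tag-based profiles model users and resources as vectors of tag weights, so relevance reduces to a vector similarity between the two profiles. A minimal sketch of that baseline idea (plain cosine over sparse tag vectors; the profiles are invented, and SenticRank's sentiment weighting is not reproduced here):

       import math

       def cosine(profile_a, profile_b):
           # Cosine similarity between two sparse tag-weight vectors (dicts).
           dot = sum(w * profile_b.get(tag, 0.0) for tag, w in profile_a.items())
           na = math.sqrt(sum(w * w for w in profile_a.values()))
           nb = math.sqrt(sum(w * w for w in profile_b.values()))
           return dot / (na * nb) if na and nb else 0.0

       user     = {"jazz": 3.0, "vinyl": 1.0, "relaxing": 2.0}
       resource = {"jazz": 5.0, "live": 1.0, "relaxing": 1.0}
       print(round(cosine(user, resource), 3))  # higher = more relevant to this user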
  11. Pejtersen, A.M.: Design of a classification scheme for fiction based on an analysis of actual user-librarian communication, and use of the scheme for control of librarians' search strategies (1980) 0.00
    0.0012921527 = product of:
      0.012921526 = sum of:
        0.012921526 = product of:
          0.038764577 = sum of:
            0.038764577 = weight(_text_:22 in 5835) [ClassicSimilarity], result of:
              0.038764577 = score(doc=5835,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.38690117 = fieldWeight in 5835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5835)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Date
    5. 8.2006 13:22:44
  12. Bade, D.: ¬The creation and persistence of misinformation in shared library catalogs : language and subject knowledge in a technological era (2002) 0.00
    0.0010384138 = product of:
      0.010384139 = sum of:
        0.010384139 = product of:
          0.015576207 = sum of:
            0.007823291 = weight(_text_:29 in 1858) [ClassicSimilarity], result of:
              0.007823291 = score(doc=1858,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.07773064 = fieldWeight in 1858, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1858)
            0.007752916 = weight(_text_:22 in 1858) [ClassicSimilarity], result of:
              0.007752916 = score(doc=1858,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.07738023 = fieldWeight in 1858, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1858)
          0.6666667 = coord(2/3)
      0.1 = coord(1/10)
    
    Date
    22. 9.1997 19:16:05
    Footnote
Arguing that catalogers need to work both quickly and accurately, Bade maintains that employing specialists is the most efficient and effective way to achieve this outcome. Far less compelling than these arguments are Bade's concluding remarks, in which he offers meager suggestions for correcting the problems as he sees them. Overall, this essay is little more than a curmudgeon's diatribe. Addressed primarily to catalogers and library administrators, the analysis presented is too superficial to assist practicing catalogers or cataloging managers in developing solutions to any systemic problems in current cataloging practice, and it presents too little evidence of pervasive problems to convince budget-conscious library administrators of a need to alter practice or to increase their investment in local cataloging operations. Indeed, the reliance upon anecdotal evidence and the apparent nit-picking that dominate the essay might tend to reinforce a negative image of catalogers in the minds of some. To his credit, Bade does provide an important reminder that it is the intellectual contributions made by thousands of erudite catalogers that have made shared cataloging a successful strategy for improving cataloging efficiency. This is an important point that often seems to be forgotten in academic libraries when focus centers on cutting costs. Had Bade focused more narrowly upon the issue of deintellectualization of cataloging and written a carefully structured essay to advance this argument, this essay might have been much more effective." - KO 29(2002) nos.3/4, S.236-237 (A. Sauperl)
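
     Entry 12 shows how scores from several matching terms combine: the per-term weights (for "29" and "22") are summed, scaled by coord(2/3) because two of the three clauses in the sub-query matched, and then by the outer coord(1/10). A companion sketch to the one under result 1, reproducing the value shown:

       import math

       def term_weight(doc_freq, max_docs, freq, field_norm, query_norm=0.028611459):
           # queryWeight * fieldWeight for one term, as in the explain tree.
           idf = 1.0 + math.log(max_docs / (doc_freq + 1))
           return (idf * query_norm) * (math.sqrt(freq) * idf * field_norm)

       w29 = term_weight(3565, 44218, 2.0, 0.015625)  # ~0.007823291
       w22 = term_weight(3622, 44218, 2.0, 0.015625)  # ~0.007752916

       score = (w29 + w22) * (2 / 3) * (1 / 10)       # coord(2/3), then coord(1/10)
       print(score)                                   # ~0.0010384138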
  13. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.00
    0.0010337222 = product of:
      0.010337221 = sum of:
        0.010337221 = product of:
          0.031011663 = sum of:
            0.031011663 = weight(_text_:22 in 5830) [ClassicSimilarity], result of:
              0.031011663 = score(doc=5830,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.30952093 = fieldWeight in 5830, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5830)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Date
    5. 8.2006 13:22:08
  14. Hauff-Hartig, S.: Automatische Transkription von Videos : Fernsehen 3.0: Automatisierte Sentimentanalyse und Zusammenstellung von Kurzvideos mit hohem Aufregungslevel KI-generierte Metadaten: Von der Technologiebeobachtung bis zum produktiven Einsatz (2021) 0.00
    0.0010337222 = product of:
      0.010337221 = sum of:
        0.010337221 = product of:
          0.031011663 = sum of:
            0.031011663 = weight(_text_:22 in 251) [ClassicSimilarity], result of:
              0.031011663 = score(doc=251,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.30952093 = fieldWeight in 251, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=251)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Date
    22. 5.2021 12:43:05
15. Raieli, R.: The semantic hole : enthusiasm and caution around multimedia information retrieval (2012) 0.00
    9.1368984E-4 = product of:
      0.009136898 = sum of:
        0.009136898 = product of:
          0.027410695 = sum of:
            0.027410695 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.027410695 = score(doc=4888,freq=4.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.27358043 = fieldWeight in 4888, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4888)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Date
    22. 1.2012 13:02:10
    Source
    Knowledge organization. 39(2012) no.1, S.13-22
  16. Hjoerland, B.: Towards a theory of aboutness, subject, topicality, theme, domain, field, content ... and relevance (2001) 0.00
    9.1271737E-4 = product of:
      0.009127174 = sum of:
        0.009127174 = product of:
          0.027381519 = sum of:
            0.027381519 = weight(_text_:29 in 6032) [ClassicSimilarity], result of:
              0.027381519 = score(doc=6032,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.27205724 = fieldWeight in 6032, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6032)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Date
    29. 9.2001 14:03:14
17. Chen, H.: An analysis of image queries in the field of art history (2001) 0.00
    9.1271737E-4 = product of:
      0.009127174 = sum of:
        0.009127174 = product of:
          0.027381519 = sum of:
            0.027381519 = weight(_text_:29 in 5187) [ClassicSimilarity], result of:
              0.027381519 = score(doc=5187,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.27205724 = fieldWeight in 5187, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5187)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Abstract
Chen arranged with an Art History instructor to require 20 medieval art images in papers received from 29 students. Participants completed a self-administered presearch and postsearch questionnaire, and were interviewed after questionnaire analysis, in order to collect both the keywords and phrases they planned to use and those actually used. Three MLIS student reviewers then mapped the queries to Enser and McGregor's four categories, Jorgensen's 12 classes, and Fidel's 12 feature data and object poles, providing a degree of match on a seven-point scale (1 = not at all, 7 = exact). The reviewers gave the highest scores to Enser and McGregor's categories. Modifications to both the Enser and McGregor and the Jorgensen schemes are suggested.
  18. Marshall, L.: Specific and generic subject headings : increasing subject access to library materials (2003) 0.00
    9.1271737E-4 = product of:
      0.009127174 = sum of:
        0.009127174 = product of:
          0.027381519 = sum of:
            0.027381519 = weight(_text_:29 in 5497) [ClassicSimilarity], result of:
              0.027381519 = score(doc=5497,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.27205724 = fieldWeight in 5497, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5497)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Date
    30. 7.2006 14:29:04
  19. Shatford, S.: Analyzing the subject of a picture : a theoretical approach (1986) 0.00
    9.1271737E-4 = product of:
      0.009127174 = sum of:
        0.009127174 = product of:
          0.027381519 = sum of:
            0.027381519 = weight(_text_:29 in 354) [ClassicSimilarity], result of:
              0.027381519 = score(doc=354,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.27205724 = fieldWeight in 354, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=354)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Date
    7. 1.2007 13:00:29
  20. Früh, W.: Inhaltsanalyse (2001) 0.00
    7.823291E-4 = product of:
      0.007823291 = sum of:
        0.007823291 = product of:
          0.023469873 = sum of:
            0.023469873 = weight(_text_:29 in 1751) [ClassicSimilarity], result of:
              0.023469873 = score(doc=1751,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.23319192 = fieldWeight in 1751, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1751)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Date
    24. 3.2008 12:29:34

Languages

  • e (English) 26
  • d (German) 3

Types

  • a (article) 25
  • m (monograph) 4
  • el (electronic resource) 1