Search (49 results, page 1 of 3)

  • theme_ss:"Inhaltsanalyse"
  1. White, M.D.; Marsh, E.E.: Content analysis : a flexible methodology (2006) 0.07
    0.06548494 = product of:
      0.16371235 = sum of:
        0.060152818 = weight(_text_:wide in 5589) [ClassicSimilarity], result of:
          0.060152818 = score(doc=5589,freq=2.0), product of:
            0.20479609 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046221454 = queryNorm
            0.29372054 = fieldWeight in 5589, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=5589)
        0.10355954 = sum of:
          0.06598533 = weight(_text_:research in 5589) [ClassicSimilarity], result of:
            0.06598533 = score(doc=5589,freq=14.0), product of:
              0.13186905 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.046221454 = queryNorm
              0.5003853 = fieldWeight in 5589, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.046875 = fieldNorm(doc=5589)
          0.037574213 = weight(_text_:22 in 5589) [ClassicSimilarity], result of:
            0.037574213 = score(doc=5589,freq=2.0), product of:
              0.16185966 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046221454 = queryNorm
              0.23214069 = fieldWeight in 5589, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5589)
      0.4 = coord(2/5)
    
    Abstract
    Content analysis is a highly flexible research method that has been widely used in library and information science (LIS) studies with varying research goals and objectives. The research method is applied in qualitative, quantitative, and sometimes mixed modes of research frameworks and employs a wide range of analytical techniques to generate findings and put them into context. This article characterizes content analysis as a systematic, rigorous approach to analyzing documents obtained or generated in the course of research. It briefly describes the steps involved in content analysis, differentiates between quantitative and qualitative content analysis, and shows that content analysis serves the purposes of both quantitative research and qualitative research. The authors draw on selected LIS studies that have used content analysis to illustrate the concepts addressed in the article. The article also serves as a gateway to methodological books and articles that provide more detail about aspects of content analysis discussed only briefly in the article.
    Source
    Library trends. 55(2006) no.1, S.22-45
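The indented breakdowns under each hit are Lucene "explain" trees for the classic TF-IDF similarity. A minimal sketch, assuming Lucene's ClassicSimilarity formulas (tf = sqrt(freq), idf = ln(maxDocs/(docFreq+1)) + 1, score = queryWeight * fieldWeight, with the per-entry total scaled by the coord factor), reproduces the first result's numbers:

```python
import math

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """One term's contribution, per Lucene ClassicSimilarity:
    score = queryWeight * fieldWeight
          = (idf * queryNorm) * (tf * idf * fieldNorm)"""
    tf = math.sqrt(freq)
    idf = math.log(max_docs / (doc_freq + 1)) + 1.0
    return (idf * query_norm) * (tf * idf * field_norm)

# weight(_text_:wide in 5589) from the first result above
wide = term_score(freq=2.0, doc_freq=1430, max_docs=44218,
                  query_norm=0.046221454, field_norm=0.046875)

# Entry score: sum of matched clauses, scaled by coord(2/5)
entry = (wide + 0.10355954) * (2 / 5)
print(round(wide, 6), round(entry, 6))  # ≈ 0.060153 0.065485
```

The same arithmetic applies to every explain tree in the listing; only freq, docFreq, and fieldNorm vary per term and document.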
  2. Thelwall, M.; Buckley, K.; Paltoglou, G.: Sentiment strength detection for the social web (2012) 0.05
    0.046696465 = product of:
      0.11674116 = sum of:
        0.05012735 = weight(_text_:wide in 4972) [ClassicSimilarity], result of:
          0.05012735 = score(doc=4972,freq=2.0), product of:
            0.20479609 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046221454 = queryNorm
            0.24476713 = fieldWeight in 4972, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4972)
        0.06661381 = weight(_text_:web in 4972) [ClassicSimilarity], result of:
          0.06661381 = score(doc=4972,freq=12.0), product of:
            0.1508442 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046221454 = queryNorm
            0.4416067 = fieldWeight in 4972, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4972)
      0.4 = coord(2/5)
    
    Abstract
    Sentiment analysis is concerned with the automatic extraction of sentiment-related information from text. Although most sentiment analysis addresses commercial tasks, such as extracting opinions from product reviews, there is increasing interest in the affective dimension of the social web, and Twitter in particular. Most sentiment analysis algorithms are not ideally suited to this task because they exploit indirect indicators of sentiment that can reflect genre or topic instead. Hence, such algorithms used to process social web texts can identify spurious sentiment patterns caused by topics rather than affective phenomena. This article assesses an improved version of the algorithm SentiStrength for sentiment strength detection across the social web that primarily uses direct indications of sentiment. The results from six diverse social web data sets (MySpace, Twitter, YouTube, Digg, Runners World, BBC Forums) indicate that SentiStrength 2 is successful in the sense of performing better than a baseline approach for all data sets in both supervised and unsupervised cases. SentiStrength is not always better than machine-learning approaches that exploit indirect indicators of sentiment, however, and is particularly weaker for positive sentiment in news-related discussions. Overall, the results suggest that, even unsupervised, SentiStrength is robust enough to be applied to a wide variety of different social web contexts.
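SentiStrength reports sentiment on a dual scale: a positive strength (1 to 5) and a negative strength (-1 to -5) for each text. The sketch below illustrates only that dual-scale idea with an invented mini-lexicon; it is not the published algorithm, which additionally handles boosters, negation, emoticons, and misspellings:

```python
# Invented mini-lexicon for illustration; values are strengths on 1..5 / -1..-5.
LEXICON = {"love": 4, "great": 3, "good": 2, "bad": -3, "hate": -5, "awful": -4}

def sentiment_strength(text: str) -> tuple[int, int]:
    """Return (positive, negative) strengths; (1, -1) means neutral."""
    pos, neg = 1, -1
    for word in text.lower().split():
        s = LEXICON.get(word.strip(".,!?"), 0)
        if s > 0:
            pos = max(pos, s)   # strongest positive term wins
        elif s < 0:
            neg = min(neg, s)   # strongest negative term wins
    return pos, neg

print(sentiment_strength("I love this but the ending was bad"))  # (4, -3)
```

Reporting both poles at once, rather than a single polarity, is what lets mixed texts like the example keep their positive and negative signals separate.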
  3. Rosso, M.A.: User-based identification of Web genres (2008) 0.03
    0.032937143 = product of:
      0.082342856 = sum of:
        0.07195114 = weight(_text_:web in 1863) [ClassicSimilarity], result of:
          0.07195114 = score(doc=1863,freq=14.0), product of:
            0.1508442 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046221454 = queryNorm
            0.47698978 = fieldWeight in 1863, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1863)
        0.010391714 = product of:
          0.020783428 = sum of:
            0.020783428 = weight(_text_:research in 1863) [ClassicSimilarity], result of:
              0.020783428 = score(doc=1863,freq=2.0), product of:
                0.13186905 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046221454 = queryNorm
                0.15760657 = fieldWeight in 1863, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1863)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This research explores the use of genre as a document descriptor in order to improve the effectiveness of Web searching. A major issue to be resolved is the identification of what document categories should be used as genres. As genre is a kind of folk typology, document categories must enjoy widespread recognition by their intended user groups in order to qualify as genres. Three user studies were conducted to develop a genre palette and show that it is recognizable to users. (Palette is a term used to denote a classification, attributable to Karlgren, Bretan, Dewe, Hallberg, and Wolkert, 1998.) To simplify the users' classification task, it was decided to focus on Web pages from the edu domain. The first study was a survey of user terminology for Web pages. Three participants separated 100 Web page printouts into stacks according to genre, assigning names and definitions to each genre. The second study aimed to refine the resulting set of 48 (often conceptually and lexically similar) genre names and definitions into a smaller palette of user-preferred terminology. Ten participants classified the same 100 Web pages. A set of five principles for creating a genre palette from individuals' sortings was developed, and the list of 48 was trimmed to 18 genres. The third study aimed to show that users would agree on the genres of Web pages when choosing from the genre palette. In an online experiment in which 257 participants categorized a new set of 55 pages using the 18 genres, on average, over 70% agreed on the genre of each page. Suggestions for improving the genre palette and future directions for the work are discussed.
  4. Pejtersen, A.M.: Design of a classification scheme for fiction based on an analysis of actual user-librarian communication, and use of the scheme for control of librarians' search strategies (1980) 0.02
    0.024281621 = product of:
      0.121408105 = sum of:
        0.121408105 = sum of:
          0.05878441 = weight(_text_:research in 5835) [ClassicSimilarity], result of:
            0.05878441 = score(doc=5835,freq=4.0), product of:
              0.13186905 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.046221454 = queryNorm
              0.44577867 = fieldWeight in 5835, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.078125 = fieldNorm(doc=5835)
          0.062623695 = weight(_text_:22 in 5835) [ClassicSimilarity], result of:
            0.062623695 = score(doc=5835,freq=2.0), product of:
              0.16185966 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046221454 = queryNorm
              0.38690117 = fieldWeight in 5835, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=5835)
      0.2 = coord(1/5)
    
    Date
    5. 8.2006 13:22:44
    Source
    Theory and application of information research. Proc. of the 2nd Int. Research Forum on Information Science, 3.-6.8.1977, Copenhagen. Ed.: O. Harbo u. L. Kajberg
  5. Marsh, E.E.; White, M.D.: ¬A taxonomy of relationships between images and text (2003) 0.02
    0.021693096 = product of:
      0.05423274 = sum of:
        0.032633968 = weight(_text_:web in 4444) [ClassicSimilarity], result of:
          0.032633968 = score(doc=4444,freq=2.0), product of:
            0.1508442 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046221454 = queryNorm
            0.21634221 = fieldWeight in 4444, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4444)
        0.021598773 = product of:
          0.043197546 = sum of:
            0.043197546 = weight(_text_:research in 4444) [ClassicSimilarity], result of:
              0.043197546 = score(doc=4444,freq=6.0), product of:
                0.13186905 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046221454 = queryNorm
                0.3275791 = fieldWeight in 4444, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4444)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The paper establishes a taxonomy of image-text relationships that reflects the ways that images and text interact. It is applicable to all subject areas and document types. The taxonomy was developed to answer the research question: how does an illustration relate to the text with which it is associated, or, what are the functions of illustration? Developed in a two-stage process - first, analysis of relevant research in children's literature, dictionary development, education, journalism, and library and information design and, second, subsequent application of the first version of the taxonomy to 954 image-text pairs in 45 Web pages (pages with educational content for children, online newspapers, and retail business pages) - the taxonomy identifies 49 relationships and groups them in three categories according to the closeness of the conceptual relationship between image and text. The paper uses qualitative content analysis to illustrate use of the taxonomy to analyze four image-text pairs in government publications and discusses the implications of the research for information retrieval and document design.
  6. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.02
    0.019425297 = product of:
      0.097126484 = sum of:
        0.097126484 = sum of:
          0.04702753 = weight(_text_:research in 5830) [ClassicSimilarity], result of:
            0.04702753 = score(doc=5830,freq=4.0), product of:
              0.13186905 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.046221454 = queryNorm
              0.35662293 = fieldWeight in 5830, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.0625 = fieldNorm(doc=5830)
          0.050098952 = weight(_text_:22 in 5830) [ClassicSimilarity], result of:
            0.050098952 = score(doc=5830,freq=2.0), product of:
              0.16185966 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046221454 = queryNorm
              0.30952093 = fieldWeight in 5830, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=5830)
      0.2 = coord(1/5)
    
    Date
    5. 8.2006 13:22:08
    Source
    Classification research for knowledge representation and organization. Proc. 5th Int. Study Conf. on Classification Research, Toronto, Canada, 24.-28.6.1991. Ed. by N.J. Williamson u. M. Hudon
  7. Chen, S.-J.; Lee, H.-L.: Art images and mental associations : a preliminary exploration (2014) 0.01
    0.012502866 = product of:
      0.06251433 = sum of:
        0.06251433 = sum of:
          0.024940113 = weight(_text_:research in 1416) [ClassicSimilarity], result of:
            0.024940113 = score(doc=1416,freq=2.0), product of:
              0.13186905 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.046221454 = queryNorm
              0.18912788 = fieldWeight in 1416, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.046875 = fieldNorm(doc=1416)
          0.037574213 = weight(_text_:22 in 1416) [ClassicSimilarity], result of:
            0.037574213 = score(doc=1416,freq=2.0), product of:
              0.16185966 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046221454 = queryNorm
              0.23214069 = fieldWeight in 1416, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1416)
      0.2 = coord(1/5)
    
    Abstract
    This paper reports on the preliminary findings of a study that explores mental associations made by novices viewing art images. In a controlled environment, 20 Taiwanese college students responded to the question "What does the painting remind you of?" after viewing each digitized image of 15 oil paintings by a famous Taiwanese artist. Rather than focusing on the representation or interpretation of art, the study attempted to solicit information about how non-experts are stimulated by art. This paper reports on the analysis of participant responses to three of the images, and describes a 12-type taxonomy of associations that emerged from the analysis. While 9 of the types are derived and adapted from facets in the Art & Architecture Thesaurus, three new types - Artistic Influence Association, Reactive Association, and Prototype Association - are discovered. The conclusion briefly discusses both the significance of the findings and the implications for future research.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  8. Saif, H.; He, Y.; Fernandez, M.; Alani, H.: Contextual semantics for sentiment analysis of Twitter (2016) 0.01
    0.0100254705 = product of:
      0.05012735 = sum of:
        0.05012735 = weight(_text_:wide in 2667) [ClassicSimilarity], result of:
          0.05012735 = score(doc=2667,freq=2.0), product of:
            0.20479609 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046221454 = queryNorm
            0.24476713 = fieldWeight in 2667, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2667)
      0.2 = coord(1/5)
    
    Abstract
    Sentiment analysis on Twitter has attracted much attention recently due to its wide applications in both commercial and public sectors. In this paper we present SentiCircles, a lexicon-based approach for sentiment analysis on Twitter. Different from typical lexicon-based approaches, which offer fixed and static prior sentiment polarities of words regardless of their context, SentiCircles takes into account the co-occurrence patterns of words in different contexts in tweets to capture their semantics and update their pre-assigned strength and polarity in sentiment lexicons accordingly. Our approach allows for the detection of sentiment at both entity-level and tweet-level. We evaluate our proposed approach on three Twitter datasets using three different sentiment lexicons to derive word prior sentiments. Results show that our approach significantly outperforms the baselines in accuracy and F-measure for entity-level subjectivity (neutral vs. polar) and polarity (positive vs. negative) detections. For tweet-level sentiment detection, our approach performs better than the state-of-the-art SentiStrength by 4-5% in accuracy in two datasets, but falls marginally behind by 1% in F-measure in the third dataset.
  9. Bertola, F.; Patti, V.: Ontology-based affective models to organize artworks in the social semantic web (2016) 0.01
    0.009420616 = product of:
      0.047103077 = sum of:
        0.047103077 = weight(_text_:web in 2669) [ClassicSimilarity], result of:
          0.047103077 = score(doc=2669,freq=6.0), product of:
            0.1508442 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046221454 = queryNorm
            0.3122631 = fieldWeight in 2669, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2669)
      0.2 = coord(1/5)
    
    Abstract
    In this paper, we focus on applying sentiment analysis to resources from online art collections, by exploiting, as information source, tags intended as textual traces that visitors leave to comment artworks on social platforms. We present a framework where methods and tools from a set of disciplines, ranging from Semantic and Social Web to Natural Language Processing, provide us the building blocks for creating a semantic social space to organize artworks according to an ontology of emotions. The ontology is inspired by Plutchik's circumplex model, a well-founded psychological model of human emotions. Users can be involved in the creation of the emotional space, through a graphical interactive interface. The development of such semantic space enables new ways of accessing and exploring art collections. The affective categorization model and the emotion detection output are encoded into W3C ontology languages. This gives us the twofold advantage to enable tractable reasoning on detected emotions and related artworks, and to foster the interoperability and integration of tools developed in the Semantic Web and Linked Data community. The proposal has been evaluated against a real-world case study, a dataset of tagged multimedia artworks from the ArsMeteo Italian online collection, and validated through a user study.
  10. Allen, B.; Reser, D.: Content analysis in library and information science research (1990) 0.01
    0.009405506 = product of:
      0.04702753 = sum of:
        0.04702753 = product of:
          0.09405506 = sum of:
            0.09405506 = weight(_text_:research in 7510) [ClassicSimilarity], result of:
              0.09405506 = score(doc=7510,freq=4.0), product of:
                0.13186905 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046221454 = queryNorm
                0.71324587 = fieldWeight in 7510, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.125 = fieldNorm(doc=7510)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Library and information science research. 12(1990) no.3, S.251-262
  11. Sauperl, A.: Subject determination during the cataloging process : the development of a system based on theoretical principles (2002) 0.01
    0.008745444 = product of:
      0.04372722 = sum of:
        0.04372722 = sum of:
          0.024940113 = weight(_text_:research in 2293) [ClassicSimilarity], result of:
            0.024940113 = score(doc=2293,freq=8.0), product of:
              0.13186905 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.046221454 = queryNorm
              0.18912788 = fieldWeight in 2293, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.0234375 = fieldNorm(doc=2293)
          0.018787106 = weight(_text_:22 in 2293) [ClassicSimilarity], result of:
            0.018787106 = score(doc=2293,freq=2.0), product of:
              0.16185966 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046221454 = queryNorm
              0.116070345 = fieldWeight in 2293, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=2293)
      0.2 = coord(1/5)
    
    Date
    27. 9.2005 14:22:19
    Footnote
    Rez. in: Knowledge organization 30(2003) no.2, S.114-115 (M. Hudon); "This most interesting contribution to the literature of subject cataloguing originates in the author's doctoral dissertation, prepared under the direction of Jerry Saye at the University of North Carolina at Chapel Hill. In seven highly readable chapters, Alenka Sauperl develops possible answers to her principal research question: How do cataloguers determine or identify the topic of a document and choose appropriate subject representations? Specific questions at the source of this research, on a process which has not been a frequent object of study, include: Where do cataloguers look for an overall sense of what a document is about? How do they get an overall sense of what a document is about, especially when they are not familiar with the discipline? Do they consider only one or several possible interpretations? How do they translate meanings into appropriate and valid class numbers and subject headings? Using a strictly qualitative methodology, Dr. Sauperl's research is a study of twelve cataloguers in real-life situations. The author insists on a holistic rather than purely theoretical understanding of the process she is targeting. Participants in the study were professional cataloguers, with at least one year's experience in their current job at one of three large academic libraries in the Southeastern United States. All three libraries have a large central cataloguing department, and use OCLC sources and the same automated system; the context of cataloguing tasks is thus considered to be reasonably comparable. All participants were volunteers in this study which combined two data-gathering techniques: the think-aloud method and time-line interviews.
A model of the subject cataloguing process was first developed from observations of a group of six cataloguers who were asked to independently perform original cataloguing on three nonfiction, non-serial items selected from materials regularly assigned to them for processing. The model was then used for follow-up interviews. Each participant in the second group of cataloguers was invited to reflect on his/her work process for a recent challenging document they had catalogued. Results are presented in 12 stories describing as many personal approaches to subject cataloguing. From these stories a summarization is offered and a theoretical model of subject cataloguing is developed which, according to the author, represents a realistic approach to subject cataloguing. Stories alternate comments from the researcher and direct quotations from the observed or interviewed cataloguers. Not surprisingly, the participants' stories reveal similarities in the sequence and accomplishment of several tasks in the process of subject cataloguing. Sauperl's proposed model, described in Chapter 5, includes as main stages: 1) Examination of the book and subject identification; 2) Search for subject headings; 3) Classification. Chapter 6 is a hypothetical case study, using the proposed model to describe the various stages of cataloguing a hypothetical resource. ...
    This document will be particularly useful to subject cataloguing teachers and trainers who could use the model to design case descriptions and exercises. We believe it is an accurate description of the reality of subject cataloguing today. But now that we know how things are done, the next interesting question may be: Is that the best way? Is there a better, more efficient, way to do things? We can only hope that Dr. Sauperl will soon provide her own view of methods and techniques that could improve the flow of work or address the cataloguers' concern as to the lack of feedback on their work. Her several excellent suggestions for further research in this area all build on bits and pieces of what is done already, and stay well away from what could be done by the various actors in the area, from the designers of controlled vocabularies and authority files to those who use these tools on a daily basis to index, classify, or search for information."
  12. Belkin, N.J.: ¬The problem of 'matching' in information retrieval (1980) 0.01
    0.007054129 = product of:
      0.035270646 = sum of:
        0.035270646 = product of:
          0.07054129 = sum of:
            0.07054129 = weight(_text_:research in 1329) [ClassicSimilarity], result of:
              0.07054129 = score(doc=1329,freq=4.0), product of:
                0.13186905 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046221454 = queryNorm
                0.5349344 = fieldWeight in 1329, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1329)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Theory and application of information research. Proc. of the 2nd Int. Research Forum on Information Science, 3.-6.8.1977, Copenhagen. Ed.: O. Harbo u. L. Kajberg
  13. Allen, R.B.; Wu, Y.: Metrics for the scope of a collection (2005) 0.01
    0.006526794 = product of:
      0.032633968 = sum of:
        0.032633968 = weight(_text_:web in 4570) [ClassicSimilarity], result of:
          0.032633968 = score(doc=4570,freq=2.0), product of:
            0.1508442 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046221454 = queryNorm
            0.21634221 = fieldWeight in 4570, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4570)
      0.2 = coord(1/5)
    
    Abstract
    Some collections cover many topics, while others are narrowly focused on a limited number of topics. We introduce the concept of the "scope" of a collection of documents and we compare two ways of measuring it. These measures are based on the distances between documents. The first uses the overlap of words between pairs of documents. The second measure uses a novel method that calculates the semantic relatedness of pairs of words from the documents. Those values are combined to obtain an overall distance between the documents. The main validation for the measures compared Web pages categorized by Yahoo. Sets of pages sampled from broad categories were determined to have a higher scope than sets derived from subcategories. The measure was significant and confirmed the expected difference in scope. Finally, we discuss other measures related to scope.
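The abstract describes its two distance-based measures only at a high level. A minimal, hypothetical sketch of the first (word-overlap) measure, assuming a Jaccard-style distance and mean pairwise distance as the scope score (these names and details are assumptions, not the paper's definitions):

```python
def word_overlap_distance(doc_a: str, doc_b: str) -> float:
    """Jaccard-style distance over word sets: 1 - |A intersect B| / |A union B|."""
    a, b = set(doc_a.lower().split()), set(doc_b.lower().split())
    return 1.0 - len(a & b) / len(a | b)

def scope(docs: list[str]) -> float:
    """Mean pairwise distance; a broader collection yields a higher score."""
    pairs = [(i, j) for i in range(len(docs)) for j in range(i + 1, len(docs))]
    return sum(word_overlap_distance(docs[i], docs[j]) for i, j in pairs) / len(pairs)

narrow = ["cats eat fish", "cats chase mice"]    # one topic
broad = ["cats eat fish", "stock markets fell"]  # unrelated topics
print(scope(narrow), scope(broad))  # 0.8 1.0
```

As in the Yahoo validation described above, a collection drawn from a single subcategory should score lower than one spanning a broad category.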
  14. Fairthorne, R.A.: Temporal structure in bibliographic classification (1985) 0.01
    0.0060152817 = product of:
      0.030076409 = sum of:
        0.030076409 = weight(_text_:wide in 3651) [ClassicSimilarity], result of:
          0.030076409 = score(doc=3651,freq=2.0), product of:
            0.20479609 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046221454 = queryNorm
            0.14686027 = fieldWeight in 3651, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3651)
      0.2 = coord(1/5)
    
    Abstract
    This paper, presented at the Ottawa Conference on the Conceptual Basis of the Classification of Knowledge, in 1971, is one of Fairthorne's more perceptive works and deserves a wide audience, especially as it breaks new ground in classification theory. In discussing the notion of discourse, he makes a "distinction between what discourse mentions and what discourse is about" [emphasis added], considered as a "fundamental factor to the relativistic nature of bibliographic classification" (p. 360). A table of mathematical functions, for example, describes exactly something represented by a collection of digits, but, without a preface, this table does not fit into a broader context. Some indication of the author's intent is needed to fit the table into a broader context. This intent may appear in a title, chapter heading, class number or some other aid. Discourse on and discourse about something "cannot be determined solely from what it mentions" (p. 361). Some kind of background is needed. Fairthorne further develops the theme that knowledge about a subject comes from previous knowledge, thus adding a temporal factor to classification. "Some extra textual criteria are needed" in order to classify (p. 362). For example, "documents that mention the same things, but are on different topics, will have different ancestors, in the sense of preceding documents to which they are linked by various bibliographic characteristics ... [and] ... they will have different descendants" (p. 363). The classifier has to distinguish between documents that "mention exactly the same thing" but are not about the same thing. The classifier does this by classifying "sets of documents that form their histories, their bibliographic world lines" (p. 363). The practice of citation is one method of performing the linking and presents a "fan" of documents connected by a chain of citations to past work.
The fan is seen as the effect of generations of documents - each generation connected to the previous one, and all ancestral to the present document. Thus, there are levels in temporal structure-that is, antecedent and successor documents-and these require that documents be identified in relation to other documents. This gives a set of documents an "irrevocable order," a loose order which Fairthorne calls "bibliographic time," and which is "generated by the fact of continual growth" (p. 364). He does not consider "bibliographic time" to be an equivalent to physical time because bibliographic events, as part of communication, require delay. Sets of documents, as indicated above, rather than single works, are used in classification. While an event, a person, a unique feature of the environment, may create a class of one-such as the French Revolution, Napoleon, Niagara Falls-revolutions, emperors, and waterfalls are sets which, as sets, will subsume individuals and make normal classes.
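Fairthorne's "fan" of ancestor documents can be pictured as reachability in a citation graph: each document points to the earlier documents it cites, and its "bibliographic world line" is everything reachable through those links. A minimal sketch, using an invented toy citation graph (all document names are hypothetical):

```python
# Toy citation graph: doc -> documents it cites (all names invented).
CITES = {
    "D5": ["D3", "D4"],
    "D4": ["D2"],
    "D3": ["D1"],
    "D2": ["D1"],
    "D1": [],
}

def ancestors(doc: str) -> set[str]:
    """All predecessor documents linked to `doc` by chains of citations -
    the 'fan' of generations ancestral to the present document."""
    seen: set[str] = set()
    stack = list(CITES.get(doc, []))
    while stack:
        d = stack.pop()
        if d not in seen:
            seen.add(d)
            stack.extend(CITES.get(d, []))
    return seen

print(sorted(ancestors("D5")))  # → ['D1', 'D2', 'D3', 'D4']
```

Two documents may "mention exactly the same thing" yet have disjoint ancestor sets, which is precisely the extra-textual criterion Fairthorne argues a classifier must use.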
  15. Jörgensen, C.: ¬The applicability of selected classification systems to image attributes (1996) 0.01
    Abstract
    Recent research investigated image attributes as reported by participants in describing, sorting, and searching tasks with images and defined 46 specific image attributes, which were then organized into 12 major classes. Attributes were also grouped as being 'perceptual' (directly stimulated by visual percepts), 'interpretive' (requiring inference from visual percepts), and 'reactive' (cognitive and affective responses to the images). This research describes the coverage of two image indexing and classification systems and one general classification system in relation to the previous findings, and analyzes the extent to which components of these systems are capable of describing the range of image attributes revealed by the previous research.
  16. Hauff-Hartig, S.: Automatische Transkription von Videos : Fernsehen 3.0: Automatisierte Sentimentanalyse und Zusammenstellung von Kurzvideos mit hohem Aufregungslevel KI-generierte Metadaten: Von der Technologiebeobachtung bis zum produktiven Einsatz (2021) 0.01
    Date
    22. 5.2021 12:43:05
  17. Todd, R.J.: Subject access: what's it all about? : some research findings (1993) 0.00
    Abstract
    Describes some findings of research into the process of deciding the subjects of documents. The research sought to identify: the goals and intentions of indexers in determining subjects; the specific strategies and prescriptions indexers actually use to determine subjects; and some of the variables that impact on the process of determining subjects.
  18. Raieli, R.: ¬The semantic hole : enthusiasm and caution around multimedia information retrieval (2012) 0.00
    Date
    22. 1.2012 13:02:10
    Source
    Knowledge organization. 39(2012) no.1, S.13-22
  19. Xie, H.; Li, X.; Wang, T.; Lau, R.Y.K.; Wong, T.-L.; Chen, L.; Wang, F.L.; Li, Q.: Incorporating sentiment into tag-based user profiles and resource profiles for personalized search in folksonomy (2016) 0.00
    Abstract
    In recent years, there has been a rapid growth of user-generated data in collaborative tagging (a.k.a. folksonomy-based) systems due to the prevalence of Web 2.0 communities. To effectively assist users to find their desired resources, it is critical to understand user behaviors and preferences. Tag-based profile techniques, which model users and resources by a vector of relevant tags, are widely employed in folksonomy-based systems, mainly because personalized search and recommendations can be facilitated by measuring relevance between user profiles and resource profiles. However, conventional measurements neglect the sentiment aspect of user-generated tags. In fact, tags can be very emotional and subjective, as users usually express their perceptions and feelings about the resources by tags. Therefore, it is necessary to take sentiment relevance into account in such measurements. In this paper, we present SenticRank, a novel generic framework that incorporates various kinds of sentiment information into personalized search based on user profiles and resource profiles. In this framework, content-based sentiment ranking and collaborative sentiment ranking methods are proposed to obtain sentiment-based personalized ranking. To the best of our knowledge, this is the first work to integrate sentiment information into personalized tag-based search in collaborative tagging systems. Moreover, we compare the proposed sentiment-based personalized search with baselines in experiments whose results verify the effectiveness of the proposed framework. In addition, we study the influence of popular sentiment dictionaries and find that SenticNet is the most prominent knowledge base for boosting the performance of personalized search in folksonomy.
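The tag-based profiles the abstract describes can be sketched as tag-to-weight vectors compared by cosine relevance, with a sentiment score folded into each tag weight. This is a minimal illustration, not the SenticRank framework itself; the sentiment lexicon and all tags below are invented:

```python
import math

# Hypothetical per-tag sentiment scores in [-1, 1] (invented lexicon).
SENTIMENT = {"lovely": 0.9, "boring": -0.7, "tutorial": 0.1}

def weighted(profile: dict[str, float]) -> dict[str, float]:
    """Scale each tag weight by (1 + sentiment), so emotionally positive
    tags count more and negative tags count less."""
    return {t: w * (1.0 + SENTIMENT.get(t, 0.0)) for t, w in profile.items()}

def relevance(user: dict[str, float], resource: dict[str, float]) -> float:
    """Cosine similarity between sentiment-weighted tag vectors."""
    u, r = weighted(user), weighted(resource)
    dot = sum(u[t] * r.get(t, 0.0) for t in u)
    norm = (math.sqrt(sum(v * v for v in u.values()))
            * math.sqrt(sum(v * v for v in r.values())))
    return dot / norm if norm else 0.0

user_profile = {"lovely": 1.0, "tutorial": 2.0}
resource_profile = {"tutorial": 1.0, "boring": 1.0}
print(round(relevance(user_profile, resource_profile), 3))
```

A plain tag-overlap measure would score the two profiles on "tutorial" alone; weighting by sentiment shifts the ranking when tags carry strong feeling, which is the gap the paper's framework targets.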
  20. Nahl-Jakobovits, D.; Jakobovits, L.A.: ¬A content analysis method for developing user-based objectives (1992) 0.00
    Source
    Research strategies. 10(1992) no.1, S.4-16

Languages

  • e 48
  • d 1

Types

  • a 47
  • m 2
  • el 1