Search (7 results, page 1 of 1)

  • theme_ss:"Social tagging"
  • year_i:[2010 TO 2020}
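The two facets above are Solr filter queries; note the range syntax, where `[` is inclusive and `}` is exclusive, so `year_i:[2010 TO 2020}` matches 2010 <= year < 2020. A minimal sketch of how such a request could be assembled (the field names are taken from the facets shown; the endpoint and `debugQuery` flag, which makes Solr emit the per-hit score explanations listed with each result, are standard Solr but assumed here):

```python
from urllib.parse import urlencode

# Both fq parameters are applied as conjunctive filters on top of the main query.
params = [
    ("q", "*:*"),
    ("fq", 'theme_ss:"Social tagging"'),
    ("fq", "year_i:[2010 TO 2020}"),   # inclusive 2010, exclusive 2020
    ("debugQuery", "true"),            # ask Solr for score explanations
]
query_string = urlencode(params)
print(query_string)
```

The resulting string would be appended to a core's `/select` handler URL; the exact host and core name depend on the installation.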
  1. Rorissa, A.: A comparative study of Flickr tags and index terms in a general image collection (2010) 0.03
    0.02875246 = product of:
      0.14376229 = sum of:
        0.14376229 = weight(_text_:index in 4100) [ClassicSimilarity], result of:
          0.14376229 = score(doc=4100,freq=14.0), product of:
            0.2250935 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.051511593 = queryNorm
            0.63867813 = fieldWeight in 4100, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4100)
      0.2 = coord(1/5)
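The breakdown above is Lucene's ClassicSimilarity explain output. A minimal sketch reproducing its arithmetic for this first hit, using the constants from the tree and the standard ClassicSimilarity definitions (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))):

```python
import math

# Constants copied from the explain tree for term "index" in doc 4100.
freq, doc_freq, max_docs = 14.0, 1520, 44218
field_norm = 0.0390625        # fieldNorm(doc=4100)
query_norm = 0.051511593      # queryNorm

tf = math.sqrt(freq)                               # ~3.7416575
idf = 1.0 + math.log(max_docs / (doc_freq + 1))    # ~4.369764
query_weight = idf * query_norm                    # ~0.2250935
field_weight = tf * idf * field_norm               # ~0.63867813
coord = 1.0 / 5.0                                  # 1 of 5 query terms matched
score = coord * query_weight * field_weight        # ~0.02875246

print(f"{score:.8f}")
```

The same formula accounts for every explain tree below; only freq, docFreq, fieldNorm, and coord vary per document and term.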
    
    Abstract
    Web 2.0 and social/collaborative tagging have altered the traditional roles of indexer and user. Traditional indexing tools and systems assume the top-down approach to indexing in which a trained professional is responsible for assigning index terms to information sources with a potential user in mind. However, in today's Web, end users create, organize, index, and search for images and other information sources through social tagging and other collaborative activities. One of the impediments to user-centered indexing had been the cost of soliciting user-generated index terms or tags. Social tagging of images such as those on Flickr, an online photo management and sharing application, presents an opportunity that can be seized by designers of indexing tools and systems to bridge the semantic gap between indexer terms and user vocabularies. Empirical research on the differences and similarities between user-generated tags and index terms based on controlled vocabularies has the potential to inform future design of image indexing tools and systems. Toward this end, a random sample of Flickr images and the tags assigned to them were content analyzed and compared with another sample of index terms from a general image collection using established frameworks for image attributes and contents. The results show that there is a fundamental difference between the types of tags and types of index terms used. In light of this, implications for research into and design of user-centered image indexing tools and systems are discussed.
  2. Kipp, M.E.I.; Campbell, D.G.: Searching with tags : do tags help users find things? (2010) 0.01
    0.010867408 = product of:
      0.054337036 = sum of:
        0.054337036 = weight(_text_:index in 4064) [ClassicSimilarity], result of:
          0.054337036 = score(doc=4064,freq=2.0), product of:
            0.2250935 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.051511593 = queryNorm
            0.24139762 = fieldWeight in 4064, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4064)
      0.2 = coord(1/5)
    
    Abstract
    The question of whether tags can be useful in the process of information retrieval was examined in this pilot study. Many tags are subject related and could work well as index terms or entry vocabulary; however, folksonomies also include relationships that are traditionally not included in controlled vocabularies including affective or time and task related tags and the user name of the tagger. Participants searched a social bookmarking tool, specialising in academic articles (CiteULike), and an online journal database (Pubmed) for articles relevant to a given information request. Screen capture software was used to collect participant actions and a semi-structured interview asked them to describe their search process. Preliminary results showed that participants did use tags in their search process, as a guide to searching and as hyperlinks to potentially useful articles. However, participants also used controlled vocabularies in the journal database to locate useful search terms and links to related articles supplied by Pubmed. Additionally, participants reported using user names of taggers and group names to help select resources by relevance. The inclusion of subjective and social information from the taggers is very different from the traditional objectivity of indexing and was reported as an asset by a number of participants. This study suggests that while users value social and subjective factors when searching, they also find utility in objective factors such as subject headings. Most importantly, users are interested in the ability of systems to connect them with related articles whether via subject access or other means.
  3. Knautz, K.; Stock, W.G.: Collective indexing of emotions in videos (2011) 0.01
    0.010867408 = product of:
      0.054337036 = sum of:
        0.054337036 = weight(_text_:index in 295) [ClassicSimilarity], result of:
          0.054337036 = score(doc=295,freq=2.0), product of:
            0.2250935 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.051511593 = queryNorm
            0.24139762 = fieldWeight in 295, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0390625 = fieldNorm(doc=295)
      0.2 = coord(1/5)
    
    Abstract
    Purpose - The object of this empirical research study is emotion, as depicted and aroused in videos. This paper seeks to answer the questions: Are users able to index such emotions consistently? Are the users' votes usable for emotional video retrieval? Design/methodology/approach - The authors worked with a controlled vocabulary for nine basic emotions (love, happiness, fun, surprise, desire, sadness, anger, disgust and fear), a slide control for adjusting the emotions' intensity, and the approach of broad folksonomies. Different users tagged the same videos. The test persons had the task of indexing the emotions of 20 videos (reprocessed clips from YouTube). The authors distinguished between emotions which were depicted in the video and those that were evoked in the user. Data were received from 776 participants and a total of 279,360 slide control values were analyzed. Findings - The consistency of the users' votes is very high; the tag distributions for the particular videos' emotions are stable. The final shape of the distributions will be reached by the tagging activities of only very few users (less than 100). By applying the approach of power tags it is possible to separate the pivotal emotions of every document - if indeed there is any feeling at all. Originality/value - This paper is one of the first steps in the new research area of emotional information retrieval (EmIR). To the authors' knowledge, it is the first research project into the collective indexing of emotions in videos.
  4. Niemann, C.: Tag-Science : Ein Analysemodell zur Nutzbarkeit von Tagging-Daten [Tag science: an analytical model for the usability of tagging data] (2011) 0.01
    0.008374932 = product of:
      0.04187466 = sum of:
        0.04187466 = weight(_text_:22 in 164) [ClassicSimilarity], result of:
          0.04187466 = score(doc=164,freq=2.0), product of:
            0.18038483 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051511593 = queryNorm
            0.23214069 = fieldWeight in 164, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=164)
      0.2 = coord(1/5)
    
    Source
    Die Kraft der digitalen Unordnung: 32. Arbeits- und Fortbildungstagung der ASpB e. V., Sektion 5 im Deutschen Bibliotheksverband, 22.-25. September 2009 in der Universität Karlsruhe. Ed.: Jadwiga Warmbrunn et al.
  5. Yi, K.: Harnessing collective intelligence in social tagging using Delicious (2012) 0.01
    0.00697911 = product of:
      0.03489555 = sum of:
        0.03489555 = weight(_text_:22 in 515) [ClassicSimilarity], result of:
          0.03489555 = score(doc=515,freq=2.0), product of:
            0.18038483 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051511593 = queryNorm
            0.19345059 = fieldWeight in 515, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=515)
      0.2 = coord(1/5)
    
    Date
    25.12.2012 15:22:37
  6. Choi, Y.; Syn, S.Y.: Characteristics of tagging behavior in digitized humanities online collections (2016) 0.01
    0.00697911 = product of:
      0.03489555 = sum of:
        0.03489555 = weight(_text_:22 in 2891) [ClassicSimilarity], result of:
          0.03489555 = score(doc=2891,freq=2.0), product of:
            0.18038483 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051511593 = queryNorm
            0.19345059 = fieldWeight in 2891, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2891)
      0.2 = coord(1/5)
    
    Date
    21. 4.2016 11:23:22
  7. Qin, C.; Liu, Y.; Mou, J.; Chen, J.: User adoption of a hybrid social tagging approach in an online knowledge community (2019) 0.01
    0.00697911 = product of:
      0.03489555 = sum of:
        0.03489555 = weight(_text_:22 in 5492) [ClassicSimilarity], result of:
          0.03489555 = score(doc=5492,freq=2.0), product of:
            0.18038483 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051511593 = queryNorm
            0.19345059 = fieldWeight in 5492, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5492)
      0.2 = coord(1/5)
    
    Date
    20. 1.2015 18:30:22