Search (157 results, page 1 of 8)

  • Active filter: theme_ss:"Inhaltsanalyse"
  1. Pejtersen, A.M.: Design of a classification scheme for fiction based on an analysis of actual user-librarian communication, and use of the scheme for control of librarians' search strategies (1980) 0.06
    0.05530905 = product of:
      0.1106181 = sum of:
        0.016912218 = weight(_text_:information in 5835) [ClassicSimilarity], result of:
          0.016912218 = score(doc=5835,freq=4.0), product of:
            0.0616574 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.035122856 = queryNorm
            0.27429342 = fieldWeight in 5835, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=5835)
        0.01934592 = weight(_text_:for in 5835) [ClassicSimilarity], result of:
          0.01934592 = score(doc=5835,freq=4.0), product of:
            0.06594466 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.035122856 = queryNorm
            0.29336601 = fieldWeight in 5835, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.078125 = fieldNorm(doc=5835)
        0.013661366 = weight(_text_:the in 5835) [ClassicSimilarity], result of:
          0.013661366 = score(doc=5835,freq=4.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.24652568 = fieldWeight in 5835, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.078125 = fieldNorm(doc=5835)
        0.023243912 = weight(_text_:of in 5835) [ClassicSimilarity], result of:
          0.023243912 = score(doc=5835,freq=12.0), product of:
            0.054923624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.035122856 = queryNorm
            0.42320424 = fieldWeight in 5835, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=5835)
        0.013661366 = weight(_text_:the in 5835) [ClassicSimilarity], result of:
          0.013661366 = score(doc=5835,freq=4.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.24652568 = fieldWeight in 5835, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.078125 = fieldNorm(doc=5835)
        0.023793312 = product of:
          0.047586624 = sum of:
            0.047586624 = weight(_text_:22 in 5835) [ClassicSimilarity], result of:
              0.047586624 = score(doc=5835,freq=2.0), product of:
                0.12299426 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035122856 = queryNorm
                0.38690117 = fieldWeight in 5835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5835)
          0.5 = coord(1/2)
      0.5 = coord(6/12)
    
    Date
    5. 8.2006 13:22:44
    Source
    Theory and application of information research. Proc. of the 2nd Int. Research Forum on Information Science, 3.-6.8.1977, Copenhagen. Ed.: O. Harbo and L. Kajberg
  2. Campbell, G.: Queer theory and the creation of contextual subject access tools for gay and lesbian communities (2000) 0.05
    0.051740162 = product of:
      0.103480324 = sum of:
        0.005979372 = weight(_text_:information in 6054) [ClassicSimilarity], result of:
          0.005979372 = score(doc=6054,freq=2.0), product of:
            0.0616574 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.035122856 = queryNorm
            0.09697737 = fieldWeight in 6054, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6054)
        0.01184691 = weight(_text_:for in 6054) [ClassicSimilarity], result of:
          0.01184691 = score(doc=6054,freq=6.0), product of:
            0.06594466 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.035122856 = queryNorm
            0.17964928 = fieldWeight in 6054, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6054)
        0.016019372 = weight(_text_:the in 6054) [ClassicSimilarity], result of:
          0.016019372 = score(doc=6054,freq=22.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.28907698 = fieldWeight in 6054, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6054)
        0.015003879 = weight(_text_:of in 6054) [ClassicSimilarity], result of:
          0.015003879 = score(doc=6054,freq=20.0), product of:
            0.054923624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.035122856 = queryNorm
            0.27317715 = fieldWeight in 6054, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6054)
        0.016019372 = weight(_text_:the in 6054) [ClassicSimilarity], result of:
          0.016019372 = score(doc=6054,freq=22.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.28907698 = fieldWeight in 6054, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6054)
        0.03861142 = product of:
          0.07722284 = sum of:
            0.07722284 = weight(_text_:communities in 6054) [ClassicSimilarity], result of:
              0.07722284 = score(doc=6054,freq=4.0), product of:
                0.18632571 = queryWeight, product of:
                  5.3049703 = idf(docFreq=596, maxDocs=44218)
                  0.035122856 = queryNorm
                0.4144508 = fieldWeight in 6054, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.3049703 = idf(docFreq=596, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6054)
          0.5 = coord(1/2)
      0.5 = coord(6/12)
    
    Abstract
    Knowledge organization research has come to question the theoretical distinction between "aboutness" (a document's innate content) and "meaning" (the use to which a document is put). This distinction has relevance beyond Information Studies, particularly in relation to homosexual concerns. Literary criticism, in particular, frequently addresses the question: when is a work "about" homosexuality? This paper explores this literary debate and its implications for the design of subject access systems for gay and lesbian communities. By examining the literary criticism of Herman Melville's Billy Budd, particularly in relation to the theories of Eve Kosofsky Sedgwick in The Epistemology of the Closet (1990), this paper exposes three tensions that designers of gay and lesbian classifications and vocabularies can expect to face. First is a tension between essentialist and constructivist views of homosexuality, which will affect the choice of terms, categories, and references. Second is a tension between minoritizing and universalizing perspectives on homosexuality. Third is a redefined distinction between aboutness and meaning, in which aboutness refers not to stable document content, but to the system designer's inescapable social and ideological perspectives. Designers of subject access systems can therefore expect to work in a context of intense scrutiny and persistent controversy
  3. Raieli, R.: The semantic hole : enthusiasm and caution around multimedia information retrieval (2012) 0.05
    0.05081343 = product of:
      0.10162686 = sum of:
        0.014646411 = weight(_text_:information in 4888) [ClassicSimilarity], result of:
          0.014646411 = score(doc=4888,freq=12.0), product of:
            0.0616574 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.035122856 = queryNorm
            0.23754507 = fieldWeight in 4888, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4888)
        0.00967296 = weight(_text_:for in 4888) [ClassicSimilarity], result of:
          0.00967296 = score(doc=4888,freq=4.0), product of:
            0.06594466 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.035122856 = queryNorm
            0.14668301 = fieldWeight in 4888, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4888)
        0.021053577 = weight(_text_:the in 4888) [ClassicSimilarity], result of:
          0.021053577 = score(doc=4888,freq=38.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.37992156 = fieldWeight in 4888, product of:
              6.164414 = tf(freq=38.0), with freq of:
                38.0 = termFreq=38.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4888)
        0.018375926 = weight(_text_:of in 4888) [ClassicSimilarity], result of:
          0.018375926 = score(doc=4888,freq=30.0), product of:
            0.054923624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.035122856 = queryNorm
            0.33457235 = fieldWeight in 4888, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4888)
        0.021053577 = weight(_text_:the in 4888) [ClassicSimilarity], result of:
          0.021053577 = score(doc=4888,freq=38.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.37992156 = fieldWeight in 4888, product of:
              6.164414 = tf(freq=38.0), with freq of:
                38.0 = termFreq=38.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4888)
        0.016824411 = product of:
          0.033648822 = sum of:
            0.033648822 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.033648822 = score(doc=4888,freq=4.0), product of:
                0.12299426 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035122856 = queryNorm
                0.27358043 = fieldWeight in 4888, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4888)
          0.5 = coord(1/2)
      0.5 = coord(6/12)
    
    Abstract
    This paper centres on the tools for the management of new digital documents, which are not only textual, but also visual-video, audio or multimedia in the full sense. Among the aims is to demonstrate that operating within the terms of generic Information Retrieval through textual language only is limiting, and it is instead necessary to consider broader criteria, such as those of MultiMedia Information Retrieval, according to which every type of digital document can be analyzed and searched by the proper elements of language for its proper nature. MMIR is presented as the organic complex of the systems of Text Retrieval, Visual Retrieval, Video Retrieval, and Audio Retrieval, each of which has an approach to information management that handles the concrete textual, visual, audio, or video content of the documents directly, here defined as content-based. In conclusion, the limits of this content-based objective access to documents are underlined. The discrepancy known as the semantic gap is that which occurs between semantic-interpretive access and content-based access. Finally, the integration of these conceptions is explained, gathering and composing the merits and the advantages of each of the approaches and of the systems of access to information.
    Date
    22. 1.2012 13:02:10
    Footnote
    Refers to: Enser, P.G.B.: Visual image retrieval. In: Annual review of information science and technology. 42(2008), S.3-42.
    Source
    Knowledge organization. 39(2012) no.1, S.13-22
  4. Berinstein, P.: Moving multimedia : the information value in images (1997) 0.05
    0.047454055 = product of:
      0.11388974 = sum of:
        0.019133992 = weight(_text_:information in 2489) [ClassicSimilarity], result of:
          0.019133992 = score(doc=2489,freq=8.0), product of:
            0.0616574 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.035122856 = queryNorm
            0.3103276 = fieldWeight in 2489, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=2489)
        0.010943705 = weight(_text_:for in 2489) [ClassicSimilarity], result of:
          0.010943705 = score(doc=2489,freq=2.0), product of:
            0.06594466 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.035122856 = queryNorm
            0.16595288 = fieldWeight in 2489, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0625 = fieldNorm(doc=2489)
        0.031863503 = weight(_text_:the in 2489) [ClassicSimilarity], result of:
          0.031863503 = score(doc=2489,freq=34.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.5749917 = fieldWeight in 2489, product of:
              5.8309517 = tf(freq=34.0), with freq of:
                34.0 = termFreq=34.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0625 = fieldNorm(doc=2489)
        0.020085035 = weight(_text_:of in 2489) [ClassicSimilarity], result of:
          0.020085035 = score(doc=2489,freq=14.0), product of:
            0.054923624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.035122856 = queryNorm
            0.36569026 = fieldWeight in 2489, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=2489)
        0.031863503 = weight(_text_:the in 2489) [ClassicSimilarity], result of:
          0.031863503 = score(doc=2489,freq=34.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.5749917 = fieldWeight in 2489, product of:
              5.8309517 = tf(freq=34.0), with freq of:
                34.0 = termFreq=34.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0625 = fieldNorm(doc=2489)
      0.41666666 = coord(5/12)
    
    Abstract
    Considers the role of pictures in information communication, comparing the way they convey information with the way text does. Categorises the purposes of images as conveyors of information: the instructional image, the documentary image, the location image, the graphical representation of numbers, the concepts image, the image making the unseen visible, the image as a surrogate for an object or document, the decorative image, the image as a statement, the strong image and the emotional image. Gives examples of how the value of images is being recognised and of how they can be used well.
  5. Chen, S.-J.; Lee, H.-L.: Art images and mental associations : a preliminary exploration (2014) 0.05
    0.04701594 = product of:
      0.09403188 = sum of:
        0.007175247 = weight(_text_:information in 1416) [ClassicSimilarity], result of:
          0.007175247 = score(doc=1416,freq=2.0), product of:
            0.0616574 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.035122856 = queryNorm
            0.116372846 = fieldWeight in 1416, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1416)
        0.008207779 = weight(_text_:for in 1416) [ClassicSimilarity], result of:
          0.008207779 = score(doc=1416,freq=2.0), product of:
            0.06594466 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.035122856 = queryNorm
            0.12446466 = fieldWeight in 1416, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=1416)
        0.023184106 = weight(_text_:the in 1416) [ClassicSimilarity], result of:
          0.023184106 = score(doc=1416,freq=32.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.41836792 = fieldWeight in 1416, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.046875 = fieldNorm(doc=1416)
        0.018004656 = weight(_text_:of in 1416) [ClassicSimilarity], result of:
          0.018004656 = score(doc=1416,freq=20.0), product of:
            0.054923624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.035122856 = queryNorm
            0.32781258 = fieldWeight in 1416, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=1416)
        0.023184106 = weight(_text_:the in 1416) [ClassicSimilarity], result of:
          0.023184106 = score(doc=1416,freq=32.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.41836792 = fieldWeight in 1416, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.046875 = fieldNorm(doc=1416)
        0.014275986 = product of:
          0.028551972 = sum of:
            0.028551972 = weight(_text_:22 in 1416) [ClassicSimilarity], result of:
              0.028551972 = score(doc=1416,freq=2.0), product of:
                0.12299426 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035122856 = queryNorm
                0.23214069 = fieldWeight in 1416, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1416)
          0.5 = coord(1/2)
      0.5 = coord(6/12)
    
    Abstract
    This paper reports on the preliminary findings of a study that explores mental associations made by novices viewing art images. In a controlled environment, 20 Taiwanese college students responded to the question "What does the painting remind you of?" after viewing each digitized image of 15 oil paintings by a famous Taiwanese artist. Rather than focusing on the representation or interpretation of art, the study attempted to solicit information about how non-experts are stimulated by art. This paper reports on the analysis of participant responses to three of the images, and describes a 12-type taxonomy of associations that emerged from the analysis. While 9 of the types are derived and adapted from facets in the Art & Architecture Thesaurus, three new types - Artistic Influence Association, Reactive Association, and Prototype Association - are discovered. The conclusion briefly discusses both the significance of the findings and the implications for future research.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  6. Sigel, A.: How can user-oriented depth analysis be constructively guided? (2000) 0.05
    0.04671897 = product of:
      0.09343794 = sum of:
        0.0059192767 = weight(_text_:information in 133) [ClassicSimilarity], result of:
          0.0059192767 = score(doc=133,freq=4.0), product of:
            0.0616574 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.035122856 = queryNorm
            0.0960027 = fieldWeight in 133, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=133)
        0.008292837 = weight(_text_:for in 133) [ClassicSimilarity], result of:
          0.008292837 = score(doc=133,freq=6.0), product of:
            0.06594466 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.035122856 = queryNorm
            0.12575449 = fieldWeight in 133, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.02734375 = fieldNorm(doc=133)
        0.01621478 = weight(_text_:the in 133) [ClassicSimilarity], result of:
          0.01621478 = score(doc=133,freq=46.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.29260322 = fieldWeight in 133, product of:
              6.78233 = tf(freq=46.0), with freq of:
                46.0 = termFreq=46.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.02734375 = fieldNorm(doc=133)
        0.013693865 = weight(_text_:of in 133) [ClassicSimilarity], result of:
          0.013693865 = score(doc=133,freq=34.0), product of:
            0.054923624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.035122856 = queryNorm
            0.2493256 = fieldWeight in 133, product of:
              5.8309517 = tf(freq=34.0), with freq of:
                34.0 = termFreq=34.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.02734375 = fieldNorm(doc=133)
        0.01621478 = weight(_text_:the in 133) [ClassicSimilarity], result of:
          0.01621478 = score(doc=133,freq=46.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.29260322 = fieldWeight in 133, product of:
              6.78233 = tf(freq=46.0), with freq of:
                46.0 = termFreq=46.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.02734375 = fieldNorm(doc=133)
        0.0331024 = product of:
          0.0662048 = sum of:
            0.0662048 = weight(_text_:communities in 133) [ClassicSimilarity], result of:
              0.0662048 = score(doc=133,freq=6.0), product of:
                0.18632571 = queryWeight, product of:
                  5.3049703 = idf(docFreq=596, maxDocs=44218)
                  0.035122856 = queryNorm
                0.35531756 = fieldWeight in 133, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.3049703 = idf(docFreq=596, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=133)
          0.5 = coord(1/2)
      0.5 = coord(6/12)
    
    Abstract
    It is vital for library and information science to understand the subject indexing process thoroughly. However, document analysis, the first and most important step in indexing, has not received sufficient attention. As this is an exceptionally hard problem, we still do not have a sound indexing theory at our disposal. Therefore we have difficulties in teaching indexing and in explaining why a given subject representation is "better" than another. Technological advancements have not helped to close this fundamental gap. To proceed, we should ask the right questions instead. Several types of indexer inconsistencies can be explained as acceptable, yet different conceptualizations which result from the variety of groups dealing with a problem from their respective viewpoints. Multiple indexed documents are regarded as the normal case. Intersubjectively replicable indexing results are often questionable or do not constitute interesting cases of indexing at all. In the context of my ongoing dissertation, in which I intend to develop an enhanced indexing theory by investigating improvements within a social sciences domain, this paper explains user-oriented selective depth analysis and why I chose that configuration. Strongly influenced by Mai's dissertation, I also communicate my first insights concerning current indexing theories. I agree that I cannot ignore epistemological stances and philosophical issues in language and meaning related to indexing, and accept the openness of the interpretive nature of the indexing process. Although I present arguments against the employment of an indexing language as well, it is still indispensable in situations which demand easier access and control by devices. Despite the enormous difficulties the user-oriented and selective depth analysis poses, I argue that it is both feasible and useful if one achieves careful guidance of the possible interpretations. There is some hope because the number of useful interpretations is limited: every summary is tailored to a purpose, audience and situation. Domain, discourse and social practice entail additional constraints. A pluralistic method mix that focusses on ecologically valid, holistic contexts and employs qualitative methods is recommended. Domain analysis urgently has to be made more practical and applicable. Only then will we be able to investigate domains empirically in order to identify their structures shaped by the corresponding discourse communities. We plan to represent the recognized problem structures and indexing questions of relevance to a small domain in formal, ontological computer models -- if we can find such stable knowledge structures. This would allow us to tailor summaries dynamically for user communities. For practical purposes we suggest assuming a less demanding position than Hjorland's "totality of the epistemological potential". It is sufficient that we identify and represent iteratively the information needs of today's user groups in interactive knowledge-based systems. The best way to formalize such knowledge gained about discourse communities is, however, unknown. Indexers should stay in direct contact with the community they serve, or be part of it, to ensure agreement with their viewpoints. Checklist/request-oriented indexing could be very helpful, but it remains to be demonstrated how well it will be applicable in the social sciences. A frame-based representation or at least a sophisticated grouping of terms could help to express relational knowledge structures.
There remains much work to do, since in practice no one has yet shown how such an improved indexing system would work and whether the indexing results would really be "better".
    Source
    Dynamism and stability in knowledge organization: Proceedings of the 6th International ISKO-Conference, 10-13 July 2000, Toronto, Canada. Ed.: C. Beghtol et al
  7. Pozzi de Sousa, B.; Ortega, C.D.: Aspects regarding the notion of subject in the context of different theoretical trends : teaching approaches in Brazil (2018) 0.04
    0.04468522 = product of:
      0.08937044 = sum of:
        0.016439613 = product of:
          0.049318835 = sum of:
            0.049318835 = weight(_text_:f in 4707) [ClassicSimilarity], result of:
              0.049318835 = score(doc=4707,freq=2.0), product of:
                0.13999219 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.035122856 = queryNorm
                0.35229704 = fieldWeight in 4707, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4707)
          0.33333334 = coord(1/3)
        0.009566996 = weight(_text_:information in 4707) [ClassicSimilarity], result of:
          0.009566996 = score(doc=4707,freq=2.0), product of:
            0.0616574 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.035122856 = queryNorm
            0.1551638 = fieldWeight in 4707, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=4707)
        0.015476737 = weight(_text_:for in 4707) [ClassicSimilarity], result of:
          0.015476737 = score(doc=4707,freq=4.0), product of:
            0.06594466 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.035122856 = queryNorm
            0.23469281 = fieldWeight in 4707, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0625 = fieldNorm(doc=4707)
        0.01545607 = weight(_text_:the in 4707) [ClassicSimilarity], result of:
          0.01545607 = score(doc=4707,freq=8.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.27891195 = fieldWeight in 4707, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0625 = fieldNorm(doc=4707)
        0.016974952 = weight(_text_:of in 4707) [ClassicSimilarity], result of:
          0.016974952 = score(doc=4707,freq=10.0), product of:
            0.054923624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.035122856 = queryNorm
            0.3090647 = fieldWeight in 4707, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=4707)
        0.01545607 = weight(_text_:the in 4707) [ClassicSimilarity], result of:
          0.01545607 = score(doc=4707,freq=8.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.27891195 = fieldWeight in 4707, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0625 = fieldNorm(doc=4707)
      0.5 = coord(6/12)
    
    Source
    Challenges and opportunities for knowledge organization in the digital age: proceedings of the Fifteenth International ISKO Conference, 9-11 July 2018, Porto, Portugal / organized by: International Society for Knowledge Organization (ISKO), ISKO Spain and Portugal Chapter, University of Porto - Faculty of Arts and Humanities, Research Centre in Communication, Information and Digital Culture (CIC.digital) - Porto. Eds.: F. Ribeiro and M.E. Cerveira
  8. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.04
    0.04376505 = product of:
      0.0875301 = sum of:
        0.009566996 = weight(_text_:information in 5830) [ClassicSimilarity], result of:
          0.009566996 = score(doc=5830,freq=2.0), product of:
            0.0616574 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.035122856 = queryNorm
            0.1551638 = fieldWeight in 5830, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=5830)
        0.02188741 = weight(_text_:for in 5830) [ClassicSimilarity], result of:
          0.02188741 = score(doc=5830,freq=8.0), product of:
            0.06594466 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.035122856 = queryNorm
            0.33190575 = fieldWeight in 5830, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0625 = fieldNorm(doc=5830)
        0.010929092 = weight(_text_:the in 5830) [ClassicSimilarity], result of:
          0.010929092 = score(doc=5830,freq=4.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.19722053 = fieldWeight in 5830, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0625 = fieldNorm(doc=5830)
        0.015182858 = weight(_text_:of in 5830) [ClassicSimilarity], result of:
          0.015182858 = score(doc=5830,freq=8.0), product of:
            0.054923624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.035122856 = queryNorm
            0.27643585 = fieldWeight in 5830, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=5830)
        0.010929092 = weight(_text_:the in 5830) [ClassicSimilarity], result of:
          0.010929092 = score(doc=5830,freq=4.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.19722053 = fieldWeight in 5830, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0625 = fieldNorm(doc=5830)
        0.019034648 = product of:
          0.038069297 = sum of:
            0.038069297 = weight(_text_:22 in 5830) [ClassicSimilarity], result of:
              0.038069297 = score(doc=5830,freq=2.0), product of:
                0.12299426 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035122856 = queryNorm
                0.30952093 = fieldWeight in 5830, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5830)
          0.5 = coord(1/2)
      0.5 = coord(6/12)
    
    Abstract
    This paper examines various issues that arise in establishing a theoretical basis for an experimental fiction analysis system. It analyzes the warrants of fiction and of works about fiction. From this analysis, it derives classificatory requirements for a fiction system. Classificatory techniques that may contribute to the specification of data elements in fiction are suggested.
    Date
    5. 8.2006 13:22:08
    Source
    Classification research for knowledge representation and organization. Proc. 5th Int. Study Conf. on Classification Research, Toronto, Canada, 24.-28.6.1991. Ed. by N.J. Williamson and M. Hudon
  9. Fairthorne, R.A.: Temporal structure in bibliographic classification (1985) 0.04
    0.043206647 = product of:
      0.086413294 = sum of:
        0.0035876236 = weight(_text_:information in 3651) [ClassicSimilarity], result of:
          0.0035876236 = score(doc=3651,freq=2.0), product of:
            0.0616574 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.035122856 = queryNorm
            0.058186423 = fieldWeight in 3651, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3651)
        0.02540392 = weight(_text_:dokumentation in 3651) [ClassicSimilarity], result of:
          0.02540392 = score(doc=3651,freq=2.0), product of:
            0.16407113 = queryWeight, product of:
              4.671349 = idf(docFreq=1124, maxDocs=44218)
              0.035122856 = queryNorm
            0.1548348 = fieldWeight in 3651, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.671349 = idf(docFreq=1124, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3651)
        0.0058037764 = weight(_text_:for in 3651) [ClassicSimilarity], result of:
          0.0058037764 = score(doc=3651,freq=4.0), product of:
            0.06594466 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.035122856 = queryNorm
            0.088009804 = fieldWeight in 3651, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3651)
        0.01738808 = weight(_text_:the in 3651) [ClassicSimilarity], result of:
          0.01738808 = score(doc=3651,freq=72.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.31377596 = fieldWeight in 3651, product of:
              8.485281 = tf(freq=72.0), with freq of:
                72.0 = termFreq=72.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3651)
        0.016841812 = weight(_text_:of in 3651) [ClassicSimilarity], result of:
          0.016841812 = score(doc=3651,freq=70.0), product of:
            0.054923624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.035122856 = queryNorm
            0.3066406 = fieldWeight in 3651, product of:
              8.3666 = tf(freq=70.0), with freq of:
                70.0 = termFreq=70.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3651)
        0.01738808 = weight(_text_:the in 3651) [ClassicSimilarity], result of:
          0.01738808 = score(doc=3651,freq=72.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.31377596 = fieldWeight in 3651, product of:
              8.485281 = tf(freq=72.0), with freq of:
                72.0 = termFreq=72.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3651)
      0.5 = coord(6/12)
    
    Abstract
    This paper, presented at the Ottawa Conference on the Conceptual Basis of the Classification of Knowledge in 1971, is one of Fairthorne's more perceptive works and deserves a wide audience, especially as it breaks new ground in classification theory. In discussing the notion of discourse, he makes a "distinction between what discourse mentions and what discourse is about" [emphasis added], considered as a "fundamental factor to the relativistic nature of bibliographic classification" (p. 360). A table of mathematical functions, for example, describes exactly something represented by a collection of digits, but, without a preface, this table does not fit into a broader context. Some indication of the author's intent is needed to fit the table into a broader context. This intent may appear in a title, chapter heading, class number or some other aid. Discourse on and discourse about something "cannot be determined solely from what it mentions" (p. 361). Some kind of background is needed. Fairthorne further develops the theme that knowledge about a subject comes from previous knowledge, thus adding a temporal factor to classification. "Some extra textual criteria are needed" in order to classify (p. 362). For example, "documents that mention the same things, but are on different topics, will have different ancestors, in the sense of preceding documents to which they are linked by various bibliographic characteristics ... [and] ... they will have different descendants" (p. 363). The classifier has to distinguish between documents that "mention exactly the same thing" but are not about the same thing. The classifier does this by classifying "sets of documents that form their histories, their bibliographic world lines" (p. 363). The practice of citation is one method of performing the linking and presents a "fan" of documents connected by a chain of citations to past work. The fan is seen as the effect of generations of documents - each generation connected to the previous one, and all ancestral to the present document. Thus, there are levels in temporal structure - that is, antecedent and successor documents - and these require that documents be identified in relation to other documents. This gives a set of documents an "irrevocable order," a loose order which Fairthorne calls "bibliographic time," and which is "generated by the fact of continual growth" (p. 364). He does not consider "bibliographic time" to be an equivalent to physical time because bibliographic events, as part of communication, require delay. Sets of documents, as indicated above, rather than single works, are used in classification. While an event, a person, or a unique feature of the environment may create a class of one - such as the French Revolution, Napoleon, Niagara Falls - revolutions, emperors, and waterfalls are sets which, as sets, will subsume individuals and make normal classes.
    The fan of past documents may be seen across time as a philosophical "wake," translated documents as a sideways relationship, and future documents as another fan spreading forward from a given document (p. 365). The "overlap of reading histories can be used to detect common interests among readers" (p. 365), and readers may be classified accordingly. Finally, Fairthorne rejects the notion of a "general" classification, which he regards as a mirage, to be replaced by a citation-type network to identify classes. An interesting feature of his work lies in his linkage between old and new documents via a bibliographic method - citations, authors' names, imprints, style, and vocabulary - rather than topical (subject) terms. This is an indirect method of creating classes. The subject (aboutness) is conceived as a finite, common sharing of knowledge over time (past, present, and future) as opposed to the more common hierarchy of topics in an infinite schema assumed to be universally useful. Fairthorne, a mathematician by training, is a prolific writer on the foundations of classification and information. His professional career includes work with the Royal Engineers Chemical Warfare Section and the Royal Aircraft Establishment (RAE). He was the founder of the Computing Unit which became the RAE Mathematics Department.
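    Fairthorne's backward "fan" is, in modern terms, the ancestor set of a document in a citation graph, and his "bibliographic world line" is the path of such links through time. Purely as an illustration of that idea (the data and function below are invented for the example, not taken from Fairthorne or the sourcebook), a breadth-first walk over cited works collects the fan:
      from collections import deque

      # cites[d] = the documents that d cites, i.e. its immediate ancestors (toy data).
      cites = {
          "D4": ["D2", "D3"],
          "D3": ["D1"],
          "D2": ["D1"],
          "D1": [],
      }

      def citation_fan(doc, cites):
          """Collect all antecedent documents reachable through citation links."""
          seen, queue = set(), deque([doc])
          while queue:
              for parent in cites.get(queue.popleft(), []):
                  if parent not in seen:
                      seen.add(parent)
                      queue.append(parent)
          return seen

      print(citation_fan("D4", cites))   # {'D1', 'D2', 'D3'}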
    Footnote
    Original in: Ottawa Conference on the Conceptual Basis of the Classification of Knowledge, Ottawa, 1971. Ed.: Jerzy A. Wojciechowski. Pullach: Verlag Dokumentation 1974. S.404-412.
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
  10. Chen, H.; Ng, T.: An algorithmic approach to concept exploration in a large knowledge network (automatic thesaurus consultation) : symbolic branch-and-bound search versus connectionist Hopfield Net Activation (1995) 0.04
    0.041875385 = product of:
      0.08375077 = sum of:
        0.007175247 = weight(_text_:information in 2203) [ClassicSimilarity], result of:
          0.007175247 = score(doc=2203,freq=2.0), product of:
            0.0616574 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.035122856 = queryNorm
            0.116372846 = fieldWeight in 2203, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2203)
        0.011437457 = weight(_text_:und in 2203) [ClassicSimilarity], result of:
          0.011437457 = score(doc=2203,freq=2.0), product of:
            0.07784514 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.035122856 = queryNorm
            0.14692576 = fieldWeight in 2203, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=2203)
        0.016415559 = weight(_text_:for in 2203) [ClassicSimilarity], result of:
          0.016415559 = score(doc=2203,freq=8.0), product of:
            0.06594466 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.035122856 = queryNorm
            0.24892932 = fieldWeight in 2203, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=2203)
        0.01738808 = weight(_text_:the in 2203) [ClassicSimilarity], result of:
          0.01738808 = score(doc=2203,freq=18.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.31377596 = fieldWeight in 2203, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.046875 = fieldNorm(doc=2203)
        0.013946345 = weight(_text_:of in 2203) [ClassicSimilarity], result of:
          0.013946345 = score(doc=2203,freq=12.0), product of:
            0.054923624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.035122856 = queryNorm
            0.25392252 = fieldWeight in 2203, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2203)
        0.01738808 = weight(_text_:the in 2203) [ClassicSimilarity], result of:
          0.01738808 = score(doc=2203,freq=18.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.31377596 = fieldWeight in 2203, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.046875 = fieldNorm(doc=2203)
      0.5 = coord(6/12)
    
    Abstract
    Presents a framework for knowledge discovery and concept exploration. In order to enhance the concept exploration capability of knowledge-based systems and to alleviate the limitation of the manual browsing approach, develops 2 spreading activation based algorithms for concept exploration in large, heterogeneous networks of concepts (e.g. multiple thesauri). One algorithm, which is based on the symbolic AI paradigm, performs a conventional branch-and-bound search on a semantic net representation to identify other highly relevant concepts (a serial, optimal search process). The 2nd algorithm, which is based on the neural network approach, executes the Hopfield net parallel relaxation and convergence process to identify 'convergent' concepts for some initial queries (a parallel, heuristic search process). Tests these 2 algorithms on a large text-based knowledge network of about 13,000 nodes (terms) and 80,000 directed links in the area of computing technologies.
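    The abstract describes two traversal strategies over a thesaurus-like concept network. The sketch below is only a minimal illustration of the spreading-activation idea (activation flows from the query terms along weighted links and weakly activated concepts are dropped); it is not the authors' branch-and-bound or Hopfield-net implementation, and the link data, weights and thresholds are invented for the example.
      # Toy weighted, directed thesaurus links (invented data).
      links = {
          "information retrieval": {"indexing": 0.8, "search engines": 0.6},
          "indexing": {"thesauri": 0.7, "subject analysis": 0.5},
          "search engines": {"ranking": 0.9},
      }

      def spread(query_terms, links, decay=0.8, threshold=0.3, rounds=3):
          # Start with full activation on the query terms, then repeatedly push
          # a decayed share of each node's activation to its neighbours.
          activation = {t: 1.0 for t in query_terms}
          for _ in range(rounds):
              updated = dict(activation)
              for node, act in activation.items():
                  for neighbour, weight in links.get(node, {}).items():
                      updated[neighbour] = max(updated.get(neighbour, 0.0), act * weight * decay)
              activation = updated
          return {concept: a for concept, a in activation.items() if a >= threshold}

      print(spread(["information retrieval"], links))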
    Source
    Journal of the American Society for Information Science. 46(1995) no.5, S.348-369
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  11. Xie, H.; Li, X.; Wang, T.; Lau, R.Y.K.; Wong, T.-L.; Chen, L.; Wang, F.L.; Li, Q.: Incorporating sentiment into tag-based user profiles and resource profiles for personalized search in folksonomy (2016) 0.04
    0.040540468 = product of:
      0.081080936 = sum of:
        0.009566996 = weight(_text_:information in 2671) [ClassicSimilarity], result of:
          0.009566996 = score(doc=2671,freq=8.0), product of:
            0.0616574 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.035122856 = queryNorm
            0.1551638 = fieldWeight in 2671, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=2671)
        0.0077383686 = weight(_text_:for in 2671) [ClassicSimilarity], result of:
          0.0077383686 = score(doc=2671,freq=4.0), product of:
            0.06594466 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.035122856 = queryNorm
            0.117346406 = fieldWeight in 2671, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.03125 = fieldNorm(doc=2671)
        0.014965276 = weight(_text_:the in 2671) [ClassicSimilarity], result of:
          0.014965276 = score(doc=2671,freq=30.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.27005535 = fieldWeight in 2671, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.03125 = fieldNorm(doc=2671)
        0.012003103 = weight(_text_:of in 2671) [ClassicSimilarity], result of:
          0.012003103 = score(doc=2671,freq=20.0), product of:
            0.054923624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.035122856 = queryNorm
            0.21854173 = fieldWeight in 2671, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=2671)
        0.014965276 = weight(_text_:the in 2671) [ClassicSimilarity], result of:
          0.014965276 = score(doc=2671,freq=30.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.27005535 = fieldWeight in 2671, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.03125 = fieldNorm(doc=2671)
        0.021841917 = product of:
          0.043683834 = sum of:
            0.043683834 = weight(_text_:communities in 2671) [ClassicSimilarity], result of:
              0.043683834 = score(doc=2671,freq=2.0), product of:
                0.18632571 = queryWeight, product of:
                  5.3049703 = idf(docFreq=596, maxDocs=44218)
                  0.035122856 = queryNorm
                0.23444878 = fieldWeight in 2671, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.3049703 = idf(docFreq=596, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2671)
          0.5 = coord(1/2)
      0.5 = coord(6/12)
    
    Abstract
    In recent years, there has been a rapid growth of user-generated data in collaborative tagging (a.k.a. folksonomy-based) systems due to the prevalence of Web 2.0 communities. To effectively assist users in finding their desired resources, it is critical to understand user behaviors and preferences. Tag-based profile techniques, which model users and resources by a vector of relevant tags, are widely employed in folksonomy-based systems. This is mainly because personalized search and recommendations can be facilitated by measuring the relevance between user profiles and resource profiles. However, conventional measurements neglect the sentiment aspect of user-generated tags. In fact, tags can be very emotional and subjective, as users usually express their perceptions and feelings about the resources by tags. Therefore, it is necessary to take sentiment relevance into account in these measurements. In this paper, we present a novel generic framework, SenticRank, to incorporate various kinds of sentiment information into sentiment-based user profiles and resource profiles for personalized search. In this framework, content-based sentiment ranking and collaborative sentiment ranking methods are proposed to obtain sentiment-based personalized ranking. To the best of our knowledge, this is the first work to integrate sentiment information in addressing the problem of personalized tag-based search in collaborative tagging systems. Moreover, we compare the proposed sentiment-based personalized search with baselines in the experiments, the results of which have verified the effectiveness of the proposed framework. In addition, we study the influence of popular sentiment dictionaries, and find that SenticNet is the most prominent knowledge base for boosting the performance of personalized search in folksonomies.
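    The core operation the abstract relies on is measuring relevance between a tag-weight vector for the user and one for the resource, with tag sentiment folded into the weights. The following sketch shows that idea only in its most generic form; it is not the SenticRank framework, and the sentiment lexicon, profiles and weighting scheme are invented for illustration.
      import math

      def sentiment_weighted(profile, lexicon):
          # Scale each tag weight by its sentiment score in [-1, 1], shifted to stay positive.
          return {tag: w * (1.0 + lexicon.get(tag, 0.0)) for tag, w in profile.items()}

      def cosine(a, b):
          dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
          na = math.sqrt(sum(v * v for v in a.values()))
          nb = math.sqrt(sum(v * v for v in b.values()))
          return dot / (na * nb) if na and nb else 0.0

      lexicon = {"boring": -0.8, "beautiful": 0.9}   # toy sentiment lexicon
      user = sentiment_weighted({"painting": 3.0, "beautiful": 2.0}, lexicon)
      resource = sentiment_weighted({"painting": 1.0, "boring": 4.0}, lexicon)

      # Higher cosine similarity = resource ranked higher for this user.
      print(cosine(user, resource))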
    Source
    Information processing and management. 52(2016) no.1, S.61-72
  12. Tibbo, H.R.: Abstracting across the disciplines : a content analysis of abstracts for the natural sciences, the social sciences, and the humanities with implications for abstracting standards and online information retrieval (1992) 0.04
    0.04037315 = product of:
      0.09689556 = sum of:
        0.013529775 = weight(_text_:information in 2536) [ClassicSimilarity], result of:
          0.013529775 = score(doc=2536,freq=4.0), product of:
            0.0616574 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.035122856 = queryNorm
            0.21943474 = fieldWeight in 2536, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=2536)
        0.018955056 = weight(_text_:for in 2536) [ClassicSimilarity], result of:
          0.018955056 = score(doc=2536,freq=6.0), product of:
            0.06594466 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.035122856 = queryNorm
            0.28743884 = fieldWeight in 2536, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0625 = fieldNorm(doc=2536)
        0.025630994 = weight(_text_:the in 2536) [ClassicSimilarity], result of:
          0.025630994 = score(doc=2536,freq=22.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.46252316 = fieldWeight in 2536, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0625 = fieldNorm(doc=2536)
        0.013148742 = weight(_text_:of in 2536) [ClassicSimilarity], result of:
          0.013148742 = score(doc=2536,freq=6.0), product of:
            0.054923624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.035122856 = queryNorm
            0.23940048 = fieldWeight in 2536, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=2536)
        0.025630994 = weight(_text_:the in 2536) [ClassicSimilarity], result of:
          0.025630994 = score(doc=2536,freq=22.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.46252316 = fieldWeight in 2536, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0625 = fieldNorm(doc=2536)
      0.41666666 = coord(5/12)
    
    Abstract
    Reports on a comparison of the "content categories" listed in the ANSI/ISO abstracting standards to actual content found in abstracts from the sciences, social sciences, and the humanities. The preliminary findings question the fundamental concept underlying these standards, namely, that any one set of standards and generalized instructions can describe and elicit the optimal configuration for abstracts from all subject areas
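    As a rough illustration of the coding approach described above, the sketch below tallies which standard abstract components appear in abstracts from different disciplines; the component list loosely follows ANSI-style guidance (purpose, methods, results, conclusions), and the keyword cues and sample texts are assumptions for the example, not Tibbo's actual coding scheme:

      # Count, per discipline, how many abstracts contain each content category,
      # using simple keyword cues as a stand-in for manual content analysis.
      import re
      from collections import defaultdict

      CUES = {
          "purpose": r"\b(aim|purpose|objective)\b",
          "methods": r"\b(method|survey|experiment|analysis)\b",
          "results": r"\b(results?|findings?|show(s|ed)?)\b",
          "conclusions": r"\b(conclude|conclusion|suggest)\b",
      }

      def code_abstract(text):
          """Return the set of content categories detected in one abstract."""
          return {c for c, pattern in CUES.items() if re.search(pattern, text, re.I)}

      def tally(abstracts_by_discipline):
          """Per-discipline counts of abstracts containing each content category."""
          counts = defaultdict(lambda: defaultdict(int))
          for discipline, abstracts in abstracts_by_discipline.items():
              for text in abstracts:
                  for category in code_abstract(text):
                      counts[discipline][category] += 1
          return {d: dict(c) for d, c in counts.items()}

      sample = {"humanities": ["The purpose of this essay is to trace ..."],
                "sciences": ["Results of the experiment show that ..."]}
      print(tally(sample))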
    Source
    Library and information science research. 14(1992) no.1, S.31-56
  13. Farrow, J.: All in the mind : concept analysis in indexing (1995) 0.04
    0.037596628 = product of:
      0.09023191 = sum of:
        0.009566996 = weight(_text_:information in 2926) [ClassicSimilarity], result of:
          0.009566996 = score(doc=2926,freq=2.0), product of:
            0.0616574 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.035122856 = queryNorm
            0.1551638 = fieldWeight in 2926, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=2926)
        0.015476737 = weight(_text_:for in 2926) [ClassicSimilarity], result of:
          0.015476737 = score(doc=2926,freq=4.0), product of:
            0.06594466 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.035122856 = queryNorm
            0.23469281 = fieldWeight in 2926, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0625 = fieldNorm(doc=2926)
        0.021858184 = weight(_text_:the in 2926) [ClassicSimilarity], result of:
          0.021858184 = score(doc=2926,freq=16.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.39444107 = fieldWeight in 2926, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0625 = fieldNorm(doc=2926)
        0.021471804 = weight(_text_:of in 2926) [ClassicSimilarity], result of:
          0.021471804 = score(doc=2926,freq=16.0), product of:
            0.054923624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.035122856 = queryNorm
            0.39093933 = fieldWeight in 2926, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=2926)
        0.021858184 = weight(_text_:the in 2926) [ClassicSimilarity], result of:
          0.021858184 = score(doc=2926,freq=16.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.39444107 = fieldWeight in 2926, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0625 = fieldNorm(doc=2926)
      0.41666666 = coord(5/12)
    
    Abstract
    The indexing process consists of the comprehension of the document to be indexed, followed by the production of a set of index terms. Differences between academic indexing and back-of-the-book indexing are discussed. Text comprehension is a branch of human information processing, and it is argued that the model of text comprehension and production developed by van Dijk and Kintsch can form the basis for a cognitive process model of indexing. Strategies for testing such a model are suggested
  14. Bertola, F.; Patti, V.: Ontology-based affective models to organize artworks in the social semantic web (2016) 0.04
    0.037567567 = product of:
      0.075135134 = sum of:
        0.010274758 = product of:
          0.030824272 = sum of:
            0.030824272 = weight(_text_:f in 2669) [ClassicSimilarity], result of:
              0.030824272 = score(doc=2669,freq=2.0), product of:
                0.13999219 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.035122856 = queryNorm
                0.22018565 = fieldWeight in 2669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2669)
          0.33333334 = coord(1/3)
        0.008456109 = weight(_text_:information in 2669) [ClassicSimilarity], result of:
          0.008456109 = score(doc=2669,freq=4.0), product of:
            0.0616574 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.035122856 = queryNorm
            0.13714671 = fieldWeight in 2669, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2669)
        0.006839816 = weight(_text_:for in 2669) [ClassicSimilarity], result of:
          0.006839816 = score(doc=2669,freq=2.0), product of:
            0.06594466 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.035122856 = queryNorm
            0.103720546 = fieldWeight in 2669, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2669)
        0.018072287 = weight(_text_:the in 2669) [ClassicSimilarity], result of:
          0.018072287 = score(doc=2669,freq=28.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.3261228 = fieldWeight in 2669, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2669)
        0.013419878 = weight(_text_:of in 2669) [ClassicSimilarity], result of:
          0.013419878 = score(doc=2669,freq=16.0), product of:
            0.054923624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.035122856 = queryNorm
            0.24433708 = fieldWeight in 2669, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2669)
        0.018072287 = weight(_text_:the in 2669) [ClassicSimilarity], result of:
          0.018072287 = score(doc=2669,freq=28.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.3261228 = fieldWeight in 2669, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2669)
      0.5 = coord(6/12)
    
    Abstract
    In this paper, we focus on applying sentiment analysis to resources from online art collections, by exploiting, as an information source, tags intended as textual traces that visitors leave to comment on artworks on social platforms. We present a framework where methods and tools from a set of disciplines, ranging from the Semantic and Social Web to Natural Language Processing, provide us with the building blocks for creating a semantic social space to organize artworks according to an ontology of emotions. The ontology is inspired by Plutchik's circumplex model, a well-founded psychological model of human emotions. Users can be involved in the creation of the emotional space through a graphical interactive interface. The development of such a semantic space enables new ways of accessing and exploring art collections. The affective categorization model and the emotion detection output are encoded in W3C ontology languages. This gives us the twofold advantage of enabling tractable reasoning on detected emotions and related artworks, and of fostering the interoperability and integration of tools developed in the Semantic Web and Linked Data community. The proposal has been evaluated against a real-world case study, a dataset of tagged multimedia artworks from the ArsMeteo Italian online collection, and validated through a user study.
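    A minimal sketch of the affective organization the abstract describes, assuming a toy tag-to-emotion mapping with Plutchik's eight basic emotions as category labels; the mapping, function names, and sample collection are illustrative assumptions, not the paper's OWL ontology or its emotion-detection pipeline:

      # Group artworks by the dominant basic emotion evoked by their tags.
      from collections import Counter

      PLUTCHIK = ["joy", "trust", "fear", "surprise", "sadness", "disgust", "anger", "anticipation"]
      TAG_TO_EMOTION = {"serene": "joy", "gloomy": "sadness", "eerie": "fear", "vivid": "joy"}

      def emotion_profile(tags):
          """Counter of basic emotions evoked by the tags attached to one artwork."""
          return Counter(TAG_TO_EMOTION[t] for t in tags if t in TAG_TO_EMOTION)

      def organize(artworks):
          """Map each basic emotion to the artworks whose dominant emotion it is."""
          space = {emotion: [] for emotion in PLUTCHIK}
          for title, tags in artworks.items():
              profile = emotion_profile(tags)
              if profile:
                  space[profile.most_common(1)[0][0]].append(title)
          return space

      collection = {"Nocturne": ["gloomy", "eerie"], "Spring Field": ["serene", "vivid"]}
      print(organize(collection))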
    Source
    Information processing and management. 52(2016) no.1, S.139-162
  15. Svenonius, E.: Access to nonbook materials : the limits of subject indexing for visual and aural languages (1994) 0.04
    0.036628757 = product of:
      0.08790902 = sum of:
        0.009566996 = weight(_text_:information in 8263) [ClassicSimilarity], result of:
          0.009566996 = score(doc=8263,freq=2.0), product of:
            0.0616574 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.035122856 = queryNorm
            0.1551638 = fieldWeight in 8263, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=8263)
        0.02188741 = weight(_text_:for in 8263) [ClassicSimilarity], result of:
          0.02188741 = score(doc=8263,freq=8.0), product of:
            0.06594466 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.035122856 = queryNorm
            0.33190575 = fieldWeight in 8263, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0625 = fieldNorm(doc=8263)
        0.018929742 = weight(_text_:the in 8263) [ClassicSimilarity], result of:
          0.018929742 = score(doc=8263,freq=12.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.34159598 = fieldWeight in 8263, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0625 = fieldNorm(doc=8263)
        0.018595127 = weight(_text_:of in 8263) [ClassicSimilarity], result of:
          0.018595127 = score(doc=8263,freq=12.0), product of:
            0.054923624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.035122856 = queryNorm
            0.33856338 = fieldWeight in 8263, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=8263)
        0.018929742 = weight(_text_:the in 8263) [ClassicSimilarity], result of:
          0.018929742 = score(doc=8263,freq=12.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.34159598 = fieldWeight in 8263, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0625 = fieldNorm(doc=8263)
      0.41666666 = coord(5/12)
    
    Abstract
    An examination of some nonbook materials with respect to an aboutness model of indexing leads to the conclusion that there are instances that defy subject indexing. These occur not so much because of the nature of the medium per se but because it is being used for nondocumentary purposes, or, when being used for such purposes, the subject referenced is nonlexical
    Source
    Journal of the American Society for Information Science. 45(1994) no.8, S.600-606
  16. Sauperl, A.: Subject determination during the cataloging process : the development of a system based on theoretical principles (2002) 0.04
    0.036555033 = product of:
      0.07311007 = sum of:
        0.0035876236 = weight(_text_:information in 2293) [ClassicSimilarity], result of:
          0.0035876236 = score(doc=2293,freq=2.0), product of:
            0.0616574 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.035122856 = queryNorm
            0.058186423 = fieldWeight in 2293, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2293)
        0.010857872 = weight(_text_:for in 2293) [ClassicSimilarity], result of:
          0.010857872 = score(doc=2293,freq=14.0), product of:
            0.06594466 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.035122856 = queryNorm
            0.16465127 = fieldWeight in 2293, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2293)
        0.018098084 = weight(_text_:the in 2293) [ClassicSimilarity], result of:
          0.018098084 = score(doc=2293,freq=78.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.32658833 = fieldWeight in 2293, product of:
              8.83176 = tf(freq=78.0), with freq of:
                78.0 = termFreq=78.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2293)
        0.0153304115 = weight(_text_:of in 2293) [ClassicSimilarity], result of:
          0.0153304115 = score(doc=2293,freq=58.0), product of:
            0.054923624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.035122856 = queryNorm
            0.27912235 = fieldWeight in 2293, product of:
              7.615773 = tf(freq=58.0), with freq of:
                58.0 = termFreq=58.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2293)
        0.018098084 = weight(_text_:the in 2293) [ClassicSimilarity], result of:
          0.018098084 = score(doc=2293,freq=78.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.32658833 = fieldWeight in 2293, product of:
              8.83176 = tf(freq=78.0), with freq of:
                78.0 = termFreq=78.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2293)
        0.007137993 = product of:
          0.014275986 = sum of:
            0.014275986 = weight(_text_:22 in 2293) [ClassicSimilarity], result of:
              0.014275986 = score(doc=2293,freq=2.0), product of:
                0.12299426 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035122856 = queryNorm
                0.116070345 = fieldWeight in 2293, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=2293)
          0.5 = coord(1/2)
      0.5 = coord(6/12)
    
    Date
    27. 9.2005 14:22:19
    Footnote
    Rez. in: Knowledge organization 30(2003) no.2, S.114-115 (M. Hudon); "This most interesting contribution to the literature of subject cataloguing originates in the author's doctoral dissertation, prepared under the direction of Jerry Saye at the University of North Carolina at Chapel Hill. In seven highly readable chapters, Alenka Sauperl develops possible answers to her principal research question: How do cataloguers determine or identify the topic of a document and choose appropriate subject representations? Specific questions at the source of this research on a process which has not been a frequent object of study include: Where do cataloguers look for an overall sense of what a document is about? How do they get an overall sense of what a document is about, especially when they are not familiar with the discipline? Do they consider only one or several possible interpretations? How do they translate meanings into appropriate and valid class numbers and subject headings? Using a strictly qualitative methodology, Dr. Sauperl's research is a study of twelve cataloguers in real-life situations. The author insists on a holistic rather than purely theoretical understanding of the process she is targeting. Participants in the study were professional cataloguers with at least one year of experience in their current job at one of three large academic libraries in the Southeastern United States. All three libraries have a large central cataloguing department, and use OCLC sources and the same automated system; the context of cataloguing tasks is thus considered to be reasonably comparable. All participants were volunteers in this study, which combined two data-gathering techniques: the think-aloud method and time-line interviews. A model of the subject cataloguing process was first developed from observations of a group of six cataloguers who were asked to independently perform original cataloguing on three nonfiction, non-serial items selected from materials regularly assigned to them for processing. The model was then used for follow-up interviews. Each participant in the second group of cataloguers was invited to reflect on his/her work process for a recent challenging document they had catalogued. Results are presented in 12 stories describing as many personal approaches to subject cataloguing. From these stories a summarization is offered and a theoretical model of subject cataloguing is developed which, according to the author, represents a realistic approach to subject cataloguing. Stories alternate comments from the researcher and direct quotations from the observed or interviewed cataloguers. Not surprisingly, the participants' stories reveal similarities in the sequence and accomplishment of several tasks in the process of subject cataloguing. Sauperl's proposed model, described in Chapter 5, includes as main stages: 1) Examination of the book and subject identification; 2) Search for subject headings; 3) Classification. Chapter 6 is a hypothetical case study, using the proposed model to describe the various stages of cataloguing a hypothetical resource. ...
    This document will be particularly useful to subject cataloguing teachers and trainers who could use the model to design case descriptions and exercises. We believe it is an accurate description of the reality of subject cataloguing today. But now that we know how things are done, the next interesting question may be: Is that the best way? Is there a better, more efficient way to do things? We can only hope that Dr. Sauperl will soon provide her own view of methods and techniques that could improve the flow of work or address the cataloguers' concern about the lack of feedback on their work. Her several excellent suggestions for further research in this area all build on bits and pieces of what is done already, and stay well away from what could be done by the various actors in the area, from the designers of controlled vocabularies and authority files to those who use these tools on a daily basis to index, classify, or search for information."
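    A minimal sketch of the three-stage model summarized in the review (examine the item and identify its subject, search for subject headings, classify), assuming a toy controlled vocabulary and classification table; the data and helper names are illustrative only, not Sauperl's system or any library's actual tools:

      # Stage 1: derive candidate subject terms; Stage 2: map them to headings;
      # Stage 3: assign a class number for the first matched heading, if any.
      HEADINGS = {"climate": "Climatic changes", "jazz": "Jazz music"}
      CLASSES = {"Climatic changes": "551.6", "Jazz music": "781.65"}

      def examine(item):
          """Stage 1: candidate terms from title and table of contents."""
          return [w.lower().strip(".,;") for w in (item["title"] + " " + item["toc"]).split()]

      def search_headings(terms):
          """Stage 2: map candidate terms onto controlled subject headings."""
          return sorted({HEADINGS[t] for t in terms if t in HEADINGS})

      def classify(headings):
          """Stage 3: class number for the first matched heading, if any."""
          return CLASSES.get(headings[0]) if headings else None

      item = {"title": "Jazz in the twentieth century", "toc": "origins; styles; musicians"}
      subject_headings = search_headings(examine(item))
      print(subject_headings, classify(subject_headings))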
  17. Bland, R.N.: ¬The concept of intellectual level in cataloging and classification (1983) 0.04
    0.03616041 = product of:
      0.08678498 = sum of:
        0.013529775 = weight(_text_:information in 321) [ClassicSimilarity], result of:
          0.013529775 = score(doc=321,freq=4.0), product of:
            0.0616574 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.035122856 = queryNorm
            0.21943474 = fieldWeight in 321, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=321)
        0.010943705 = weight(_text_:for in 321) [ClassicSimilarity], result of:
          0.010943705 = score(doc=321,freq=2.0), product of:
            0.06594466 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.035122856 = queryNorm
            0.16595288 = fieldWeight in 321, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0625 = fieldNorm(doc=321)
        0.021858184 = weight(_text_:the in 321) [ClassicSimilarity], result of:
          0.021858184 = score(doc=321,freq=16.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.39444107 = fieldWeight in 321, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0625 = fieldNorm(doc=321)
        0.018595127 = weight(_text_:of in 321) [ClassicSimilarity], result of:
          0.018595127 = score(doc=321,freq=12.0), product of:
            0.054923624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.035122856 = queryNorm
            0.33856338 = fieldWeight in 321, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=321)
        0.021858184 = weight(_text_:the in 321) [ClassicSimilarity], result of:
          0.021858184 = score(doc=321,freq=16.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.39444107 = fieldWeight in 321, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0625 = fieldNorm(doc=321)
      0.41666666 = coord(5/12)
    
    Abstract
    This paper traces the history of the concept of intellectual level in cataloging and classification in the United States. Past cataloging codes, subject-heading practice, and classification systems have provided library users with little systematic information concerning the intellectual level or intended audience of works. Reasons for this omission are discussed, and arguments are developed to show that this kind of information would be a useful addition to the catalog record of the present and the future.
  18. Hutchins, W.J.: ¬The concept of 'aboutness' in subject indexing (1978) 0.04
    0.03592159 = product of:
      0.086211815 = sum of:
        0.0118385535 = weight(_text_:information in 1961) [ClassicSimilarity], result of:
          0.0118385535 = score(doc=1961,freq=4.0), product of:
            0.0616574 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.035122856 = queryNorm
            0.1920054 = fieldWeight in 1961, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1961)
        0.009575742 = weight(_text_:for in 1961) [ClassicSimilarity], result of:
          0.009575742 = score(doc=1961,freq=2.0), product of:
            0.06594466 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.035122856 = queryNorm
            0.14520876 = fieldWeight in 1961, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1961)
        0.02138342 = weight(_text_:the in 1961) [ClassicSimilarity], result of:
          0.02138342 = score(doc=1961,freq=20.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.3858737 = fieldWeight in 1961, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1961)
        0.022030683 = weight(_text_:of in 1961) [ClassicSimilarity], result of:
          0.022030683 = score(doc=1961,freq=22.0), product of:
            0.054923624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.035122856 = queryNorm
            0.40111488 = fieldWeight in 1961, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1961)
        0.02138342 = weight(_text_:the in 1961) [ClassicSimilarity], result of:
          0.02138342 = score(doc=1961,freq=20.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.3858737 = fieldWeight in 1961, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1961)
      0.41666666 = coord(5/12)
    
    Abstract
    The common view of the 'aboutness' of documents is that the index entries (or classifications) assigned to documents represent or indicate in some way the total contents of documents; indexing and classifying are seen as processes involving the 'summarization' of the texts of documents. In this paper an alternative concept of 'aboutness' is proposed based on an analysis of the linguistic organization of texts, which is felt to be more appropriate in many indexing environments (particularly in non-specialized libraries and information services) and which has implications for the evaluation of the effectiveness of indexing systems
    Footnote
    Wiederabgedruckt in: Readings in information retrieval. Ed.: K. Sparck Jones u. P. Willett. San Francisco: Morgan Kaufmann 1997. S.93-97.
  19. Bednarek, M.: Intellectual access to pictorial information (1993) 0.04
    0.035866756 = product of:
      0.086080216 = sum of:
        0.012427893 = weight(_text_:information in 5631) [ClassicSimilarity], result of:
          0.012427893 = score(doc=5631,freq=6.0), product of:
            0.0616574 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.035122856 = queryNorm
            0.20156369 = fieldWeight in 5631, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5631)
        0.016415559 = weight(_text_:for in 5631) [ClassicSimilarity], result of:
          0.016415559 = score(doc=5631,freq=8.0), product of:
            0.06594466 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.035122856 = queryNorm
            0.24892932 = fieldWeight in 5631, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=5631)
        0.020078024 = weight(_text_:the in 5631) [ClassicSimilarity], result of:
          0.020078024 = score(doc=5631,freq=24.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.36231726 = fieldWeight in 5631, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.046875 = fieldNorm(doc=5631)
        0.017080715 = weight(_text_:of in 5631) [ClassicSimilarity], result of:
          0.017080715 = score(doc=5631,freq=18.0), product of:
            0.054923624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.035122856 = queryNorm
            0.3109903 = fieldWeight in 5631, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=5631)
        0.020078024 = weight(_text_:the in 5631) [ClassicSimilarity], result of:
          0.020078024 = score(doc=5631,freq=24.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.36231726 = fieldWeight in 5631, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.046875 = fieldNorm(doc=5631)
      0.41666666 = coord(5/12)
    
    Abstract
    Visual materials represent a significantly different type of communication from textual materials and therefore present distinct challenges for the process of retrieval, especially if by retrieval we mean intellectual access to the content of images. This paper outlines the special characteristics of visual materials, focusing on their potential complexity and subjectivity, and the methods used and explored for gaining access to visual materials as reported in the literature. It concludes that methods of access to visual materials are dominated by the relatively mature systems developed for textual materials and that access methods based on visual communication are still largely in the developmental or prototype stage. Although reported research on user requirements in the retrieval of visual information is noticeably lacking, the results of at least one study indicate that the visually based retrieval methods of structured and unstructured browsing seem to be preferred for visual materials and that effective retrieval methods are ultimately related to characteristics of the enquirer and the visual information sought
  20. Rorissa, A.; Iyer, H.: Theories of cognition and image categorization : what category labels reveal about basic level theory (2008) 0.04
    0.035651516 = product of:
      0.085563645 = sum of:
        0.017575694 = weight(_text_:information in 1958) [ClassicSimilarity], result of:
          0.017575694 = score(doc=1958,freq=12.0), product of:
            0.0616574 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.035122856 = queryNorm
            0.2850541 = fieldWeight in 1958, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1958)
        0.011607553 = weight(_text_:for in 1958) [ClassicSimilarity], result of:
          0.011607553 = score(doc=1958,freq=4.0), product of:
            0.06594466 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.035122856 = queryNorm
            0.17601961 = fieldWeight in 1958, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=1958)
        0.018328644 = weight(_text_:the in 1958) [ClassicSimilarity], result of:
          0.018328644 = score(doc=1958,freq=20.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.3307489 = fieldWeight in 1958, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.046875 = fieldNorm(doc=1958)
        0.019723112 = weight(_text_:of in 1958) [ClassicSimilarity], result of:
          0.019723112 = score(doc=1958,freq=24.0), product of:
            0.054923624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.035122856 = queryNorm
            0.3591007 = fieldWeight in 1958, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=1958)
        0.018328644 = weight(_text_:the in 1958) [ClassicSimilarity], result of:
          0.018328644 = score(doc=1958,freq=20.0), product of:
            0.05541559 = queryWeight, product of:
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.035122856 = queryNorm
            0.3307489 = fieldWeight in 1958, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5777643 = idf(docFreq=24812, maxDocs=44218)
              0.046875 = fieldNorm(doc=1958)
      0.41666666 = coord(5/12)
    
    Abstract
    Information search and retrieval interactions usually involve information content in the form of document collections, information retrieval systems and interfaces, and the user. To fully understand information search and retrieval interactions between users' cognitive space and the information space, researchers need to turn to cognitive models and theories. In this article, the authors use one of these theories, the basic level theory. Use of the basic level theory to understand human categorization is both appropriate and essential to user-centered design of taxonomies, ontologies, browsing interfaces, and other indexing tools and systems. Analyses of data from two studies involving free sorting of 100 images by 105 participants were conducted. The types of categories formed and the category labels were examined. Results of the analyses indicate that image category labels generally belong to levels superordinate to the basic level, and are generic and interpretive. Implications for research on theories of cognition and categorization, and for the design of image indexing, retrieval, and browsing systems are discussed.
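    A minimal sketch of the label-coding step the abstract implies, assuming a toy three-level concept hierarchy (superordinate / basic / subordinate); the hierarchy and sample labels are assumptions for illustration, not the authors' coding procedure or data:

      # Tally free-sort category labels by their level in a small concept hierarchy.
      from collections import Counter

      LEVEL = {
          "animal": "superordinate", "vehicle": "superordinate",
          "dog": "basic", "car": "basic",
          "beagle": "subordinate", "convertible": "subordinate",
      }

      def tally_levels(labels):
          """Count how many participant-supplied labels fall at each level."""
          return Counter(LEVEL.get(label, "unknown") for label in labels)

      sort_labels = ["animal", "dog", "dog", "vehicle", "beagle", "scenery"]
      print(tally_levels(sort_labels))
      # e.g. Counter({'basic': 2, 'superordinate': 2, 'subordinate': 1, 'unknown': 1})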
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.9, S.1383-1392

Authors

Languages

  • e 132
  • d 23
  • f 2

Types

  • a 136
  • m 11
  • x 5
  • el 3
  • d 2
  • n 2
  • s 1