Search (34 results, page 1 of 2)

  • theme_ss:"Inhaltsanalyse" (content analysis)
  1. Bertola, F.; Patti, V.: Ontology-based affective models to organize artworks in the social semantic web (2016) 0.02
    0.020023016 = product of:
      0.070080556 = sum of:
        0.025943318 = weight(_text_:management in 2669) [ClassicSimilarity], result of:
          0.025943318 = score(doc=2669,freq=2.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.18620178 = fieldWeight in 2669, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2669)
        0.04413724 = weight(_text_:case in 2669) [ClassicSimilarity], result of:
          0.04413724 = score(doc=2669,freq=2.0), product of:
            0.18173204 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.041336425 = queryNorm
            0.24286987 = fieldWeight in 2669, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2669)
      0.2857143 = coord(2/7)
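These explain trees follow Lucene's ClassicSimilarity (TF-IDF) scoring; the same pattern recurs in every entry below. A minimal sketch that reproduces this first entry's numbers, assuming the standard ClassicSimilarity definitions (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and a final coord(matching/total) factor):

```python
import math

def clause_weight(freq, doc_freq, max_docs, query_norm, field_norm):
    """One weight(_text_:term) node of a ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                             # 1.4142135 for freq=2.0
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 3.3706124 for docFreq=4130
    query_weight = idf * query_norm                  # 0.13932906
    field_weight = tf * idf * field_norm             # 0.18620178
    return query_weight * field_weight               # 0.025943318

# Entry 1 (doc 2669): two of seven query clauses match, hence coord(2/7).
w_management = clause_weight(2.0, 4130, 44218, 0.041336425, 0.0390625)
w_case = clause_weight(2.0, 1480, 44218, 0.041336425, 0.0390625)
print((w_management + w_case) * 2 / 7)               # ~0.020023016
```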
    
    Abstract
    In this paper, we focus on applying sentiment analysis to resources from online art collections, exploiting as an information source the tags that visitors leave as textual traces when commenting on artworks on social platforms. We present a framework in which methods and tools from a set of disciplines, ranging from the Semantic and Social Web to Natural Language Processing, provide the building blocks for creating a semantic social space that organizes artworks according to an ontology of emotions. The ontology is inspired by Plutchik's circumplex model, a well-founded psychological model of human emotions. Users can be involved in the creation of the emotional space through a graphical interactive interface. The development of such a semantic space enables new ways of accessing and exploring art collections. The affective categorization model and the emotion detection output are encoded in W3C ontology languages. This gives us the twofold advantage of enabling tractable reasoning on detected emotions and related artworks, and of fostering the interoperability and integration of tools developed in the Semantic Web and Linked Data community. The proposal has been evaluated against a real-world case study, a dataset of tagged multimedia artworks from the ArsMeteo Italian online collection, and validated through a user study.
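As an illustration of the tag-based affective categorization the abstract describes, here is a minimal sketch; the lexicon entries and the flat emotion set are hypothetical stand-ins for the paper's actual W3C emotion ontology:

```python
# Hypothetical tag-to-emotion lookup illustrating the categorization idea;
# the real system reasons over an ontology of Plutchik's emotions.
PLUTCHIK_BASIC = {"joy", "trust", "fear", "surprise",
                  "sadness", "disgust", "anger", "anticipation"}

EMOTION_LEXICON = {      # hypothetical entries
    "serene": "joy",
    "gloomy": "sadness",
    "menacing": "fear",
}

def categorize_artwork(tags):
    """Collect the basic emotions evoked by an artwork's social tags."""
    return {EMOTION_LEXICON[t] for t in tags if t in EMOTION_LEXICON}

print(categorize_artwork(["gloomy", "oil-painting", "menacing"]))
# e.g. {'sadness', 'fear'}
```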
    Source
    Information processing and management. 52(2016) no.1, S.139-162
  2. Weimer, K.H.: ¬The nexus of subject analysis and bibliographic description : the case of multipart videos (1996) 0.02
    0.019933209 = product of:
      0.06976623 = sum of:
        0.052964687 = weight(_text_:case in 6525) [ClassicSimilarity], result of:
          0.052964687 = score(doc=6525,freq=2.0), product of:
            0.18173204 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.041336425 = queryNorm
            0.29144385 = fieldWeight in 6525, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=6525)
        0.016801544 = product of:
          0.033603087 = sum of:
            0.033603087 = weight(_text_:22 in 6525) [ClassicSimilarity], result of:
              0.033603087 = score(doc=6525,freq=2.0), product of:
                0.14475311 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041336425 = queryNorm
                0.23214069 = fieldWeight in 6525, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6525)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Source
    Cataloging and classification quarterly. 22(1996) no.2, S.5-18
  3. Raieli, R.: ¬The semantic hole : enthusiasm and caution around multimedia information retrieval (2012) 0.02
    0.016140059 = product of:
      0.056490205 = sum of:
        0.036689393 = weight(_text_:management in 4888) [ClassicSimilarity], result of:
          0.036689393 = score(doc=4888,freq=4.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.2633291 = fieldWeight in 4888, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4888)
        0.01980081 = product of:
          0.03960162 = sum of:
            0.03960162 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.03960162 = score(doc=4888,freq=4.0), product of:
                0.14475311 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041336425 = queryNorm
                0.27358043 = fieldWeight in 4888, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4888)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    This paper centres on the tools for the management of new digital documents, which are not only textual but also visual: video, audio, or multimedia in the full sense. Among its aims is to demonstrate that operating within the terms of generic Information Retrieval through textual language alone is limiting, and that it is instead necessary to consider broader criteria, such as those of MultiMedia Information Retrieval (MMIR), according to which every type of digital document can be analyzed and searched through the elements of language proper to its nature. MMIR is presented as the organic complex of the systems of Text Retrieval, Visual Retrieval, Video Retrieval, and Audio Retrieval, each of which takes an approach to information management that directly handles the concrete textual, visual, audio, or video content of the documents, here defined as content-based. In conclusion, the limits of this content-based objective access to documents are underlined. The discrepancy known as the semantic gap is the one that occurs between semantic-interpretive access and content-based access. Finally, the integration of these conceptions is explained, gathering and composing the merits and advantages of each of the approaches and systems of access to information.
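A minimal sketch of the MMIR organization described above, i.e., one facade over modality-specific, content-based retrieval subsystems; the class and method names are illustrative assumptions, not from the paper:

```python
class KeywordTextRetrieval:
    """Toy text retriever: content-based access for text means matching terms."""
    def __init__(self, docs):
        self.docs = docs

    def search(self, query):
        return [d for d in self.docs if query.lower() in d.lower()]

class MMIR:
    """Organic complex of Text/Visual/Video/Audio Retrieval systems."""
    def __init__(self, **retrievers):        # text=..., visual=..., etc.
        self.by_modality = retrievers

    def search(self, query, modality):
        # Each subsystem searches the concrete content of its own media type.
        return self.by_modality[modality].search(query)

mmir = MMIR(text=KeywordTextRetrieval(["semantic gap in image retrieval",
                                       "audio fingerprinting basics"]))
print(mmir.search("semantic", "text"))       # ['semantic gap in image retrieval']
```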
    Date
    22. 1.2012 13:02:10
    Source
    Knowledge organization. 39(2012) no.1, S.13-22
  4. White, M.D.; Marsh, E.E.: Content analysis : a flexible methodology (2006) 0.01
    0.013615225 = product of:
      0.095306575 = sum of:
        0.095306575 = sum of:
          0.06170349 = weight(_text_:studies in 5589) [ClassicSimilarity], result of:
            0.06170349 = score(doc=5589,freq=4.0), product of:
              0.16494368 = queryWeight, product of:
                3.9902744 = idf(docFreq=2222, maxDocs=44218)
                0.041336425 = queryNorm
              0.37408823 = fieldWeight in 5589, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.9902744 = idf(docFreq=2222, maxDocs=44218)
                0.046875 = fieldNorm(doc=5589)
          0.033603087 = weight(_text_:22 in 5589) [ClassicSimilarity], result of:
            0.033603087 = score(doc=5589,freq=2.0), product of:
              0.14475311 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041336425 = queryNorm
              0.23214069 = fieldWeight in 5589, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5589)
      0.14285715 = coord(1/7)
    
    Abstract
    Content analysis is a highly flexible research method that has been widely used in library and information science (LIS) studies with varying research goals and objectives. The research method is applied in qualitative, quantitative, and sometimes mixed modes of research frameworks and employs a wide range of analytical techniques to generate findings and put them into context. This article characterizes content analysis as a systematic, rigorous approach to analyzing documents obtained or generated in the course of research. It briefly describes the steps involved in content analysis, differentiates between quantitative and qualitative content analysis, and shows that content analysis serves the purposes of both quantitative research and qualitative research. The authors draw on selected LIS studies that have used content analysis to illustrate the concepts addressed in the article. The article also serves as a gateway to methodological books and articles that provide more detail about aspects of content analysis discussed only briefly in the article.
    Source
    Library trends. 55(2006) no.1, S.22-45
  5. Endres-Niggemeyer, B.: Content analysis : a special case of text compression (1989) 0.01
    0.01261064 = product of:
      0.08827448 = sum of:
        0.08827448 = weight(_text_:case in 3549) [ClassicSimilarity], result of:
          0.08827448 = score(doc=3549,freq=2.0), product of:
            0.18173204 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.041336425 = queryNorm
            0.48573974 = fieldWeight in 3549, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.078125 = fieldNorm(doc=3549)
      0.14285715 = coord(1/7)
    
  6. Rorissa, A.: User-generated descriptions of individual images versus labels of groups of images : a comparison using basic level theory (2008) 0.01
    0.012606538 = product of:
      0.044122882 = sum of:
        0.025943318 = weight(_text_:management in 2122) [ClassicSimilarity], result of:
          0.025943318 = score(doc=2122,freq=2.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.18620178 = fieldWeight in 2122, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2122)
        0.018179566 = product of:
          0.03635913 = sum of:
            0.03635913 = weight(_text_:studies in 2122) [ClassicSimilarity], result of:
              0.03635913 = score(doc=2122,freq=2.0), product of:
                0.16494368 = queryWeight, product of:
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.041336425 = queryNorm
                0.22043361 = fieldWeight in 2122, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2122)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Although images are visual information sources with little or no text associated with them, users still tend to use text to describe images and formulate queries. This is because digital libraries and search engines provide mostly text query options and rely on text annotations for representation and retrieval of the semantic content of images. While the main focus of image research is on the indexing and retrieval of individual images, the general topic of image browsing, and the indexing and retrieval of groups of images, has not been adequately investigated. Comparisons of descriptions of individual images with labels of groups of images supplied by users, framed by cognitive models, are scarce. This work fills that gap. Using basic level theory as a framework, a comparison of the descriptions of individual images and the labels assigned to groups of images by 180 participants in three studies found a marked difference in their level of abstraction. Results confirm assertions by previous researchers in LIS and other fields that groups of images are labeled using more superordinate level terms, while individual image descriptions are mainly at the basic level. Implications for the design of image browsing interfaces, taxonomies, thesauri, and similar tools are discussed.
    Source
    Information processing and management. 44(2008) no.5, S.1741-1753
  7. Sauperl, A.: Subject determination during the cataloging process : the development of a system based on theoretical principles (2002) 0.01
    0.009966604 = product of:
      0.034883115 = sum of:
        0.026482344 = weight(_text_:case in 2293) [ClassicSimilarity], result of:
          0.026482344 = score(doc=2293,freq=2.0), product of:
            0.18173204 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.041336425 = queryNorm
            0.14572193 = fieldWeight in 2293, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2293)
        0.008400772 = product of:
          0.016801544 = sum of:
            0.016801544 = weight(_text_:22 in 2293) [ClassicSimilarity], result of:
              0.016801544 = score(doc=2293,freq=2.0), product of:
                0.14475311 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041336425 = queryNorm
                0.116070345 = fieldWeight in 2293, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=2293)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Date
    27. 9.2005 14:22:19
    Footnote
    This document will be particularly useful to subject cataloguing teachers and trainers, who could use the model to design case descriptions and exercises. We believe it is an accurate description of the reality of subject cataloguing today. But now that we know how things are done, the next interesting question may be: Is that the best way? Is there a better, more efficient way to do things? We can only hope that Dr. Sauperl will soon provide her own view of methods and techniques that could improve the flow of work or address the cataloguers' concern about the lack of feedback on their work. Her several excellent suggestions for further research in this area all build on bits and pieces of what is done already, and stay well away from what could be done by the various actors in the area, from the designers of controlled vocabularies and authority files to those who use these tools on a daily basis to index, classify, or search for information."
  8. Garcia Jiménez, A.; Valle Gastaminza, F. del: From thesauri to ontologies: a case study in a digital visual context (2004) 0.01
    0.008917069 = product of:
      0.062419478 = sum of:
        0.062419478 = weight(_text_:case in 2657) [ClassicSimilarity], result of:
          0.062419478 = score(doc=2657,freq=4.0), product of:
            0.18173204 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.041336425 = queryNorm
            0.34346986 = fieldWeight in 2657, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2657)
      0.14285715 = coord(1/7)
    
    Abstract
    In this paper a framework for the construction and organization of knowledge organization and representation languages in the context of digital photograph collections is presented. It analyses the requirements of photographs as documentary objects, several models of indexing, different proposals of languages, and a theoretical revision of ontologies in this research field in relation to visual documents. In considering the photograph as an object of analysis, it is appropriate to study all its attributes: features, components, or properties of an object that can be represented in an information processing system. The attributes related to visual features include cognitive and affective responses and elements that describe the spatial, semantic, symbolic, or emotional features of a photograph. In any case, it is necessary to treat: a) morphological and material attributes (emulsion, state of preservation); b) biographical attributes (school or trend, publication or exhibition); c) attributes of content: what a photograph says and how it says it; d) relational attributes: visual documents establish relationships with other documents that can be analysed in order to understand them.
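The a) to d) attribute groups map naturally onto a record structure; a minimal sketch with hypothetical field names (not taken from the paper):

```python
from dataclasses import dataclass, field

@dataclass
class PhotographRecord:
    # a) morphological and material attributes
    emulsion: str = ""
    preservation_state: str = ""
    # b) biographical attributes
    school_or_trend: str = ""
    publications_exhibitions: list[str] = field(default_factory=list)
    # c) attributes of content: what the photograph says and how
    depicted_subjects: list[str] = field(default_factory=list)
    affective_responses: list[str] = field(default_factory=list)
    # d) relational attributes: links to documents that contextualize it
    related_documents: list[str] = field(default_factory=list)
```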
  9. Naun, C.C.: Objectivity and subject access in the print library (2006) 0.01
    0.008827448 = product of:
      0.061792135 = sum of:
        0.061792135 = weight(_text_:case in 236) [ClassicSimilarity], result of:
          0.061792135 = score(doc=236,freq=2.0), product of:
            0.18173204 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.041336425 = queryNorm
            0.34001783 = fieldWeight in 236, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0546875 = fieldNorm(doc=236)
      0.14285715 = coord(1/7)
    
    Abstract
    Librarians have inherited from the print environment a particular way of thinking about subject representation, one based on the conscious identification by librarians of appropriate subject classes and terminology. This conception has played a central role in shaping the profession's characteristic approach to upholding one of its core values: objectivity. It is argued that the social and technological roots of traditional indexing practice are closely intertwined. It is further argued that in traditional library practice objectivity is to be understood as impartiality, and reflects the mediating role that librarians have played in society. The case presented here is not a historical one based on empirical research, but rather a conceptual examination of practices that are already familiar to most librarians.
  10. Chubin, D.E.; Moitra, S.D.: Content analysis of references : adjunct or alternative to citation counting? (1975) 0.01
    0.008310659 = product of:
      0.058174606 = sum of:
        0.058174606 = product of:
          0.11634921 = sum of:
            0.11634921 = weight(_text_:studies in 5647) [ClassicSimilarity], result of:
              0.11634921 = score(doc=5647,freq=2.0), product of:
                0.16494368 = queryWeight, product of:
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.041336425 = queryNorm
                0.70538753 = fieldWeight in 5647, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.125 = fieldNorm(doc=5647)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Source
    Social studies of science. 5(1975), S.423-441
  11. Pejtersen, A.M.: Design of a computer-aided user-system dialogue based on an analysis of users' search behaviour (1984) 0.01
    0.0062329937 = product of:
      0.043630954 = sum of:
        0.043630954 = product of:
          0.08726191 = sum of:
            0.08726191 = weight(_text_:studies in 1044) [ClassicSimilarity], result of:
              0.08726191 = score(doc=1044,freq=2.0), product of:
                0.16494368 = queryWeight, product of:
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.041336425 = queryNorm
                0.52904063 = fieldWeight in 1044, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1044)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Source
    Social science information studies. 4(1984), S.167-183
  12. Bade, D.: ¬The creation and persistence of misinformation in shared library catalogs : language and subject knowledge in a technological era (2002) 0.01
    0.005755476 = product of:
      0.040288333 = sum of:
        0.040288333 = sum of:
          0.029087303 = weight(_text_:studies in 1858) [ClassicSimilarity], result of:
            0.029087303 = score(doc=1858,freq=8.0), product of:
              0.16494368 = queryWeight, product of:
                3.9902744 = idf(docFreq=2222, maxDocs=44218)
                0.041336425 = queryNorm
              0.17634688 = fieldWeight in 1858, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                3.9902744 = idf(docFreq=2222, maxDocs=44218)
                0.015625 = fieldNorm(doc=1858)
          0.01120103 = weight(_text_:22 in 1858) [ClassicSimilarity], result of:
            0.01120103 = score(doc=1858,freq=2.0), product of:
              0.14475311 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041336425 = queryNorm
              0.07738023 = fieldWeight in 1858, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.015625 = fieldNorm(doc=1858)
      0.14285715 = coord(1/7)
    
    Date
    22. 9.1997 19:16:05
    Footnote
    Rez. in JASIST 54(2003) no.4, S.356-357 (S.J. Lincicum): "Reliance upon shared cataloging in academic libraries in the United States has been driven largely by the need to reduce the expense of cataloging operations without much regard for the impact that this approach might have on the quality of the records included in local catalogs. In recent years, ever increasing pressures have prompted libraries to adopt practices such as "rapid" copy cataloging that purposely reduce the scrutiny applied to bibliographic records downloaded from shared databases, possibly increasing the number of errors that slip through unnoticed. Errors in bibliographic records can lead to serious problems for library catalog users. If the data contained in bibliographic records are inaccurate, users will have difficulty discovering and recognizing resources in a library's collection that are relevant to their needs. Thus, it has become increasingly important to understand the extent and nature of the errors that occur in the records found in large shared bibliographic databases, such as OCLC WorldCat, in order to develop cataloging practices optimized for the shared cataloging environment. Although this monograph raises a few legitimate concerns about recent trends in cataloging practice, it fails to provide the "detailed look" at misinformation in library catalogs arising from linguistic errors and mistakes in subject analysis promised by the publisher. A basic premise advanced throughout the text is that a certain amount of linguistic and subject knowledge is required to catalog library materials effectively. The author emphasizes repeatedly that most catalogers today are asked to catalog an increasingly diverse array of materials, and that they are often required to work in languages or subject areas of which they have little or no knowledge. He argues that the records contributed to shared databases are increasingly being created by catalogers with inadequate linguistic or subject expertise. This adversely affects the quality of individual library catalogs, because errors often go uncorrected as records are downloaded from shared databases to local catalogs by copy catalogers who possess even less knowledge. Calling misinformation an "evil phenomenon," Bade states that his main goal is to discuss "two fundamental types of misinformation found in bibliographic and authority records in library catalogs: that arising from linguistic errors, and that caused by errors in subject analysis, including missing or wrong subject headings" (p. 2). After a superficial discussion of "other" types of errors that can occur in bibliographic records, such as typographical errors and errors in the application of descriptive cataloging rules, Bade begins his discussion of linguistic errors. He asserts that sharing bibliographic records created by catalogers with inadequate linguistic or subject knowledge has "disastrous effects on the library community" (p. 6). To support this bold assertion, Bade provides as evidence little more than a laundry list of errors that he has personally observed in bibliographic records over the years. When he eventually cites several studies that have addressed the availability and quality of records available for materials in languages other than English, he fails to describe the findings of these studies in any detail, let alone relate the findings to his own observations in a meaningful way.
Bade claims that a lack of linguistic expertise among catalogers is the "primary source for linguistic misinformation in our databases" (p. 10), but he neither cites substantive data from existing studies nor provides any new data regarding the overall level of linguistic knowledge among catalogers to support this claim. The section concludes with a brief list of eight sensible, if unoriginal, suggestions for coping with the challenge of cataloging materials in unfamiliar languages.
    Bade begins his discussion of errors in subject analysis by summarizing the contents of seven records containing what he considers to be egregious errors. The examples were drawn only from items that he has encountered in the course of his work. Five of the seven records were full-level ("I" level) records for Eastern European materials created between 1996 and 2000 in the OCLC WorldCat database. The final two examples were taken from records created by Bade himself over an unspecified period of time. Although he is to be commended for examining the actual items cataloged, and for examining mostly items that he claims to have adequate linguistic and subject expertise to evaluate reliably, Bade's methodology has major flaws. First and foremost, the number of examples provided is completely inadequate to draw any conclusions about the extent of the problem. Although an in-depth qualitative analysis of a small number of records might have yielded some valuable insight into the factors that contribute to errors in subject analysis, Bade provides no information about the circumstances under which the live OCLC records he critiques were created. Instead, he offers simplistic explanations for the errors based solely on his own assumptions. He supplements his analysis of examples with an extremely brief survey of other studies regarding errors in subject analysis, which consists primarily of criticism of work done by Sheila Intner. In the end, it is impossible to draw any reliable conclusions about the nature or extent of errors in subject analysis found in records in shared bibliographic databases based on Bade's analysis. In the final third of the essay, Bade finally reveals his true concern: the deintellectualization of cataloging. It would strengthen the essay tremendously to present this as the primary premise from the very beginning, as this section offers glimpses of a compelling argument. Bade laments, "Many librarians simply do not see cataloging as an intellectual activity requiring an educated mind" (p. 20). Commenting on recent trends in copy cataloging practice, he declares, "The disaster of our time is that this work is being done more and more by people who can neither evaluate nor correct imported errors and often are forbidden from even thinking about it" (p. 26). Bade argues that the most valuable content found in catalog records is the intellectual content contributed by knowledgeable catalogers, and he asserts that to perform intellectually demanding tasks such as subject analysis reliably and effectively, catalogers must have the linguistic and subject knowledge required to gain at least a rudimentary understanding of the materials that they describe. He contends that requiring catalogers to dispense quickly with materials in unfamiliar languages and subjects clearly undermines their ability to perform the intellectual work of cataloging and leads to an increasing number of errors in the bibliographic records contributed to shared databases.
  13. Rowe, N.C.: Inferring depictions in natural-language captions for efficient access to picture data (1994) 0.01
    0.005188664 = product of:
      0.036320645 = sum of:
        0.036320645 = weight(_text_:management in 7296) [ClassicSimilarity], result of:
          0.036320645 = score(doc=7296,freq=2.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.2606825 = fieldWeight in 7296, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7296)
      0.14285715 = coord(1/7)
    
    Source
    Information processing and management. 30(1994) no.3, S.379-388
  14. Morehead, D.R.; Pejtersen, A.M.; Rouse, W.B.: ¬The value of information and computer-aided information seeking : problem formulation and application to fiction retrieval (1984) 0.01
    0.005188664 = product of:
      0.036320645 = sum of:
        0.036320645 = weight(_text_:management in 5828) [ClassicSimilarity], result of:
          0.036320645 = score(doc=5828,freq=2.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.2606825 = fieldWeight in 5828, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5828)
      0.14285715 = coord(1/7)
    
    Source
    Information processing and management. 20(1984), S.583-601
  15. Mai, J.-E.: Analysis in indexing : document and domain centered approaches (2005) 0.01
    0.005188664 = product of:
      0.036320645 = sum of:
        0.036320645 = weight(_text_:management in 1024) [ClassicSimilarity], result of:
          0.036320645 = score(doc=1024,freq=2.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.2606825 = fieldWeight in 1024, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1024)
      0.14285715 = coord(1/7)
    
    Source
    Information processing and management. 41(2005) no.3, S.599-611
  16. Amac, T.: Linguistic context analysis : a new approach to communication evaluation (1997) 0.00
    0.004447426 = product of:
      0.031131983 = sum of:
        0.031131983 = weight(_text_:management in 2576) [ClassicSimilarity], result of:
          0.031131983 = score(doc=2576,freq=2.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.22344214 = fieldWeight in 2576, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=2576)
      0.14285715 = coord(1/7)
    
    Abstract
    Argues that the integration of computational psycholinguistics can improve corporate communication and thus become a new strategic tool. An electronic dictionary of basic, neutral, and negative connotations was created for the nouns, verbs, and adjectives appearing in press releases and other communication media; it can be updated with client-specific words. The focus on negative messages aims at detecting who is criticized, why, and how, at learning from the vocabulary of opinion leaders, and at improving issues management proactively. Suggests a new form of analysis called 'computational linguistic context analysis' (CLCA), which analyzes nominal groups of negative words rather than monitoring content in the traditional way of content analysis. Concludes that CLCA can be used to analyze large quantities of press cuttings about a company and could, theoretically, be used to analyze the structure, language, and style of a particular journalist to whom a press release or article is to be sent.
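A minimal sketch of the dictionary-lookup step behind CLCA; the lexicon entries are hypothetical, as the article does not publish its dictionary:

```python
# Hypothetical connotation lexicon; CLCA centres on negative vocabulary
# inside the nominal groups of press texts.
CONNOTATIONS = {
    "failure": "negative", "scandal": "negative",
    "growth": "basic", "announcement": "neutral",
}

def negative_nominal_groups(noun_phrases):
    """Flag noun phrases that contain a negatively connoted word."""
    return [np for np in noun_phrases
            if any(CONNOTATIONS.get(w) == "negative" for w in np.lower().split())]

print(negative_nominal_groups(["quarterly growth", "accounting scandal"]))
# ['accounting scandal']
```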
  17. Sigel, A.: How can user-oriented depth analysis be constructively guided? (2000) 0.00
    0.004413724 = product of:
      0.030896068 = sum of:
        0.030896068 = weight(_text_:case in 133) [ClassicSimilarity], result of:
          0.030896068 = score(doc=133,freq=2.0), product of:
            0.18173204 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.041336425 = queryNorm
            0.17000891 = fieldWeight in 133, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.02734375 = fieldNorm(doc=133)
      0.14285715 = coord(1/7)
    
    Abstract
    It is vital for library and information science to understand the subject indexing process thoroughly. However, document analysis, the first and most important step in indexing, has not received sufficient attention. As this is an exceptionally hard problem, we still lack a sound indexing theory. Therefore we have difficulties in teaching indexing and in explaining why a given subject representation is "better" than another. Technological advancements have not helped to close this fundamental gap. To proceed, we should ask the right questions instead. Several types of indexer inconsistencies can be explained as acceptable, yet different, conceptualizations resulting from the variety of groups dealing with a problem from their respective viewpoints. Multiple indexings of the same document are regarded as the normal case. Intersubjectively replicable indexing results are often questionable or do not constitute interesting cases of indexing at all. In the context of my ongoing dissertation, in which I intend to develop an enhanced indexing theory by investigating improvements within a social sciences domain, this paper explains user-oriented selective depth analysis and why I chose that configuration. Strongly influenced by Mai's dissertation, I also communicate my first insights concerning current indexing theories. I agree that I cannot ignore epistemological stances and philosophical issues in language and meaning related to indexing, and I accept the openness of the interpretive nature of the indexing process. Although I also present arguments against the employment of an indexing language, it is still indispensable in situations which demand easier access and control by devices. Despite the enormous difficulties that user-oriented selective depth analysis poses, I argue that it is both feasible and useful if one achieves careful guidance of the possible interpretations. There is some hope because the number of useful interpretations is limited: every summary is tailored to a purpose, audience, and situation. Domain, discourse, and social practice entail additional constraints. A pluralistic method mix that focusses on ecologically valid, holistic contexts and employs qualitative methods is recommended. Domain analysis urgently has to be made more practical and applicable. Only then will we be able to investigate domains empirically in order to identify their structures, shaped by the corresponding discourse communities. We plan to represent the recognized problem structures and indexing questions of relevance to a small domain in formal, ontological computer models, if we can find such stable knowledge structures. This would allow us to tailor summaries dynamically for user communities. For practical purposes we suggest assuming a less demanding position than Hjorland's "totality of the epistemological potential". It is sufficient that we identify and represent iteratively the information needs of today's user groups in interactive knowledge-based systems. The best way to formalize such knowledge gained about discourse communities is, however, unknown. Indexers should stay in direct contact with the community they serve, or be part of it, to ensure agreement with its viewpoints. Checklist/request-oriented indexing could be very helpful, but it remains to be demonstrated how well it will be applicable in the social sciences. A frame-based representation, or at least a sophisticated grouping of terms, could help to express relational knowledge structures.
There remains much work to do, since in practice no one has yet shown how such an improved indexing system would work and whether the indexing results would really be "better".
  18. Pejtersen, A.M.: Design of a classification scheme for fiction based on an analysis of actual user-librarian communication, and use of the scheme for control of librarians' search strategies (1980) 0.00
    0.004000368 = product of:
      0.028002575 = sum of:
        0.028002575 = product of:
          0.05600515 = sum of:
            0.05600515 = weight(_text_:22 in 5835) [ClassicSimilarity], result of:
              0.05600515 = score(doc=5835,freq=2.0), product of:
                0.14475311 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041336425 = queryNorm
                0.38690117 = fieldWeight in 5835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5835)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    5. 8.2006 13:22:44
  19. Saif, H.; He, Y.; Fernandez, M.; Alani, H.: Contextual semantics for sentiment analysis of Twitter (2016) 0.00
    0.0037061884 = product of:
      0.025943318 = sum of:
        0.025943318 = weight(_text_:management in 2667) [ClassicSimilarity], result of:
          0.025943318 = score(doc=2667,freq=2.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.18620178 = fieldWeight in 2667, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2667)
      0.14285715 = coord(1/7)
    
    Source
    Information processing and management. 52(2016) no.1, S.5-19
  20. Ornager, S.: View a picture : theoretical image analysis and empirical user studies on indexing and retrieval (1996) 0.00
    0.0036359131 = product of:
      0.02545139 = sum of:
        0.02545139 = product of:
          0.05090278 = sum of:
            0.05090278 = weight(_text_:studies in 904) [ClassicSimilarity], result of:
              0.05090278 = score(doc=904,freq=2.0), product of:
                0.16494368 = queryWeight, product of:
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.041336425 = queryNorm
                0.30860704 = fieldWeight in 904, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=904)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)