Search (148 results, page 1 of 8)

  • × theme_ss:"Inhaltsanalyse"
  1. Laffal, J.: ¬A concept analysis of Jonathan Swift's 'Tale of a tub' and 'Gulliver's travels' (1995) 0.03
    0.028866513 = product of:
      0.043299768 = sum of:
        0.012255435 = weight(_text_:a in 6362) [ClassicSimilarity], result of:
          0.012255435 = score(doc=6362,freq=6.0), product of:
            0.04628442 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.040140964 = queryNorm
            0.26478532 = fieldWeight in 6362, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=6362)
        0.031044332 = product of:
          0.093132995 = sum of:
            0.093132995 = weight(_text_:29 in 6362) [ClassicSimilarity], result of:
              0.093132995 = score(doc=6362,freq=4.0), product of:
                0.14120336 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.040140964 = queryNorm
                0.6595664 = fieldWeight in 6362, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6362)
          0.33333334 = coord(1/3)
      0.6666667 = coord(2/3)
    
    Date
    8. 3.1997 10:05:29
    Source
    Computers and the humanities. 29(1995) no.5, S.339-361
    Type
    a
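The indented breakdown under each result is Lucene's explain() output for ClassicSimilarity (TF-IDF) scoring. As a minimal sketch of that arithmetic — assuming Lucene's classic formula, tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, with the coord factors shown in the tree — the first result's score of 0.0289 can be reproduced:

```python
import math

def classic_term_score(freq, idf, query_norm, field_norm):
    """One term's contribution under Lucene ClassicSimilarity (TF-IDF)."""
    tf = math.sqrt(freq)                    # tf(freq) = sqrt(termFreq)
    query_weight = idf * query_norm         # queryWeight = idf * queryNorm
    field_weight = tf * idf * field_norm    # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight      # weight = queryWeight * fieldWeight

QUERY_NORM = 0.040140964  # queryNorm from the explain tree above

# Result 1 (doc 6362): term "a" with freq=6 and term "29" with freq=4,
# both with fieldNorm 0.09375, idf values taken from the tree.
w_a  = classic_term_score(6.0, 1.153047,  QUERY_NORM, 0.09375)
w_29 = classic_term_score(4.0, 3.5176873, QUERY_NORM, 0.09375)

# coord(1/3) on the inner "29" clause, coord(2/3) on the outer sum
score = (w_a + w_29 * (1 / 3)) * (2 / 3)
print(score)  # approx. 0.0288665, the value reported for result 1
```

The same decomposition applies to every result below; only the term frequencies, idf values, fieldNorms, and coord factors change per document.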
  2. Gardin, J.C.: Document analysis and linguistic theory (1973) 0.03
    0.025802074 = product of:
      0.03870311 = sum of:
        0.009434237 = weight(_text_:a in 2387) [ClassicSimilarity], result of:
          0.009434237 = score(doc=2387,freq=2.0), product of:
            0.04628442 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.040140964 = queryNorm
            0.20383182 = fieldWeight in 2387, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.125 = fieldNorm(doc=2387)
        0.029268874 = product of:
          0.08780662 = sum of:
            0.08780662 = weight(_text_:29 in 2387) [ClassicSimilarity], result of:
              0.08780662 = score(doc=2387,freq=2.0), product of:
                0.14120336 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.040140964 = queryNorm
                0.6218451 = fieldWeight in 2387, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.125 = fieldNorm(doc=2387)
          0.33333334 = coord(1/3)
      0.6666667 = coord(2/3)
    
    Source
    Journal of documentation. 29(1973) no.2, S.137-168
    Type
    a
  3. Martindale, C.; McKenzie, D.: On the utility of content analysis in author attribution : 'The federalist' (1995) 0.03
    0.02541334 = product of:
      0.03812001 = sum of:
        0.0070756786 = weight(_text_:a in 822) [ClassicSimilarity], result of:
          0.0070756786 = score(doc=822,freq=2.0), product of:
            0.04628442 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.040140964 = queryNorm
            0.15287387 = fieldWeight in 822, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=822)
        0.031044332 = product of:
          0.093132995 = sum of:
            0.093132995 = weight(_text_:29 in 822) [ClassicSimilarity], result of:
              0.093132995 = score(doc=822,freq=4.0), product of:
                0.14120336 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.040140964 = queryNorm
                0.6595664 = fieldWeight in 822, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=822)
          0.33333334 = coord(1/3)
      0.6666667 = coord(2/3)
    
    Date
    8. 3.1997 10:05:29
    Source
    Computers and the humanities. 29(1995) no.4, S.259-270
    Type
    a
  4. Pejtersen, A.M.: Design of a classification scheme for fiction based on an analysis of actual user-librarian communication, and use of the scheme for control of librarians' search strategies (1980) 0.02
    0.017644837 = product of:
      0.026467256 = sum of:
        0.008338767 = weight(_text_:a in 5835) [ClassicSimilarity], result of:
          0.008338767 = score(doc=5835,freq=4.0), product of:
            0.04628442 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.040140964 = queryNorm
            0.18016359 = fieldWeight in 5835, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=5835)
        0.018128488 = product of:
          0.054385465 = sum of:
            0.054385465 = weight(_text_:22 in 5835) [ClassicSimilarity], result of:
              0.054385465 = score(doc=5835,freq=2.0), product of:
                0.14056681 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.040140964 = queryNorm
                0.38690117 = fieldWeight in 5835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5835)
          0.33333334 = coord(1/3)
      0.6666667 = coord(2/3)
    
    Date
    5. 8.2006 13:22:44
    Type
    a
  5. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.02
    0.015958019 = product of:
      0.023937028 = sum of:
        0.009434237 = weight(_text_:a in 5830) [ClassicSimilarity], result of:
          0.009434237 = score(doc=5830,freq=8.0), product of:
            0.04628442 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.040140964 = queryNorm
            0.20383182 = fieldWeight in 5830, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=5830)
        0.01450279 = product of:
          0.04350837 = sum of:
            0.04350837 = weight(_text_:22 in 5830) [ClassicSimilarity], result of:
              0.04350837 = score(doc=5830,freq=2.0), product of:
                0.14056681 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.040140964 = queryNorm
                0.30952093 = fieldWeight in 5830, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5830)
          0.33333334 = coord(1/3)
      0.6666667 = coord(2/3)
    
    Abstract
    This paper examines various issues that arise in establishing a theoretical basis for an experimental fiction analysis system. It analyzes the warrants of fiction and of works about fiction. From this analysis, it derives classificatory requirements for a fiction system. Classificatory techniques that may contribute to the specification of data elements in fiction are suggested.
    Date
    5. 8.2006 13:22:08
    Type
    a
  6. Shatford, S.: Analyzing the subject of a picture : a theoretical approach (1986) 0.02
    0.015816946 = product of:
      0.023725417 = sum of:
        0.010920283 = weight(_text_:a in 354) [ClassicSimilarity], result of:
          0.010920283 = score(doc=354,freq=14.0), product of:
            0.04628442 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.040140964 = queryNorm
            0.23593865 = fieldWeight in 354, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=354)
        0.012805132 = product of:
          0.038415395 = sum of:
            0.038415395 = weight(_text_:29 in 354) [ClassicSimilarity], result of:
              0.038415395 = score(doc=354,freq=2.0), product of:
                0.14120336 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.040140964 = queryNorm
                0.27205724 = fieldWeight in 354, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=354)
          0.33333334 = coord(1/3)
      0.6666667 = coord(2/3)
    
    Abstract
    This paper suggests a theoretical basis for identifying and classifying the kinds of subjects a picture may have, using previously developed principles of cataloging and classification, and concepts taken from the philosophy of art, from meaning in language, and from visual perception. The purpose of developing this theoretical basis is to provide the reader with a means for evaluating, adapting, and applying presently existing indexing languages, or for devising new languages for pictorial materials; this paper does not attempt to invent or prescribe a particular indexing language.
    Date
    7. 1.2007 13:00:29
    Type
    a
  7. Chen, H.: ¬An analysis of image queries in the field of art history (2001) 0.01
    0.01404006 = product of:
      0.02106009 = sum of:
        0.008254958 = weight(_text_:a in 5187) [ClassicSimilarity], result of:
          0.008254958 = score(doc=5187,freq=8.0), product of:
            0.04628442 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.040140964 = queryNorm
            0.17835285 = fieldWeight in 5187, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5187)
        0.012805132 = product of:
          0.038415395 = sum of:
            0.038415395 = weight(_text_:29 in 5187) [ClassicSimilarity], result of:
              0.038415395 = score(doc=5187,freq=2.0), product of:
                0.14120336 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.040140964 = queryNorm
                0.27205724 = fieldWeight in 5187, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5187)
          0.33333334 = coord(1/3)
      0.6666667 = coord(2/3)
    
    Abstract
    Chen arranged with an Art History instructor to require the use of 20 medieval art images in papers received from 29 students. Participants completed self-administered presearch and postsearch questionnaires, and were interviewed after questionnaire analysis, in order to collect both the keywords and phrases they planned to use and those actually used. Three MLIS student reviewers then mapped the queries to Enser and McGregor's four categories, Jorgensen's 12 classes, and Fidel's 12 feature data and object poles, providing a degree of match on a seven-point scale (1 = not at all, 7 = exact). The reviewers gave the highest scores to Enser and McGregor's categories. Modifications to both the Enser and McGregor and Jorgensen schemes are suggested.
    Type
    a
  8. Hjoerland, B.: Towards a theory of aboutness, subject, topicality, theme, domain, field, content ... and relevance (2001) 0.01
    0.013302757 = product of:
      0.019954136 = sum of:
        0.0071490034 = weight(_text_:a in 6032) [ClassicSimilarity], result of:
          0.0071490034 = score(doc=6032,freq=6.0), product of:
            0.04628442 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.040140964 = queryNorm
            0.1544581 = fieldWeight in 6032, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6032)
        0.012805132 = product of:
          0.038415395 = sum of:
            0.038415395 = weight(_text_:29 in 6032) [ClassicSimilarity], result of:
              0.038415395 = score(doc=6032,freq=2.0), product of:
                0.14120336 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.040140964 = queryNorm
                0.27205724 = fieldWeight in 6032, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6032)
          0.33333334 = coord(1/3)
      0.6666667 = coord(2/3)
    
    Abstract
    Theories of aboutness and theories of subject analysis and of related concepts such as topicality are often isolated from each other in the literature of information science (IS) and related disciplines. In IS it is important to consider the nature and meaning of these concepts, which is closely related to theoretical and metatheoretical issues in information retrieval (IR). A theory of IR must specify which concepts should be regarded as synonymous concepts and explain how the meaning of the nonsynonymous concepts should be defined
    Date
    29. 9.2001 14:03:14
    Type
    a
  9. Marshall, L.: Specific and generic subject headings : increasing subject access to library materials (2003) 0.01
    0.013302757 = product of:
      0.019954136 = sum of:
        0.0071490034 = weight(_text_:a in 5497) [ClassicSimilarity], result of:
          0.0071490034 = score(doc=5497,freq=6.0), product of:
            0.04628442 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.040140964 = queryNorm
            0.1544581 = fieldWeight in 5497, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5497)
        0.012805132 = product of:
          0.038415395 = sum of:
            0.038415395 = weight(_text_:29 in 5497) [ClassicSimilarity], result of:
              0.038415395 = score(doc=5497,freq=2.0), product of:
                0.14120336 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.040140964 = queryNorm
                0.27205724 = fieldWeight in 5497, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5497)
          0.33333334 = coord(1/3)
      0.6666667 = coord(2/3)
    
    Abstract
    The principle of specificity for subject headings provides a clear advantage to many researchers for the precision it brings to subject searching. However, for some researchers very specific subject headings hinder an efficient and comprehensive search. An appropriate broader heading, especially when made narrower in scope by the addition of subheadings, can benefit researchers by providing generic access to their topic. Assigning both specific and generic subject headings to a work would enhance the subject accessibility for the diverse approaches and research needs of different catalog users. However, it can be difficult for catalogers to assign broader terms consistently to different works, and without consistency the gathering function of those terms may not be realized.
    Date
    30. 7.2006 14:29:04
    Type
    a
  10. Bade, D.: ¬The creation and persistence of misinformation in shared library catalogs : language and subject knowledge in a technological era (2002) 0.01
    0.013139317 = product of:
      0.019708974 = sum of:
        0.005140361 = weight(_text_:a in 1858) [ClassicSimilarity], result of:
          0.005140361 = score(doc=1858,freq=38.0), product of:
            0.04628442 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.040140964 = queryNorm
            0.11106029 = fieldWeight in 1858, product of:
              6.164414 = tf(freq=38.0), with freq of:
                38.0 = termFreq=38.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.015625 = fieldNorm(doc=1858)
        0.014568614 = product of:
          0.02185292 = sum of:
            0.010975827 = weight(_text_:29 in 1858) [ClassicSimilarity], result of:
              0.010975827 = score(doc=1858,freq=2.0), product of:
                0.14120336 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.040140964 = queryNorm
                0.07773064 = fieldWeight in 1858, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1858)
            0.010877092 = weight(_text_:22 in 1858) [ClassicSimilarity], result of:
              0.010877092 = score(doc=1858,freq=2.0), product of:
                0.14056681 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.040140964 = queryNorm
                0.07738023 = fieldWeight in 1858, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1858)
          0.6666667 = coord(2/3)
      0.6666667 = coord(2/3)
    
    Date
    22. 9.1997 19:16:05
    Footnote
    Rez. in JASIST 54(2003) no.4, S.356-357 (S.J. Lincicum): "Reliance upon shared cataloging in academic libraries in the United States has been driven largely by the need to reduce the expense of cataloging operations without much regard for the impact that this approach might have on the quality of the records included in local catalogs. In recent years, ever-increasing pressures have prompted libraries to adopt practices such as "rapid" copy cataloging that purposely reduce the scrutiny applied to bibliographic records downloaded from shared databases, possibly increasing the number of errors that slip through unnoticed. Errors in bibliographic records can lead to serious problems for library catalog users. If the data contained in bibliographic records is inaccurate, users will have difficulty discovering and recognizing resources in a library's collection that are relevant to their needs. Thus, it has become increasingly important to understand the extent and nature of errors that occur in the records found in large shared bibliographic databases, such as OCLC WorldCat, to develop cataloging practices optimized for the shared cataloging environment. Although this monograph raises a few legitimate concerns about recent trends in cataloging practice, it fails to provide the "detailed look" at misinformation in library catalogs arising from linguistic errors and mistakes in subject analysis promised by the publisher. A basic premise advanced throughout the text is that a certain amount of linguistic and subject knowledge is required to catalog library materials effectively. The author emphasizes repeatedly that most catalogers today are asked to catalog an increasingly diverse array of materials, and that they are often required to work in languages or subject areas of which they have little or no knowledge. He argues that the records contributed to shared databases are increasingly being created by catalogers with inadequate linguistic or subject expertise.
This adversely affects the quality of individual library catalogs because errors often go uncorrected as records are downloaded from shared databases to local catalogs by copy catalogers who possess even less knowledge. Calling misinformation an "evil phenomenon," Bade states that his main goal is to discuss "two fundamental types of misinformation found in bibliographic and authority records in library catalogs: that arising from linguistic errors, and that caused by errors in subject analysis, including missing or wrong subject headings" (p. 2). After a superficial discussion of "other" types of errors that can occur in bibliographic records, such as typographical errors and errors in the application of descriptive cataloging rules, Bade begins his discussion of linguistic errors. He asserts that sharing bibliographic records created by catalogers with inadequate linguistic or subject knowledge has "disastrous effects on the library community" (p. 6). To support this bold assertion, Bade provides as evidence little more than a laundry list of errors that he has personally observed in bibliographic records over the years. When he eventually cites several studies that have addressed the availability and quality of records available for materials in languages other than English, he fails to describe the findings of these studies in any detail, let alone relate the findings to his own observations in a meaningful way. Bade claims that a lack of linguistic expertise among catalogers is the "primary source for linguistic misinformation in our databases" (p. 10), but he neither cites substantive data from existing studies nor provides any new data regarding the overall level of linguistic knowledge among catalogers to support this claim. The section concludes with a brief list of eight sensible, if unoriginal, suggestions for coping with the challenge of cataloging materials in unfamiliar languages.
    Bade begins his discussion of errors in subject analysis by summarizing the contents of seven records containing what he considers to be egregious errors. The examples were drawn only from items that he has encountered in the course of his work. Five of the seven records were full-level ("I" level) records for Eastern European materials created between 1996 and 2000 in the OCLC WorldCat database. The final two examples were taken from records created by Bade himself over an unspecified period of time. Although he is to be commended for examining the actual items cataloged and for examining mostly items that he claims to have adequate linguistic and subject expertise to evaluate reliably, Bade's methodology has major flaws. First and foremost, the number of examples provided is completely inadequate to draw any conclusions about the extent of the problem. Although an in-depth qualitative analysis of a small number of records might have yielded some valuable insight into factors that contribute to errors in subject analysis, Bade provides no information about the circumstances under which the live OCLC records he critiques were created. Instead, he offers simplistic explanations for the errors based solely on his own assumptions. He supplements his analysis of examples with an extremely brief survey of other studies regarding errors in subject analysis, which consists primarily of criticism of work done by Sheila Intner. In the end, it is impossible to draw any reliable conclusions about the nature or extent of errors in subject analysis found in records in shared bibliographic databases based on Bade's analysis. In the final third of the essay, Bade finally reveals his true concern: the deintellectualization of cataloging. It would strengthen the essay tremendously to present this as the primary premise from the very beginning, as this section offers glimpses of a compelling argument.
Bade laments, "Many librarians simply do not see cataloging as an intellectual activity requiring an educated mind" (p. 20). Commenting on recent trends in copy cataloging practice, he declares, "The disaster of our time is that this work is being done more and more by people who can neither evaluate nor correct imported errors and often are forbidden from even thinking about it" (p. 26). Bade argues that the most valuable content found in catalog records is the intellectual content contributed by knowledgeable catalogers, and he asserts that to perform intellectually demanding tasks such as subject analysis reliably and effectively, catalogers must have the linguistic and subject knowledge required to gain at least a rudimentary understanding of the materials that they describe. He contends that requiring catalogers to quickly dispense with materials in unfamiliar languages and subjects clearly undermines their ability to perform the intellectual work of cataloging and leads to an increasing number of errors in the bibliographic records contributed to shared databases.
    Arguing that catalogers need to work both quickly and accurately, Bade maintains that employing specialists is the most efficient and effective way to achieve this outcome. Far less compelling than these arguments are Bade's concluding remarks, in which he offers meager suggestions for correcting the problems as he sees them. Overall, this essay is little more than a curmudgeon's diatribe. Addressed primarily to catalogers and library administrators, the analysis presented is too superficial to assist practicing catalogers or cataloging managers in developing solutions to any systemic problems in current cataloging practice, and it presents too little evidence of pervasive problems to convince budget-conscious library administrators of a need to alter practice or to increase their investment in local cataloging operations. Indeed, the reliance upon anecdotal evidence and the apparent nit-picking that dominate the essay might tend to reinforce a negative image of catalogers in the minds of some. To his credit, Bade does provide an important reminder that it is the intellectual contributions made by thousands of erudite catalogers that have made shared cataloging a successful strategy for improving cataloging efficiency. This is an important point that often seems to be forgotten in academic libraries when focus centers on cutting costs. Had Bade focused more narrowly upon the issue of deintellectualization of cataloging and written a carefully structured essay to advance this argument, this essay might have been much more effective." - KO 29(2002) nos.3/4, S.236-237 (A. Sauperl)
  11. White, M.D.; Marsh, E.E.: Content analysis : a flexible methodology (2006) 0.01
    0.013028663 = product of:
      0.019542994 = sum of:
        0.008665901 = weight(_text_:a in 5589) [ClassicSimilarity], result of:
          0.008665901 = score(doc=5589,freq=12.0), product of:
            0.04628442 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.040140964 = queryNorm
            0.18723148 = fieldWeight in 5589, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=5589)
        0.010877093 = product of:
          0.032631278 = sum of:
            0.032631278 = weight(_text_:22 in 5589) [ClassicSimilarity], result of:
              0.032631278 = score(doc=5589,freq=2.0), product of:
                0.14056681 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.040140964 = queryNorm
                0.23214069 = fieldWeight in 5589, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5589)
          0.33333334 = coord(1/3)
      0.6666667 = coord(2/3)
    
    Abstract
    Content analysis is a highly flexible research method that has been widely used in library and information science (LIS) studies with varying research goals and objectives. The research method is applied in qualitative, quantitative, and sometimes mixed modes of research frameworks and employs a wide range of analytical techniques to generate findings and put them into context. This article characterizes content analysis as a systematic, rigorous approach to analyzing documents obtained or generated in the course of research. It briefly describes the steps involved in content analysis, differentiates between quantitative and qualitative content analysis, and shows that content analysis serves the purposes of both quantitative research and qualitative research. The authors draw on selected LIS studies that have used content analysis to illustrate the concepts addressed in the article. The article also serves as a gateway to methodological books and articles that provide more detail about aspects of content analysis discussed only briefly in the article.
    Source
    Library trends. 55(2006) no.1, S.22-45
    Type
    a
  12. Hauff-Hartig, S.: Automatische Transkription von Videos : Fernsehen 3.0: Automatisierte Sentimentanalyse und Zusammenstellung von Kurzvideos mit hohem Aufregungslevel KI-generierte Metadaten: Von der Technologiebeobachtung bis zum produktiven Einsatz (2021) 0.01
    0.012813273 = product of:
      0.019219909 = sum of:
        0.0047171186 = weight(_text_:a in 251) [ClassicSimilarity], result of:
          0.0047171186 = score(doc=251,freq=2.0), product of:
            0.04628442 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.040140964 = queryNorm
            0.10191591 = fieldWeight in 251, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=251)
        0.01450279 = product of:
          0.04350837 = sum of:
            0.04350837 = weight(_text_:22 in 251) [ClassicSimilarity], result of:
              0.04350837 = score(doc=251,freq=2.0), product of:
                0.14056681 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.040140964 = queryNorm
                0.30952093 = fieldWeight in 251, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=251)
          0.33333334 = coord(1/3)
      0.6666667 = coord(2/3)
    
    Date
    22. 5.2021 12:43:05
    Type
    a
  13. Greisdorf, H.; O'Connor, B.: Modelling what users see when they look at images : a cognitive viewpoint (2002) 0.01
    Abstract
    Analysis of user viewing and query-matching behavior furnishes additional evidence that the relevance of retrieved images for system users may arise from descriptions of objects and content-based elements that are not evident or not even present in the image. This investigation looks at how users assign pre-determined query terms to retrieved images, as well as at a post-retrieval process in which engagement with the image leads users to cognitive assessments of meaningful terms. Additionally, affective/emotion-based query terms appear to be an important descriptive category for image retrieval. A system for capturing (eliciting) human interpretations derived from cognitive engagements with viewed images could further enhance the efficiency of image retrieval systems stemming from traditional indexing methods and technology-based content extraction algorithms. An approach to such a system is posited.
    Source
    Journal of documentation. 58(2002) no.1, S.6-29
    Type
    a
  14. Chen, S.-J.; Lee, H.-L.: Art images and mental associations : a preliminary exploration (2014) 0.01
    Abstract
    This paper reports on the preliminary findings of a study that explores mental associations made by novices viewing art images. In a controlled environment, 20 Taiwanese college students responded to the question "What does the painting remind you of?" after viewing each digitized image of 15 oil paintings by a famous Taiwanese artist. Rather than focusing on the representation or interpretation of art, the study attempted to solicit information about how non-experts are stimulated by art. This paper reports on the analysis of participant responses to three of the images, and describes a 12-type taxonomy of associations that emerged from the analysis. While 9 of the types are derived and adapted from facets in the Art & Architecture Thesaurus, three new types - Artistic Influence Association, Reactive Association, and Prototype Association - are discovered. The conclusion briefly discusses both the significance of the findings and the implications for future research.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
    Type
    a
  15. Garcia Jiménez, A.; Valle Gastaminza, F. del: From thesauri to ontologies: a case study in a digital visual context (2004) 0.01
    Abstract
    In this paper a framework for the construction and organization of knowledge organization and representation languages in the context of digital photograph collections is presented. It analyses the exigencies of photographs as documentary objects, as well as several models of indexing, different proposals of languages and a theoretical revision of ontologies in this research field, in relation to visual documents. In considering the photograph as an object of analysis, it is appropriate to study all its attributes: features, components or properties of an object that can be represented in an information processing system. The attributes which are related to visual features include cognitive and affective answers and elements that describe spatial, semantic, symbolic or emotional features of a photograph. In any case, it is necessary to treat: a) morphological and material attributes (emulsion, state of preservation); b) biographical attributes (school or trend, publication or exhibition); c) attributes of content: what a photograph says and how it says it; d) relational attributes: visual documents establish relationships with other documents that can be analysed in order to understand them.
    Date
    29. 8.2004 16:20:55
    Type
    a
  16. Raieli, R.: ¬The semantic hole : enthusiasm and caution around multimedia information retrieval (2012) 0.01
    Date
    22. 1.2012 13:02:10
    Source
    Knowledge organization. 39(2012) no.1, S.13-22
    Type
    a
  17. Caldera-Serrano, J.: Thematic description of audio-visual information on television (2010) 0.01
    Date
    29. 8.2010 12:40:35
    Type
    a
  18. Weimer, K.H.: ¬The nexus of subject analysis and bibliographic description : the case of multipart videos (1996) 0.01
    Source
    Cataloging and classification quarterly. 22(1996) no.2, S.5-18
    Type
    a
  19. Hoover, L.: ¬A beginners' guide for subject analysis of theses and dissertations in the hard sciences (2005) 0.01
    Abstract
    This guide, for beginning catalogers with humanities or social sciences backgrounds, provides assistance in subject analysis (based on Library of Congress Subject Headings) of theses and dissertations (T/Ds) that are produced by graduate students in university departments in the hard sciences (physical sciences and engineering). It is aimed at those who have had little or no experience in cataloging, especially of this type of material, and for those who desire to supplement local mentoring resources for subject analysis in the hard sciences. Theses and dissertations from these departments present a special challenge because they are the results of current research representing specific new concepts with which the cataloger may not be familiar. In fact, subject headings often have not yet been created for the specific concept(s) being researched. Additionally, T/D authors often use jargon/terminology specific to their department. Catalogers often have many other duties in addition to subject analysis of T/Ds in the hard sciences, yet they desire to provide optimal access through accurate, thorough subject analysis. Tips are provided for determining the content of the T/D, strategic searches on WorldCat for possible subject headings, evaluating the relevancy of these subject headings for final selection, and selecting appropriate subdivisions where needed. Lists of basic reference resources are also provided.
    Date
    29. 9.2008 19:08:38
    Type
    a
  20. Mai, J.-E.: ¬The role of documents, domains and decisions in indexing (2004) 0.01
    Abstract
    The paper demonstrates that indexing is a complex phenomenon and presents a domain-centered approach to indexing. The indexing process is analysed using Means-Ends Analysis, a tool developed for the Cognitive Work Analysis framework. A Means-Ends Analysis of indexing provides a holistic understanding of indexing and shows the importance of understanding the users' activities when indexing. The paper presents a domain-centered approach to indexing that includes an analysis of the users' activities, and outlines that approach.
    Content
    1. Introduction The document at hand is often regarded as the most important entity for analysis in the indexing situation. The indexer's focus is directed to the "entity and its faithful description" (Soergel, 1985, 227) and the indexer is advised to "stick to the text and the author's claims" (Lancaster, 2003, 37). The indexer's aim is to establish the subject matter based on an analysis of the document, with the goal of representing the document as truthfully as possible and of ensuring the subject representation's validity by remaining neutral and objective. To help indexers with their task they are guided towards particular and important attributes of the document that could help them determine the document's subject matter. The exact attributes the indexer is recommended to examine vary, but typical examples are: the title, the abstract, the table of contents, chapter headings, chapter subheadings, preface, introduction, foreword, the text itself, bibliographical references, index entries, illustrations, diagrams, and tables and their captions. The exact recommendations vary according to the type of document that is being indexed (monographs vs. periodical articles, for instance). It is clear that indexers should provide faithful descriptions, that indexers should represent the author's claims, and that the document's attributes are helpful points of analysis. However, indexers need much more guidance when determining the subject than simply the documents themselves. One approach that could be taken to handle the situation is a user-oriented approach in which it is argued that the indexer should ask, "how should I make this document ... visible to potential users? What terms should I use to convey its knowledge to those interested?" (Albrechtsen, 1993, 222). The basic idea is that indexers need to have the users' information needs and terminology in mind when determining the subject matter of documents as well as when selecting index terms.
    Date
    29. 8.2004 15:13:08
    Type
    a
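The relevance score printed after each entry (e.g. 0.01) is produced by Lucene's ClassicSimilarity (TF-IDF) ranking, which this database exposes in its result list. Below is a minimal sketch of how a single query term contributes to such a score, assuming the classic formulas idf(t) = 1 + ln(maxDocs / (docFreq + 1)) and tf = sqrt(freq); the concrete figures are taken from the entries above and are illustrative only:

```python
import math

def classic_idf(doc_freq: int, max_docs: int) -> float:
    """Lucene ClassicSimilarity inverse document frequency."""
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq: float, doc_freq: int, max_docs: int,
               field_norm: float, query_norm: float) -> float:
    """One term's contribution to a document's score:
    queryWeight (idf * queryNorm) times fieldWeight (tf * idf * fieldNorm)."""
    idf = classic_idf(doc_freq, max_docs)
    tf = math.sqrt(freq)          # sublinear term-frequency saturation
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

# The term "a" with docFreq=37942 in maxDocs=44218, freq=2.0,
# fieldNorm=0.0625 and queryNorm=0.040140964, as reported for
# several entries above, yields roughly 0.0047 for this term.
score = term_score(2.0, 37942, 44218, 0.0625, 0.040140964)
```

A document's total score then sums the contributions of all matching terms and multiplies by a coordination factor, coord = (matching query terms) / (total query terms), which down-weights documents that match only part of the query.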

Languages

  • e 131
  • d 15
  • f 1
  • nl 1

Types

  • a 139
  • m 5
  • el 3
  • x 2
  • d 1
  • s 1