Search (29 results, page 1 of 2)

  • Filter: theme_ss:"Inhaltsanalyse"
  1. Allen, B.; Reser, D.: Content analysis in library and information science research (1990) 0.01
    0.005805583 = product of:
      0.058055833 = sum of:
        0.058055833 = product of:
          0.1741675 = sum of:
            0.1741675 = weight(_text_:1990 in 7510) [ClassicSimilarity], result of:
              0.1741675 = score(doc=7510,freq=5.0), product of:
                0.13825724 = queryWeight, product of:
                  4.506965 = idf(docFreq=1325, maxDocs=44218)
                  0.03067635 = queryNorm
                1.2597351 = fieldWeight in 7510, product of:
                  2.236068 = tf(freq=5.0), with freq of:
                    5.0 = termFreq=5.0
                  4.506965 = idf(docFreq=1325, maxDocs=44218)
                  0.125 = fieldNorm(doc=7510)
          0.33333334 = coord(1/3)
      0.1 = coord(1/10)
    
    Source
    Library and information science research. 12(1990) no.3, S.251-262
    Year
    1990
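The score breakdown shown for the first hit is Lucene's ClassicSimilarity "explain" output; the same breakdown repeats for every hit below, so only this first one is kept. As a minimal sketch, the arithmetic can be reproduced in a few lines of Python. The idf formula `1 + ln(maxDocs / (docFreq + 1))` is Lucene's documented ClassicSimilarity idf; `queryNorm`, `fieldNorm`, and the two `coord` factors are taken as given from the explain tree, and the function name is ours:

```python
import math

def classic_similarity_score(freq, doc_freq, max_docs, field_norm,
                             query_norm, coords=(1/3, 1/10)):
    """Reproduce one leaf of the ClassicSimilarity explain tree above."""
    tf = math.sqrt(freq)                             # 2.236068 for freq=5.0
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 4.506965 for "1990"
    query_weight = idf * query_norm                  # 0.13825724
    field_weight = tf * idf * field_norm             # fieldWeight in the tree
    score = query_weight * field_weight              # raw term score, 0.1741675
    for coord in coords:                             # coord(1/3), coord(1/10)
        score *= coord
    return score

# First hit: freq=5.0, docFreq=1325, maxDocs=44218, fieldNorm=0.125
print(classic_similarity_score(5.0, 1325, 44218, 0.125, 0.03067635))  # ~0.005805583
```

Note how the hits differ only in `fieldNorm` (shorter fields score higher) and term frequency, which is why entry 3, with `fieldNorm=0.0625`, scores exactly half of entry 1.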
  2. Wyllie, J.: Concept indexing : the world beyond the windows (1990) 0.00
    
    Source
    Aslib proceedings. 42(1990), S.153-159
    Year
    1990
  3. Studwell, W.E.: Subject suggestions 6 : some concerns relating to quantity of subjects (1990) 0.00
    
    Source
    Cataloging and classification quarterly. 10(1990) no.4, S.99-104
    Year
    1990
  4. Wilkinson, C.L.: Intellectual level as a search enhancement in the online environment : summation and implications (1990) 0.00
    
    Source
    Cataloging and classification quarterly. 11(1990) no.1, S.89-97
    Year
    1990
  5. Mayring, P.: Qualitative Inhaltsanalyse : Grundlagen und Techniken (1990) 0.00
    
    Year
    1990
  6. Knautz, K.; Dröge, E.; Finkelmeyer, S.; Guschauski, D.; Juchem, K.; Krzmyk, C.; Miskovic, D.; Schiefer, J.; Sen, E.; Verbina, J.; Werner, N.; Stock, W.G.: Indexieren von Emotionen bei Videos (2010) 0.00
    
    Source
    Information - Wissenschaft und Praxis. 61(2010) H.4, S.221-236
    Year
    2010
  7. Caldera-Serrano, J.: Thematic description of audio-visual information on television (2010) 0.00
    
    Source
    Aslib proceedings. 62(2010) no.2, S.202-209
    Year
    2010
  8. Belkin, N.J.: ¬The problem of 'matching' in information retrieval (1980) 0.00
    
  9. Yoon, J.W.: Utilizing quantitative users' reactions to represent affective meanings of an image (2010) 0.00
    
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.7, S.1345-1359
    Year
    2010
  10. Shaw, R.: Information organization and the philosophy of history (2013) 0.00
    
    Abstract
    The philosophy of history can help articulate problems relevant to information organization. One such problem is "aboutness": How do texts relate to the world? In response to this problem, philosophers of history have developed theories of colligation describing how authors bind together phenomena under organizing concepts. Drawing on these ideas, I present a theory of subject analysis that avoids the problematic illusion of an independent "landscape" of subjects. This theory points to a broad vision of the future of information organization and some specific challenges to be met.
  11. Sauperl, A.: Catalogers' common ground and shared knowledge (2004) 0.00
    
    Abstract
    The problem of multiple interpretations of meaning in the indexing process has been mostly avoided by information scientists. Among the few who have addressed this question are Clare Beghtol and Jens Erik Mai. Their findings and findings of other researchers in the area of information science, social psychology, and psycholinguistics indicate that the source of the problem might lie in the background and culture of each indexer or cataloger. Are the catalogers aware of the problem? A general model of the indexing process was developed from observations and interviews of 12 catalogers in three American academic libraries. The model is illustrated with a hypothetical cataloger's process. The study with catalogers revealed that catalogers are aware of the author's, the user's, and their own meaning, but do not try to accommodate them all. On the other hand, they make every effort to build common ground with catalog users by studying documents related to the document being cataloged, and by considering catalog records and subject headings related to the subject identified in the document being cataloged. They try to build common ground with other catalogers by using cataloging tools and by inferring unstated rules of cataloging from examples in the catalogs.
  12. Beghtol, C.: ¬The classification of fiction : the development of a system based on theoretical principles (1994) 0.00
    
    Abstract
    The work is an adaptation of the author's dissertation and has the following chapters: (1) background and introduction; (2) a problem in classification theory; (3) previous fiction analysis theories and systems and 'The left hand of darkness'; (4) fiction warrant and critical warrant; (5) experimental fiction analysis system (EFAS); (6) application and evaluation of EFAS. Appendix 1 gives references to fiction analysis systems and appendix 2 lists EFAS coding sheets
  13. Morehead, D.R.; Pejtersen, A.M.; Rouse, W.B.: ¬The value of information and computer-aided information seeking : problem formulation and application to fiction retrieval (1984) 0.00
    
  14. Pejtersen, A.M.: Design of a classification scheme for fiction based on an analysis of actual user-librarian communication, and use of the scheme for control of librarians' search strategies (1980) 0.00
    
    Date
    5. 8.2006 13:22:44
  15. Bade, D.: ¬The creation and persistence of misinformation in shared library catalogs : language and subject knowledge in a technological era (2002) 0.00
    
    Date
    22. 9.1997 19:16:05
    Footnote
Bade begins his discussion of errors in subject analysis by summarizing the contents of seven records containing what he considers to be egregious errors. The examples were drawn only from items that he has encountered in the course of his work. Five of the seven records were full-level ("I" level) records for Eastern European materials created between 1996 and 2000 in the OCLC WorldCat database. The final two examples were taken from records created by Bade himself over an unspecified period of time. Although he is to be commended for examining the actual items cataloged and for examining mostly items that he claims to have adequate linguistic and subject expertise to evaluate reliably, Bade's methodology has major flaws. First and foremost, the number of examples provided is completely inadequate to draw any conclusions about the extent of the problem. Although an in-depth qualitative analysis of a small number of records might have yielded some valuable insight into factors that contribute to errors in subject analysis, Bade provides no information about the circumstances under which the live OCLC records he critiques were created. Instead, he offers simplistic explanations for the errors based solely on his own assumptions. He supplements his analysis of examples with an extremely brief survey of other studies regarding errors in subject analysis, which consists primarily of criticism of work done by Sheila Intner. In the end, it is impossible to draw any reliable conclusions about the nature or extent of errors in subject analysis found in records in shared bibliographic databases based on Bade's analysis. In the final third of the essay, Bade finally reveals his true concern: the deintellectualization of cataloging. It would strengthen the essay tremendously to present this as the primary premise from the very beginning, as this section offers glimpses of a compelling argument.
Bade laments, "Many librarians simply do not see cataloging as an intellectual activity requiring an educated mind" (p. 20). Commenting on recent trends in copy cataloging practice, he declares, "The disaster of our time is that this work is being done more and more by people who can neither evaluate nor correct imported errors and often are forbidden from even thinking about it" (p. 26). Bade argues that the most valuable content found in catalog records is the intellectual content contributed by knowledgeable catalogers, and he asserts that to perform intellectually demanding tasks such as subject analysis reliably and effectively, catalogers must have the linguistic and subject knowledge required to gain at least a rudimentary understanding of the materials that they describe. He contends that requiring catalogers to quickly dispense with materials in unfamiliar languages and subjects clearly undermines their ability to perform the intellectual work of cataloging and leads to an increasing number of errors in the bibliographic records contributed to shared databases.
  16. Sigel, A.: How can user-oriented depth analysis be constructively guided? (2000) 0.00
    
    Abstract
    It is vital for library and information science to understand the subject indexing process thoroughly. However, document analysis, the first and most important step in indexing, has not received sufficient attention. As this is an exceptionally hard problem, we still lack a sound indexing theory. Therefore we have difficulties in teaching indexing and in explaining why a given subject representation is "better" than another. Technological advancements have not helped to close this fundamental gap. To proceed, we should ask the right questions instead. Several types of indexer inconsistencies can be explained as acceptable, yet different conceptualizations which result from the variety of groups dealing with a problem from their respective viewpoints. Multiply indexed documents are regarded as the normal case. Intersubjectively replicable indexing results are often questionable or do not constitute interesting cases of indexing at all. In the context of my ongoing dissertation, in which I intend to develop an enhanced indexing theory by investigating improvements within a social sciences domain, this paper explains user-oriented selective depth analysis and why I chose that configuration. Strongly influenced by Mai's dissertation, I also communicate my first insights concerning current indexing theories. I agree that I cannot ignore epistemological stances and philosophical issues in language and meaning related to indexing, and I accept the openness of the interpretive nature of the indexing process. Although I also present arguments against the employment of an indexing language, it is still indispensable in situations which demand easier access and control by devices. Despite the enormous difficulties that user-oriented selective depth analysis poses, I argue that it is both feasible and useful if one achieves careful guidance of the possible interpretations. 
There is some hope, because the number of useful interpretations is limited: every summary is tailored to a purpose, audience and situation. Domain, discourse and social practice entail additional constraints. A pluralistic method mix that focuses on ecologically valid, holistic contexts and employs qualitative methods is recommended. Domain analysis urgently has to be made more practical and applicable. Only then will we be able to investigate domains empirically in order to identify their structures shaped by the corresponding discourse communities. We plan to represent the recognized problem structures and indexing questions of relevance to a small domain in formal, ontological computer models -- if we can find such stable knowledge structures. This would allow us to tailor summaries dynamically for user communities. For practical purposes we suggest assuming a less demanding position than Hjorland's "totality of the epistemological potential". It is sufficient that we identify and represent iteratively the information needs of today's user groups in interactive knowledge-based systems. The best way to formalize such knowledge gained about discourse communities is, however, unknown. Indexers should stay in direct contact with the community they serve, or be part of it, to ensure agreement with their viewpoints. Checklist/request-oriented indexing could be very helpful, but it remains to be demonstrated how well it will be applicable in the social sciences. A frame-based representation, or at least a sophisticated grouping of terms, could help to express relational knowledge structures. There remains much work to do, since in practice no one has yet shown how such an improved indexing system would work and whether the indexing results would really be "better".
  17. Miene, A.; Hermes, T.; Ioannidis, G.: Wie kommt das Bild in die Datenbank? : Inhaltsbasierte Analyse von Bildern und Videos (2002) 0.00
    
    Abstract
    The amount of available multimedia information grows steadily, not least through the new information added to the Internet day after day. To master this flood and make the information retrievable again, it must be annotated and stored appropriately in databases. Here lies the problem of manual annotation, which can introduce errors both through fatigue from the routine work and through the subjectivity of the annotator. Supporting systems that take precisely this routine work off the documentalist's hands can provide relief up to a certain degree. The intellectual indexing of, say, film material will still have to be done (and should be done) by the documentalist, but the detection and documentation of so-called shot boundaries can well be performed automatically with the support of a computer. In this article we show, based on projects we have carried out, how far this support of the documentalist in the annotation of images and videos can go.
  18. Campbell, G.: Queer theory and the creation of contextual subject access tools for gay and lesbian communities (2000) 0.00
    
    Abstract
    Knowledge organization research has come to question the theoretical distinction between "aboutness" (a document's innate content) and "meaning" (the use to which a document is put). This distinction has relevance beyond Information Studies, particularly in relation to homosexual concerns. Literary criticism, in particular, frequently addresses the question: when is a work "about" homosexuality? This paper explores this literary debate and its implications for the design of subject access systems for gay and lesbian communities. By examining the literary criticism of Herman Melville's Billy Budd, particularly in relation to the theories of Eve Kosofsky Sedgwick in The Epistemology of the Closet (1990), this paper exposes three tensions that designers of gay and lesbian classifications and vocabularies can expect to face. First is a tension between essentialist and constructivist views of homosexuality, which will affect the choice of terms, categories, and references. Second is a tension between minoritizing and universalizing perspectives on homosexuality. Third is a redefined distinction between aboutness and meaning, in which aboutness refers not to stable document content, but to the system designer's inescapable social and ideological perspectives. Designers of subject access systems can therefore expect to work in a context of intense scrutiny and persistent controversy
  19. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.00
    
    Date
    5. 8.2006 13:22:08
  20. Hauff-Hartig, S.: Automatische Transkription von Videos : Fernsehen 3.0: Automatisierte Sentimentanalyse und Zusammenstellung von Kurzvideos mit hohem Aufregungslevel KI-generierte Metadaten: Von der Technologiebeobachtung bis zum produktiven Einsatz (2021) 0.00
    
    Date
    22. 5.2021 12:43:05