Search (54 results, page 1 of 3)

  • theme_ss:"Inhaltsanalyse"
  1. Gardin, J.C.: Document analysis and linguistic theory (1973) 0.03
    0.034122285 = product of:
      0.2047337 = sum of:
        0.2047337 = sum of:
          0.119337276 = weight(_text_:theory in 2387) [ClassicSimilarity], result of:
            0.119337276 = score(doc=2387,freq=2.0), product of:
              0.16234003 = queryWeight, product of:
                4.1583924 = idf(docFreq=1878, maxDocs=44218)
                0.03903913 = queryNorm
              0.7351069 = fieldWeight in 2387, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.1583924 = idf(docFreq=1878, maxDocs=44218)
                0.125 = fieldNorm(doc=2387)
          0.08539642 = weight(_text_:29 in 2387) [ClassicSimilarity], result of:
            0.08539642 = score(doc=2387,freq=2.0), product of:
              0.13732746 = queryWeight, product of:
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.03903913 = queryNorm
              0.6218451 = fieldWeight in 2387, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.125 = fieldNorm(doc=2387)
      0.16666667 = coord(1/6)
    
    Source
    Journal of documentation. 29(1973) no.2, pp.137-168
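The indented breakdown under each entry is Lucene "explain" output from the ClassicSimilarity (TF-IDF) scorer. A minimal sketch of how those factors combine, using the numbers from the first entry's tree (the helper function below is illustrative, not Lucene's actual API):

```python
import math

def term_score(freq, idf, query_norm, field_norm):
    """One term's contribution under Lucene ClassicSimilarity:
    queryWeight (idf * queryNorm) times fieldWeight (sqrt(freq) * idf * fieldNorm)."""
    query_weight = idf * query_norm
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight

# Figures taken from entry 1's explain tree
query_norm = 0.03903913
field_norm = 0.125   # derived from field length, quantized by Lucene

w_theory = term_score(2.0, 4.1583924, query_norm, field_norm)
w_29     = term_score(2.0, 3.5176873, query_norm, field_norm)

# coord(1/6): only 1 of 6 query clauses matched this document
score = (w_theory + w_29) * (1.0 / 6.0)   # close to the listed 0.034122285
```

Multiplying the summed term weights by the coord factor reproduces the top-level score shown next to the title.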
  2. Pejtersen, A.M.: Design of a classification scheme for fiction based on an analysis of actual user-librarian communication, and use of the scheme for control of librarians' search strategies (1980) 0.02
    
    Date
    5. 8.2006 13:22:44
    Source
    Theory and application of information research. Proc. of the 2nd Int. Research Forum on Information Science, 3.-6.8.1977, Copenhagen. Ed.: O. Harbo and L. Kajberg
  3. Hjoerland, B.: Towards a theory of aboutness, subject, topicality, theme, domain, field, content ... and relevance (2001) 0.02
    
    Abstract
    Theories of aboutness and theories of subject analysis and of related concepts such as topicality are often isolated from each other in the literature of information science (IS) and related disciplines. In IS it is important to consider the nature and meaning of these concepts, an issue closely related to theoretical and metatheoretical issues in information retrieval (IR). A theory of IR must specify which concepts should be regarded as synonymous and explain how the meaning of the non-synonymous concepts should be defined.
    Date
    29. 9.2001 14:03:14
  4. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.02
    
    Date
    5. 8.2006 13:22:08
  5. Greisdorf, H.; O'Connor, B.: Modelling what users see when they look at images : a cognitive viewpoint (2002) 0.01
    
    Abstract
    Analysis of user viewing and query-matching behavior furnishes additional evidence that the relevance of retrieved images for system users may arise from descriptions of objects and content-based elements that are not evident, or not even present, in the image. This investigation looks at how users assign pre-determined query terms to retrieved images, and at a post-retrieval process of image engagement that elicits users' cognitive assessments of meaningful terms. Additionally, affective/emotion-based query terms appear to be an important descriptive category for image retrieval. A system for capturing (eliciting) human interpretations derived from cognitive engagement with viewed images could further enhance the efficiency of image retrieval systems built on traditional indexing methods and technology-based content-extraction algorithms. An approach to such a system is posited.
    Source
    Journal of documentation. 58(2002) no.1, pp.6-29
  6. Caldera-Serrano, J.: Thematic description of audio-visual information on television (2010) 0.01
    
    Abstract
    Purpose - This paper endeavours to show the possibilities for thematic description of audio-visual documents for television with the aim of promoting and facilitating information retrieval. Design/methodology/approach - To achieve these goals, different database fields are shown, as well as the way in which they are organised for indexing and thematic element description, analysed and used as an example. Some of the database fields are extracted from an analytical study of the documentary system of television in Spain. Others are being tested in university television, on which indexing experiments are carried out. Findings - Not all thematic descriptions are used in television information systems; nevertheless, some television channels do use thematic descriptions of both image and sound, applying thesauri. Moreover, it is possible to access sequences using full-text retrieval as well. Originality/value - The development of the documentary task, applying the described techniques, promotes thematic indexing and hence thematic retrieval, which is without doubt one of the aspects most demanded by television journalists (along with people's names). This conceptualisation translates into the adaptation of databases to new indexing methods.
    Date
    29. 8.2010 12:40:35
  7. Hjoerland, B.: Subject representation and information seeking : contributions to a theory based on the theory of knowledge (1993) 0.01
    
  8. Winget, M.: Describing art : an alternative approach to subject access and interpretation (2009) 0.01
    
    Abstract
    Purpose - The purpose of this paper is to examine the art historical antecedents of providing subject access to images. After reviewing the assumptions and limitations inherent in the most prevalent descriptive method, the paper seeks to introduce a new model that allows for more comprehensive representation of visually-based cultural materials. Design/methodology/approach - The paper presents a literature-based conceptual analysis, taking Panofsky's theory of iconography and iconology as the starting-point. Panofsky's conceptual model, while appropriate for art created in the Western academic tradition, ignores or misrepresents work from other eras or cultures. Continued dependence on Panofskian descriptive methods limits the functionality and usefulness of image representation systems. Findings - The paper recommends the development of a more precise and inclusive descriptive model for art objects, which is based on the premise that art is not another sort of text, and should not be interpreted as such. Practical implications - The paper provides suggestions for the development of representation models that will enhance the description of non-textual artifacts. Originality/value - The paper addresses issues in information science, the history of art, and computer science, and suggests that a new descriptive model would be of great value to both humanist and social science scholars.
  9. Sigel, A.: How can user-oriented depth analysis be constructively guided? (2000) 0.01
    
    Abstract
    It is vital for library and information science to understand the subject indexing process thoroughly. However, document analysis, the first and most important step in indexing, has not received sufficient attention. As this is an exceptionally hard problem, we still lack a sound indexing theory. Therefore we have difficulties in teaching indexing and in explaining why a given subject representation is "better" than another. Technological advancements have not helped to close this fundamental gap. To proceed, we should ask the right questions instead. Several types of indexer inconsistencies can be explained as acceptable, yet different, conceptualizations which result from the variety of groups dealing with a problem from their respective viewpoints. Multiply indexed documents are regarded as the normal case. Intersubjectively replicable indexing results are often questionable or do not constitute interesting cases of indexing at all. In the context of my ongoing dissertation, in which I intend to develop an enhanced indexing theory by investigating improvements within a social sciences domain, this paper explains user-oriented selective depth analysis and why I chose that configuration. Strongly influenced by Mai's dissertation, I also communicate my first insights concerning current indexing theories. I agree that I cannot ignore epistemological stances and philosophical issues in language and meaning related to indexing, and I accept the openness of the interpretive nature of the indexing process. Although I also present arguments against the employment of an indexing language, it is still indispensable in situations which demand easier access and control by devices. Despite the enormous difficulties that user-oriented and selective depth analysis poses, I argue that it is both feasible and useful if one achieves careful guidance of the possible interpretations.
    There is some hope, because the number of useful interpretations is limited: every summary is tailored to a purpose, audience and situation. Domain, discourse and social practice entail additional constraints. A pluralistic method mix that focusses on ecologically valid, holistic contexts and employs qualitative methods is recommended. Domain analysis urgently has to be made more practical and applicable. Only then will we be able to investigate domains empirically in order to identify their structures as shaped by the corresponding discourse communities. We plan to represent the recognized problem structures and indexing questions of relevance to a small domain in formal, ontological computer models -- if we can find such stable knowledge structures. This would allow us to tailor summaries dynamically for user communities. For practical purposes we suggest assuming a less demanding position than Hjorland's "totality of the epistemological potential". It is sufficient that we iteratively identify and represent the information needs of today's user groups in interactive knowledge-based systems. The best way to formalize such knowledge gained about discourse communities is, however, unknown. Indexers should stay in direct contact with the community they serve, or be part of it, to ensure agreement with their viewpoints. Checklist/request-oriented indexing could be very helpful, but it remains to be demonstrated how well it will be applicable in the social sciences. A frame-based representation, or at least a sophisticated grouping of terms, could help to express relational knowledge structures. There remains much work to do, since in practice no one has yet shown how such an improved indexing system would work and whether the indexing results were really "better".
  10. Wilson, P.: Subjects and the sense of position (1985) 0.01
    
    Abstract
    One knows one is in the presence of "theory" when fundamental questions of a "why" nature are asked. Too often it happens that those involved in the design of bibliographic information systems have no time for brooding. It is thus noteworthy when someone appears on the bibliographic scene who troubles to address, and pursue with philosophic rigor, fundamental questions about the way we organize information. Such a person is Patrick Wilson, formerly philosophy professor at the University of California, Los Angeles, and since 1965 on the faculty of the School of Library and Information Studies, University of California, Berkeley. Bibliographic control is the central concept of Wilson's book Two Kinds of Power. It is represented as a kind of power, a power over knowledge. That power is of two kinds: descriptive and exploitive. Descriptive power is the power to retrieve all writings that satisfy some "evaluatively neutral" description, for instance, all writings by Hobbes or all writings on the subject of eternal recurrence. Descriptive power is achieved insofar as the items in our bibliographic universe are fitted with descriptions and these descriptions are syndetically related. Exploitive power is a less familiar concept, but it is more important, since it can be used to explain why we attempt to order our bibliographic universe in the first place. Exploitive power is the power to obtain the best textual means to an end. Unlike the concept of descriptive power, that of exploitive power has a normative aspect to it. Someone possessing such power would understand the goal of all bibliographic activity; that is, he would understand the diversity of user purposes and the relativity of what is valuable; he would be omniscient both as a bibliographer and as a psychologist. Since exploitive power is ever out of reach, descriptive power is used as a substitute or approximation for it. How adequate this approximation is, is the subject of Wilson's book.
    The particular chapter excerpted in this volume deals with the adequacy of subject access methods. Cutter's statement that one of the objects of a library catalog is to show what the library has on a given subject is generally accepted, as though it were obvious what "being on a given subject" means. It is far from obvious. Wilson challenges the underlying presumption that for any document a heading can be found that is coextensive with its subject. This presumption implies that there is such a thing as the (singular) subject of a document and that it can be identified. But, as Wilson shows in his elaborate explication, the notion of "subject" is essentially indeterminate, with the consequence that we are limited in our attempts to achieve either descriptive or exploitive power.
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
  11. Beghtol, C.: Bibliographic classification theory and text linguistics : aboutness, analysis, intertextuality and the cognitive act of classifying documents (1986) 0.01
    
  12. Hutchins, J.: Summarization: some problems and methods (1987) 0.01
    
  13. ISO 5963: Methods for examining documents, determining their subjects and selecting indexing terms (1983) 0.01
    
  14. Andersen, J.; Christensen, F.S.: Wittgenstein and indexing theory (2001) 0.01
    
    Abstract
    The paper considers indexing as an activity that deals with linguistic entities. It rests on the assumption that a theory of indexing should be based on a philosophy of language, because indexing is concerned with the linguistic representation of meaning. The paper consists of four sections: it begins with some basic considerations on the nature of indexing and the requirements for a theory of it; this is followed by a short review of the use of Wittgenstein's philosophy in the LIS literature; next is an analysis of Wittgenstein's work Philosophical Investigations; finally, we deduce a theory of indexing from this philosophy. Considering an indexing theory a theory of meaning entails that, for the purpose of retrieval, indexing is a representation of meaning. Therefore, an indexing theory is concerned with how words are used in the linguistic context. Furthermore, the indexing process is a communicative process containing an interpretative element. Through the philosophy of the later Wittgenstein, it is shown that language and meaning are publicly constituted entities. Since they form the basis of indexing, a theory thereof must take into account that no single actor can define the meaning of documents. Rather, this is decided by the social, historical and linguistic context in which the document is produced, distributed and exchanged. Indexing must clarify and reflect these contexts.
  15. Hays, D.G.: Linguistic foundations of the theory of content analysis (1969) 0.01
    
  16. Bednarek, M.: Intellectual access to pictorial information (1993) 0.01
    
    Abstract
    Visual materials represent a significantly different type of communication from textual materials and therefore present distinct challenges for the process of retrieval, especially if by retrieval we mean intellectual access to the content of images. This paper outlines the special characteristics of visual materials, focusing on their potential complexity and subjectivity, and the methods used and explored for gaining access to visual materials as reported in the literature. It concludes that methods of access to visual materials are dominated by the relatively mature systems developed for textual materials and that access methods based on visual communication are still largely in the developmental or prototype stage. Although reported research on user requirements in the retrieval of visual information is noticeably lacking, the results of at least one study indicate that the visually based retrieval methods of structured and unstructured browsing seem to be preferred for visual materials and that effective retrieval methods are ultimately related to the characteristics of the enquirer and the visual information sought
  17. Laffal, J.: ¬A concept analysis of Jonathan Swift's 'Tale of a tub' and 'Gulliver's travels' (1995) 0.01
    Date
    8. 3.1997 10:05:29
    Source
    Computers and the humanities. 29(1995) no.5, S.339-361
  18. Martindale, C.; McKenzie, D.: On the utility of content analysis in author attribution : 'The federalist' (1995) 0.01
    Date
    8. 3.1997 10:05:29
    Source
    Computers and the humanities. 29(1995) no.4, S.259-270
  19. Belkin, N.J.: ¬The problem of 'matching' in information retrieval (1980) 0.01
    Source
    Theory and application of information research. Proc. of the 2nd Int. Research Forum on Information Science, 3.-6.8.1977, Copenhagen. Ed.: O. Harbo u. L. Kajberg
  20. Schlapfer, K.: ¬The information content of images (1995) 0.01
    Abstract
    Reviews the methods of calculating the information content of images, with particular reference to printed and photographic images, and to printed and television images

Languages

  • e 52
  • d 2

Types

  • a 47
  • m 4
  • d 1
  • el 1
  • n 1
  • s 1