Search (31 results, page 2 of 2)

  • theme_ss:"Inhaltsanalyse"
  1. Bednarek, M.: Intellectual access to pictorial information (1993) 0.01
    0.008167865 = product of:
      0.01633573 = sum of:
        0.01633573 = product of:
          0.03267146 = sum of:
            0.03267146 = weight(_text_:systems in 5631) [ClassicSimilarity], result of:
              0.03267146 = score(doc=5631,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2037246 = fieldWeight in 5631, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5631)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
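The explain tree above can be reproduced directly. Below is a minimal sketch of Lucene's ClassicSimilarity (TF-IDF) score composition using the constants reported for doc 5631; the idf value is taken as reported rather than recomputed (ClassicSimilarity derives it as 1 + ln(maxDocs / (docFreq + 1))).

```python
import math

# Constants as reported in the explain tree for doc 5631
freq = 2.0
idf = 3.0731742            # reported; approx. 1 + math.log(44218 / (5561 + 1))
query_norm = 0.052184064
field_norm = 0.046875

tf = math.sqrt(freq)                   # 1.4142135 = tf(freq=2.0)
query_weight = idf * query_norm        # 0.16037072 = queryWeight
field_weight = tf * idf * field_norm   # 0.2037246  = fieldWeight
raw = query_weight * field_weight      # 0.03267146 = weight(_text_:systems)
score = raw * 0.5 * 0.5                # two coord(1/2) factors -> 0.008167865
```

The same composition, with different fieldNorm values, accounts for every scoring tree on this page.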
    
    Abstract
     Visual materials represent a significantly different type of communication from textual materials and therefore present distinct challenges for the process of retrieval, especially if by retrieval we mean intellectual access to the content of images. This paper outlines the special characteristics of visual materials, focusing on their potential complexity and subjectivity, and the methods used and explored for gaining access to visual materials as reported in the literature. It concludes that methods of access to visual materials are dominated by the relatively mature systems developed for textual materials and that access methods based on visual communication are still largely in the developmental or prototype stage. Although reported research on user requirements in the retrieval of visual information is noticeably lacking, the results of at least one study indicate that the visually based retrieval methods of structured and unstructured browsing seem to be preferred for visual materials and that effective retrieval methods are ultimately related to characteristics of the enquirer and the visual information sought.
  2. Chen, H.; Ng, T.: ¬An algorithmic approach to concept exploration in a large knowledge network (automatic thesaurus consultation) : symbolic branch-and-bound search versus connectionist Hopfield Net Activation (1995) 0.01
    0.008167865 = product of:
      0.01633573 = sum of:
        0.01633573 = product of:
          0.03267146 = sum of:
            0.03267146 = weight(_text_:systems in 2203) [ClassicSimilarity], result of:
              0.03267146 = score(doc=2203,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2037246 = fieldWeight in 2203, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2203)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Presents a framework for knowledge discovery and concept exploration. In order to enhance the concept exploration capability of knowledge-based systems and to alleviate the limitations of the manual browsing approach, develops 2 spreading-activation-based algorithms for concept exploration in large, heterogeneous networks of concepts (e.g. multiple thesauri). One algorithm, which is based on the symbolic AI paradigm, performs a conventional branch-and-bound search on a semantic net representation to identify other highly relevant concepts (a serial, optimal search process). The 2nd algorithm, which is based on the neural network approach, executes the Hopfield net parallel relaxation and convergence process to identify 'convergent' concepts for some initial queries (a parallel, heuristic search process). Tests these 2 algorithms on a large text-based knowledge network of about 13,000 nodes (terms) and 80,000 directed links in the area of computing technologies.
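The Hopfield-net style concept exploration the abstract describes can be sketched as spreading activation with parallel relaxation: query concepts are clamped at full activation, every other node repeatedly recomputes its activation from weighted inputs through a sigmoid transfer function, and iteration stops at convergence. The toy thesaurus links, sigmoid parameters, and acceptance threshold below are invented for illustration and are not from Chen and Ng's system.

```python
import math

def hopfield_spread(weights, seeds, threshold=0.5, eps=1e-4, max_iter=100):
    """Parallel relaxation over a weighted concept network.

    weights: dict mapping (concept, concept) -> directed link weight in [0, 1]
    seeds:   initial query concepts, clamped at activation 1.0
    Returns the set of concepts whose converged activation >= threshold.
    """
    nodes = {n for pair in weights for n in pair} | set(seeds)
    act = {n: (1.0 if n in seeds else 0.0) for n in nodes}
    for _ in range(max_iter):
        nxt = {}
        for j in nodes:
            if j in seeds:            # query concepts stay clamped
                nxt[j] = 1.0
                continue
            net = sum(w * act[i] for (i, k), w in weights.items() if k == j)
            # Sigmoid transfer function (midpoint and slope are illustrative)
            nxt[j] = 1.0 / (1.0 + math.exp(-(net - 0.5) * 6))
        if all(abs(nxt[n] - act[n]) < eps for n in nodes):  # converged
            act = nxt
            break
        act = nxt
    return {n for n in nodes if act[n] >= threshold}

# Hypothetical thesaurus fragment
links = {
    ("neural networks", "hopfield net"): 0.9,
    ("neural networks", "machine learning"): 0.8,
    ("hopfield net", "optimization"): 0.4,
}
related = hopfield_spread(links, {"neural networks"})
```

Strongly linked neighbours of the query concept converge above the threshold, while weakly linked second-hop concepts die out; this is the heuristic, parallel counterpart to the serial branch-and-bound traversal.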
  3. Andersson, R.; Holst, E.: Indexes and other depictions of fictions : a new model for analysis empirically tested (1996) 0.01
    0.008167865 = product of:
      0.01633573 = sum of:
        0.01633573 = product of:
          0.03267146 = sum of:
            0.03267146 = weight(_text_:systems in 473) [ClassicSimilarity], result of:
              0.03267146 = score(doc=473,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2037246 = fieldWeight in 473, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=473)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     In this study, descriptions of a novel by 100 users at 2 Swedish public libraries, Malmö and Mölndal, Mar-Apr 95, were compared to the index terms used for the novels at these libraries. Describes previous systems for fiction indexing, the 2 libraries, and the users interviewed. Compares the AMP system with their own model. The latter operates with terms under the headings phenomena, frame and author's intention. The similarities between the users' and indexers' descriptions were sufficiently close to make it possible to retrieve fiction in accordance with users' wishes in Mölndal, and would have been in Malmö, had more books been indexed with more terms. Sometimes the similarities were close enough for users to retrieve fiction on their own.
  4. Greisdorf, H.; O'Connor, B.: Modelling what users see when they look at images : a cognitive viewpoint (2002) 0.01
    0.008167865 = product of:
      0.01633573 = sum of:
        0.01633573 = product of:
          0.03267146 = sum of:
            0.03267146 = weight(_text_:systems in 4471) [ClassicSimilarity], result of:
              0.03267146 = score(doc=4471,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2037246 = fieldWeight in 4471, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4471)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Analysis of user viewing and query-matching behavior furnishes additional evidence that the relevance of retrieved images for system users may arise from descriptions of objects and content-based elements that are not evident or not even present in the image. This investigation looks at how users assign pre-determined query terms to retrieved images, as well as at a post-retrieval process of image engagement that elicits users' cognitive assessments of meaningful terms. Additionally, affective/emotion-based query terms appear to be an important descriptive category for image retrieval. A system for capturing (eliciting) human interpretations derived from cognitive engagements with viewed images could further enhance the efficiency of image retrieval systems stemming from traditional indexing methods and technology-based content extraction algorithms. An approach to such a system is posited.
  5. Caldera-Serrano, J.: Thematic description of audio-visual information on television (2010) 0.01
    0.008167865 = product of:
      0.01633573 = sum of:
        0.01633573 = product of:
          0.03267146 = sum of:
            0.03267146 = weight(_text_:systems in 3953) [ClassicSimilarity], result of:
              0.03267146 = score(doc=3953,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2037246 = fieldWeight in 3953, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3953)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Purpose - This paper endeavours to show the possibilities for thematic description of audio-visual documents for television with the aim of promoting and facilitating information retrieval. Design/methodology/approach - To achieve these goals different database fields are shown, as well as the way in which they are organised for indexing and thematic element description, analysed and used as an example. Some of the database fields are extracted from an analytical study of the documentary system of television in Spain. Others are being tested in university television on which indexing experiments are carried out. Findings - Not all thematic descriptions are used on television information systems; nevertheless, some television channels do use thematic descriptions of both image and sound, applying thesauri. Moreover, it is possible to access sequences using full text retrieval as well. Originality/value - The development of the documentary task, applying the described techniques, promotes thematic indexing and hence thematic retrieval, which is without doubt one of the aspects most demanded by television journalists (along with people's names). This conceptualisation translates into the adaptation of databases to new indexing methods.
  6. Buckland, M.K.: Obsolescence in subject description (2012) 0.01
    0.008167865 = product of:
      0.01633573 = sum of:
        0.01633573 = product of:
          0.03267146 = sum of:
            0.03267146 = weight(_text_:systems in 299) [ClassicSimilarity], result of:
              0.03267146 = score(doc=299,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2037246 = fieldWeight in 299, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=299)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - The paper aims to explain the character and causes of obsolescence in assigned subject descriptors. Design/methodology/approach - The paper takes the form of a conceptual analysis with examples and reference to existing literature. Findings - Subject description comes in two forms: assigning the name or code of a subject to a document and assigning a document to a named subject category. Each method associates a document with the name of a subject. This naming activity is the site of tensions between the procedural need of information systems for stable records and the inherent multiplicity and instability of linguistic expressions. As languages change, previously assigned subject descriptions become obsolescent. The issues, tensions, and compromises involved are introduced. Originality/value - Drawing on the work of Robert Fairthorne and others, an explanation of the unavoidable obsolescence of assigned subject headings is presented. The discussion relates to libraries, but the same issues arise in any context in which subject description is expected to remain useful for an extended period of time.
  7. Winget, M.: Describing art : an alternative approach to subject access and interpretation (2009) 0.01
    0.0068065543 = product of:
      0.013613109 = sum of:
        0.013613109 = product of:
          0.027226217 = sum of:
            0.027226217 = weight(_text_:systems in 3618) [ClassicSimilarity], result of:
              0.027226217 = score(doc=3618,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.1697705 = fieldWeight in 3618, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3618)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - The purpose of this paper is to examine the art historical antecedents of providing subject access to images. After reviewing the assumptions and limitations inherent in the most prevalent descriptive method, the paper seeks to introduce a new model that allows for more comprehensive representation of visually-based cultural materials. Design/methodology/approach - The paper presents a literature-based conceptual analysis, taking Panofsky's theory of iconography and iconology as the starting-point. Panofsky's conceptual model, while appropriate for art created in the Western academic tradition, ignores or misrepresents work from other eras or cultures. Continued dependence on Panofskian descriptive methods limits the functionality and usefulness of image representation systems. Findings - The paper recommends the development of a more precise and inclusive descriptive model for art objects, which is based on the premise that art is not another sort of text, and should not be interpreted as such. Practical implications - The paper provides suggestions for the development of representation models that will enhance the description of non-textual artifacts. Originality/value - The paper addresses issues in information science, the history of art, and computer science, and suggests that a new descriptive model would be of great value to both humanist and social science scholars.
  8. Sauperl, A.: Subject determination during the cataloging process : the development of a system based on theoretical principles (2002) 0.01
    0.0053026653 = product of:
      0.010605331 = sum of:
        0.010605331 = product of:
          0.021210661 = sum of:
            0.021210661 = weight(_text_:22 in 2293) [ClassicSimilarity], result of:
              0.021210661 = score(doc=2293,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.116070345 = fieldWeight in 2293, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=2293)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    27. 9.2005 14:22:19
  9. Sigel, A.: How can user-oriented depth analysis be constructively guided? (2000) 0.00
    0.004764588 = product of:
      0.009529176 = sum of:
        0.009529176 = product of:
          0.019058352 = sum of:
            0.019058352 = weight(_text_:systems in 133) [ClassicSimilarity], result of:
              0.019058352 = score(doc=133,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.118839346 = fieldWeight in 133, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=133)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     It is vital for library and information science to understand the subject indexing process thoroughly. However, document analysis, the first and most important step in indexing, has not received sufficient attention. As this is an exceptionally hard problem, we still lack a sound indexing theory. Therefore we have difficulties in teaching indexing and in explaining why a given subject representation is "better" than another. Technological advancements have not helped to close this fundamental gap. To proceed, we should ask the right questions instead. Several types of indexer inconsistencies can be explained as acceptable yet different conceptualizations resulting from the variety of groups dealing with a problem from their respective viewpoints. Multiply indexed documents are regarded as the normal case. Intersubjectively replicable indexing results are often questionable or do not constitute interesting cases of indexing at all. In the context of my ongoing dissertation, in which I intend to develop an enhanced indexing theory by investigating improvements within a social sciences domain, this paper explains user-oriented selective depth analysis and why I chose that configuration. Strongly influenced by Mai's dissertation, I also communicate my first insights concerning current indexing theories. I agree that I cannot ignore epistemological stances and philosophical issues in language and meaning related to indexing, and I accept the openness of the interpretive nature of the indexing process. Although I present arguments against the employment of an indexing language as well, it is still indispensable in situations which demand easier access and control by devices. Despite the enormous difficulties that user-oriented and selective depth analysis poses, I argue that it is both feasible and useful if one achieves careful guidance of the possible interpretations.
There is some hope because the number of useful interpretations is limited: every summary is tailored to a purpose, audience and situation. Domain, discourse and social practice entail additional constraints. A pluralistic method mix that focuses on ecologically valid, holistic contexts and employs qualitative methods is recommended. Domain analysis urgently has to be made more practical and applicable. Only then will we be able to investigate domains empirically in order to identify their structures as shaped by the corresponding discourse communities. We plan to represent the recognized problem structures and indexing questions of relevance to a small domain in formal, ontological computer models -- if we can find such stable knowledge structures. This would allow us to tailor summaries dynamically for user communities. For practical purposes we suggest assuming a less demanding position than Hjorland's "totality of the epistemological potential". It is sufficient that we identify and represent iteratively the information needs of today's user groups in interactive knowledge-based systems. The best way to formalize such knowledge gained about discourse communities is, however, unknown. Indexers should stay in direct contact with the community they serve, or be part of it, to ensure agreement with their viewpoints. Checklist/request-oriented indexing could be very helpful, but it remains to be demonstrated how well it will be applicable in the social sciences. A frame-based representation, or at least a sophisticated grouping of terms, could help to express relational knowledge structures. There remains much work to do, since in practice no one has yet shown how such an improved indexing system would work and whether the indexing results were really "better".
  10. Wilson, P.: Subjects and the sense of position (1985) 0.00
    0.004764588 = product of:
      0.009529176 = sum of:
        0.009529176 = product of:
          0.019058352 = sum of:
            0.019058352 = weight(_text_:systems in 3648) [ClassicSimilarity], result of:
              0.019058352 = score(doc=3648,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.118839346 = fieldWeight in 3648, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3648)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     One knows one is in the presence of "theory" when fundamental questions of a "why" nature are asked. Too often it happens that those involved in the design of bibliographic information systems have no time for brooding. It is thus noteworthy when someone appears on the bibliographic scene who troubles to address, and pursue with philosophic rigor, fundamental questions about the way we organize information. Such a person is Patrick Wilson, formerly philosophy professor at the University of California, Los Angeles, and since 1965 on the faculty of the School of Library and Information Studies, University of California, Berkeley. Bibliographic control is the central concept of Wilson's book Two Kinds of Power. It is represented as a kind of power, a power over knowledge. That power is of two kinds: descriptive and exploitive. Descriptive power is the power to retrieve all writings that satisfy some "evaluatively neutral" description, for instance, all writings by Hobbes or all writings on the subject of eternal recurrence. Descriptive power is achieved insofar as the items in our bibliographic universe are fitted with descriptions and these descriptions are syndetically related. Exploitive power is a less familiar concept, but it is more important since it can be used to explain why we attempt to order our bibliographic universe in the first place. Exploitive power is the power to obtain the best textual means to an end. Unlike the concept of descriptive power, that of exploitive power has a normative aspect to it. Someone possessing such power would understand the goal of all bibliographic activity; that is, he would understand the diversity of user purposes and the relativity of what is valuable; he would be omniscient both as a bibliographer and as a psychologist. Since exploitive power is ever out of reach, descriptive power is used as a substitute or approximation for it. How adequate this approximation is is the subject of Wilson's book.
The particular chapter excerpted in this volume deals with the adequacy of subject access methods. Cutter's statement that one of the objects of a library catalog is to show what the library has on a given subject is generally accepted, as though it were obvious what "being on a given subject" means. It is far from obvious. Wilson challenges the underlying presumption that for any document a heading can be found that is coextensive with its subject. This presumption implies that there is such a thing as the (singular) subject of a document and that it can be identified. But, as Wilson shows in his elaborate explication, the notion of "subject" is essentially indeterminate, with the consequence that we are limited in our attempts to achieve either descriptive or exploitive power.
  11. Bade, D.: ¬The creation and persistence of misinformation in shared library catalogs : language and subject knowledge in a technological era (2002) 0.00
    0.0035351103 = product of:
      0.0070702205 = sum of:
        0.0070702205 = product of:
          0.014140441 = sum of:
            0.014140441 = weight(_text_:22 in 1858) [ClassicSimilarity], result of:
              0.014140441 = score(doc=1858,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.07738023 = fieldWeight in 1858, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1858)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 9.1997 19:16:05