Search (35 results, page 1 of 2)

  • theme_ss:"Inhaltsanalyse"
  1. Shatford, S.: Analyzing the subject of a picture : a theoretical approach (1986) 0.02
    0.018505525 = product of:
      0.0740221 = sum of:
        0.0740221 = product of:
          0.11103315 = sum of:
            0.070791304 = weight(_text_:language in 354) [ClassicSimilarity], result of:
              0.070791304 = score(doc=354,freq=4.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.42911017 = fieldWeight in 354, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=354)
            0.040241845 = weight(_text_:29 in 354) [ClassicSimilarity], result of:
              0.040241845 = score(doc=354,freq=2.0), product of:
                0.14791684 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042049456 = queryNorm
                0.27205724 = fieldWeight in 354, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=354)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Abstract
    This paper suggests a theoretical basis for identifying and classifying the kinds of subjects a picture may have, using previously developed principles of cataloging and classification, and concepts taken from the philosophy of art, from meaning in language, and from visual perception. The purpose of developing this theoretical basis is to provide the reader with a means for evaluating, adapting, and applying presently existing indexing languages, or for devising new languages for pictorial materials; this paper does not attempt to invent or prescribe a particular indexing language.
    Date
    7. 1.2007 13:00:29
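The indented tree under each result is Lucene's explain() output for its ClassicSimilarity (TF-IDF) score. As a rough sketch (assuming the classic formulas tf = sqrt(freq) and idf = ln(maxDocs / (docFreq + 1)) + 1, as the labels in the tree suggest), the first term weight of result 1 can be reproduced from the displayed figures:

```python
import math

# Illustrative reconstruction of a ClassicSimilarity term weight, using the
# values shown in the explain() tree for result 1 (doc 354, term "language").
# Names follow the explain output (tf, idf, queryNorm, fieldNorm); this is a
# sketch of the scoring arithmetic, not the exact Lucene code path.

def tf(freq):
    # ClassicSimilarity term-frequency factor: sqrt(freq)
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    # ClassicSimilarity inverse document frequency: ln(maxDocs/(docFreq+1)) + 1
    return math.log(max_docs / (doc_freq + 1)) + 1

query_norm = 0.042049456   # from the tree; normalizes weights across the query
field_norm = 0.0546875     # stored per-field length norm (quantized)

idf_language = idf(2376, 44218)                      # ~3.9232929
query_weight = idf_language * query_norm             # ~0.16497234
field_weight = tf(4.0) * idf_language * field_norm   # ~0.42911017
term_score = query_weight * field_weight             # ~0.070791304
```

The per-term scores of a result are then summed and scaled by the coord factors shown in the tree (here coord(2/3) and coord(1/4)) to give the displayed total.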
  2. Raieli, R.: The semantic hole : enthusiasm and caution around multimedia information retrieval (2012) 0.02
    0.015141658 = product of:
      0.06056663 = sum of:
        0.06056663 = product of:
          0.09084994 = sum of:
            0.050565217 = weight(_text_:language in 4888) [ClassicSimilarity], result of:
              0.050565217 = score(doc=4888,freq=4.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.30650726 = fieldWeight in 4888, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4888)
            0.040284727 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.040284727 = score(doc=4888,freq=4.0), product of:
                0.14725003 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042049456 = queryNorm
                0.27358043 = fieldWeight in 4888, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4888)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Abstract
    This paper centres on tools for the management of new digital documents, which are not only textual but also visual (video), audio, or multimedia in the full sense. Among its aims is to demonstrate that operating within the terms of generic Information Retrieval through textual language alone is limiting; it is instead necessary to consider broader criteria, such as those of MultiMedia Information Retrieval (MMIR), whereby every type of digital document can be analyzed and searched through the elements of language appropriate to its nature. MMIR is presented as the organic complex of the systems of Text Retrieval, Visual Retrieval, Video Retrieval, and Audio Retrieval, each of which handles the concrete textual, visual, audio, or video content of documents directly, an approach here defined as content-based. In conclusion, the limits of this content-based objective access to documents are underlined. The discrepancy known as the semantic gap is that which occurs between semantic-interpretive access and content-based access. Finally, the integration of these conceptions is explained, gathering and composing the merits and advantages of each approach and of the systems for accessing information.
    Date
    22. 1.2012 13:02:10
    Source
    Knowledge organization. 39(2012) no.1, S.13-22
  3. Bade, D.: The creation and persistence of misinformation in shared library catalogs : language and subject knowledge in a technological era (2002) 0.01
    0.009298478 = product of:
      0.037193913 = sum of:
        0.037193913 = sum of:
          0.014302002 = weight(_text_:language in 1858) [ClassicSimilarity], result of:
            0.014302002 = score(doc=1858,freq=2.0), product of:
              0.16497234 = queryWeight, product of:
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.042049456 = queryNorm
              0.08669334 = fieldWeight in 1858, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.015625 = fieldNorm(doc=1858)
          0.011497671 = weight(_text_:29 in 1858) [ClassicSimilarity], result of:
            0.011497671 = score(doc=1858,freq=2.0), product of:
              0.14791684 = queryWeight, product of:
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.042049456 = queryNorm
              0.07773064 = fieldWeight in 1858, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.015625 = fieldNorm(doc=1858)
          0.011394241 = weight(_text_:22 in 1858) [ClassicSimilarity], result of:
            0.011394241 = score(doc=1858,freq=2.0), product of:
              0.14725003 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.042049456 = queryNorm
              0.07738023 = fieldWeight in 1858, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.015625 = fieldNorm(doc=1858)
      0.25 = coord(1/4)
    
    Date
    22. 9.1997 19:16:05
    Footnote
    Arguing that catalogers need to work both quickly and accurately, Bade maintains that employing specialists is the most efficient and effective way to achieve this outcome. Far less compelling than these arguments are Bade's concluding remarks, in which he offers meager suggestions for correcting the problems as he sees them. Overall, this essay is little more than a curmudgeon's diatribe. Addressed primarily to catalogers and library administrators, the analysis presented is too superficial to assist practicing catalogers or cataloging managers in developing solutions to any systemic problems in current cataloging practice, and it presents too little evidence of pervasive problems to convince budget-conscious library administrators of a need to alter practice or to increase their investment in local cataloging operations. Indeed, the reliance upon anecdotal evidence and the apparent nit-picking that dominate the essay might tend to reinforce a negative image of catalogers in the minds of some. To his credit, Bade does provide an important reminder that it is the intellectual contributions made by thousands of erudite catalogers that have made shared cataloging a successful strategy for improving cataloging efficiency. This is an important point that often seems to be forgotten in academic libraries when focus centers on cutting costs. Had Bade focused more narrowly upon the issue of deintellectualization of cataloging and written a carefully structured essay to advance this argument, this essay might have been much more effective. - KO 29(2002) nos.3/4, S.236-237 (A. Sauperl)
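When a record matches several query terms, as result 3 above does, the per-term scores are summed and multiplied by the coordination factor (coord(m/n): the fraction of query clauses matched). A small sketch, assuming ClassicSimilarity's coord behaviour, reproducing the total shown for result 3 (doc 1858) from its three term weights:

```python
# Per-term scores copied from the explain() tree for doc 1858 (result 3).
term_scores = {
    "language": 0.014302002,
    "29": 0.011497671,
    "22": 0.011394241,
}

def coord(overlap, max_overlap):
    # ClassicSimilarity coordination factor: fraction of query clauses matched
    return overlap / max_overlap

# "0.25 = coord(1/4)" in the tree: one of four top-level clauses matched
total = sum(term_scores.values()) * coord(1, 4)   # ~0.009298478
```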
  4. Laffal, J.: A concept analysis of Jonathan Swift's 'Tale of a tub' and 'Gulliver's travels' (1995) 0.01
    0.008130082 = product of:
      0.032520328 = sum of:
        0.032520328 = product of:
          0.09756098 = sum of:
            0.09756098 = weight(_text_:29 in 6362) [ClassicSimilarity], result of:
              0.09756098 = score(doc=6362,freq=4.0), product of:
                0.14791684 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042049456 = queryNorm
                0.6595664 = fieldWeight in 6362, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6362)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Date
    8. 3.1997 10:05:29
    Source
    Computers and the humanities. 29(1995) no.5, S.339-361
  5. Martindale, C.; McKenzie, D.: On the utility of content analysis in author attribution : 'The federalist' (1995) 0.01
    0.008130082 = product of:
      0.032520328 = sum of:
        0.032520328 = product of:
          0.09756098 = sum of:
            0.09756098 = weight(_text_:29 in 822) [ClassicSimilarity], result of:
              0.09756098 = score(doc=822,freq=4.0), product of:
                0.14791684 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042049456 = queryNorm
                0.6595664 = fieldWeight in 822, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=822)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Date
    8. 3.1997 10:05:29
    Source
    Computers and the humanities. 29(1995) no.4, S.259-270
  6. Gardin, J.C.: Document analysis and linguistic theory (1973) 0.01
    0.007665114 = product of:
      0.030660456 = sum of:
        0.030660456 = product of:
          0.09198137 = sum of:
            0.09198137 = weight(_text_:29 in 2387) [ClassicSimilarity], result of:
              0.09198137 = score(doc=2387,freq=2.0), product of:
                0.14791684 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042049456 = queryNorm
                0.6218451 = fieldWeight in 2387, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.125 = fieldNorm(doc=2387)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Source
    Journal of documentation. 29(1973) no.2, S.137-168
  7. Taylor, S.L.: Integrating natural language understanding with document structure analysis (1994) 0.01
    0.0072251074 = product of:
      0.02890043 = sum of:
        0.02890043 = product of:
          0.08670129 = sum of:
            0.08670129 = weight(_text_:language in 1794) [ClassicSimilarity], result of:
              0.08670129 = score(doc=1794,freq=6.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.5255505 = fieldWeight in 1794, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1794)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Document understanding, the interpretation of a document from its image form, is a technology area which benefits greatly from the integration of natural language processing with image processing. Develops a prototype of an Intelligent Document Understanding System (IDUS) which employs several technologies in a cooperative fashion: image processing, optical character recognition, document structure analysis and text understanding. Discusses the areas of research during development of IDUS where the greatest benefit from the integration of natural language processing and image processing occurred: document structure analysis, OCR correction, and text analysis. Discusses 2 applications which are supported by IDUS: text retrieval and automatic generation of hypertext links
  8. Hildebrandt, B.; Moratz, R.; Rickheit, G.; Sagerer, G.: Kognitive Modellierung von Sprach- und Bildverstehen [Cognitive modelling of language and image understanding] (1996) 0.01
    0.007151001 = product of:
      0.028604005 = sum of:
        0.028604005 = product of:
          0.08581201 = sum of:
            0.08581201 = weight(_text_:language in 7292) [ClassicSimilarity], result of:
              0.08581201 = score(doc=7292,freq=2.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.52016 = fieldWeight in 7292, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.09375 = fieldNorm(doc=7292)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Source
    Natural language processing and speech technology: Results of the 3rd KONVENS Conference, Bielefeld, October 1996. Ed.: D. Gibbon
  9. Rowe, N.C.: Inferring depictions in natural-language captions for efficient access to picture data (1994) 0.01
    0.0058992757 = product of:
      0.023597103 = sum of:
        0.023597103 = product of:
          0.070791304 = sum of:
            0.070791304 = weight(_text_:language in 7296) [ClassicSimilarity], result of:
              0.070791304 = score(doc=7296,freq=4.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.42911017 = fieldWeight in 7296, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7296)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Multimedia data can require significant examination time to find desired features ('content analysis'). An alternative is using natural-language captions to describe the data, and matching captions to English queries. But it is hard to include everything in the caption of a complicated datum, so significant content analysis may still seem required. We discuss linguistic clues in captions, both syntactic and semantic, that can simplify or eliminate content analysis. We introduce the notion of content depiction and rules for depiction inference. Our approach is implemented in an expert system which demonstrated significant increases in recall in experiments
  10. Pejtersen, A.M.: Design of a classification scheme for fiction based on an analysis of actual user-librarian communication, and use of the scheme for control of librarians' search strategies (1980) 0.00
    0.0047476008 = product of:
      0.018990403 = sum of:
        0.018990403 = product of:
          0.056971207 = sum of:
            0.056971207 = weight(_text_:22 in 5835) [ClassicSimilarity], result of:
              0.056971207 = score(doc=5835,freq=2.0), product of:
                0.14725003 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042049456 = queryNorm
                0.38690117 = fieldWeight in 5835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5835)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Date
    5. 8.2006 13:22:44
  11. Andersen, J.; Christensen, F.S.: Wittgenstein and indexing theory (2001) 0.00
    0.004213768 = product of:
      0.016855072 = sum of:
        0.016855072 = product of:
          0.050565217 = sum of:
            0.050565217 = weight(_text_:language in 1590) [ClassicSimilarity], result of:
              0.050565217 = score(doc=1590,freq=4.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.30650726 = fieldWeight in 1590, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1590)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    The paper considers indexing an activity that deals with linguistic entities. It rests on the assumption that a theory of indexing should be based on a philosophy of language, because indexing is concerned with the linguistic representation of meaning. The paper consists of four sections: it begins with some basic considerations on the nature of indexing and the requirements for a theory of it; this is followed by a short review of the use of Wittgenstein's philosophy in the LIS literature; next is an analysis of Wittgenstein's work Philosophical Investigations; finally, we deduce a theory of indexing from this philosophy. Considering an indexing theory a theory of meaning entails that, for the purpose of retrieval, indexing is a representation of meaning. Therefore, an indexing theory is concerned with how words are used in the linguistic context. Furthermore, the indexing process is a communicative process containing an interpretative element. Through the philosophy of the later Wittgenstein, it is shown that language and meaning are publicly constituted entities. Since they form the basis of indexing, a theory hereof must take into account that no single actor can define the meaning of documents. Rather, this is decided by the social, historical and linguistic context in which the document is produced, distributed and exchanged. Indexing must clarify and reflect these contexts.
  12. Zarri, G.P.: Indexing and querying of narrative documents, a knowledge representation approach (2003) 0.00
    0.0041714176 = product of:
      0.01668567 = sum of:
        0.01668567 = product of:
          0.05005701 = sum of:
            0.05005701 = weight(_text_:language in 2691) [ClassicSimilarity], result of:
              0.05005701 = score(doc=2691,freq=2.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.30342668 = fieldWeight in 2691, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2691)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    We describe here NKRL (Narrative Knowledge Representation Language), a semantic formalism for taking into account the characteristics of narrative multimedia documents. In these documents, the information content consists in the description of 'events' that relate the real or intended behaviour of some 'actors' (characters, personages, etc.). Narrative documents of an economic interest correspond to news stories, corporate documents, normative and legal texts, intelligence messages, representation of patient's medical records, etc. NKRL is characterised by the use of several knowledge representation principles and several high-level inference tools.
  13. Langridge, D.W.: Subject analysis : principles and procedures (1989) 0.00
    0.0041714176 = product of:
      0.01668567 = sum of:
        0.01668567 = product of:
          0.05005701 = sum of:
            0.05005701 = weight(_text_:language in 2021) [ClassicSimilarity], result of:
              0.05005701 = score(doc=2021,freq=2.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.30342668 = fieldWeight in 2021, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2021)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Subject analysis is the basis of all classifying and indexing techniques and is equally applicable to automatic and manual indexing systems. This book discusses subject analysis as an activity in its own right, independent of any indexing language. It examines the theoretical basis of subject analysis using the concepts of forms of knowledge as applicable to classification schemes.
  14. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.00
    0.0037980804 = product of:
      0.0151923215 = sum of:
        0.0151923215 = product of:
          0.045576964 = sum of:
            0.045576964 = weight(_text_:22 in 5830) [ClassicSimilarity], result of:
              0.045576964 = score(doc=5830,freq=2.0), product of:
                0.14725003 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042049456 = queryNorm
                0.30952093 = fieldWeight in 5830, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5830)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Date
    5. 8.2006 13:22:08
  15. Hauff-Hartig, S.: Automatische Transkription von Videos : Fernsehen 3.0: Automatisierte Sentimentanalyse und Zusammenstellung von Kurzvideos mit hohem Aufregungslevel KI-generierte Metadaten: Von der Technologiebeobachtung bis zum produktiven Einsatz [Automatic transcription of videos : Television 3.0: automated sentiment analysis and compilation of short videos with a high excitement level; AI-generated metadata: from technology watch to productive use] (2021) 0.00
    0.0037980804 = product of:
      0.0151923215 = sum of:
        0.0151923215 = product of:
          0.045576964 = sum of:
            0.045576964 = weight(_text_:22 in 251) [ClassicSimilarity], result of:
              0.045576964 = score(doc=251,freq=2.0), product of:
                0.14725003 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042049456 = queryNorm
                0.30952093 = fieldWeight in 251, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=251)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Date
    22. 5.2021 12:43:05
  16. Amac, T.: Linguistic context analysis : a new approach to communication evaluation (1997) 0.00
    0.0035755006 = product of:
      0.014302002 = sum of:
        0.014302002 = product of:
          0.042906005 = sum of:
            0.042906005 = weight(_text_:language in 2576) [ClassicSimilarity], result of:
              0.042906005 = score(doc=2576,freq=2.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.26008 = fieldWeight in 2576, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2576)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Argues that the integration of computational psycholinguistics can improve corporate communication, and thus become a new strategic tool. An electronic dictionary was created of basic, neutral and negative connotations for nouns, verbs and adjectives appearing in press releases and other communication media, which can be updated with client specific words. The focus on negative messages has the objective of detecting who, why and how publics are criticized, to learn from the vocabulary of opinion leaders and to improve issues management proactively. Suggests a new form of analysis called 'computational linguistic context analysis' (CLCA) by analyzing nominal groups of negative words, rather than monitoring content analysis in the traditional way. Concludes that CLCA can be used to analyze large quantities of press cuttings about a company and could, theoretically, be used to analyze the structure, language and style of a particular journalist to whom it is planned to send a press release or article
  17. Hjoerland, B.: Towards a theory of aboutness, subject, topicality, theme, domain, field, content ... and relevance (2001) 0.00
    0.0033534872 = product of:
      0.013413949 = sum of:
        0.013413949 = product of:
          0.040241845 = sum of:
            0.040241845 = weight(_text_:29 in 6032) [ClassicSimilarity], result of:
              0.040241845 = score(doc=6032,freq=2.0), product of:
                0.14791684 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042049456 = queryNorm
                0.27205724 = fieldWeight in 6032, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6032)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Date
    29. 9.2001 14:03:14
  18. Chen, H.: An analysis of image queries in the field of art history (2001) 0.00
    0.0033534872 = product of:
      0.013413949 = sum of:
        0.013413949 = product of:
          0.040241845 = sum of:
            0.040241845 = weight(_text_:29 in 5187) [ClassicSimilarity], result of:
              0.040241845 = score(doc=5187,freq=2.0), product of:
                0.14791684 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042049456 = queryNorm
                0.27205724 = fieldWeight in 5187, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5187)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Chen arranged with an Art History instructor to require 20 medieval art images in papers received from 29 students. Participants completed a self-administered presearch and postsearch questionnaire, and were interviewed after questionnaire analysis, in order to collect both the keywords and phrases they planned to use and those actually used. Three MLIS student reviewers then mapped the queries to Enser and McGregor's four categories, Jorgensen's 12 classes, and Fidel's 12 feature data and object poles, providing a degree of match on a seven-point scale (1 = not at all, 7 = exact). The reviewers gave the highest scores to Enser and McGregor's categories. Modifications to both the Enser and McGregor and the Jorgensen schemes are suggested
  19. Marshall, L.: Specific and generic subject headings : increasing subject access to library materials (2003) 0.00
    0.0033534872 = product of:
      0.013413949 = sum of:
        0.013413949 = product of:
          0.040241845 = sum of:
            0.040241845 = weight(_text_:29 in 5497) [ClassicSimilarity], result of:
              0.040241845 = score(doc=5497,freq=2.0), product of:
                0.14791684 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042049456 = queryNorm
                0.27205724 = fieldWeight in 5497, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5497)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Date
    30. 7.2006 14:29:04
  20. Weinberg, B.H.: Why indexing fails the researcher (1988) 0.00
    0.002979584 = product of:
      0.011918336 = sum of:
        0.011918336 = product of:
          0.03575501 = sum of:
            0.03575501 = weight(_text_:language in 703) [ClassicSimilarity], result of:
              0.03575501 = score(doc=703,freq=2.0), product of:
                0.16497234 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.042049456 = queryNorm
                0.21673335 = fieldWeight in 703, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=703)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    It is a truism in information science that indexing is associated with 'aboutness', and that index terms that accurately represent what a document is about will serve the needs of the user/searcher well. It is contended in this paper that indexing which is limited to the representation of aboutness serves the novice in a discipline adequately, but does not serve the scholar or researcher, who is concerned with highly specific aspects of or points-of-view on a subject. The linguistic analogs of 'aboutness' and 'aspects' are 'topic' and 'comment' respectively. Serial indexing services deal with topics at varying levels of specificity, but neglect comment almost entirely. This may explain the underutilization of secondary information services by scholars, as has been repeatedly demonstrated in user studies. It may also account for the incomplete lists of bibliographic references in many research papers. Natural language searching of fulltext databases does not solve this problem, because the aspect of a topic of interest to researchers is often inexpressible in concrete terms. The thesis is illustrated with examples of indexing failures in research projects the author has conducted on a range of linguistic and library-information science topics. Finally, the question of whether indexing can be improved to meet the needs of researchers is examined

Languages

  • e 32
  • d 3

Types

  • a 31
  • m 4
  • el 1