Search (9 results, page 1 of 1)

  • year_i:[1990 TO 2000}
  • theme_ss:"Inhaltsanalyse"
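The two facets above are active filter queries against the underlying Solr index, and the indented score breakdowns in the result list are the per-document explanations Solr returns when debug output is requested. The following is a minimal sketch of how such a filtered search might be issued; the host, core name and query string are placeholders (the listing only shows the active filters, not the full request), so treat them as assumptions:

import requests  # plain HTTP client; any Solr client library would work as well

# Hypothetical endpoint: host, port and core name are assumptions, not taken from the listing.
SOLR_SELECT = "http://localhost:8983/solr/catalog/select"

params = {
    "q": "inhaltsanalyse",                  # placeholder: the actual query string is not shown above
    "fq": ['year_i:[1990 TO 2000}',         # facet filter: 1990 inclusive, 2000 exclusive
           'theme_ss:"Inhaltsanalyse"'],    # facet filter on the subject field
    "rows": 10,
    "debugQuery": "true",                   # asks Solr for the per-document score explanations
    "wt": "json",
}

response = requests.get(SOLR_SELECT, params=params)
results = response.json()
# debug.explain maps each document id to a plain-text explain tree like the ones listed below
for doc_id, explanation in results["debug"]["explain"].items():
    print(doc_id, explanation.splitlines()[0])  # first line = the total score for that document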
  1. Rowe, N.C.: Inferring depictions in natural-language captions for efficient access to picture data (1994) 0.05
    0.046197 = product of:
      0.092394 = sum of:
        0.06271934 = weight(_text_:data in 7296) [ClassicSimilarity], result of:
          0.06271934 = score(doc=7296,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.42357713 = fieldWeight in 7296, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7296)
        0.029674664 = product of:
          0.05934933 = sum of:
            0.05934933 = weight(_text_:processing in 7296) [ClassicSimilarity], result of:
              0.05934933 = score(doc=7296,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.3130829 = fieldWeight in 7296, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7296)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Multimedia data can require significant examination time to find desired features ('content analysis'). An alternative is to use natural-language captions to describe the data and to match captions to English queries. But it is hard to include everything in the caption of a complicated datum, so significant content analysis may still seem required. We discuss linguistic clues in captions, both syntactic and semantic, that can simplify or eliminate content analysis. We introduce the notion of content depiction and rules for depiction inference. Our approach is implemented in an expert system which demonstrated significant increases in recall in experiments
    Source
    Information processing and management. 30(1994) no.3, S.379-388
  2. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.03
    0.03338095 = product of:
      0.0667619 = sum of:
        0.04138403 = weight(_text_:data in 5830) [ClassicSimilarity], result of:
          0.04138403 = score(doc=5830,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2794884 = fieldWeight in 5830, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=5830)
        0.025377871 = product of:
          0.050755743 = sum of:
            0.050755743 = weight(_text_:22 in 5830) [ClassicSimilarity], result of:
              0.050755743 = score(doc=5830,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.30952093 = fieldWeight in 5830, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5830)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This paper examines various issues that arise in establishing a theoretical basis for an experimental fiction analysis system. It analyzes the warrants of fiction and of works about fiction. From this analysis, it derives classificatory requirements for a fiction system. Classificatory techniques that may contribute to the specification of data elements in fiction are suggested
    Date
    5.8.2006 13:22:08
  3. Taylor, S.L.: Integrating natural language understanding with document structure analysis (1994) 0.02
    0.016588641 = product of:
      0.066354565 = sum of:
        0.066354565 = product of:
          0.13270913 = sum of:
            0.13270913 = weight(_text_:processing in 1794) [ClassicSimilarity], result of:
              0.13270913 = score(doc=1794,freq=10.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.7000747 = fieldWeight in 1794, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1794)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Document understanding, the interpretation of a document from its image form, is a technology area which benefits greatly from the integration of natural language processing with image processing. Develops a prototype of an Intelligent Document Understanding System (IDUS) which employs several technologies in a cooperative fashion: image processing, optical character recognition, document structure analysis and text understanding. Discusses the areas of research during the development of IDUS where the greatest benefit from the integration of natural language processing and image processing was found: document structure analysis, OCR correction, and text analysis. Discusses 2 applications which are supported by IDUS: text retrieval and automatic generation of hypertext links
  4. Roberts, C.W.; Popping, R.: Computer-supported content analysis : some recent developments (1993) 0.01
    0.01293251 = product of:
      0.05173004 = sum of:
        0.05173004 = weight(_text_:data in 4236) [ClassicSimilarity], result of:
          0.05173004 = score(doc=4236,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.34936053 = fieldWeight in 4236, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.078125 = fieldNorm(doc=4236)
      0.25 = coord(1/4)
    
    Abstract
    Presents an overview of some recent developments in the clause-based content analysis of linguistic data. Introduces network analysis of evaluative texts, for the analysis of cognitive maps, and linguistic content analysis. Focuses on the types of substantive inferences afforded by the three approaches
  5. Solomon, P.: Access to fiction for children : a user-based assessment of options and opportunities (1997) 0.01
    0.012802532 = product of:
      0.051210128 = sum of:
        0.051210128 = weight(_text_:data in 5845) [ClassicSimilarity], result of:
          0.051210128 = score(doc=5845,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.34584928 = fieldWeight in 5845, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5845)
      0.25 = coord(1/4)
    
    Abstract
    Reports on a study of children's intentions, purposes, search terms, strategies, successes and breakdowns in accessing fiction. Data was gathered using naturalistic methods of persistent, intensive observation and questioning with children in several school library media centres in the USA, including 997 OPAC transactions. Analyzes the data and highlights aspects of the broader context of the system which may help in development of mechanisms for electronic access
  6. Hildebrandt, B.; Moratz, R.; Rickheit, G.; Sagerer, G.: Kognitive Modellierung von Sprach- und Bildverstehen (1996) 0.01
    0.012717713 = product of:
      0.05087085 = sum of:
        0.05087085 = product of:
          0.1017417 = sum of:
            0.1017417 = weight(_text_:processing in 7292) [ClassicSimilarity], result of:
              0.1017417 = score(doc=7292,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.53671354 = fieldWeight in 7292, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.09375 = fieldNorm(doc=7292)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Natural language processing and speech technology: Results of the 3rd KONVENS Conference, Bielefeld, October 1996. Ed.: D. Gibbon
  7. From information to knowledge : conceptual and content analysis by computer (1995) 0.01
    0.009144665 = product of:
      0.03657866 = sum of:
        0.03657866 = weight(_text_:data in 5392) [ClassicSimilarity], result of:
          0.03657866 = score(doc=5392,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24703519 = fieldWeight in 5392, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5392)
      0.25 = coord(1/4)
    
    Content
    SCHMIDT, K.M.: Concepts - content - meaning: an introduction; DUCHASTEL, J. et al.: The SACAO project: using computation toward textual data analysis; PAQUIN, L.-C. and L. DUPUY: An approach to expertise transfer: computer-assisted text analysis; HOGENRAAD, R., Y. BESTGEN and J.-L. NYSTEN: Terrorist rhetoric: texture and architecture; MOHLER, P.P.: On the interaction between reading and computing: an interpretative approach to content analysis; LANCASHIRE, I.: Computer tools for cognitive stylistics; MERGENTHALER, E.: An outline of knowledge based text analysis; NAMENWIRTH, J.Z.: Ideography in computer-aided content analysis; WEBER, R.P. and J.Z. NAMENWIRTH: Content-analytic indicators: a self-critique; McKINNON, A.: Optimizing the aberrant frequency word technique; ROSATI, R.: Factor analysis in classical archaeology: export patterns of Attic pottery trade; PETRILLO, P.S.: Old and new worlds: ancient coinage and modern technology; DARANYI, S., S. MARJAI et al.: Caryatids and the measurement of semiosis in architecture; ZARRI, G.P.: Intelligent information retrieval: an application in the field of historical biographical data; BOUCHARD, G., R. ROY et al.: Computers and genealogy: from family reconstitution to population reconstruction; DEMÉLAS-BOHY, M.-D. and M. RENAUD: Instability, networks and political parties: a political history expert system prototype; DARANYI, S., A. ABRANYI and G. KOVACS: Knowledge extraction from ethnopoetic texts by multivariate statistical methods; FRAUTSCHI, R.L.: Measures of narrative voice in French prose fiction applied to textual samples from the enlightenment to the twentieth century; DANNENBERG, R. et al.: A project in computer music: the musician's workbench
  8. Farrow, J.: All in the mind : concept analysis in indexing (1995) 0.01
    0.008478476 = product of:
      0.033913903 = sum of:
        0.033913903 = product of:
          0.067827806 = sum of:
            0.067827806 = weight(_text_:processing in 2926) [ClassicSimilarity], result of:
              0.067827806 = score(doc=2926,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.35780904 = fieldWeight in 2926, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2926)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The indexing process consists of the comprehension of the document to be indexed, followed by the production of a set of index terms. Differences between academic indexing and back-of-the-book indexing are discussed. Text comprehension is a branch of human information processing, and it is argued that the model of text comprehension and production developed by van Dijk and Kintsch can form the basis for a cognitive process model of indexing. Strategies for testing such a model are suggested
  9. Weimer, K.H.: The nexus of subject analysis and bibliographic description : the case of multipart videos (1996) 0.00
    0.0047583506 = product of:
      0.019033402 = sum of:
        0.019033402 = product of:
          0.038066804 = sum of:
            0.038066804 = weight(_text_:22 in 6525) [ClassicSimilarity], result of:
              0.038066804 = score(doc=6525,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.23214069 = fieldWeight in 6525, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6525)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Cataloging and classification quarterly. 22(1996) no.2, S.5-18
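
The indented score breakdowns above are Lucene explain trees produced by the ClassicSimilarity (TF-IDF) ranking model. As a minimal worked sketch of how the listed factors fit together, the following recomputes the score of result 1 (doc 7296) from the numbers shown; the formulas are Lucene's classic TF-IDF (tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm), and all concrete inputs are taken directly from the explain output:

import math

def tf(freq):
    # term-frequency factor in ClassicSimilarity: sqrt(freq)
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    # inverse document frequency: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

QUERY_NORM = 0.046827413  # queryNorm as reported in the explain trees above

def term_weight(freq, doc_freq, max_docs, field_norm):
    # weight(term) = queryWeight * fieldWeight
    term_idf = idf(doc_freq, max_docs)
    query_weight = term_idf * QUERY_NORM              # e.g. ~0.14807065 for _text_:data
    field_weight = tf(freq) * term_idf * field_norm   # e.g. ~0.42357713 for _text_:data in doc 7296
    return query_weight * field_weight

# Result 1 (doc 7296): _text_:data (freq=6, docFreq=5088) and
# _text_:processing (freq=2, docFreq=2097), fieldNorm=0.0546875, maxDocs=44218.
w_data = term_weight(6.0, 5088, 44218, 0.0546875)        # ~0.06271934
w_processing = term_weight(2.0, 2097, 44218, 0.0546875)  # ~0.05934933

# coord(1/2) on the nested clause, coord(2/4) on the outer boolean query.
score = (w_data + 0.5 * w_processing) * 0.5
print(round(score, 6))  # ~0.046197, matching the score listed for result 1

The same arithmetic reproduces the other eight explain trees; only the term frequencies, document frequencies, field norms and coord factors differ from entry to entry.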