Search (51 results, page 1 of 3)

  • theme_ss:"Inhaltsanalyse"
  • year_i:[1990 TO 2000}
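  The year filter uses Lucene range syntax with mixed brackets: [1990 TO 2000} includes 1990 but excludes 2000, matching the 1990s scope of the hits below. As a minimal sketch (the parameter layout is an assumption about a Solr-style backend; only the field names and values come from the filters above), the active query could be reproduced as:

      # Hypothetical reconstruction of the active filters as Solr-style parameters.
      from urllib.parse import urlencode

      params = {
          "q": "*:*",
          "fq": [
              'theme_ss:"Inhaltsanalyse"',
              "year_i:[1990 TO 2000}",  # 1990 inclusive, 2000 exclusive
          ],
          "rows": 20,  # 20 hits per page, so 51 results span 3 pages
          "start": 0,  # page 1
      }
      print(urlencode(params, doseq=True))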
  1. Hildebrandt, B.; Moratz, R.; Rickheit, G.; Sagerer, G.: Kognitive Modellierung von Sprach- und Bildverstehen (1996) 0.02
    0.024060382 = product of:
      0.048120763 = sum of:
        0.04530031 = weight(_text_:von in 7292) [ClassicSimilarity], result of:
          0.04530031 = score(doc=7292,freq=2.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.35372335 = fieldWeight in 7292, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.09375 = fieldNorm(doc=7292)
        0.002820454 = product of:
          0.008461362 = sum of:
            0.008461362 = weight(_text_:a in 7292) [ClassicSimilarity], result of:
              0.008461362 = score(doc=7292,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.15287387 = fieldWeight in 7292, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=7292)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Type
    a
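  The indented tree under each hit is Lucene ClassicSimilarity "explain" output: for each matching term, queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm are multiplied, the per-term products are summed, and the sum is scaled by a coordination factor such as coord(2/4). A minimal sketch that reproduces the 'von' contribution to result 1 above (the function and variable names are ours; the constants are copied from the explain tree):

      import math

      def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
          # Lucene ClassicSimilarity building blocks
          idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # 2.6679487
          tf = math.sqrt(freq)                               # 1.4142135
          query_weight = idf * query_norm                    # 0.12806706
          field_weight = tf * idf * field_norm               # 0.35372335
          return query_weight * field_weight

      # Constants from weight(_text_:von in 7292):
      print(term_score(freq=2.0, doc_freq=8340, max_docs=44218,
                       query_norm=0.04800207, field_norm=0.09375))
      # -> 0.04530031; summed with the '_text_:a' contribution (0.002820454)
      #    and multiplied by coord(2/4) = 0.5, this gives the displayed 0.024060382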
  2. Nohr, H.: Inhaltsanalyse (1999) 0.02
    0.016040254 = product of:
      0.03208051 = sum of:
        0.030200208 = weight(_text_:von in 3430) [ClassicSimilarity], result of:
          0.030200208 = score(doc=3430,freq=2.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.23581557 = fieldWeight in 3430, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.0625 = fieldNorm(doc=3430)
        0.0018803024 = product of:
          0.005640907 = sum of:
            0.005640907 = weight(_text_:a in 3430) [ClassicSimilarity], result of:
              0.005640907 = score(doc=3430,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.10191591 = fieldWeight in 3430, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3430)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    Content analysis is the elementary sub-process of the indexing of documents. Despite this central position within subject-oriented document description, the process of content analysis still receives too little attention in theory and practice. The reason for this neglect lies in the supposedly subjective character of the comprehension process. To overcome this problem, the precise object of content analysis is first determined. From this, methodologically productive approaches and procedures for content analysis can be derived. Finally, some further tasks of content analysis, such as qualitative assessment, are discussed
    Type
    a
  3. Langridge, D.W.: Inhaltsanalyse: Grundlagen und Methoden (1994) 0.01
    0.01155861 = product of:
      0.04623444 = sum of:
        0.04623444 = weight(_text_:von in 3923) [ClassicSimilarity], result of:
          0.04623444 = score(doc=3923,freq=12.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.3610174 = fieldWeight in 3923, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3923)
      0.25 = coord(1/4)
    
    Abstract
    One step of subject indexing has so far gone almost unnoticed in the library science literature, although it forms its starting point and is therefore of fundamental importance: the analysis of the content of documents. The aim of Langridge's work, now available for the first time in German translation, is to make it possible to state clearly, on the basis of well-founded criteria, what a document is about and what kind of document it is. Building on a foundation of philosophically derived forms of knowledge, Langridge uses a wealth of concrete examples to show ways (but also wrong turns) of approaching the content of documents and of reaching results that can be verified thanks to objective criteria. He thus addresses both students of the field, by laying out the fundamental structures of knowledge, and experienced practitioners who want to make their work more effective and to place their decisions on a basis that objectifies personal judgement.
    Issue
    Translated by U. Reimer-Böhner.
  4. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.01
    0.0105517935 = product of:
      0.042207174 = sum of:
        0.042207174 = product of:
          0.06331076 = sum of:
            0.011281814 = weight(_text_:a in 5830) [ClassicSimilarity], result of:
              0.011281814 = score(doc=5830,freq=8.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.20383182 = fieldWeight in 5830, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5830)
            0.052028947 = weight(_text_:22 in 5830) [ClassicSimilarity], result of:
              0.052028947 = score(doc=5830,freq=2.0), product of:
                0.16809508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04800207 = queryNorm
                0.30952093 = fieldWeight in 5830, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5830)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Abstract
    This paper examines various issues that arise in establishing a theoretical basis for an experimental fiction analysis system. It analyzes the warrants of fiction and of works about fiction. From this analysis, it derives classificatory requirements for a fiction system. Classificatory techniques that may contribute to the specification of data elements in fiction are suggested
    Date
    5.8.2006 13:22:08
    Type
    a
  5. Weimer, K.H.: The nexus of subject analysis and bibliographic description : the case of multipart videos (1996) 0.01
    0.007208732 = product of:
      0.028834928 = sum of:
        0.028834928 = product of:
          0.04325239 = sum of:
            0.004230681 = weight(_text_:a in 6525) [ClassicSimilarity], result of:
              0.004230681 = score(doc=6525,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.07643694 = fieldWeight in 6525, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6525)
            0.039021708 = weight(_text_:22 in 6525) [ClassicSimilarity], result of:
              0.039021708 = score(doc=6525,freq=2.0), product of:
                0.16809508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04800207 = queryNorm
                0.23214069 = fieldWeight in 6525, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6525)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Source
    Cataloging and classification quarterly. 22(1996) no.2, p.5-18
    Type
    a
  6. Farrow, J.: Indexing as a cognitive process (1994) 0.00
    0.0013295747 = product of:
      0.005318299 = sum of:
        0.005318299 = product of:
          0.015954897 = sum of:
            0.015954897 = weight(_text_:a in 1257) [ClassicSimilarity], result of:
              0.015954897 = score(doc=1257,freq=4.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.28826174 = fieldWeight in 1257, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=1257)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Type
    a
  7. Laffal, J.: A concept analysis of Jonathan Swift's 'Tale of a tub' and 'Gulliver's travels' (1995) 0.00
    0.0012212924 = product of:
      0.0048851697 = sum of:
        0.0048851697 = product of:
          0.014655508 = sum of:
            0.014655508 = weight(_text_:a in 6362) [ClassicSimilarity], result of:
              0.014655508 = score(doc=6362,freq=6.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.26478532 = fieldWeight in 6362, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6362)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Type
    a
  8. Renouf, A.: Making sense of text : automated approaches to meaning extraction (1993) 0.00
    0.0011633779 = product of:
      0.0046535116 = sum of:
        0.0046535116 = product of:
          0.013960535 = sum of:
            0.013960535 = weight(_text_:a in 7111) [ClassicSimilarity], result of:
              0.013960535 = score(doc=7111,freq=4.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.25222903 = fieldWeight in 7111, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=7111)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Type
    a
  9. Farrow, J.: All in the mind : concept analysis in indexing (1995) 0.00
    0.0010511212 = product of:
      0.0042044846 = sum of:
        0.0042044846 = product of:
          0.012613453 = sum of:
            0.012613453 = weight(_text_:a in 2926) [ClassicSimilarity], result of:
              0.012613453 = score(doc=2926,freq=10.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.22789092 = fieldWeight in 2926, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2926)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    The indexing process consists of the comprehension of the document to be indexed, followed by the production of a set of index terms. Differences between academic indexing and back-of-the-book indexing are discussed. Text comprehension is a branch of human information processing, and it is argued that the model of text comprehension and production developed by van Dijk and Kintsch can form the basis for a cognitive process model of indexing. Strategies for testing such a model are suggested
    Type
    a
  10. Nahl-Jakobovits, D.; Jakobovits, L.A.: A content analysis method for developing user-based objectives (1992) 0.00
    0.0010177437 = product of:
      0.004070975 = sum of:
        0.004070975 = product of:
          0.012212924 = sum of:
            0.012212924 = weight(_text_:a in 3015) [ClassicSimilarity], result of:
              0.012212924 = score(doc=3015,freq=6.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.22065444 = fieldWeight in 3015, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3015)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    The article explains content analysis, a method whereby statements taken from oral or written library user comments are labeled as particular speech acts. These speech acts are then categorized into the three behavioral domains: affective, cognitive, and sensorimotor, and used to construct user-based instructional objectives
    Type
    a
  11. Svenonius, E.; McGarry, D.: Objectivity in evaluating subject heading assignment (1993) 0.00
    0.0010075148 = product of:
      0.004030059 = sum of:
        0.004030059 = product of:
          0.012090176 = sum of:
            0.012090176 = weight(_text_:a in 5612) [ClassicSimilarity], result of:
              0.012090176 = score(doc=5612,freq=12.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.21843673 = fieldWeight in 5612, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5612)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Recent papers have called attention to discrepancies in the assignment of LCSH. While philosophical arguments can be made that subject analysis, if not a logical impossibility, is at least point-of-view dependent, subject headings continue to be assigned and continue to be useful. The hypothesis advanced in the present project is that, to a considerable degree, there is a clear-cut right and wrong to LCSH subject heading assignment. To test the hypothesis, it was postulated that the assignment of a subject heading is correct if it is supported by textual warrant (at least 20% of the book being cataloged is on the topic) and is constructed in accordance with the LoC Subject Cataloging Manual: Subject Headings (a toy check of the warrant criterion follows this entry). A sample of 100 books on scientific subjects was used to test the hypothesis
    Type
    a
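  As a toy illustration of the textual-warrant criterion described in the abstract above (the page counts are invented for the example), the 20% threshold is just a ratio check:

      def has_textual_warrant(pages_on_topic, total_pages, threshold=0.20):
          # Warrant test in the spirit of Svenonius/McGarry: the topic must
          # cover at least 20% of the book being cataloged
          return pages_on_topic / total_pages >= threshold

      print(has_textual_warrant(55, 240))  # True: about 23% of the book is on the topic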
  12. Ornager, S.: View a picture : theoretical image analysis and empirical user studies on indexing and retrieval (1996) 0.00
    0.0010075148 = product of:
      0.004030059 = sum of:
        0.004030059 = product of:
          0.012090176 = sum of:
            0.012090176 = weight(_text_:a in 904) [ClassicSimilarity], result of:
              0.012090176 = score(doc=904,freq=12.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.21843673 = fieldWeight in 904, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=904)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Examines Panofsky's and Barthes's theories of image analysis and reports on a study of criteria for analyzing and indexing images and of the types of user queries used in 15 Danish newspaper image archives. A structured interview method, observation, and various categories for subject analysis were used. The results identify a minimum list of elements and lead to a user typology with five categories. The requirements for retrieval may involve combining images in a more visual way with text-based image retrieval
    Type
    a
  13. Chen, H.; Ng, T.: An algorithmic approach to concept exploration in a large knowledge network (automatic thesaurus consultation) : symbolic branch-and-bound search versus connectionist Hopfield Net Activation (1995) 0.00
    9.97181E-4 = product of:
      0.003988724 = sum of:
        0.003988724 = product of:
          0.011966172 = sum of:
            0.011966172 = weight(_text_:a in 2203) [ClassicSimilarity], result of:
              0.011966172 = score(doc=2203,freq=16.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.2161963 = fieldWeight in 2203, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2203)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Presents a framework for knowledge discovery and concept exploration. In order to enhance the concept exploration capability of knowledge-based systems and to alleviate the limitations of the manual browsing approach, develops 2 spreading-activation-based algorithms for concept exploration in large, heterogeneous networks of concepts (e.g. multiple thesauri). One algorithm, based on the symbolic AI paradigm, performs a conventional branch-and-bound search on a semantic net representation to identify other highly relevant concepts (a serial, optimal search process). The 2nd algorithm, based on the neural network approach, executes the Hopfield net parallel relaxation and convergence process to identify 'convergent' concepts for some initial queries (a parallel, heuristic search process; an illustrative sketch of spreading activation follows this entry). Tests these 2 algorithms on a large text-based knowledge network of about 13,000 nodes (terms) and 80,000 directed links in the area of computing technologies
    Type
    a
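  The abstract above contrasts a serial branch-and-bound search with a parallel Hopfield-style relaxation. As an illustrative sketch only (the graph, weights, threshold, and function names are invented here, not taken from Chen and Ng), spreading activation over a weighted concept network can be written as:

      import math

      def spread(graph, seeds, theta=0.3, max_iters=50, eps=1e-4):
          """graph: {node: {neighbor: weight}}; seeds: initially active concepts."""
          act = {n: (1.0 if n in seeds else 0.0) for n in graph}
          for _ in range(max_iters):
              new = {}
              for n in graph:
                  if n in seeds:
                      new[n] = 1.0  # clamp the query concepts
                      continue
                  net = sum(w * act[m] for m, w in graph[n].items())
                  new[n] = 1.0 / (1.0 + math.exp(-(net - theta)))  # sigmoid transfer
              if max(abs(new[n] - act[n]) for n in graph) < eps:  # converged
                  break
              act = new
          return sorted(act.items(), key=lambda kv: -kv[1])

      g = {
          "thesaurus": {"indexing": 0.8, "vocabulary": 0.6},
          "indexing": {"thesaurus": 0.8, "retrieval": 0.7},
          "vocabulary": {"thesaurus": 0.6},
          "retrieval": {"indexing": 0.7},
      }
      print(spread(g, seeds={"thesaurus"})[:3])  # top 'convergent' concepts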
  14. Allen, B.; Reser, D.: Content analysis in library and information science research (1990) 0.00
    9.401512E-4 = product of:
      0.003760605 = sum of:
        0.003760605 = product of:
          0.011281814 = sum of:
            0.011281814 = weight(_text_:a in 7510) [ClassicSimilarity], result of:
              0.011281814 = score(doc=7510,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.20383182 = fieldWeight in 7510, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=7510)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Type
    a
  15. Wellisch, H.H.: Aboutness and selection of topics (1996) 0.00
    9.401512E-4 = product of:
      0.003760605 = sum of:
        0.003760605 = product of:
          0.011281814 = sum of:
            0.011281814 = weight(_text_:a in 6150) [ClassicSimilarity], result of:
              0.011281814 = score(doc=6150,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.20383182 = fieldWeight in 6150, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=6150)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Type
    a
  16. Klüver, J.; Kier, R.: Rekonstruktion und Verstehen : ein Computer-Programm zur Interpretation sozialwissenschaftlicher Texte (1994) 0.00
    9.401512E-4 = product of:
      0.003760605 = sum of:
        0.003760605 = product of:
          0.011281814 = sum of:
            0.011281814 = weight(_text_:a in 6830) [ClassicSimilarity], result of:
              0.011281814 = score(doc=6830,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.20383182 = fieldWeight in 6830, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=6830)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Type
    a
  17. Amac, T.: Linguistic context analysis : a new approach to communication evaluation (1997) 0.00
    9.327775E-4 = product of:
      0.00373111 = sum of:
        0.00373111 = product of:
          0.0111933295 = sum of:
            0.0111933295 = weight(_text_:a in 2576) [ClassicSimilarity], result of:
              0.0111933295 = score(doc=2576,freq=14.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.20223314 = fieldWeight in 2576, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2576)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Argues that integrating computational psycholinguistics can improve corporate communication and thus become a new strategic tool. An electronic dictionary of basic, neutral, and negative connotations was created for nouns, verbs, and adjectives appearing in press releases and other communication media; it can be updated with client-specific words. The focus on negative messages aims to detect who is criticized, why, and how, to learn from the vocabulary of opinion leaders, and to improve issues management proactively. Suggests a new form of analysis, 'computational linguistic context analysis' (CLCA), which analyzes nominal groups of negative words rather than monitoring content analysis in the traditional way. Concludes that CLCA can be used to analyze large quantities of press cuttings about a company and could, theoretically, be used to analyze the structure, language, and style of a particular journalist to whom a press release or article is to be sent
    Type
    a
  18. Taylor, S.L.: Integrating natural language understanding with document structure analysis (1994) 0.00
    9.19731E-4 = product of:
      0.003678924 = sum of:
        0.003678924 = product of:
          0.011036771 = sum of:
            0.011036771 = weight(_text_:a in 1794) [ClassicSimilarity], result of:
              0.011036771 = score(doc=1794,freq=10.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.19940455 = fieldWeight in 1794, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1794)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Document understanding, the interpretation of a document from its image form, is a technology area which benefits greatly from the integration of natural language processing with image processing. Develops a prototype Intelligent Document Understanding System (IDUS) which employs several technologies in a cooperative fashion: image processing, optical character recognition, document structure analysis, and text understanding. Discusses the areas of research during the development of IDUS where the integration of natural language processing and image processing proved most beneficial: document structure analysis, OCR correction, and text analysis. Discusses 2 applications supported by IDUS: text retrieval and automatic generation of hypertext links
    Type
    a
  19. Vieira, L.: Modèle d'analyse pour une classification du document iconographique (1999) 0.00
    8.309842E-4 = product of:
      0.0033239368 = sum of:
        0.0033239368 = product of:
          0.0099718105 = sum of:
            0.0099718105 = weight(_text_:a in 6320) [ClassicSimilarity], result of:
              0.0099718105 = score(doc=6320,freq=4.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.18016359 = fieldWeight in 6320, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6320)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Footnote
    Translated title: Analysis model for a classification of iconographic documents
    Type
    a
  20. Hjoerland, B.: Subject representation and information seeking : contributions to a theory based on the theory of knowledge (1993) 0.00
    8.2263234E-4 = product of:
      0.0032905294 = sum of:
        0.0032905294 = product of:
          0.009871588 = sum of:
            0.009871588 = weight(_text_:a in 7555) [ClassicSimilarity], result of:
              0.009871588 = score(doc=7555,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.17835285 = fieldWeight in 7555, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=7555)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    

Languages

  • e 44
  • d 6
  • f 1

Types

  • a 47
  • m 2
  • d 1
  • s 1