Search (11 results, page 1 of 1)

  • year_i:[1990 TO 2000}
  • author_ss:"Fugmann, R."
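The two active filters above are standard Solr filter queries: `year_i:[1990 TO 2000}` is a range with an inclusive lower bound (`[`) and an exclusive upper bound (`}`), so it matches 1990 through 1999, and `author_ss:"Fugmann, R."` is an exact phrase match on a string field. A minimal sketch of how such a request could be assembled (host, core name, and handler path are assumptions, not taken from this page):

```python
from urllib.parse import urlencode

# Hypothetical Solr endpoint; host and core name are assumptions.
base_url = "http://localhost:8983/solr/catalog/select"

params = [
    ("q", "*:*"),
    # [ = inclusive lower bound, } = exclusive upper bound,
    # i.e. 1990 <= year_i < 2000 -- consistent with the 1990-1999 hits below.
    ("fq", "year_i:[1990 TO 2000}"),
    ("fq", 'author_ss:"Fugmann, R."'),
    ("debugQuery", "true"),  # requests the per-hit score explanations
]

request_url = base_url + "?" + urlencode(params)
```

Passing `fq` twice keeps the two filters as independent, separately cached clauses rather than one combined query.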
  1. Fugmann, R.: Book indexing : the classificatory approach (1994) 0.00
    0.0026849252 = product of:
      0.0053698504 = sum of:
        0.0053698504 = product of:
          0.010739701 = sum of:
            0.010739701 = weight(_text_:a in 6920) [ClassicSimilarity], result of:
              0.010739701 = score(doc=6920,freq=14.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.20223314 = fieldWeight in 6920, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6920)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
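The indented block above is a Lucene "explain" tree for ClassicSimilarity (TF-IDF). Each per-term score can be recomputed from the values shown; a minimal sketch for the first hit (the function name is mine, the formulas are Lucene's ClassicSimilarity defaults):

```python
import math

def classic_term_score(freq, doc_freq, max_docs, field_norm, query_norm):
    """Per-term TF-IDF score as Lucene's ClassicSimilarity computes it."""
    tf = math.sqrt(freq)                               # 3.7416575 for freq=14
    idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # 1.153047
    query_weight = idf * query_norm                    # 0.053105544
    field_weight = tf * idf * field_norm               # 0.20223314
    return query_weight * field_weight                 # 0.010739701

raw = classic_term_score(freq=14.0, doc_freq=37942, max_docs=44218,
                         field_norm=0.046875, query_norm=0.046056706)

# The two coord(1/2) factors (apparently one of two query clauses matched
# at each level) halve the raw score twice:
final = raw * 0.5 * 0.5  # 0.0026849252, the total shown for hit 1
```

The same recipe reproduces the other hits' scores; only `freq` and `field_norm` (which encodes field length) vary between them.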
    
    Abstract
    The contents of scientific and technical handbooks often need fast, reliable and precise subject access, even if the searcher is not familiar with the terminology of the book and has not read it beforehand. This requires careful and expert subject indexing in a highly specific indexing vocabulary, as well as the presentation of the resulting index in a lucid, conceptually transparent manner in print and on disk. Index users, when looking up a general subject heading, often ignore the necessity of also looking up the appertaining hierarchically subordinate, more specific subject headings. They are either not made aware of these subject headings or their use is felt to be too cumbersome. A classified approach to computerized subject indexing is described which resembles Ranganathan's Classified Catalogue. Through a variety of peculiarities it leads the searcher rapidly and easily to all subject headings related to a primarily chosen one, and to the postings under all these headings.
    Type
    a
  2. Fugmann, R.: Concluding remarks (1996) 0.00
    0.0023678814 = product of:
      0.0047357627 = sum of:
        0.0047357627 = product of:
          0.009471525 = sum of:
            0.009471525 = weight(_text_:a in 5388) [ClassicSimilarity], result of:
              0.009471525 = score(doc=5388,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.17835285 = fieldWeight in 5388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5388)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  3. Fugmann, R.: The empirical approach in the evaluation of information systems (1999) 0.00
    0.0020506454 = product of:
      0.004101291 = sum of:
        0.004101291 = product of:
          0.008202582 = sum of:
            0.008202582 = weight(_text_:a in 4115) [ClassicSimilarity], result of:
              0.008202582 = score(doc=4115,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1544581 = fieldWeight in 4115, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4115)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The comparative evaluation of different mechanized information systems continues to constitute a controversial topic in the literature. Diametrically different opinions, seemingly corroborated through empirical evidence, have been presented since the time of the Cranfield experiments. For literally anything an empirical 'proof' can be submitted, provided that suitable examples and methods are chosen. Substantial advance in Library and Information Science requires abandoning empiricism. Budd's 'hermeneutic phenomenology' seems to constitute a promising substitute.
    Type
    a
  4. Fugmann, R.: An interactive classaurus on the PC (1990) 0.00
    0.001913537 = product of:
      0.003827074 = sum of:
        0.003827074 = product of:
          0.007654148 = sum of:
            0.007654148 = weight(_text_:a in 2222) [ClassicSimilarity], result of:
              0.007654148 = score(doc=2222,freq=4.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.14413087 = fieldWeight in 2222, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2222)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Both classification systems and thesauri have their specific strengths and weaknesses. Through properly combining both approaches one can eliminate the latter and largely preserve the strengths. 'Classauri' which originate in this well-known way are most effective if they are constructed and applied during computer-aided indexing. A special variety of classaurus is described which is characterized by the employment of simple but highly effective conceptual and technical devices and by the renunciation of attempts to generate the wording of index entries algorithmically.
    Type
    a
  5. Fugmann, R.: The complementarity of natural and controlled languages in indexing (1995) 0.00
    0.0016913437 = product of:
      0.0033826875 = sum of:
        0.0033826875 = product of:
          0.006765375 = sum of:
            0.006765375 = weight(_text_:a in 1634) [ClassicSimilarity], result of:
              0.006765375 = score(doc=1634,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.12739488 = fieldWeight in 1634, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1634)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  6. Fugmann, R.: Bridging the gap between database indexing and book indexing (1997) 0.00
    0.0016913437 = product of:
      0.0033826875 = sum of:
        0.0033826875 = product of:
          0.006765375 = sum of:
            0.006765375 = weight(_text_:a in 1210) [ClassicSimilarity], result of:
              0.006765375 = score(doc=1210,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.12739488 = fieldWeight in 1210, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1210)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Traditionally, database indexing and book indexing have been looked upon as quite distinct and have been kept apart in textbooks and teaching. The traditional borderline between the two variations of indexing, however, should not conceal their fundamental commonalities. For example, thesaurus construction and usage, quite common in databases, has hardly been encountered in book indexing so far. Database indexing, on the other hand, has hardly made use of subheadings of the syntax-displaying type, quite common in book indexing. Most database users also prefer precombining vocabulary units and reject concept analysis. However, insisting on precombining descriptors in a large database vocabulary may, in the long run, well be destructive to the quality of indexing and of the searches. A complementary approach is conceivable which provides both precombinations and analyzed subjects, both index language syntax and subheadings, and provides access to an information system via precombinations, without jeopardizing the manageability of the vocabulary. Such an approach causes considerable costs in input because it involves a great deal of intellectual work. On the other hand, much time and cost will be saved in the use of the system. In addition, such an approach would endow an information system with survival power.
    Type
    a
  7. Fugmann, R.: Representational predictability : key to the resolution of several pending issues in indexing and information supply (1994) 0.00
    0.001674345 = product of:
      0.00334869 = sum of:
        0.00334869 = product of:
          0.00669738 = sum of:
            0.00669738 = weight(_text_:a in 7739) [ClassicSimilarity], result of:
              0.00669738 = score(doc=7739,freq=4.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.12611452 = fieldWeight in 7739, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7739)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The low effectiveness of most current information systems has often been pointed out and deplored. A number of misconceptions and experiments under unrealistic conditions have contributed to the faulty design and evaluation of information systems. The postulate of representational predictability can help to clarify some of the still pending issues, such as the strengths and limitations of uncontrolled natural language text in retrieval systems, factors for their evaluation, the reliability, consistency, and exhaustivity of indexing, the postulated 'inverse precision-recall relationship', and the usefulness of syntactical devices. The performance of information systems can be improved if representational predictability is aimed at in their design and operational use.
    Type
    a
  8. Fugmann, R.: Galileo and the inverse precision/recall relationship : medieval attitudes in modern information science (1994) 0.00
    0.0014351527 = product of:
      0.0028703054 = sum of:
        0.0028703054 = product of:
          0.005740611 = sum of:
            0.005740611 = weight(_text_:a in 8278) [ClassicSimilarity], result of:
              0.005740611 = score(doc=8278,freq=4.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.10809815 = fieldWeight in 8278, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=8278)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The tight adherence to dogmas, created and advocated by authorities and disseminated through hearsay, constitutes an impediment to the progress badly needed in view of the low effectiveness of the vast majority of our bibliographic information systems. The Italian mathematician and physicist Galileo has become famous not only for his discoveries but also for the rejective and even hostile attitude of his contemporaries when he contradicted several dogmas prevailing at the time. This obstructive attitude can be traced throughout the centuries and manifests itself in the field of modern information science, too. An example is the allegedly necessary, inevitable precision/recall relationship, as most recently postulated again by Lancaster (1994). It is believed to be confirmed by empirical evidence, with other empirical evidence to the contrary being neglected. This case even constitutes an example of the suppression of truth in the interest of upholding a dogma.
    Type
    a
  9. Fugmann, R.: Illusory goals in information science research (1992) 0.00
    0.001353075 = product of:
      0.00270615 = sum of:
        0.00270615 = product of:
          0.0054123 = sum of:
            0.0054123 = weight(_text_:a in 2091) [ClassicSimilarity], result of:
              0.0054123 = score(doc=2091,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.10191591 = fieldWeight in 2091, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2091)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  10. Fugmann, R.: Unusual possibilities in indexing and classification (1990) 0.00
    0.001353075 = product of:
      0.00270615 = sum of:
        0.00270615 = product of:
          0.0054123 = sum of:
            0.0054123 = weight(_text_:a in 4781) [ClassicSimilarity], result of:
              0.0054123 = score(doc=4781,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.10191591 = fieldWeight in 4781, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4781)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  11. Fugmann, R.: Die Entlinearisierung und Strukturierung von Texten zur Inhaltserschließung und Wissensrepräsentation [The delinearization and structuring of texts for subject indexing and knowledge representation] (1996) 0.00
    0.0011839407 = product of:
      0.0023678814 = sum of:
        0.0023678814 = product of:
          0.0047357627 = sum of:
            0.0047357627 = weight(_text_:a in 5211) [ClassicSimilarity], result of:
              0.0047357627 = score(doc=5211,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.089176424 = fieldWeight in 5211, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5211)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a