Search (8 results, page 1 of 1)

  • author_ss:"Fugmann, R."
  1. Fugmann, R.: What is information? : an information veteran looks back (2022) 0.01
    0.014161265 = product of:
      0.070806324 = sum of:
        0.070806324 = weight(_text_:22 in 1085) [ClassicSimilarity], result of:
          0.070806324 = score(doc=1085,freq=2.0), product of:
            0.18300882 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052260913 = queryNorm
            0.38690117 = fieldWeight in 1085, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=1085)
      0.2 = coord(1/5)
    
    Date
    18. 8.2022 19:22:57
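  Note: the relevance figures in these entries are Lucene ClassicSimilarity (TF-IDF) explain outputs. As a minimal sketch, assuming the standard ClassicSimilarity formula and taking every factor from the breakdown of entry 1 above, the following Python reproduces that entry's score; the function and parameter names are illustrative and not part of the retrieval system's code.

    import math

    # Minimal sketch of Lucene ClassicSimilarity (TF-IDF) scoring, reconstructed
    # from the explain output for entry 1 (term "22" in field _text_, doc 1085).
    # Illustrative only; not the retrieval system's actual implementation.
    def classic_similarity_score(freq, doc_freq, max_docs,
                                 query_norm, field_norm, coord):
        tf = math.sqrt(freq)                             # tf(freq) = sqrt(freq)
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq, maxDocs)
        query_weight = idf * query_norm                  # queryWeight
        field_weight = tf * idf * field_norm             # fieldWeight
        return coord * query_weight * field_weight       # coord(1/5): 1 of 5 clauses matched

    score = classic_similarity_score(freq=2.0, doc_freq=3622, max_docs=44218,
                                     query_norm=0.052260913, field_norm=0.078125,
                                     coord=1 / 5)
    print(score)  # ~0.0141613, matching the 0.014161265 shown for entry 1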
  2. Fugmann, R.: Unusual possibilities in indexing and classification (1990) 0.01
    0.010929298 = product of:
      0.05464649 = sum of:
        0.05464649 = weight(_text_:it in 4781) [ClassicSimilarity], result of:
          0.05464649 = score(doc=4781,freq=4.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.36153275 = fieldWeight in 4781, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0625 = fieldNorm(doc=4781)
      0.2 = coord(1/5)
    
    Abstract
    Contemporary research in information science has concentrated on the development of methods for the algorithmic processing of natural language texts. Often, this approach is claimed to be equivalent to the intellectual technique of content analysis and indexing. What is disregarded, however, is that contemporary intellectual techniques are far from exploiting their full capabilities. This is largely due to the omission of vocabulary categorisation. It is demonstrated how categorisation can drastically improve the quality of indexing and classification, and hence of retrieval.
  3. Fugmann, R.: Book indexing : the classificatory approach (1994) 0.01
    0.008196974 = product of:
      0.04098487 = sum of:
        0.04098487 = weight(_text_:it in 6920) [ClassicSimilarity], result of:
          0.04098487 = score(doc=6920,freq=4.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.27114958 = fieldWeight in 6920, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.046875 = fieldNorm(doc=6920)
      0.2 = coord(1/5)
    
    Abstract
    The contents of scientific and technical handbooks often need fast, reliable and precise subject access, even if the searcher is not familiar with the terminology of the book and has not read it beforehand. This requires careful and expert subject indexing in a highly specific indexing vocabulary, as well as the presentation of the resulting index in a lucid, conceptually transparent manner in print and on disk. Index users, when looking up a general subject heading, often ignore the necessity of also looking up the appertaining hierarchically subordinate, more specific subject headings. They are either not made aware of these subject headings or their use is felt to be too cumbersome. A classified approach to computerized subject indexing is described which resembles Ranganathan's Classified Catalogue. Through a variety of peculiarities it leads the searcher rapidly and easily to all subject headings related to a primarily chosen one, and to the postings under all these headings.
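  Note: the classified lookup described in this abstract can be illustrated with a toy structure. The sketch below assumes a hypothetical narrower-term table and postings table; the headings, the data, and the function classified_lookup are invented for illustration and reflect only the behaviour the abstract describes (a chosen heading also leads the searcher to its subordinate headings and their postings).

    # Toy sketch of a classified index lookup: choosing a general heading also
    # surfaces its hierarchically subordinate headings and their postings.
    # All data below is hypothetical.
    narrower = {                       # heading -> immediately narrower headings
        "pesticides": ["herbicides", "insecticides"],
        "insecticides": ["pyrethroids"],
    }
    postings = {                       # heading -> page locators
        "pesticides": [12], "herbicides": [47],
        "insecticides": [88], "pyrethroids": [102],
    }

    def classified_lookup(heading):
        """Return the chosen heading, all subordinate headings, and their postings."""
        result, queue = {}, [heading]
        while queue:
            h = queue.pop()
            result[h] = postings.get(h, [])
            queue.extend(narrower.get(h, []))
        return result

    print(classified_lookup("pesticides"))
    # {'pesticides': [12], 'insecticides': [88], 'pyrethroids': [102], 'herbicides': [47]}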
  4. Fugmann, R.: Obstacles to progress in mechanized subject access and the necessity of a paradigm change (2000) 0.01
    0.0070806327 = product of:
      0.035403162 = sum of:
        0.035403162 = weight(_text_:22 in 1182) [ClassicSimilarity], result of:
          0.035403162 = score(doc=1182,freq=2.0), product of:
            0.18300882 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052260913 = queryNorm
            0.19345059 = fieldWeight in 1182, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1182)
      0.2 = coord(1/5)
    
    Date
    22. 9.1997 19:16:05
  5. Fugmann, R.: Galileo and the inverse precision/recall relationship : medieval attitudes in modern information science (1994) 0.01
    0.005796136 = product of:
      0.028980678 = sum of:
        0.028980678 = weight(_text_:it in 8278) [ClassicSimilarity], result of:
          0.028980678 = score(doc=8278,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.19173169 = fieldWeight in 8278, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.046875 = fieldNorm(doc=8278)
      0.2 = coord(1/5)
    
    Abstract
    The tight adherence to dogmas, created and advocated by authorities and disseminated through hearsay, constitutes an impediment to the progress badly needed in view of the low effectiveness of the vast majority of our bibliographic information systems. The Italian mathematician and physicist Galileo has become famous not only for his discoveries but also for being exposed to the rejective and even hostile attitude of his contemporaries when he contradicted several dogmas prevailing at that time. This obstructive attitude can be traced throughout the centuries and manifests itself in the field of modern information science, too. An example is the allegedly necessary, inevitable precision/recall relationship, as most recently postulated again by Lancaster (1994). It is believed to be confirmed by empirical evidence, while other empirical evidence to the contrary is neglected. This case even constitutes an example of the suppression of truth in the interest of upholding a dogma.
  6. Fugmann, R.: The complementarity of natural and indexing languages (1985) 0.01
    0.005464649 = product of:
      0.027323244 = sum of:
        0.027323244 = weight(_text_:it in 3641) [ClassicSimilarity], result of:
          0.027323244 = score(doc=3641,freq=4.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.18076637 = fieldWeight in 3641, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.03125 = fieldNorm(doc=3641)
      0.2 = coord(1/5)
    
    Abstract
    The second Cranfield experiment (Cranfield II) in the mid-1960s challenged assumptions held by librarians for nearly a century, namely, that the objective of providing subject access was to bring together all materials on a given topic and that achieving this objective required vocabulary control in the form of an index language. The results of Cranfield II were replicated by other retrieval experiments quick to follow its lead, and increasing support was given to the opinion that natural language information systems could perform at least as effectively as, and certainly more economically than, those employing index languages. When the results of empirical research dramatically counter conventional wisdom, an obvious course is to question the validity of the research, and, in the case of retrieval experiments, this eventually happened. Retrieval experiments were criticized for their artificiality, their unrepresentative samples, and their problematic definitions, particularly the definition of relevance. In the minds of some, at least, the relative merits of natural languages vs. indexing languages continued to be an unresolved issue. As with many either/or options, a seemingly safe course to follow is to opt for "both," and indeed there seems to be an increasing amount of counsel advising a combination of natural language and index language search capabilities. One strong voice offering such counsel is that of Robert Fugmann, a chemist by training, a theoretician by predilection, and, currently, a practicing information scientist at Hoechst AG, Frankfurt/Main. This selection from his writings sheds light on the capabilities and limitations of both kinds of indexing. Its special significance lies in the fact that its arguments are based not on empirical but on rational grounds. Fugmann's major argument starts from the observation that in natural language there are essentially two different kinds of concepts: 1) individual concepts, represented by names of individual things (e.g., the name of the town Augsburg), and 2) general concepts, represented by names of classes of things (e.g., pesticides). Individual concepts can be represented in language simply and succinctly, often by a single string of alphanumeric characters; general concepts, on the other hand, can be expressed in a multiplicity of ways. The word pesticides refers to the concept of pesticides, but so do numerous circumlocutions, such as "Substance X was effective against pests." Because natural language is capable of infinite variety, we cannot predict a priori the manifold ways a general concept, like pesticides, will be represented by any given author. It is this lack of predictability that limits natural language retrieval and causes poor precision and recall. Thus, the essential and defining characteristic of an index language is that it is a tool for representational predictability.
  7. Fugmann, R.: Bridging the gap between database indexing and book indexing (1997) 0.00
    0.004830113 = product of:
      0.024150565 = sum of:
        0.024150565 = weight(_text_:it in 1210) [ClassicSimilarity], result of:
          0.024150565 = score(doc=1210,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.15977642 = fieldWeight in 1210, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1210)
      0.2 = coord(1/5)
    
    Abstract
    Traditionally, database indexing and book indexing have been looked upon as quite distinct and have been kept apart in textbooks and teaching. The traditional borderline between the two variants of indexing, however, should not conceal their fundamental commonalities. For example, thesaurus construction and usage, quite common in databases, has hardly been encountered in book indexing so far. Database indexing, on the other hand, has hardly made use of subheadings of the syntax-displaying type, quite common in book indexing. Most database users also prefer precombining vocabulary units and reject concept analysis. However, insisting on precombining descriptors in a large database vocabulary may, in the long run, well be destructive to the quality of indexing and of the searches. A complementary approach is conceivable which provides both precombinations and analyzed subjects, both index language syntax and subheadings, and provides access to an information system via precombinations, without jeopardizing the manageability of the vocabulary. Such an approach causes considerable costs in input because it involves a great deal of intellectual work. On the other hand, much time and cost will be saved in the use of the system. In addition, such an approach would endow an information system with survival power.
  8. Fugmann, R.: Das Buchregister : Methodische Grundlagen und praktische Anwendung (2006) 0.00
    0.0034154055 = product of:
      0.017077027 = sum of:
        0.017077027 = weight(_text_:it in 665) [ClassicSimilarity], result of:
          0.017077027 = score(doc=665,freq=4.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.11297898 = fieldWeight in 665, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.01953125 = fieldNorm(doc=665)
      0.2 = coord(1/5)
    
    Footnote
    Further review in: Knowledge organization 34(2007) no.1, pp.60-61 (I. Dahlberg): "... In conclusion, I would like to congratulate Robert Fugmann for having written this book in such a short time and - may I disclose it too? - just before his 80th birthday. And I know it is his desire to show that by improving the science of book indexing in the way demonstrated in this book, the readers will be offered an instrument for making book indexing and index reading a joy for the index readers and an adventure in an enticing world thus entered into via an intelligent presentation of knowledge!"
