Search (45 results, page 3 of 3)

  • author_ss:"Fugmann, R."
  1. Fugmann, R.: Galileo and the inverse precision/recall relationship : medieval attitudes in modern information science (1994) 0.00
    3.4178712E-4 = product of:
      0.0051268064 = sum of:
        0.0051268064 = product of:
          0.010253613 = sum of:
            0.010253613 = weight(_text_:information in 8278) [ClassicSimilarity], result of:
              0.010253613 = score(doc=8278,freq=6.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.20156369 = fieldWeight in 8278, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=8278)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
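The explain tree above is Lucene's ClassicSimilarity (TF-IDF) score breakdown. As a minimal sketch, the arithmetic can be reproduced directly from the constants shown; the variable names are illustrative and not part of Lucene's API:

```python
import math

# Constants copied from the explain tree for result 1 (doc 8278, term "information").
freq = 6.0
idf = 1.7554779        # idf(docFreq=20772, maxDocs=44218) = 1 + ln(maxDocs / (docFreq + 1))
query_norm = 0.028978055
field_norm = 0.046875  # length normalization of the matched field

tf = math.sqrt(freq)                      # 2.4494898 = tf(freq=6.0)
query_weight = idf * query_norm           # 0.050870337 = queryWeight
field_weight = tf * idf * field_norm      # 0.20156369  = fieldWeight
raw_score = query_weight * field_weight   # 0.010253613 = weight(_text_:information)

# coord factors: the fraction of query clauses that matched (1 of 2, then 1 of 15).
final_score = raw_score * (1 / 2) * (1 / 15)  # 3.4178712E-4
```

The same composition (tf x idf x fieldNorm, scaled by queryWeight and the coord factors) accounts for every score tree on this page; only freq, fieldNorm, and the coord fractions vary between documents.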
    
    Abstract
    The tight adherence to dogmas, created and advocated by authorities and disseminated through hearsay, constitutes an impediment to the progress badly needed in view of the low effectiveness of the vast majority of our bibliographic information systems. The Italian mathematician and physicist Galileo has become famous not only for his discoveries but also for his being exposed to the rejective and even hostile attitude on the part of his contemporaries when he contradicted several dogmas prevailing at that time. This obstructive attitude can be traced throughout the centuries and manifests itself in the field of modern information science, too. An example is the allegedly necessary, inevitable precision/recall relationship, as most recently postulated again by Lancaster (1994). It is believed to be confirmed by empirical evidence, with other empirical evidence to the contrary being neglected. This case even constitutes an example of the suppression of truth in the interest of upholding a dogma.
  2. Fugmann, R.: ¬The complementarity of natural and index language in the field of information supply : an overview of their specific capabilities and limitations (2002) 0.00
    2.848226E-4 = product of:
      0.004272339 = sum of:
        0.004272339 = product of:
          0.008544678 = sum of:
            0.008544678 = weight(_text_:information in 1412) [ClassicSimilarity], result of:
              0.008544678 = score(doc=1412,freq=6.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.16796975 = fieldWeight in 1412, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1412)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    Natural text phrasing is an indeterminate process and, thus, inherently lacks representational predictability. This holds true in particular in the case of general concepts and of their syntactical connectivity. Hence, natural language query phrasing and searching is an unending adventure of trial and error and, in most cases, has an unsatisfactory outcome with respect to the recall and precision ratios of the responses. Human indexing is based on knowledgeable document interpretation and aims - among other things - at introducing predictability into the representation of documents. Due to the indeterminacy of natural language text phrasing and image construction, any adequate indexing is also indeterminate in nature and therefore inherently defies any satisfactory algorithmization. But human indexing suffers from a different set of deficiencies which are absent in the processing of non-interpreted natural language. An optimally effective information system combines both types of language in such a manner that their specific strengths are preserved and their weaknesses are avoided. If the goal is a large and enduring information system for more than merely known-item searches, the expenditure for an advanced index language and its knowledgeable and careful employment is unavoidable.
  3. Fugmann, R.: Unusual possibilities in indexing and classification (1990) 0.00
    2.6310782E-4 = product of:
      0.0039466172 = sum of:
        0.0039466172 = product of:
          0.0078932345 = sum of:
            0.0078932345 = weight(_text_:information in 4781) [ClassicSimilarity], result of:
              0.0078932345 = score(doc=4781,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.1551638 = fieldWeight in 4781, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4781)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    Contemporary research in information science has concentrated on the development of methods for the algorithmic processing of natural language texts. Often, the equivalence of this approach to the intellectual technique of content analysis and indexing is claimed. It is, however, disregarded that contemporary intellectual techniques are far from exploiting their full capabilities. This is largely due to the omission of vocabulary categorisation. It is demonstrated how categorisation can drastically improve the quality of indexing and classification, and, hence, of retrieval.
  4. Fugmann, R.: Bridging the gap between database indexing and book indexing (1997) 0.00
    2.3255666E-4 = product of:
      0.0034883497 = sum of:
        0.0034883497 = product of:
          0.0069766995 = sum of:
            0.0069766995 = weight(_text_:information in 1210) [ClassicSimilarity], result of:
              0.0069766995 = score(doc=1210,freq=4.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.13714671 = fieldWeight in 1210, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1210)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    Traditionally, database indexing and book indexing have been looked upon as being quite distinct and have been kept apart in textbooks and teaching. The traditional borderline between both variations of indexing, however, should not conceal fundamental commonalities of the two approaches. For example, thesaurus construction and usage, quite common in databases, has hardly been encountered in book indexing so far. Database indexing, on the other hand, has hardly made use of subheadings of the syntax-displaying type, quite common in book indexing. Most database users also prefer precombining vocabulary units and reject concept analysis. However, insisting on precombining descriptors in a large database vocabulary may, in the long run, well be destructive to the quality of indexing and of the searches. A complementary approach is conceivable which provides both precombinations and analyzed subjects, both index language syntax and subheadings, and provides access to an information system via precombinations, without jeopardizing the manageability of the vocabulary. Such an approach causes considerable costs in input because it involves a great deal of intellectual work. On the other hand, much time and costs will be saved in the use of the system. In addition, such an approach would endow an information system with survival power.
  5. Fugmann, R.: ¬The complementarity of natural and indexing languages (1985) 0.00
    1.8604532E-4 = product of:
      0.0027906797 = sum of:
        0.0027906797 = product of:
          0.0055813594 = sum of:
            0.0055813594 = weight(_text_:information in 3641) [ClassicSimilarity], result of:
              0.0055813594 = score(doc=3641,freq=4.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.10971737 = fieldWeight in 3641, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3641)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    The second Cranfield experiment (Cranfield II) in the mid-1960s challenged assumptions held by librarians for nearly a century, namely, that the objective of providing subject access was to bring together all materials on a given topic and that the achieving of this objective required vocabulary control in the form of an index language. The results of Cranfield II were replicated by other retrieval experiments quick to follow its lead and increasing support was given to the opinion that natural language information systems could perform at least as effectively, and certainly more economically, than those employing index languages. When the results of empirical research dramatically counter conventional wisdom, an obvious course is to question the validity of the research and, in the case of retrieval experiments, this eventually happened. Retrieval experiments were criticized for their artificiality, their unrepresentative samples, and their problematic definitions - particularly the definition of relevance. In the minds of some, at least, the relative merits of natural languages vs. indexing languages continued to be an unresolved issue. As with many either/or options, a seemingly safe course to follow is to opt for "both," and indeed there seems to be an increasing amount of counsel advising a combination of natural language and index language search capabilities. One strong voice offering such counsel is that of Robert Fugmann, a chemist by training, a theoretician by predilection, and, currently, a practicing information scientist at Hoechst AG, Frankfurt/Main. This selection from his writings sheds light on the capabilities and limitations of both kinds of indexing. Its special significance lies in the fact that its arguments are based not on empirical but on rational grounds.
    Fugmann's major argument starts from the observation that in natural language there are essentially two different kinds of concepts: 1) individual concepts, represented by names of individual things (e.g., the name of the town Augsburg), and 2) general concepts, represented by names of classes of things (e.g., pesticides). Individual concepts can be represented in language simply and succinctly, often by a single string of alphanumeric characters; general concepts, on the other hand, can be expressed in a multiplicity of ways. The word pesticides refers to the concept of pesticides, but also referring to this concept are numerous circumlocutions, such as "Substance X was effective against pests." Because natural language is capable of infinite variety, we cannot predict a priori the manifold ways a general concept, like pesticides, will be represented by any given author. It is this lack of predictability that limits natural language retrieval and causes poor precision and recall. Thus, the essential and defining characteristic of an index language is that it is a tool for representational predictability.