Search (31 results, page 2 of 2)

  • author_ss:"Fugmann, R."
  1. Fugmann, R.: Illusory goals in information science research (1992) 0.00
    0.0033653039 = product of:
      0.013461215 = sum of:
        0.013461215 = weight(_text_:information in 2091) [ClassicSimilarity], result of:
          0.013461215 = score(doc=2091,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.21943474 = fieldWeight in 2091, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=2091)
      0.25 = coord(1/4)
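The explain tree above follows Lucene's ClassicSimilarity (TF-IDF) scoring: the displayed score is coord × queryWeight × fieldWeight, with queryWeight = idf × queryNorm and fieldWeight = √freq × idf × fieldNorm. A minimal sketch reproducing the numbers for result 1 (the function name and structure are illustrative, not Lucene's actual API):

```python
import math

def classic_similarity(freq, idf, query_norm, field_norm, coord):
    """Recompute a Lucene ClassicSimilarity explain tree:
    score = coord * (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)."""
    tf = math.sqrt(freq)                  # tf(freq=4.0) = 2.0
    query_weight = idf * query_norm       # 1.7554779 * 0.034944877 = 0.06134496
    field_weight = tf * idf * field_norm  # 2.0 * 1.7554779 * 0.0625 = 0.21943474
    return coord * query_weight * field_weight

# Values taken from the explain tree for doc 2091 above:
score = classic_similarity(freq=4.0, idf=1.7554779,
                           query_norm=0.034944877, field_norm=0.0625,
                           coord=0.25)
# score ≈ 0.0033653, matching the document's displayed relevance score
```

The same formula accounts for every score block on this page; only freq, fieldNorm, and the resulting weights vary per document.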
    
    Abstract
    The human expression of general concepts in uncontrolled natural language, the human information need, and the recognition of meaning in and selection of essence from texts are indeterminate processes and therefore defy any satisfactory formalization and programming. Where the equivalence or even superiority of algorithmic approaches to these goals has been claimed, the authors have worked under artificial, experimental conditions and/or have in their evaluation referred to approaches that are far from exploiting the capabilities of intellectual content analysis, representation and query phrasing.
  2. Fugmann, R.: Galileo and the inverse precision/recall relationship : medieval attitudes in modern information science (1994) 0.00
    0.003091229 = product of:
      0.012364916 = sum of:
        0.012364916 = weight(_text_:information in 8278) [ClassicSimilarity], result of:
          0.012364916 = score(doc=8278,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.20156369 = fieldWeight in 8278, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=8278)
      0.25 = coord(1/4)
    
    Abstract
    The tight adherence to dogmas, created and advocated by authorities and disseminated through hearsay, constitutes an impediment to the progress badly needed in view of the low effectiveness of the vast majority of our bibliographic information systems. The Italian mathematician and physicist Galileo has become famous not only for his discoveries but also for his being exposed to the rejective and even hostile attitude on the part of his contemporaries when he contradicted several dogmas prevailing at that time. This obstructive attitude can be traced throughout the centuries and manifests itself in the field of modern information science, too. An example is the allegedly necessary, inevitable precision/recall relationship, as most recently postulated again by Lancaster (1994). It is believed to be confirmed by empirical evidence, with other empirical evidence to the contrary being neglected. This case even constitutes an example of the suppression of truth in the interest of upholding a dogma.
  3. Fugmann, R.: ¬The complementarity of natural and index language in the field of information supply : an overview of their specific capabilities and limitations (2002) 0.00
    0.0025760243 = product of:
      0.010304097 = sum of:
        0.010304097 = weight(_text_:information in 1412) [ClassicSimilarity], result of:
          0.010304097 = score(doc=1412,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.16796975 = fieldWeight in 1412, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1412)
      0.25 = coord(1/4)
    
    Abstract
    Natural text phrasing is an indeterminate process and, thus, inherently lacks representational predictability. This holds true in particular in the case of general concepts and of their syntactical connectivity. Hence, natural language query phrasing and searching is an unending adventure of trial and error and, in most cases, has an unsatisfactory outcome with respect to the recall and precision ratios of the responses. Human indexing is based on knowledgeable document interpretation and aims - among other things - at introducing predictability into the representation of documents. Due to the indeterminacy of natural language text phrasing and image construction, any adequate indexing is also indeterminate in nature and therefore inherently defies any satisfactory algorithmization. But human indexing suffers from a different set of deficiencies which are absent in the processing of non-interpreted natural language. An optimally effective information system combines both types of language in such a manner that their specific strengths are preserved and their weaknesses are avoided. If the goal is a large and enduring information system for more than merely known-item searches, the expenditure for an advanced index language and its knowledgeable and careful employment is unavoidable.
  4. Fugmann, R.: Unusual possibilities in indexing and classification (1990) 0.00
    0.002379629 = product of:
      0.009518516 = sum of:
        0.009518516 = weight(_text_:information in 4781) [ClassicSimilarity], result of:
          0.009518516 = score(doc=4781,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.1551638 = fieldWeight in 4781, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=4781)
      0.25 = coord(1/4)
    
    Abstract
    Contemporary research in information science has concentrated on the development of methods for the algorithmic processing of natural language texts. Often, the equivalence of this approach to the intellectual technique of content analysis and indexing is claimed. It is, however, disregarded that contemporary intellectual techniques are far from exploiting their full capabilities. This is largely due to the omission of vocabulary categorisation. It is demonstrated how categorisation can drastically improve the quality of indexing and classification, and, hence, of retrieval
  5. Fugmann, R.: Über die Möglichkeiten und Grenzen der programmierten Informationsbereitstellung (2016) 0.00
    0.002379629 = product of:
      0.009518516 = sum of:
        0.009518516 = weight(_text_:information in 3227) [ClassicSimilarity], result of:
          0.009518516 = score(doc=3227,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.1551638 = fieldWeight in 3227, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=3227)
      0.25 = coord(1/4)
    
    Source
    Information - Wissenschaft und Praxis. 67(2016) H.2/3, S.105-116
  6. Fugmann, R.: Bridging the gap between database indexing and book indexing (1997) 0.00
    0.0021033147 = product of:
      0.008413259 = sum of:
        0.008413259 = weight(_text_:information in 1210) [ClassicSimilarity], result of:
          0.008413259 = score(doc=1210,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.13714671 = fieldWeight in 1210, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1210)
      0.25 = coord(1/4)
    
    Abstract
    Traditionally, database indexing and book indexing have been looked upon as being quite distinct and have been kept apart in textbooks and teaching. The traditional borderline between both variations of indexing, however, should not conceal fundamental commonalities of the two approaches. For example, thesaurus construction and usage, quite common in databases, have hardly been encountered in book indexing so far. Database indexing, on the other hand, has hardly made use of subheadings of the syntax-displaying type, quite common in book indexing. Most database users also prefer precombining vocabulary units and reject concept analysis. However, insisting on precombining descriptors in a large database vocabulary may, in the long run, well be destructive to the quality of indexing and of the searches. A complementary approach is conceivable which provides both precombinations and analyzed subjects, both index language syntax and subheadings, and provides access to an information system via precombinations, without jeopardizing the manageability of the vocabulary. Such an approach causes considerable costs in input because it involves a great deal of intellectual work. On the other hand, much time and costs will be saved in the use of the system. In addition, such an approach would endow an information system with survival power.
  7. Fugmann, R.: Obstacles to progress in mechanized subject access and the necessity of a paradigm change (2000) 0.00
    0.0021033147 = product of:
      0.008413259 = sum of:
        0.008413259 = weight(_text_:information in 1182) [ClassicSimilarity], result of:
          0.008413259 = score(doc=1182,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.13714671 = fieldWeight in 1182, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1182)
      0.25 = coord(1/4)
    
    Abstract
    Contemporary information systems, both the private and the commercially available ones, have often been blamed for their low effectiveness in terms of precision and recall, especially when they have reached considerable size with respect to file volume and use frequency (see, for example, Belkin, 1980; Blair, 1996, p.19; Desai, 1997; Drabenstott, 1996; Knorz, 1998). Saracevic (1989), after having reviewed the contemporary design of online subject access, calls "for radically different design principles and implementation" (p. 107). Van Rijsbergen (1990) writes: "The keywords approach with statistical techniques has reached its theoretical limit and further attempts for improvement are considered a waste of time" (p. 111). Lancaster (1992) deplores that very little really significant literature on subject indexing has been published in the last thirty or so years. In her preface to the Proceedings of the Sixth International Study Conference on Classification Research in 1997, McIlwaine (1997) writes, "many were surprised to find that the problems with which they wrestle today are not greatly different from those that have been occupying the minds of specialists in the field for over a generation, and probably a great deal longer" (p. v).
    Imprint
    Urbana-Champaign, IL : Illinois University at Urbana-Champaign, Graduate School of Library and Information Science
  8. Fugmann, R.: ¬Das Faule Ei des Kolumbus im Aslib-Cranfield Vergleich von Informationssystemen : Die erneute Betrachtung eines einflussreichen Experiments (2004) 0.00
    0.0020821756 = product of:
      0.008328702 = sum of:
        0.008328702 = weight(_text_:information in 2364) [ClassicSimilarity], result of:
          0.008328702 = score(doc=2364,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.13576832 = fieldWeight in 2364, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2364)
      0.25 = coord(1/4)
    
    Source
    Information - Wissenschaft und Praxis. 55(2004) H.4, S.211-220
  9. Fugmann, R.: ¬The complementarity of natural and indexing languages (1985) 0.00
    0.0016826519 = product of:
      0.0067306077 = sum of:
        0.0067306077 = weight(_text_:information in 3641) [ClassicSimilarity], result of:
          0.0067306077 = score(doc=3641,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.10971737 = fieldWeight in 3641, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=3641)
      0.25 = coord(1/4)
    
    Abstract
    The second Cranfield experiment (Cranfield II) in the mid-1960s challenged assumptions held by librarians for nearly a century, namely, that the objective of providing subject access was to bring together all materials on a given topic and that the achieving of this objective required vocabulary control in the form of an index language. The results of Cranfield II were replicated by other retrieval experiments quick to follow its lead and increasing support was given to the opinion that natural language information systems could perform at least as effectively, and certainly more economically, than those employing index languages. When the results of empirical research dramatically counter conventional wisdom, an obvious course is to question the validity of the research and, in the case of retrieval experiments, this eventually happened. Retrieval experiments were criticized for their artificiality, their unrepresentative samples, and their problematic definitions - particularly the definition of relevance. In the minds of some, at least, the relative merits of natural languages vs. indexing languages continued to be an unresolved issue. As with many either/or options, a seemingly safe course to follow is to opt for "both," and indeed there seems to be an increasing amount of counsel advising a combination of natural language and index language search capabilities. One strong voice offering such counsel is that of Robert Fugmann, a chemist by training, a theoretician by predilection, and, currently, a practicing information scientist at Hoechst AG, Frankfurt/Main. This selection from his writings sheds light on the capabilities and limitations of both kinds of indexing. Its special significance lies in the fact that its arguments are based not on empirical but on rational grounds.
    Fugmann's major argument starts from the observation that in natural language there are essentially two different kinds of concepts: 1) individual concepts, represented by names of individual things (e.g., the name of the town Augsburg), and 2) general concepts, represented by names of classes of things (e.g., pesticides). Individual concepts can be represented in language simply and succinctly, often by a single string of alphanumeric characters; general concepts, on the other hand, can be expressed in a multiplicity of ways. The word pesticides refers to the concept of pesticides, but also referring to this concept are numerous circumlocutions, such as "Substance X was effective against pests." Because natural language is capable of infinite variety, we cannot predict a priori the manifold ways a general concept, like pesticides, will be represented by any given author. It is this lack of predictability that limits natural language retrieval and causes poor precision and recall. Thus, the essential and defining characteristic of an index language is that it is a tool for representational predictability.
  10. Fugmann, R.: ¬Das Faule Ei des Kolumbus in der Informationsbereitstellung (2004) 0.00
    0.0014872681 = product of:
      0.0059490725 = sum of:
        0.0059490725 = weight(_text_:information in 2309) [ClassicSimilarity], result of:
          0.0059490725 = score(doc=2309,freq=8.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.09697737 = fieldWeight in 2309, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2309)
      0.25 = coord(1/4)
    
    Content
    "A memorandum against the zeitgeist in this field, addressed to all those - who have to decide on the design of information services, or - who welcome support for their arguments in resisting the pressure of cheap automated systems. Finding and reusing experience and knowledge is vital for every human being and is a precondition for the prosperity of any activity, individual or collective, and even for its secure continued existence. For centuries, libraries have been the service providers charged with this task. A large fund of knowledge and experience has already been built up in this field. In modern times, computerized databases have enabled great progress in capturing and retrieving valuable information. When searching for information on a particular topic, one formulates words or word stems that one knows or suspects in advance will occur in the desired texts. Because it can be automated and is relatively cheap, such an approach appears to many a newcomer to be the egg of Columbus, especially since the laborious preparation of texts for input is dispensed with.
    This search strategy fails in the case of discovery searches (questions of discovery), that is, whenever one is searching for something unknown, as is the rule in practice in the fields of research and development. What is sought may have been expressed by the authors of relevant texts in unlimited manifold ways. It thus eludes text-word searching, for one cannot make unlimited numbers of text words and combinations of them into the search condition. Such a search strategy does the greatest damage where it is also employed for the services of an in-house intranet, that is, where a high degree of completeness of the search results is paramount and where one cannot and must not rely solely on the recollection of author names or of place and date data of documents. If experience or foresight is lacking, the inadequacies of text-word searching become apparent only after one has worked with it in practice for some time. Then it becomes clear to the user that everything he trusted so much, and which indeed usually works well in recall searches, was in reality a rotten Columbus egg: a product of deceptively positive appearance, but with hidden defects that emerge only late. The immense "cooperation difficulties" that exist today between providers and users are probably largely attributable to the unfulfillable promises of disreputable providers, or to the illusions of research groups who operate in a state of downright scandalous ignorance of the information field, however brilliantly they may master information technology. Not only is the damage to the deceived and disappointed user great; the entire profession of information experts is endangered as well.
    Users are offered seductively cheap automated techniques for purchase, with which the expert participation of the information professional can supposedly be dispensed with. That the usability of these products is limited to the recall-search type is misjudged, suppressed, or deliberately concealed by advertising. Effective and competitive work in any field can exist only where management is also aware of the (non-quantifiable) benefit of accurately and promptly provided information and is prepared to invest in it, not only in computer technology but also in knowledgeable and trained personnel. If this development continues, ever more information seekers will be forced to work in a state of continually growing information deficit, much to their own detriment and to the detriment of the community in which they find themselves. This could be avoided by better use of the knowledge and experience of traditional information provision."
    Source
    Information - Wissenschaft und Praxis. 55(2004) H.2, S.72
  11. Fugmann, R.: ¬Das Buchregister : Methodische Grundlagen und praktische Anwendung (2006) 0.00
    7.4363407E-4 = product of:
      0.0029745363 = sum of:
        0.0029745363 = weight(_text_:information in 665) [ClassicSimilarity], result of:
          0.0029745363 = score(doc=665,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.048488684 = fieldWeight in 665, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.01953125 = fieldNorm(doc=665)
      0.25 = coord(1/4)
    
    Footnote
    Review in: Information - Wissenschaft und Praxis 58(2007) H.3, S.186 (J. Bertram): "For all the criticism: the book is an emphatic reminder that a good index should serve the demands of a discovery search as well as those of a recall search, to put it in Fugmann's terms. That is: it should benefit not only those who have read the book, but precisely also those who have not. Not only should remembered wordings be findable again with it; rather, the index should also enable quick answers to the question of whether a book contains anything substantial on the topic of interest. And Fugmann shows vividly and convincingly that serving this second purpose requires an effort that goes beyond a mere keyword index. His wish that reviews of books give (greater) consideration to the presence and quality of the index also deserves unreserved agreement. That the index he himself produced overdoes it is another matter."