Search (10 results, page 1 of 1)

  • author_ss:"Endres-Niggemeyer, B."
  1. Endres-Niggemeyer, B.; Schmidt, B.: Knowledge based classification systems : basic issues, a toy system and further prospects (1989) 0.02
    0.016416686 = product of:
      0.073875085 = sum of:
        0.008467626 = weight(_text_:of in 720) [ClassicSimilarity], result of:
          0.008467626 = score(doc=720,freq=2.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.13821793 = fieldWeight in 720, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=720)
        0.06540746 = weight(_text_:systems in 720) [ClassicSimilarity], result of:
          0.06540746 = score(doc=720,freq=8.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.5432656 = fieldWeight in 720, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0625 = fieldNorm(doc=720)
      0.22222222 = coord(2/9)
    
    Abstract
    This article propagates expert systems for classification by (1) explaining the conceptual affinity (especially) between faceted classification schemes and frame representations, using a simple example and a toy system for demonstration purposes, (2) reviewing some approaches to classificational knowledge processing, both from Artificial Intelligence and Classification Research or Information Science, in order to prepare the ground for the development of more comprehensive systems: expert systems for classification
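    The explain tree above can be reproduced with a short sketch of Lucene's ClassicSimilarity TF-IDF scoring (a minimal reimplementation for illustration, not Lucene itself; the variable names mirror the explain output):

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    # weight = queryWeight * fieldWeight, exactly as in the explain tree
    term_idf = idf(doc_freq, max_docs)
    query_weight = term_idf * query_norm                     # idf * queryNorm
    field_weight = math.sqrt(freq) * term_idf * field_norm   # tf * idf * fieldNorm
    return query_weight * field_weight

# Result 1 (doc 720): terms "of" and "systems"; 2 of 9 query terms match.
s_of = term_score(freq=2.0, doc_freq=25162, max_docs=44218,
                  query_norm=0.03917671, field_norm=0.0625)
s_systems = term_score(freq=8.0, doc_freq=5561, max_docs=44218,
                       query_norm=0.03917671, field_norm=0.0625)
score = (2 / 9) * (s_of + s_systems)  # coord(2/9) * sum of term weights
```

    The result agrees with the reported document score of 0.016416686 up to float32 rounding; the same formula accounts for every explain tree on this page, with only freq, docFreq, fieldNorm, and coord varying per hit.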
  2. Endres-Niggemeyer, B.: Summarising text for intelligent communication : results of the Dagstuhl seminar (1994) 0.01
    0.008606319 = product of:
      0.03872844 = sum of:
        0.014200641 = weight(_text_:of in 8867) [ClassicSimilarity], result of:
          0.014200641 = score(doc=8867,freq=10.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.23179851 = fieldWeight in 8867, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=8867)
        0.0245278 = weight(_text_:systems in 8867) [ClassicSimilarity], result of:
          0.0245278 = score(doc=8867,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.2037246 = fieldWeight in 8867, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=8867)
      0.22222222 = coord(2/9)
    
    Abstract
    As a result of the transition to full-text storage, multimedia and networking, information systems are becoming more efficient but at the same time more difficult to use, in particular because users are confronted with information volumes that increasingly exceed individual processing capacities. Consequently, there is an increase in the demand for user aids such as summarising techniques. Against this background, the interdisciplinary Dagstuhl Seminar 'Summarising Text for Intelligent Communication' (Dec. 1993) outlined the academic state of the art with regard to summarising (abstracting) and proposed future directions for research and system development. Research is currently shifting its attention from text summarising to summarising states of affairs. Recycling solutions are put forward in order to satisfy short-term needs for summarisation products. In the medium and long term, it is necessary to devise concepts and methods of intelligent summarising which have a better formal and empirical grounding and a more modular organisation
  3. Endres-Niggemeyer, B.: Summarizing information (1998) 0.01
    0.007895015 = product of:
      0.03552757 = sum of:
        0.010999769 = weight(_text_:of in 688) [ClassicSimilarity], result of:
          0.010999769 = score(doc=688,freq=6.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.17955035 = fieldWeight in 688, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=688)
        0.0245278 = weight(_text_:systems in 688) [ClassicSimilarity], result of:
          0.0245278 = score(doc=688,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.2037246 = fieldWeight in 688, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=688)
      0.22222222 = coord(2/9)
    
    Abstract
    Summarizing is the process of reducing the large information size of something like a novel or a scientific paper to a short summary or abstract comprising only the most essential points. Summarizing is frequent in everyday communication, but it is also a professional skill for journalists and others. Automated summarizing functions are urgently needed by Internet users who wish to avoid being overwhelmed by information. This book presents the state of the art and surveys related research; it deals with everyday and professional summarizing as well as computerized approaches. The author focuses in detail on the cognitive process involved in summarizing and supports this with a multimedia simulation system on the accompanying CD-ROM
  4. Endres-Niggemeyer, B.; Jauris-Heipke, S.; Pinsky, S.M.; Ulbricht, U.: Wissen gewinnen durch Wissen : Ontologiebasierte Informationsextraktion (2006) 0.00
    0.003933648 = product of:
      0.03540283 = sum of:
        0.03540283 = weight(_text_:systems in 6016) [ClassicSimilarity], result of:
          0.03540283 = score(doc=6016,freq=6.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.29405114 = fieldWeight in 6016, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6016)
      0.11111111 = coord(1/9)
    
    Abstract
    The ontology-based information extraction reported on here is part of a system for automatic summarization that models the approach of competent human summarizers. Behind this lies the assumption that people can more readily adopt a system's results when they have been produced by procedures they themselves also use. The first application domain is bone marrow transplantation (BMT). At the core of the Summit-BMT system (Summarize It in Bone Marrow Transplantation) is an ontology of the domain. It is implemented as a MySQL database and supplies human users and system components with knowledge. Summit-BMT supports query formulation with an empirically grounded scenario interface. The retrieval results are pre-selected by text-passage retrieval and then submitted to cognitively grounded agents, which use their knowledge base / ontology to check more precisely whether the propositions of the user query are matched. The relevant text clips from the source document are entered into the scenario form and presented with a link to their occurrence in the original. This article focuses on the ontology and its use for knowledge-based information extraction. The ontology database holds different types of knowledge in a form that allows them to be combined easily: concepts, propositions and their syntactic-semantic schemata, unifiers, paraphrases, and definitions of query scenarios. These are drawn on by the system agents, which execute summarization strategies adapted from humans. Shortcomings in other processing steps lead to losses, but the quality of the results ultimately stands or falls with the quality of the ontology. First tests of the extraction performance are strikingly positive.
  5. Endres-Niggemeyer, B.; Neugebauer, E.: Professional summarizing : no cognitive simulation without observation (1998) 0.00
    0.002661118 = product of:
      0.023950063 = sum of:
        0.023950063 = weight(_text_:of in 3243) [ClassicSimilarity], result of:
          0.023950063 = score(doc=3243,freq=16.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.39093933 = fieldWeight in 3243, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=3243)
      0.11111111 = coord(1/9)
    
    Abstract
    Develops a cognitive model of expert summarization, using 54 working processes of 6 experts recorded in thinking-aloud protocols. It comprises up to 140 working steps. Components of the model are a toolbox of empirically founded strategies, principles of process organization, and interpreted working steps where the interaction of cognitive strategies can be investigated. In the computerized simulation system SimSum (Simulation of Summarizing), cognitive strategies are represented by object-oriented agents grouped around dedicated blackboards
    Source
    Journal of the American Society for Information Science. 49(1998) no.6, S.486-506
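    The agents-around-a-blackboard organization described in the abstract above can be sketched in a few lines. This is a toy illustration of the general blackboard pattern, not SimSum itself; all names and the two stand-in strategies are hypothetical:

```python
# Hypothetical sketch of agents posting contributions to a shared blackboard.
class Blackboard:
    def __init__(self):
        self.entries = []

    def post(self, item):
        self.entries.append(item)

class Agent:
    """A cognitive strategy wrapped as an object-oriented agent."""
    def __init__(self, name, strategy):
        self.name = name
        self.strategy = strategy  # callable: text -> contribution or None

    def work(self, text, blackboard):
        contribution = self.strategy(text)
        if contribution is not None:
            blackboard.post((self.name, contribution))

# Two toy strategies standing in for the empirically founded ones.
topic_finder = Agent("topic-finder", lambda t: t.split(". ")[0])
length_check = Agent("length-check", lambda t: len(t.split()))

board = Blackboard()
text = "Summarizing reduces a text to its essential points. It is studied empirically."
for agent in (topic_finder, length_check):
    agent.work(text, board)
```

    Each agent reads the shared input and writes its result to the blackboard, where other agents (or a control component) can pick it up; this decoupling is what lets interaction between strategies be observed, as the abstract describes.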
  6. Endres-Niggemeyer, B.: Content analysis : a special case of text compression (1989) 0.00
    0.0023521183 = product of:
      0.021169065 = sum of:
        0.021169065 = weight(_text_:of in 3549) [ClassicSimilarity], result of:
          0.021169065 = score(doc=3549,freq=8.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.34554482 = fieldWeight in 3549, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=3549)
      0.11111111 = coord(1/9)
    
    Abstract
    Presents a theoretical model, based on the Flower/Hayes model of expository writing, of the process involved in content analysis for abstracting and indexing.
    Source
    Information, knowledge, evolution. Proceedings of the 44th FID Congress, Helsinki, 28.8.-1.9.1988. Ed. by S. Koshiala and R. Launo
  7. Endres-Niggemeyer, B.; Maier, E.; Sigel, A.: How to implement a naturalistic model of abstracting : four core working steps of an expert abstractor (1995) 0.00
    0.0021780923 = product of:
      0.01960283 = sum of:
        0.01960283 = weight(_text_:of in 2930) [ClassicSimilarity], result of:
          0.01960283 = score(doc=2930,freq=14.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.31997898 = fieldWeight in 2930, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2930)
      0.11111111 = coord(1/9)
    
    Abstract
    4 working steps taken from a comprehensive empirical model of expert abstracting are studied in order to prepare an explorative implementation of a simulation model. It aims at explaining the knowledge processing activities during professional summarizing. Following the case-based and holistic strategy of qualitative empirical research, the main features of the simulation system were developed by investigating in detail a small but central test case - 4 working steps where an expert abstractor discovers what the paper is about and drafts the topic sentence of the abstract
  8. Endres-Niggemeyer, B.: SimSum : an empirically founded simulation of summarizing (2000) 0.00
    0.0016464829 = product of:
      0.014818345 = sum of:
        0.014818345 = weight(_text_:of in 3343) [ClassicSimilarity], result of:
          0.014818345 = score(doc=3343,freq=2.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.24188137 = fieldWeight in 3343, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.109375 = fieldNorm(doc=3343)
      0.11111111 = coord(1/9)
    
  9. Endres-Niggemeyer, B.: ¬An empirical process model of abstracting (1992) 0.00
    0.0014112709 = product of:
      0.012701439 = sum of:
        0.012701439 = weight(_text_:of in 8834) [ClassicSimilarity], result of:
          0.012701439 = score(doc=8834,freq=2.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.20732689 = fieldWeight in 8834, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.09375 = fieldNorm(doc=8834)
      0.11111111 = coord(1/9)
    
  10. Sparck Jones, K.; Endres-Niggemeyer, B.: Introduction: automatic summarizing (1995) 0.00
    0.001330559 = product of:
      0.011975031 = sum of:
        0.011975031 = weight(_text_:of in 2931) [ClassicSimilarity], result of:
          0.011975031 = score(doc=2931,freq=4.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.19546966 = fieldWeight in 2931, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=2931)
      0.11111111 = coord(1/9)
    
    Abstract
    Automatic summarizing is a research topic whose time has come. The papers illustrate some of the relevant work already under way. Places these papers in their wider context: why research and development on automatic summarizing is timely, what areas of work and ideas it should draw on, how future investigations and experiments can be effectively framed