Search (181 results, page 1 of 10)

  • type_ss:"el"
  • year_i:[1990 TO 2000}
  1. Marloth, H.: Thesen über die Beziehungen zwischen Informationspolitik, Informationswissenschaft und Informationspraxis : Saarbrücker Thesen (1996) 0.08
    0.07554612 = product of:
      0.33995757 = sum of:
        0.019883756 = weight(_text_:und in 3275) [ClassicSimilarity], result of:
          0.019883756 = score(doc=3275,freq=4.0), product of:
            0.0574165 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.025905682 = queryNorm
            0.34630734 = fieldWeight in 3275, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=3275)
        0.10059894 = weight(_text_:informationswissenschaft in 3275) [ClassicSimilarity], result of:
          0.10059894 = score(doc=3275,freq=6.0), product of:
            0.11669745 = queryWeight, product of:
              4.504705 = idf(docFreq=1328, maxDocs=44218)
              0.025905682 = queryNorm
            0.86204916 = fieldWeight in 3275, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.504705 = idf(docFreq=1328, maxDocs=44218)
              0.078125 = fieldNorm(doc=3275)
        0.10059894 = weight(_text_:informationswissenschaft in 3275) [ClassicSimilarity], result of:
          0.10059894 = score(doc=3275,freq=6.0), product of:
            0.11669745 = queryWeight, product of:
              4.504705 = idf(docFreq=1328, maxDocs=44218)
              0.025905682 = queryNorm
            0.86204916 = fieldWeight in 3275, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.504705 = idf(docFreq=1328, maxDocs=44218)
              0.078125 = fieldNorm(doc=3275)
        0.11887591 = weight(_text_:informationspraxis in 3275) [ClassicSimilarity], result of:
          0.11887591 = score(doc=3275,freq=2.0), product of:
            0.16695212 = queryWeight, product of:
              6.444614 = idf(docFreq=190, maxDocs=44218)
              0.025905682 = queryNorm
            0.71203595 = fieldWeight in 3275, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.444614 = idf(docFreq=190, maxDocs=44218)
              0.078125 = fieldNorm(doc=3275)
      0.22222222 = coord(4/18)
    
    Content
    Lecture delivered at the annual meeting of the Bundesfachschaftstagung Information und Dokumentation on 7 June 1996 in Saarbrücken. With a historical outline of the development of information science in Germany.
    Field
    Informationswissenschaft
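The explain trees in these results follow Lucene's ClassicSimilarity (TF-IDF) scoring. A minimal sketch reconstructing the factors displayed for the `und` clause of result 1, assuming the standard ClassicSimilarity formulas (tf = √freq, idf = 1 + ln(maxDocs/(docFreq+1)), queryWeight = idf · queryNorm, fieldWeight = tf · idf · fieldNorm, document score = clause sum × coord):

```python
import math

def tf(freq):
    # ClassicSimilarity term frequency: square root of the raw frequency
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    # ClassicSimilarity inverse document frequency
    return 1.0 + math.log(max_docs / (doc_freq + 1))

# Factors displayed in the explain tree for weight(_text_:und in 3275)
query_norm = 0.025905682
field_norm = 0.078125

i = idf(13101, 44218)                    # -> 2.216367
query_weight = i * query_norm            # -> 0.0574165
field_weight = tf(4.0) * i * field_norm  # -> 0.34630734
score = query_weight * field_weight      # -> 0.019883756

# Document score: sum of the four matching clauses, scaled by coord(4/18)
clauses = [score, 0.10059894, 0.10059894, 0.11887591]
doc_score = sum(clauses) * (4 / 18)      # -> 0.07554612
```

Every number in the tree above is reproduced by these four formulas; the headline value 0.08 is simply the document score rounded to two decimals.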
  2. Deutsch Korrekt : Das Prüfprogramm für Texte (1996) 0.02
    0.02090202 = product of:
      0.12541212 = sum of:
        0.023860507 = weight(_text_:und in 5968) [ClassicSimilarity], result of:
          0.023860507 = score(doc=5968,freq=4.0), product of:
            0.0574165 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.025905682 = queryNorm
            0.41556883 = fieldWeight in 5968, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.09375 = fieldNorm(doc=5968)
        0.08738473 = weight(_text_:automatisches in 5968) [ClassicSimilarity], result of:
          0.08738473 = score(doc=5968,freq=2.0), product of:
            0.13066888 = queryWeight, product of:
              5.044024 = idf(docFreq=774, maxDocs=44218)
              0.025905682 = queryNorm
            0.6687494 = fieldWeight in 5968, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.044024 = idf(docFreq=774, maxDocs=44218)
              0.09375 = fieldNorm(doc=5968)
        0.01416689 = product of:
          0.042500667 = sum of:
            0.042500667 = weight(_text_:29 in 5968) [ClassicSimilarity], result of:
              0.042500667 = score(doc=5968,freq=2.0), product of:
                0.09112809 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.025905682 = queryNorm
                0.46638384 = fieldWeight in 5968, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5968)
          0.33333334 = coord(1/3)
      0.16666667 = coord(3/18)
    
    Abstract
    Automatic proofing program for German orthography; checks spelling against the new rules - including correct hyphenation, compounding and word derivation
    Date
    21.12.1996 10:23:29
    Issue
    For Windows 3.x and Windows 95
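The explain tree of result 2 shows how nested optional clauses combine: the date clause ("29") sits inside a sub-query whose sum is scaled by its own coord(1/3) before entering the top-level sum, which in turn is scaled by coord(3/18). A small sketch recomputing the displayed score from the clause values above:

```python
# Clause scores displayed in the explain tree for doc 5968
und_clause = 0.023860507
automatisches_clause = 0.08738473

# The date clause sits inside a sub-query of three optional parts,
# of which only one matched: its sum is scaled by coord(1/3).
date_sub = 0.042500667 * (1 / 3)   # -> 0.01416689

# Top level: 3 of 18 query clauses matched -> coord(3/18)
doc_score = (und_clause + automatisches_clause + date_sub) * (3 / 18)
print(doc_score)                    # -> ~0.02090202
```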
  3. Oehler, A.: Informationssuche im Internet : In welchem Ausmaß entsprechen existierende Suchwerkzeuge für das World Wide Web Anforderungen für die wissenschaftliche Suche (1998) 0.02
    0.016393319 = product of:
      0.09835991 = sum of:
        0.017046768 = weight(_text_:und in 826) [ClassicSimilarity], result of:
          0.017046768 = score(doc=826,freq=6.0), product of:
            0.0574165 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.025905682 = queryNorm
            0.2968967 = fieldWeight in 826, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=826)
        0.040656574 = weight(_text_:informationswissenschaft in 826) [ClassicSimilarity], result of:
          0.040656574 = score(doc=826,freq=2.0), product of:
            0.11669745 = queryWeight, product of:
              4.504705 = idf(docFreq=1328, maxDocs=44218)
              0.025905682 = queryNorm
            0.348393 = fieldWeight in 826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.504705 = idf(docFreq=1328, maxDocs=44218)
              0.0546875 = fieldNorm(doc=826)
        0.040656574 = weight(_text_:informationswissenschaft in 826) [ClassicSimilarity], result of:
          0.040656574 = score(doc=826,freq=2.0), product of:
            0.11669745 = queryWeight, product of:
              4.504705 = idf(docFreq=1328, maxDocs=44218)
              0.025905682 = queryNorm
            0.348393 = fieldWeight in 826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.504705 = idf(docFreq=1328, maxDocs=44218)
              0.0546875 = fieldNorm(doc=826)
      0.16666667 = coord(3/18)
    
    Abstract
    The Internet now offers an enormous and constantly growing volume of documents, making it an important source of information for scholarly searchers alongside traditional print publications. In contrast to the relatively well-ordered world of printed publications, however, it is difficult to locate information on the Internet in a targeted way. The reasons include not only the widely varying quality and purpose of the material but also characteristics of Internet documents themselves, such as their presentation as open, distributed hypertext and the ease with which they can be altered. This thesis examines to what extent current search services for the WWW meet the requirements of scholarly information seeking. To that end it surveys how the various types of search services work (robot-based, manually compiled, and simultaneous) and their general strengths and weaknesses
    Footnote
    Master's thesis in information science at the Freie Universität Berlin
  4. Wätjen, H.-J.: Automatisches Sammeln, Klassifizieren und Indexieren von wissenschaftlich relevanten Informationsressourcen im deutschen World Wide Web : das DFG-Projekt GERHARD (1998) 0.01
    0.013004871 = product of:
      0.11704384 = sum of:
        0.0140599385 = weight(_text_:und in 3066) [ClassicSimilarity], result of:
          0.0140599385 = score(doc=3066,freq=2.0), product of:
            0.0574165 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.025905682 = queryNorm
            0.24487628 = fieldWeight in 3066, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=3066)
        0.1029839 = weight(_text_:automatisches in 3066) [ClassicSimilarity], result of:
          0.1029839 = score(doc=3066,freq=4.0), product of:
            0.13066888 = queryWeight, product of:
              5.044024 = idf(docFreq=774, maxDocs=44218)
              0.025905682 = queryNorm
            0.78812873 = fieldWeight in 3066, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.044024 = idf(docFreq=774, maxDocs=44218)
              0.078125 = fieldNorm(doc=3066)
      0.11111111 = coord(2/18)
    
    Theme
    Automatisches Klassifizieren
  5. Subramanian, S.; Shafer, K.E.: Clustering (1998) 0.01
    0.012751023 = product of:
      0.11475921 = sum of:
        0.07282061 = weight(_text_:automatisches in 1103) [ClassicSimilarity], result of:
          0.07282061 = score(doc=1103,freq=2.0), product of:
            0.13066888 = queryWeight, product of:
              5.044024 = idf(docFreq=774, maxDocs=44218)
              0.025905682 = queryNorm
            0.55729115 = fieldWeight in 1103, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.044024 = idf(docFreq=774, maxDocs=44218)
              0.078125 = fieldNorm(doc=1103)
        0.041938595 = weight(_text_:indexing in 1103) [ClassicSimilarity], result of:
          0.041938595 = score(doc=1103,freq=2.0), product of:
            0.099163525 = queryWeight, product of:
              3.8278677 = idf(docFreq=2614, maxDocs=44218)
              0.025905682 = queryNorm
            0.42292362 = fieldWeight in 1103, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8278677 = idf(docFreq=2614, maxDocs=44218)
              0.078125 = fieldNorm(doc=1103)
      0.11111111 = coord(2/18)
    
    Abstract
    This article presents our exploration of computer science clustering algorithms as they relate to the Scorpion system. Scorpion is a research project at OCLC that explores the indexing and cataloging of electronic resources. For a more complete description of the Scorpion, please visit the Scorpion Web site at <http://purl.oclc.org/scorpion>
    Theme
    Automatisches Klassifizieren
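The Scorpion abstract above refers to clustering algorithms for electronic resources. As an illustration only (not the algorithm Scorpion uses), a minimal single-pass clustering sketch over term-frequency vectors, with cosine similarity and a hypothetical threshold parameter:

```python
import math

def cosine(a, b):
    # Cosine similarity between two dense term-frequency vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def single_pass_cluster(vectors, threshold=0.5):
    # Single-pass clustering: join the first cluster whose centroid is
    # similar enough, otherwise open a new cluster. The threshold is an
    # illustrative tuning parameter.
    clusters = []
    for v in vectors:
        for members in clusters:
            centroid = [sum(col) / len(members) for col in zip(*members)]
            if cosine(v, centroid) >= threshold:
                members.append(v)
                break
        else:
            clusters.append([v])
    return clusters

# Two obviously similar pairs of documents end up in two clusters
docs = [[5, 0, 0], [4, 1, 0], [0, 0, 5], [0, 1, 4]]
print(single_pass_cluster(docs))
```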
  6. Guinness Multimedia CD-ROM der Rekorde : die ganze Welt der Superlative (1996) 0.01
    0.01169036 = product of:
      0.10521323 = sum of:
        0.08573121 = weight(_text_:buch in 4391) [ClassicSimilarity], result of:
          0.08573121 = score(doc=4391,freq=6.0), product of:
            0.1204451 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.025905682 = queryNorm
            0.71178657 = fieldWeight in 4391, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.0625 = fieldNorm(doc=4391)
        0.019482022 = weight(_text_:und in 4391) [ClassicSimilarity], result of:
          0.019482022 = score(doc=4391,freq=6.0), product of:
            0.0574165 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.025905682 = queryNorm
            0.33931053 = fieldWeight in 4391, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=4391)
      0.11111111 = coord(2/18)
    
    Content
    Over 5,000 of the latest records and top performances in text, sound, image and video; 60 original recordings of record successes and peak achievements
    Footnote
    With the supplement: Das neue Guiness Buch der Spass Rekorde. Frankfurt/M.: Ullstein 1994. 140 p. (Ullstein Buch; Nr.23302) ISBN 3-548-23302-3
    Object
    Guinness Buch der Rekorde
  7. Ingenerf, J.: Literatur zum Thema Terminologie (1993) 0.01
    0.010446441 = product of:
      0.09401797 = sum of:
        0.07453594 = weight(_text_:allgemeines in 3183) [ClassicSimilarity], result of:
          0.07453594 = score(doc=3183,freq=2.0), product of:
            0.14780287 = queryWeight, product of:
              5.705423 = idf(docFreq=399, maxDocs=44218)
              0.025905682 = queryNorm
            0.5042929 = fieldWeight in 3183, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.705423 = idf(docFreq=399, maxDocs=44218)
              0.0625 = fieldNorm(doc=3183)
        0.019482022 = weight(_text_:und in 3183) [ClassicSimilarity], result of:
          0.019482022 = score(doc=3183,freq=6.0), product of:
            0.0574165 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.025905682 = queryNorm
            0.33931053 = fieldWeight in 3183, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=3183)
      0.11111111 = coord(2/18)
    
    Content
    Contains references on the following topics: general works, in particular on institutions concerned with terminology // concrete ordering systems / dictionaries: medicine // concrete ordering systems / dictionaries: computer science // foundations // principles of compiling a dictionary // terminology and NLP: in general and in medicine // terminology and AI: in general and in medicine // terminology and conceptual modelling: ontology // standards // computer-supported and formally reconstructed terminology // standardization, sharing, reuse // tools for dictionary construction
  8. Meine Traumburg : der Geschichten-Baukasten für kreative Kinder (1995) 0.01
    0.009999004 = product of:
      0.08999104 = sum of:
        0.061871164 = weight(_text_:buch in 5474) [ClassicSimilarity], result of:
          0.061871164 = score(doc=5474,freq=2.0), product of:
            0.1204451 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.025905682 = queryNorm
            0.5136877 = fieldWeight in 5474, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.078125 = fieldNorm(doc=5474)
        0.028119877 = weight(_text_:und in 5474) [ClassicSimilarity], result of:
          0.028119877 = score(doc=5474,freq=8.0), product of:
            0.0574165 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.025905682 = queryNorm
            0.48975256 = fieldWeight in 5474, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=5474)
      0.11111111 = coord(2/18)
    
    Abstract
    As with a construction kit, children can choose figures, place them in colourful scenes and control the plot and the interplay of the figures using easily understood symbols. Mazes and obstacle courses playfully help the child to solve problems and develop memory skills
    Pages
    32 p. (book) + 1 CD
  9. Rötzer, F.: Sahra Wagenknecht über die Digitalisierung (1999) 0.01
    0.009250735 = product of:
      0.08325662 = sum of:
        0.061249334 = weight(_text_:buch in 3951) [ClassicSimilarity], result of:
          0.061249334 = score(doc=3951,freq=4.0), product of:
            0.1204451 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.025905682 = queryNorm
            0.5085249 = fieldWeight in 3951, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3951)
        0.022007287 = weight(_text_:und in 3951) [ClassicSimilarity], result of:
          0.022007287 = score(doc=3951,freq=10.0), product of:
            0.0574165 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.025905682 = queryNorm
            0.38329202 = fieldWeight in 3951, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3951)
      0.11111111 = coord(2/18)
    
    Abstract
    Florian Rötzer spoke at length with Sahra Wagenknecht in a conversation from which the book "Couragiert gegen den Strom. Über Goethe, die Macht und die Zukunft!" emerged, discussing among other things how culture and philosophical thinking have shaped the political ideas and political style of the left-wing politician. The conversation also covered capitalism and its abolition, the core of left-wing politics, competition in the economy, and digitalization, as well as ideas for averting the worst with a machine tax or an unconditional basic income. Telepolis publishes an excerpt from the book, which has been published by Westend Verlag.
  10. Neues großes Lexikon in Farbe : Von A-Z (1995) 0.01
    0.00908388 = product of:
      0.08175492 = sum of:
        0.061871164 = weight(_text_:buch in 1202) [ClassicSimilarity], result of:
          0.061871164 = score(doc=1202,freq=2.0), product of:
            0.1204451 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.025905682 = queryNorm
            0.5136877 = fieldWeight in 1202, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.078125 = fieldNorm(doc=1202)
        0.019883756 = weight(_text_:und in 1202) [ClassicSimilarity], result of:
          0.019883756 = score(doc=1202,freq=4.0), product of:
            0.0574165 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.025905682 = queryNorm
            0.34630734 = fieldWeight in 1202, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=1202)
      0.11111111 = coord(2/18)
    
    Content
    "Über 50.000 aktuelle Stichwörter aus allen Wissensgebieten mit über 1.500 Abbildugen bieten universelle und übersichtliche Informationen von A-Z. Das besonders anwendungsfreundliche Lexikon mit den Vorteilen der leistungsstarken Benutzeroberfläche Windows: Schneller und gezielter Zugriff auf die gewünschten Informationen ohne langes Suchen - das kann kein Buch leisten!"
  11. ¬Das große Data Becker Lexikon (1995) 0.01
    0.00908388 = product of:
      0.08175492 = sum of:
        0.061871164 = weight(_text_:buch in 5368) [ClassicSimilarity], result of:
          0.061871164 = score(doc=5368,freq=2.0), product of:
            0.1204451 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.025905682 = queryNorm
            0.5136877 = fieldWeight in 5368, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.078125 = fieldNorm(doc=5368)
        0.019883756 = weight(_text_:und in 5368) [ClassicSimilarity], result of:
          0.019883756 = score(doc=5368,freq=4.0), product of:
            0.0574165 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.025905682 = queryNorm
            0.34630734 = fieldWeight in 5368, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=5368)
      0.11111111 = coord(2/18)
    
    Abstract
    Universal encyclopedia with videos, photos, animations, music and more; CD 1: encyclopedia CD; CD 2: VideoPlus bonus CD
    Content
    Over 50,000 entries; over 90 min of video; 2,200 photos, diagrams, graphics and drawings; 90 min of music, animal sounds and idioms; encyclopedia quiz with 400 questions
    Pages
    86 p. (book) + 2 CDs
  12. Endres-Niggemeyer, B.: Bessere Information durch Zusammenfassen aus dem WWW (1999) 0.01
    0.008637612 = product of:
      0.07773851 = sum of:
        0.019482022 = weight(_text_:und in 4496) [ClassicSimilarity], result of:
          0.019482022 = score(doc=4496,freq=6.0), product of:
            0.0574165 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.025905682 = queryNorm
            0.33931053 = fieldWeight in 4496, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=4496)
        0.05825649 = weight(_text_:automatisches in 4496) [ClassicSimilarity], result of:
          0.05825649 = score(doc=4496,freq=2.0), product of:
            0.13066888 = queryWeight, product of:
              5.044024 = idf(docFreq=774, maxDocs=44218)
              0.025905682 = queryNorm
            0.44583294 = fieldWeight in 4496, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.044024 = idf(docFreq=774, maxDocs=44218)
              0.0625 = fieldNorm(doc=4496)
      0.11111111 = coord(2/18)
    
    Abstract
    Using bone marrow transplantation, a medical specialty, as an example, the following shows how users can be relieved of a large part of the effort of acquiring knowledge by summarizing Web search results with respect to their question. In time-critical situations, which are routine in diagnosis and therapy, this makes the uptake of new knowledge possible. An overview of the state of the art in text summarization and ontology development is followed by a system sketch in which Web searching is complemented by a cognitively grounded summarization system. For this purpose a domain ontology is proposed that organizes and represents the required knowledge.
    Theme
    Automatisches Abstracting
  13. Chan, L.M.; Lin, X.; Zeng, M.: Structural and multilingual approaches to subject access on the Web (1999) 0.01
    0.0077227154 = product of:
      0.06950444 = sum of:
        0.011247951 = weight(_text_:und in 162) [ClassicSimilarity], result of:
          0.011247951 = score(doc=162,freq=2.0), product of:
            0.0574165 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.025905682 = queryNorm
            0.19590102 = fieldWeight in 162, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=162)
        0.05825649 = weight(_text_:automatisches in 162) [ClassicSimilarity], result of:
          0.05825649 = score(doc=162,freq=2.0), product of:
            0.13066888 = queryWeight, product of:
              5.044024 = idf(docFreq=774, maxDocs=44218)
              0.025905682 = queryNorm
            0.44583294 = fieldWeight in 162, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.044024 = idf(docFreq=774, maxDocs=44218)
              0.0625 = fieldNorm(doc=162)
      0.11111111 = coord(2/18)
    
    Abstract
    Among the great challenges of meaningful searching on the WWW are the sheer volume of available material and the language barriers. Methods that structure Web resources by content for more efficient retrieval are therefore needed just as urgently as programs that can cope with the diversity of languages. The following talk discusses some of the approaches currently being taken to tackle these two problems
    Theme
    Automatisches Klassifizieren
  14. Chronik der Technik : die technischen Errungenschaften von 3000 v.Chr. bis heute (1995) 0.01
    0.007664329 = product of:
      0.06897896 = sum of:
        0.049496934 = weight(_text_:buch in 4926) [ClassicSimilarity], result of:
          0.049496934 = score(doc=4926,freq=2.0), product of:
            0.1204451 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.025905682 = queryNorm
            0.41095015 = fieldWeight in 4926, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.0625 = fieldNorm(doc=4926)
        0.019482022 = weight(_text_:und in 4926) [ClassicSimilarity], result of:
          0.019482022 = score(doc=4926,freq=6.0), product of:
            0.0574165 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.025905682 = queryNorm
            0.33931053 = fieldWeight in 4926, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=4926)
      0.11111111 = coord(2/18)
    
    Content
    Approx. 4,000 entries on technical developments of the past 5,000 years; approx. 2,000 detailed articles; Nobel Prize winners in physics and chemistry; a lexicon of scientists; approx. 2,000 photos, graphics, portraits and maps; approx. 30 min of video; over 30 animations; approx. 30 min of slide shows; quiz; technology glossary
    Footnote
    Review in: Spektrum der Wissenschaft 1996, H.9, S.124-125 (G. Wolfschmidt): "Once the animations, the moving images and the quiz have been played through, one still prefers to leaf through the book"
  15. Sebastiani, F.: ¬A tutorial on automated text categorisation (1999) 0.01
    0.007650614 = product of:
      0.068855524 = sum of:
        0.043692365 = weight(_text_:automatisches in 3390) [ClassicSimilarity], result of:
          0.043692365 = score(doc=3390,freq=2.0), product of:
            0.13066888 = queryWeight, product of:
              5.044024 = idf(docFreq=774, maxDocs=44218)
              0.025905682 = queryNorm
            0.3343747 = fieldWeight in 3390, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.044024 = idf(docFreq=774, maxDocs=44218)
              0.046875 = fieldNorm(doc=3390)
        0.02516316 = weight(_text_:indexing in 3390) [ClassicSimilarity], result of:
          0.02516316 = score(doc=3390,freq=2.0), product of:
            0.099163525 = queryWeight, product of:
              3.8278677 = idf(docFreq=2614, maxDocs=44218)
              0.025905682 = queryNorm
            0.2537542 = fieldWeight in 3390, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8278677 = idf(docFreq=2614, maxDocs=44218)
              0.046875 = fieldNorm(doc=3390)
      0.11111111 = coord(2/18)
    
    Abstract
    The automated categorisation (or classification) of texts into topical categories has a long history, dating back at least to 1960. Until the late '80s, the dominant approach to the problem involved knowledge engineering: manually building automatic categorisers as sets of rules encoding expert knowledge on how to classify documents. In the '90s, with the booming production and availability of on-line documents, automated text categorisation has witnessed increased and renewed interest. A newer paradigm based on machine learning has superseded the previous approach. Within this paradigm, a general inductive process automatically builds a classifier by "learning", from a set of previously classified documents, the characteristics of one or more categories; the advantages are very good effectiveness, considerable savings in expert manpower, and domain independence. In this tutorial we look at the main approaches that have been taken towards automatic text categorisation within the general machine learning paradigm. Issues of document indexing, classifier construction, and classifier evaluation will be touched upon.
    Theme
    Automatisches Klassifizieren
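Sebastiani's abstract describes the machine-learning paradigm: inducing a classifier from a set of previously classified documents. A minimal sketch of one such inductive learner, a multinomial Naive Bayes categoriser with add-one smoothing (an illustrative choice, not one singled out by the tutorial):

```python
import math
from collections import Counter, defaultdict

def train(labelled_docs):
    # labelled_docs: list of (tokens, category) pairs, i.e. the
    # "previously classified documents" the classifier learns from.
    cat_docs = Counter()
    cat_words = defaultdict(Counter)
    vocab = set()
    for tokens, cat in labelled_docs:
        cat_docs[cat] += 1
        cat_words[cat].update(tokens)
        vocab.update(tokens)
    return cat_docs, cat_words, vocab

def classify(tokens, model):
    # Pick the category with the highest log-probability,
    # using add-one (Laplace) smoothing for unseen words.
    cat_docs, cat_words, vocab = model
    total = sum(cat_docs.values())
    best, best_lp = None, -math.inf
    for cat in cat_docs:
        lp = math.log(cat_docs[cat] / total)
        denom = sum(cat_words[cat].values()) + len(vocab)
        for t in tokens:
            lp += math.log((cat_words[cat][t] + 1) / denom)
        if lp > best_lp:
            best, best_lp = cat, lp
    return best

model = train([("ball goal match".split(), "sport"),
               ("bank stock market".split(), "finance")])
print(classify("goal match".split(), model))  # -> sport
```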
  16. Koch, T.; Ardö, A.; Brümmer, A.: ¬The building and maintenance of robot based internet search services : A review of current indexing and data collection methods. Prepared to meet the requirements of Work Package 3 of EU Telematics for Research, project DESIRE. Version D3.11v0.3 (Draft version 3) (1996) 0.01
    0.0074043632 = product of:
      0.06663927 = sum of:
        0.029128244 = weight(_text_:automatisches in 1669) [ClassicSimilarity], result of:
          0.029128244 = score(doc=1669,freq=2.0), product of:
            0.13066888 = queryWeight, product of:
              5.044024 = idf(docFreq=774, maxDocs=44218)
              0.025905682 = queryNorm
            0.22291647 = fieldWeight in 1669, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.044024 = idf(docFreq=774, maxDocs=44218)
              0.03125 = fieldNorm(doc=1669)
        0.037511025 = weight(_text_:indexing in 1669) [ClassicSimilarity], result of:
          0.037511025 = score(doc=1669,freq=10.0), product of:
            0.099163525 = queryWeight, product of:
              3.8278677 = idf(docFreq=2614, maxDocs=44218)
              0.025905682 = queryNorm
            0.3782744 = fieldWeight in 1669, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.8278677 = idf(docFreq=2614, maxDocs=44218)
              0.03125 = fieldNorm(doc=1669)
      0.11111111 = coord(2/18)
    
    Abstract
    After a short outline of the problems, possibilities and difficulties of systematic information retrieval on the Internet, and a description of development efforts in this area, the terminology used in this report needs to be specified. Although retrieval is generally an iterative process of browsing and searching, and several important services on the net take this into account, the emphasis of this report lies on the general retrieval tools for the Internet as a whole. In order to evaluate the differences, possibilities and restrictions of the different services, it is necessary to begin by organizing the existing varieties in a typological/taxonomical survey. The possibilities and weaknesses of the most important services are briefly compared and described in the categories of robot-based WWW catalogues of different types, list- or form-based catalogues, and simultaneous or collected search services. For various reasons, however, it will not be possible to rank them in order of "best" services. More important still are the weaknesses and problems common to all attempts at indexing the Internet. The problems of input quality, technical performance, and the general problem of indexing virtual hypertext prove at least as difficult as the various aspects of harvesting, indexing and information retrieval. Some attempts at further developing retrieval services are mentioned in connection with document content description and standardization efforts. Internet harvesting and indexing technology and retrieval software are thoroughly reviewed. Details about all services and software are listed in analytical forms in Annex 1-3.
    Theme
    Automatisches Klassifizieren
  17. Koch, T.; Vizine-Goetz, D.: DDC and knowledge organization in the digital library : Research and development. Demonstration pages (1999)
    Abstract
    The workshop gives an insight into current research and development on knowledge organization in digital libraries. Diane Vizine-Goetz of the OCLC Office of Research in Dublin, Ohio, presents OCLC's research projects on adapting and further developing the Dewey Decimal Classification as a knowledge organization instrument for large digital document collections. Traugott Koch, NetLab, Lund University, Sweden, demonstrates the approaches and solutions of the EU project DESIRE for the use of intellectual and, above all, automatic classification in subject information services on the Internet.
    Theme
    Automatisches Klassifizieren
  18. Search Engines and Beyond : Developing efficient knowledge management systems, April 19-20 1999, Boston, Mass (1999)
    Content
    Ramana Rao (Inxight, Palo Alto, CA): 7 ± 2 Insights on achieving Effective Information Access
    Session One: Updates and a twelve month perspective
      Danny Sullivan (Search Engine Watch, US / England): Portalization and other search trends
      Carol Tenopir (University of Tennessee): Search realities faced by end users and professional searchers
    Session Two: Today's search engines and beyond
      Daniel Hoogterp (Retrieval Technologies, McLean, VA): Effective presentation and utilization of search techniques
      Rick Kenny (Fulcrum Technologies, Ontario, Canada): Beyond document clustering: The knowledge impact statement
      Gary Stock (Ingenius, Kalamazoo, MI): Automated change monitoring
      Gary Culliss (Direct Hit, Wellesley Hills, MA): User popularity ranked search engines
      Byron Dom (IBM, CA): Automatically finding the best pages on the World Wide Web (CLEVER)
      Peter Tomassi (LookSmart, San Francisco, CA): Adding human intellect to search technology
    Session Three: Panel discussion: Human v automated categorization and editing
      Ev Brenner (New York, NY), Chairman
      James Callan (University of Massachusetts, MA)
      Marc Krellenstein (Northern Light Technology, Cambridge, MA)
      Dan Miller (Ask Jeeves, Berkeley, CA)
    Session Four: Updates and a twelve month perspective
      Steve Arnold (AIT, Harrods Creek, KY): Review: The leading edge in search and retrieval software
      Ellen Voorhees (NIST, Gaithersburg, MD): TREC update
    Session Five: Search engines now and beyond
      Intelligent agents: John Snyder (Muscat, Cambridge, England): Practical issues behind intelligent agents
      Text summarization: Therese Firmin (Dept of Defense, Ft George G. Meade, MD): The TIPSTER/SUMMAC evaluation of automatic text summarization systems
      Cross language searching: Elizabeth Liddy (TextWise, Syracuse, NY): A conceptual interlingua approach to cross-language retrieval
      Video search and retrieval: Armon Amir (IBM, Almaden, CA): CueVideo: Modular system for automatic indexing and browsing of video/audio
      Speech recognition: Michael Witbrock (Lycos, Waltham, MA): Retrieval of spoken documents
      Visualization: James A. Wise (Integral Visuals, Richland, WA): Information visualization in the new millennium: Emerging science or passing fashion?
      Text mining: David Evans (Claritech, Pittsburgh, PA): Text mining - towards decision support
    Theme
    Automatisches Klassifizieren
    Automatisches Indexieren
  19. Einfache Zählübungen mit Archibald : Rechnen bis 20 (1996)
    
    Pages
    20 pp. (book) + 1 CD
  20. Adreßbuch deutscher Bibliotheken : Datenbankversion auf Diskette (1996)
    
    Abstract
    The directory is intended as a reasonably quick and up-to-date source for the addresses of libraries and of institutions relevant to librarianship
    Content
    The 1995/96 edition lists a total of 5,211 libraries, covering the following types: national libraries, central subject libraries, regional libraries, university libraries, higher-education and technical-college libraries, public libraries with full-time staff, academic special libraries with more than 5,000 holdings, and all libraries that are also listed in the Sigelverzeichnis (library sigla directory) for the Federal Republic of Germany

Authors

Languages

  • d 106
  • e 71
  • m 1
  • nl 1

Types

  • i 60
  • a 20
  • b 11
  • m 9
  • r 6
  • s 1
  • x 1