Search (35 results, page 1 of 2)

  • theme_ss:"Automatisches Indexieren"
  • year_i:[1980 TO 1990}
  1. Thönssen, B.: Automatische Indexierung und Schnittstellen zu Thesauri (1988) 0.38
    0.37662664 = product of:
      0.45195198 = sum of:
        0.06281625 = weight(_text_:und in 30) [ClassicSimilarity], result of:
          0.06281625 = score(doc=30,freq=12.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.5998219 = fieldWeight in 30, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=30)
        0.12236831 = weight(_text_:anwendung in 30) [ClassicSimilarity], result of:
          0.12236831 = score(doc=30,freq=2.0), product of:
            0.22876309 = queryWeight, product of:
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.04725067 = queryNorm
            0.5349128 = fieldWeight in 30, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.078125 = fieldNorm(doc=30)
        0.040036436 = weight(_text_:des in 30) [ClassicSimilarity], result of:
          0.040036436 = score(doc=30,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.30596817 = fieldWeight in 30, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.078125 = fieldNorm(doc=30)
        0.17099062 = weight(_text_:prinzips in 30) [ClassicSimilarity], result of:
          0.17099062 = score(doc=30,freq=2.0), product of:
            0.27041927 = queryWeight, product of:
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.04725067 = queryNorm
            0.6323167 = fieldWeight in 30, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.078125 = fieldNorm(doc=30)
        0.055740345 = product of:
          0.11148069 = sum of:
            0.11148069 = weight(_text_:thesaurus in 30) [ClassicSimilarity], result of:
              0.11148069 = score(doc=30,freq=2.0), product of:
                0.21834905 = queryWeight, product of:
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.04725067 = queryNorm
                0.5105618 = fieldWeight in 30, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.078125 = fieldNorm(doc=30)
          0.5 = coord(1/2)
      0.8333333 = coord(5/6)
    
    Abstract
    An interface between a program for automatic indexing (PRIMUS-IDX) and one for machine-based thesaurus management (INDEX) is intended to make large volumes of text accessible quickly, economically, and consistently, and to improve retrieval options. The goal is a procedure that runs on PCs and can process German-language texts in particular.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
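    Note
    The score breakdown in each entry is Lucene "explain" output for ClassicSimilarity, i.e. TF-IDF scoring: each matching clause contributes weight = queryWeight * fieldWeight, with queryWeight = idf * queryNorm and fieldWeight = tf(freq) * idf * fieldNorm, and the clause sum is scaled by coord(matched clauses / total clauses). A minimal sketch in Python, using the values copied from the first "und" clause above, reproduces the numbers:
      import math

      # values from the first clause of entry 1
      max_docs, doc_freq = 44218, 13101
      freq, field_norm, query_norm = 12.0, 0.078125, 0.04725067

      idf = 1 + math.log(max_docs / (doc_freq + 1))  # 2.216367
      tf = math.sqrt(freq)                           # 3.4641016 = sqrt(12)
      query_weight = idf * query_norm                # 0.104724824
      field_weight = tf * idf * field_norm           # 0.5998219
      print(query_weight * field_weight)             # ~0.06281625

      # entry total: the clause sum scaled by coord(5/6),
      # because 5 of 6 query clauses matched this document
      print(0.45195198 * 5 / 6)                      # ~0.37662665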
  2. Liedloff, V.: Anwendung eines existenten Klassifikationssystems im Bereich der computerunterstützten Inhaltsanalyse (1985) 0.14
    0.13776785 = product of:
      0.20665178 = sum of:
        0.025386883 = weight(_text_:und in 2921) [ClassicSimilarity], result of:
          0.025386883 = score(doc=2921,freq=4.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.24241515 = fieldWeight in 2921, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2921)
        0.08565781 = weight(_text_:anwendung in 2921) [ClassicSimilarity], result of:
          0.08565781 = score(doc=2921,freq=2.0), product of:
            0.22876309 = queryWeight, product of:
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.04725067 = queryNorm
            0.37443897 = fieldWeight in 2921, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2921)
        0.028025504 = weight(_text_:des in 2921) [ClassicSimilarity], result of:
          0.028025504 = score(doc=2921,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.2141777 = fieldWeight in 2921, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2921)
        0.06758159 = product of:
          0.13516317 = sum of:
            0.13516317 = weight(_text_:thesaurus in 2921) [ClassicSimilarity], result of:
              0.13516317 = score(doc=2921,freq=6.0), product of:
                0.21834905 = queryWeight, product of:
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.04725067 = queryNorm
                0.6190234 = fieldWeight in 2921, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2921)
          0.5 = coord(1/2)
      0.6666667 = coord(4/6)
    
    Abstract
    The computer-aided text analysis system CTX (Computergestützte TeXterschließung) was developed in basic university research. It is a dictionary-oriented procedure that, building on word- and sentence-oriented text processing, generates formal content keywords (base forms, internally called "descriptors") for a German-language text or document. These serve as input for computer-supported content analysis (CUI). With the help of a thesaurus, the descriptors are grouped into broader terms, and the descriptor list produced by CTX is mapped via a comparison list onto the categories (i.e. broader terms) of the thesaurus. The result is further processed with mathematical-statistical evaluation procedures. Further advantages of incorporating a thesaurus are discussed.
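    Note
    A minimal sketch of the descriptor-to-category mapping described above, with an invented comparison list and invented descriptors (the real CTX vocabulary and thesaurus are far larger):
      from collections import Counter

      # comparison list: descriptor (base form) -> thesaurus category
      comparison_list = {
          "indexierung": "Inhaltserschliessung",
          "deskriptor": "Inhaltserschliessung",
          "thesaurus": "Dokumentationssprache",
      }

      # descriptors as CTX might extract them from one document
      ctx_descriptors = ["indexierung", "thesaurus", "deskriptor", "indexierung"]

      # map descriptors onto broader categories and count them;
      # the counts feed the statistical evaluation step
      category_counts = Counter(
          comparison_list[d] for d in ctx_descriptors if d in comparison_list
      )
      print(category_counts)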
  3. Biebricher, P.; Fuhr, N.; Knorz, G.; Lustig, G.; Schwandtner, M.: Entwicklung und Anwendung des automatischen Indexierungssystems AIR/PHYS (1988) 0.12
    0.11901824 = product of:
      0.23803648 = sum of:
        0.035534237 = weight(_text_:und in 2320) [ClassicSimilarity], result of:
          0.035534237 = score(doc=2320,freq=6.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.33931053 = fieldWeight in 2320, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=2320)
        0.13844395 = weight(_text_:anwendung in 2320) [ClassicSimilarity], result of:
          0.13844395 = score(doc=2320,freq=4.0), product of:
            0.22876309 = queryWeight, product of:
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.04725067 = queryNorm
            0.6051848 = fieldWeight in 2320, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.0625 = fieldNorm(doc=2320)
        0.0640583 = weight(_text_:des in 2320) [ClassicSimilarity], result of:
          0.0640583 = score(doc=2320,freq=8.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.48954904 = fieldWeight in 2320, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.0625 = fieldNorm(doc=2320)
      0.5 = coord(3/6)
    
    Abstract
    Since 1985, the automatic indexing system AIR/PHYS has been used in the input production of the physics database PHYS of the Fachinformationszentrum Karlsruhe. The AIR/PHYS system assigns descriptors from a prescribed vocabulary to English-language abstract texts. This paper describes the underlying error-tolerant approach, the architecture of the system, and the most important procedures used to build a large indexing dictionary. Problems of applying and further developing the system are also discussed.
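    Note
    The indexing dictionary at the core of AIR/PHYS relates text terms to controlled descriptors. A toy sketch of such a weighted lookup (terms, weights, and cutoff are invented; the real system's statistics are far more elaborate):
      # accumulate evidence per descriptor over all text terms and
      # accept descriptors whose total passes a cutoff - tolerant of
      # single misleading or missing terms
      indexing_dictionary = {
          "superconductivity": [("SUPERCONDUCTORS", 0.9)],
          "critical": [("CRITICAL PHENOMENA", 0.4), ("CRITICAL CURRENTS", 0.3)],
          "current": [("ELECTRIC CURRENTS", 0.5), ("CRITICAL CURRENTS", 0.4)],
      }

      def index_abstract(terms, cutoff=0.6):
          scores = {}
          for t in terms:
              for descriptor, weight in indexing_dictionary.get(t, []):
                  scores[descriptor] = scores.get(descriptor, 0.0) + weight
          return sorted(d for d, s in scores.items() if s >= cutoff)

      print(index_abstract(["superconductivity", "critical", "current"]))
      # ['CRITICAL CURRENTS', 'SUPERCONDUCTORS']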
  4. Fuhr, N.: Rankingexperimente mit gewichteter Indexierung (1986) 0.11
    0.1080132 = product of:
      0.2160264 = sum of:
        0.030773548 = weight(_text_:und in 2051) [ClassicSimilarity], result of:
          0.030773548 = score(doc=2051,freq=2.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.29385152 = fieldWeight in 2051, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.09375 = fieldNorm(doc=2051)
        0.14684197 = weight(_text_:anwendung in 2051) [ClassicSimilarity], result of:
          0.14684197 = score(doc=2051,freq=2.0), product of:
            0.22876309 = queryWeight, product of:
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.04725067 = queryNorm
            0.6418954 = fieldWeight in 2051, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.09375 = fieldNorm(doc=2051)
        0.03841088 = product of:
          0.07682176 = sum of:
            0.07682176 = weight(_text_:22 in 2051) [ClassicSimilarity], result of:
              0.07682176 = score(doc=2051,freq=2.0), product of:
                0.16546379 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04725067 = queryNorm
                0.46428138 = fieldWeight in 2051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2051)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
    
    Date
    14. 6.2015 22:12:56
    Source
    Automatische Indexierung zwischen Forschung und Anwendung, Hrsg.: G. Lustig
  5. Biebricher, P.; Fuhr, N.; Niewelt, B.: ¬Der AIR-Retrievaltest (1986) 0.10
    0.10341127 = product of:
      0.20682254 = sum of:
        0.044417795 = weight(_text_:und in 4040) [ClassicSimilarity], result of:
          0.044417795 = score(doc=4040,freq=6.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.42413816 = fieldWeight in 4040, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=4040)
        0.12236831 = weight(_text_:anwendung in 4040) [ClassicSimilarity], result of:
          0.12236831 = score(doc=4040,freq=2.0), product of:
            0.22876309 = queryWeight, product of:
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.04725067 = queryNorm
            0.5349128 = fieldWeight in 4040, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.078125 = fieldNorm(doc=4040)
        0.040036436 = weight(_text_:des in 4040) [ClassicSimilarity], result of:
          0.040036436 = score(doc=4040,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.30596817 = fieldWeight in 4040, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.078125 = fieldNorm(doc=4040)
      0.5 = coord(3/6)
    
    Abstract
    The paper describes the execution and the results of the retrieval test for the AIR/PHYS project. With its 309 queries and 15,000 documents, it ranks among the largest retrieval tests conducted so far for the evaluation of automatic indexing or retrieval procedures.
    Source
    Automatische Indexierung zwischen Forschung und Anwendung, Hrsg.: G. Lustig
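    Note
    Retrieval tests of this kind compare system output against per-query relevance judgements; the standard measures are precision and recall. A minimal sketch (document IDs invented):
      def precision_recall(retrieved, relevant):
          hits = len(retrieved & relevant)
          precision = hits / len(retrieved) if retrieved else 0.0
          recall = hits / len(relevant) if relevant else 0.0
          return precision, recall

      retrieved = {"d1", "d2", "d3", "d4"}   # what the system returned
      relevant = {"d2", "d4", "d7"}          # what the assessors marked
      print(precision_recall(retrieved, relevant))  # (0.5, 0.666...)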
  6. Automatische Indexierung zwischen Forschung und Anwendung (1986) 0.09
    0.092533216 = product of:
      0.18506643 = sum of:
        0.035902474 = weight(_text_:und in 953) [ClassicSimilarity], result of:
          0.035902474 = score(doc=953,freq=8.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.34282678 = fieldWeight in 953, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=953)
        0.12113845 = weight(_text_:anwendung in 953) [ClassicSimilarity], result of:
          0.12113845 = score(doc=953,freq=4.0), product of:
            0.22876309 = queryWeight, product of:
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.04725067 = queryNorm
            0.5295367 = fieldWeight in 953, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.0546875 = fieldNorm(doc=953)
        0.028025504 = weight(_text_:des in 953) [ClassicSimilarity], result of:
          0.028025504 = score(doc=953,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.2141777 = fieldWeight in 953, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.0546875 = fieldNorm(doc=953)
      0.5 = coord(3/6)
    
    Abstract
    The automatic indexing of documents for information retrieval, i.e. the automatic characterization of document content by means of descriptors (index terms), has been a field of theoretical and experimental research for more than 25 years. By contrast, it was only in October 1985 that automatic indexing was first applied in the input production of a large retrieval system: the indexing of English abstract texts for the physics database of the Informationszentrum Energie, Physik, Mathematik GmbH in Karlsruhe. In this book, staff members of the Technische Hochschule Darmstadt describe the research and development work that led to this pilot application.
    Footnote
    Rez. in: Zeitschrift für Bibliothekswesen und Bibliographie 35(1988) S.508-510 (W. Gödert)
  7. Panyr, J.: Vektorraum-Modell und Clusteranalyse in Information-Retrieval-Systemen (1987) 0.04
    0.044476297 = product of:
      0.13342889 = sum of:
        0.035534237 = weight(_text_:und in 2322) [ClassicSimilarity], result of:
          0.035534237 = score(doc=2322,freq=6.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.33931053 = fieldWeight in 2322, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=2322)
        0.097894646 = weight(_text_:anwendung in 2322) [ClassicSimilarity], result of:
          0.097894646 = score(doc=2322,freq=2.0), product of:
            0.22876309 = queryWeight, product of:
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.04725067 = queryNorm
            0.42793027 = fieldWeight in 2322, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.0625 = fieldNorm(doc=2322)
      0.33333334 = coord(2/6)
    
    Abstract
    Starting from theoretical approaches to indexing, the classical vector space model for automatic indexing (together with the term discrimination model) is explained. Clustering in information retrieval systems is treated as a natural logical consequence of this model and is covered in all its variants (i.e. as document, term, or combined document and term classification). The search strategies used in pre-classified document collections (cluster search) are then described in detail. Finally, the sensible application of cluster analysis in information retrieval systems is briefly discussed.
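    Note
    In the vector space model, documents and queries are term-weight vectors and are ranked by cosine similarity; clustering groups documents whose vectors lie close together. A minimal sketch (toy weights; real systems derive them from the indexing, e.g. by tf-idf):
      import math

      def cosine(u, v):
          dot = sum(w * v.get(t, 0.0) for t, w in u.items())
          norm_u = math.sqrt(sum(w * w for w in u.values()))
          norm_v = math.sqrt(sum(w * w for w in v.values()))
          return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

      doc = {"indexierung": 0.8, "cluster": 0.3}
      query = {"cluster": 1.0}
      print(cosine(doc, query))  # ~0.35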
  8. Schwantner, M.: Entwicklung und Pflege des Indexierungswörterbuches PHYS/PILOT (1988) 0.03
    0.033781692 = product of:
      0.10134508 = sum of:
        0.053301353 = weight(_text_:und in 527) [ClassicSimilarity], result of:
          0.053301353 = score(doc=527,freq=6.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.5089658 = fieldWeight in 527, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.09375 = fieldNorm(doc=527)
        0.04804372 = weight(_text_:des in 527) [ClassicSimilarity], result of:
          0.04804372 = score(doc=527,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.36716178 = fieldWeight in 527, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.09375 = fieldNorm(doc=527)
      0.33333334 = coord(2/6)
    
    Source
    Von der Information zum Wissen - vom Wissen zur Information: traditionelle und moderne Informationssysteme für Wissenschaft und Praxis, Deutscher Dokumentartag 1987, Bad Dürkheim, vom 23.-25.9.1987. Hrsg.: H. Strohl-Goebel
  9. Schwarz, C.: Komplexe Nominalgruppen als Indexierungseinheiten am Beispiel des Projektes CONDOR (1982) 0.03
    0.030651163 = product of:
      0.091953486 = sum of:
        0.035902474 = weight(_text_:und in 435) [ClassicSimilarity], result of:
          0.035902474 = score(doc=435,freq=2.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.34282678 = fieldWeight in 435, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.109375 = fieldNorm(doc=435)
        0.05605101 = weight(_text_:des in 435) [ClassicSimilarity], result of:
          0.05605101 = score(doc=435,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.4283554 = fieldWeight in 435, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.109375 = fieldNorm(doc=435)
      0.33333334 = coord(2/6)
    
    Source
    Deutscher Dokumentartag 1981, Mainz, 5.-8.10.1981: Kleincomputer in Information und Dokumentation. Bearb.: H. Strohl-Goebel
  10. Zimmermann, H.: Automatische Indexierung: Entwicklung und Perspektiven (1983) 0.03
    0.030336782 = product of:
      0.09101035 = sum of:
        0.035534237 = weight(_text_:und in 2318) [ClassicSimilarity], result of:
          0.035534237 = score(doc=2318,freq=6.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.33931053 = fieldWeight in 2318, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=2318)
        0.055476114 = weight(_text_:des in 2318) [ClassicSimilarity], result of:
          0.055476114 = score(doc=2318,freq=6.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.42396194 = fieldWeight in 2318, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.0625 = fieldNorm(doc=2318)
      0.33333334 = coord(2/6)
    
    Abstract
    Automatic indexing, a subfield of subject analysis, is by now in practical use in a number of areas, above all in specialized information and communication services. Extremely simple systems dominate, and they (still) require the user to adapt considerably to the respective system strategy. Taking into account the concept of the unity of content analysis and retrieval, more sophisticated ("more intelligent") procedures are presented, intended both to relieve the information seeker and to improve search results.
  11. Fuhr, N.; Niewelt, B.: ¬Ein Retrievaltest mit automatisch indexierten Dokumenten (1984) 0.03
    0.026905056 = product of:
      0.080715165 = sum of:
        0.035902474 = weight(_text_:und in 262) [ClassicSimilarity], result of:
          0.035902474 = score(doc=262,freq=2.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.34282678 = fieldWeight in 262, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.109375 = fieldNorm(doc=262)
        0.044812694 = product of:
          0.08962539 = sum of:
            0.08962539 = weight(_text_:22 in 262) [ClassicSimilarity], result of:
              0.08962539 = score(doc=262,freq=2.0), product of:
                0.16546379 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04725067 = queryNorm
                0.5416616 = fieldWeight in 262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=262)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Date
    20.10.2000 12:22:23
    Source
    Deutscher Dokumentartag 1983, Göttingen, 3.-7.10.1983: Fachinformation und Bildschirmtext. Bearb.: H. Strohl-Goebel
  12. Fischer, H.G.: CONDOR: Modell eines integrierten DB-/IR-Systems für strukturierte und unstrukturierte Daten (1982) 0.02
    0.021937251 = product of:
      0.06581175 = sum of:
        0.0205157 = weight(_text_:und in 5197) [ClassicSimilarity], result of:
          0.0205157 = score(doc=5197,freq=2.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.19590102 = fieldWeight in 5197, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=5197)
        0.045296054 = weight(_text_:des in 5197) [ClassicSimilarity], result of:
          0.045296054 = score(doc=5197,freq=4.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.34616345 = fieldWeight in 5197, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.0625 = fieldNorm(doc=5197)
      0.33333334 = coord(2/6)
    
    Abstract
    CONDOR is a model of a modular, integrated DB/IR system that can process both structured and unstructured data (text data). The information to be stored is analyzed largely automatically. Since a broad range of users is to have access to the system, several dialogue modes (command, natural language, form, menu) have been implemented. An attempt is made to unify them in a systematic interface design, so as to give the individual user the simplest possible operation while keeping the system highly flexible in use.
  13. Fuhr, N.: Klassifikationsverfahren bei der automatischen Indexierung (1983) 0.02
    0.020347577 = product of:
      0.06104273 = sum of:
        0.029013582 = weight(_text_:und in 7697) [ClassicSimilarity], result of:
          0.029013582 = score(doc=7697,freq=4.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.27704588 = fieldWeight in 7697, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=7697)
        0.03202915 = weight(_text_:des in 7697) [ClassicSimilarity], result of:
          0.03202915 = score(doc=7697,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.24477452 = fieldWeight in 7697, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.0625 = fieldNorm(doc=7697)
      0.33333334 = coord(2/6)
    
    Abstract
    After a brief introduction to the Darmstadt projects WAI and AIR, the following topics are covered: an approach to automatic classification; statistical relations for classification; indexing of documents as a special case of automatic classification; classification of elements of the relevance description; classification for improving relevance descriptions; automatic document classification and automatic indexing of classified documents. The AIR project is carried out in cooperation with the INKA-PHYS database of the Fachinformationszentrum Energie, Physik, Mathematik in Karlsruhe.
  14. Salton, G.: Automatic processing of foreign language documents (1985) 0.02
    0.018210873 = product of:
      0.05463262 = sum of:
        0.016014574 = weight(_text_:des in 3650) [ClassicSimilarity], result of:
          0.016014574 = score(doc=3650,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.12238726 = fieldWeight in 3650, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.03125 = fieldNorm(doc=3650)
        0.038618047 = product of:
          0.07723609 = sum of:
            0.07723609 = weight(_text_:thesaurus in 3650) [ClassicSimilarity], result of:
              0.07723609 = score(doc=3650,freq=6.0), product of:
                0.21834905 = queryWeight, product of:
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.04725067 = queryNorm
                0.35372764 = fieldWeight in 3650, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3650)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    The attempt to computerize a process, such as indexing, abstracting, classifying, or retrieving information, begins with an analysis of the process into its intellectual and nonintellectual components. That part of the process which is amenable to computerization is mechanical or algorithmic. What is not is intellectual or creative and requires human intervention. Gerard Salton has been an innovator, experimenter, and promoter in the area of mechanized information systems since the early 1960s. He has been particularly ingenious at analyzing the process of information retrieval into its algorithmic components. He received a doctorate in applied mathematics from Harvard University before moving to the computer science department at Cornell, where he developed a prototype automatic retrieval system called SMART. Working with this system he and his students contributed for over a decade to our theoretical understanding of the retrieval process. On a more practical level, they have contributed design criteria for operating retrieval systems. The following selection presents one of the early descriptions of the SMART system; it is valuable as it shows the direction automatic retrieval methods were to take beyond simple word-matching techniques. These include various word normalization techniques to improve recall, for instance, the separation of words into stems and affixes; the correlation and clustering, using statistical association measures, of related terms; and the identification, using a concept thesaurus, of synonymous, broader, narrower, and sibling terms. They include, as well, techniques, both linguistic and statistical, to deal with the thorny problem of how to automatically extract from texts index terms that consist of more than one word. They include weighting techniques and various document-request matching algorithms. Significant among the latter are those which produce a retrieval output of citations ranked in relevance order. During the 1970s, Salton and his students went on to further refine these various techniques, particularly the weighting and statistical association measures. Many of their early innovations seem commonplace today. Some of their later techniques are still ahead of their time and await technological developments for implementation. The particular focus of the selection that follows is on the evaluation of a particular component of the SMART system, a multilingual thesaurus. By mapping English language expressions and their German equivalents to a common concept number, the thesaurus permitted the automatic processing of German language documents against English language queries and vice versa. The results of the evaluation, as it turned out, were somewhat inconclusive. However, this SMART experiment suggested in a bold and optimistic way how one might proceed to answer such complex questions as: What is meant by retrieval language compatibility? How is it to be achieved, and how evaluated?
    Footnote
    Reprint of the original article with commentary by the editors
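    Note
    The multilingual thesaurus maps expressions of both languages to common concept numbers, so matching happens in concept space rather than in either language. A minimal sketch (concept numbers and terms invented for illustration):
      concept_thesaurus = {
          "retrieval": 101, "recherche": 101,
          "indexing": 102, "indexierung": 102,
      }

      def to_concepts(terms):
          return {concept_thesaurus[t] for t in terms if t in concept_thesaurus}

      english_query = ["indexing", "retrieval"]
      german_document = ["indexierung", "recherche"]
      # language-independent match on shared concept numbers
      print(to_concepts(english_query) & to_concepts(german_document))  # {101, 102}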
  15. Stock, M.: Textwortmethode und Übersetzungsrelation : Eine Methode zum Aufbau von kombinierten Literaturnachweis- und Terminologiedatenbanken (1989) 0.01
    0.009557188 = product of:
      0.05734312 = sum of:
        0.05734312 = weight(_text_:und in 3412) [ClassicSimilarity], result of:
          0.05734312 = score(doc=3412,freq=10.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.54756 = fieldWeight in 3412, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=3412)
      0.16666667 = coord(1/6)
    
    Abstract
    Specialized information in the humanities requires close cooperation between bibliographic and terminological information systems. A suitable documentation method for analyzing humanities literature is the text-word method (Textwortmethode). The repertoire of concepts recorded in the original language must be supplemented with unified-language access that, on the one hand, guarantees complete and precise retrieval and, on the other, advances the construction of subject-specific dictionaries.
  16. Inhaltserschließung von Massendaten : zur Wirksamkeit informationslinguistischer Verfahren am Beispiel des deutschen Patentinformationssystems (1987) 0.01
    0.009341835 = product of:
      0.05605101 = sum of:
        0.05605101 = weight(_text_:des in 6764) [ClassicSimilarity], result of:
          0.05605101 = score(doc=6764,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.4283554 = fieldWeight in 6764, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.109375 = fieldNorm(doc=6764)
      0.16666667 = coord(1/6)
    
  17. Voorhees, E.M.: Implementing agglomerative hierarchic clustering algorithms for use in document retrieval (1986) 0.01
    0.008535751 = product of:
      0.051214505 = sum of:
        0.051214505 = product of:
          0.10242901 = sum of:
            0.10242901 = weight(_text_:22 in 402) [ClassicSimilarity], result of:
              0.10242901 = score(doc=402,freq=2.0), product of:
                0.16546379 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04725067 = queryNorm
                0.61904186 = fieldWeight in 402, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=402)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Source
    Information processing and management. 22(1986) no.6, S.465-476
  18. Mohrenweis, T.: Konzepte der automatischen Indexierung und vergleichende Analyse der Systeme STAIRS, STEINADLER/CONDOR, CTX und PASSAT/GOLEM (1984) 0.01
    0.008462295 = product of:
      0.050773766 = sum of:
        0.050773766 = weight(_text_:und in 5171) [ClassicSimilarity], result of:
          0.050773766 = score(doc=5171,freq=4.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.4848303 = fieldWeight in 5171, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.109375 = fieldNorm(doc=5171)
      0.16666667 = coord(1/6)
    
  19. Panyr, J.: Automatische Indexierung und Klassifikation (1983) 0.01
    0.008375499 = product of:
      0.050252996 = sum of:
        0.050252996 = weight(_text_:und in 7692) [ClassicSimilarity], result of:
          0.050252996 = score(doc=7692,freq=12.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.47985753 = fieldWeight in 7692, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=7692)
      0.16666667 = coord(1/6)
    
    Abstract
    The paper first clarifies and organizes the terminology for three indexing methods and for further concepts relating to consistency problems in intellectual indexing. For automatic indexing, extraction methods are explained, and two applications of automatic classification (clustering) and indexing are presented. Close cooperation between advocates of intellectual indexing and developers of automatic indexing procedures is recommended.
  20. Gräbnitz, V.: PASSAT: Programm zur automatischen Selektion von Stichwörtern aus Texten (1987) 0.01
    0.008007287 = product of:
      0.04804372 = sum of:
        0.04804372 = weight(_text_:des in 932) [ClassicSimilarity], result of:
          0.04804372 = score(doc=932,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.36716178 = fieldWeight in 932, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.09375 = fieldNorm(doc=932)
      0.16666667 = coord(1/6)
    
    Source
    Inhaltserschließung von Massendaten: zur Wirksamkeit informationslinguistischer Verfahren am Beispiel des Deutschen Patentinformationssystems. Hrsg.: J. Krause