Search (68 results, page 1 of 4)

  • language_ss:"e"
  • theme_ss:"Computerlinguistik"
  1. Byrne, C.C.; McCracken, S.A.: ¬An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.58
    0.5807156 = product of:
      0.69685876 = sum of:
        0.030773548 = weight(_text_:und in 4483) [ClassicSimilarity], result of:
          0.030773548 = score(doc=4483,freq=2.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.29385152 = fieldWeight in 4483, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.09375 = fieldNorm(doc=4483)
        0.14684197 = weight(_text_:anwendung in 4483) [ClassicSimilarity], result of:
          0.14684197 = score(doc=4483,freq=2.0), product of:
            0.22876309 = queryWeight, product of:
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.04725067 = queryNorm
            0.6418954 = fieldWeight in 4483, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.09375 = fieldNorm(doc=4483)
        0.04804372 = weight(_text_:des in 4483) [ClassicSimilarity], result of:
          0.04804372 = score(doc=4483,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.36716178 = fieldWeight in 4483, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.09375 = fieldNorm(doc=4483)
        0.20518874 = weight(_text_:prinzips in 4483) [ClassicSimilarity], result of:
          0.20518874 = score(doc=4483,freq=2.0), product of:
            0.27041927 = queryWeight, product of:
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.04725067 = queryNorm
            0.75878 = fieldWeight in 4483, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.09375 = fieldNorm(doc=4483)
        0.26601076 = sum of:
          0.18918902 = weight(_text_:thesaurus in 4483) [ClassicSimilarity], result of:
            0.18918902 = score(doc=4483,freq=4.0), product of:
              0.21834905 = queryWeight, product of:
                4.6210785 = idf(docFreq=1182, maxDocs=44218)
                0.04725067 = queryNorm
              0.8664522 = fieldWeight in 4483, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.6210785 = idf(docFreq=1182, maxDocs=44218)
                0.09375 = fieldNorm(doc=4483)
          0.07682176 = weight(_text_:22 in 4483) [ClassicSimilarity], result of:
            0.07682176 = score(doc=4483,freq=2.0), product of:
              0.16546379 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04725067 = queryNorm
              0.46428138 = fieldWeight in 4483, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.09375 = fieldNorm(doc=4483)
      0.8333333 = coord(5/6)
    
    Date
    15. 3.2000 10:22:37
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
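The score shown for each hit is accompanied by Lucene-style "explain" output for ClassicSimilarity, i.e. tf-idf weighting with field-length normalisation and a coordination factor. As a plausibility check, the short sketch below recomputes the score of hit 1; the per-term statistics and the coord(5/6) factor are copied from the breakdown above, everything else is ordinary ClassicSimilarity arithmetic.

```python
import math

# Per-term statistics copied from the explanation of hit 1 (doc 4483) above.
# ClassicSimilarity: per-term score = queryWeight * fieldWeight, where
#   queryWeight = idf * queryNorm
#   fieldWeight = sqrt(termFreq) * idf * fieldNorm
QUERY_NORM = 0.04725067
FIELD_NORM = 0.09375
terms = {                           # term: (freq in the matched field, idf)
    "und":       (2.0, 2.216367),
    "anwendung": (2.0, 4.8414783),
    "des":       (2.0, 2.7693076),
    "prinzips":  (2.0, 5.723078),
    "thesaurus": (4.0, 4.6210785),
    "22":        (2.0, 3.5018296),  # the "22" matched in the Date field
}

def term_score(freq, idf):
    query_weight = idf * QUERY_NORM
    field_weight = math.sqrt(freq) * idf * FIELD_NORM
    return query_weight * field_weight

total = sum(term_score(freq, idf) for freq, idf in terms.values())
score = total * 5 / 6               # coord(5/6): five of six query clauses matched
print(score)                        # ~0.5807156, the overall score listed for hit 1
```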
  2. Schneider, J.W.; Borlund, P.: ¬A bibliometric-based semiautomatic approach to identification of candidate thesaurus terms : parsing and filtering of noun phrases from citation contexts (2005) 0.38
    0.3768447 = product of:
      0.45221364 = sum of:
        0.017951237 = weight(_text_:und in 156) [ClassicSimilarity], result of:
          0.017951237 = score(doc=156,freq=2.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.17141339 = fieldWeight in 156, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=156)
        0.08565781 = weight(_text_:anwendung in 156) [ClassicSimilarity], result of:
          0.08565781 = score(doc=156,freq=2.0), product of:
            0.22876309 = queryWeight, product of:
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.04725067 = queryNorm
            0.37443897 = fieldWeight in 156, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.0546875 = fieldNorm(doc=156)
        0.028025504 = weight(_text_:des in 156) [ClassicSimilarity], result of:
          0.028025504 = score(doc=156,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.2141777 = fieldWeight in 156, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.0546875 = fieldNorm(doc=156)
        0.11969343 = weight(_text_:prinzips in 156) [ClassicSimilarity], result of:
          0.11969343 = score(doc=156,freq=2.0), product of:
            0.27041927 = queryWeight, product of:
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.04725067 = queryNorm
            0.44262168 = fieldWeight in 156, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.0546875 = fieldNorm(doc=156)
        0.20088567 = sum of:
          0.15607297 = weight(_text_:thesaurus in 156) [ClassicSimilarity], result of:
            0.15607297 = score(doc=156,freq=8.0), product of:
              0.21834905 = queryWeight, product of:
                4.6210785 = idf(docFreq=1182, maxDocs=44218)
                0.04725067 = queryNorm
              0.7147866 = fieldWeight in 156, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                4.6210785 = idf(docFreq=1182, maxDocs=44218)
                0.0546875 = fieldNorm(doc=156)
          0.044812694 = weight(_text_:22 in 156) [ClassicSimilarity], result of:
            0.044812694 = score(doc=156,freq=2.0), product of:
              0.16546379 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04725067 = queryNorm
              0.2708308 = fieldWeight in 156, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=156)
      0.8333333 = coord(5/6)
    
    Abstract
    The present study investigates the ability of a bibliometric-based semi-automatic method to select candidate thesaurus terms from citation contexts. The method consists of document co-citation analysis, citation context analysis, and noun phrase parsing. The investigation is carried out within the specialty area of periodontology. The results clearly demonstrate that the method is able to select important candidate thesaurus terms within the chosen specialty area.
    Date
    8. 3.2007 19:55:22
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
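The abstract above names three steps: document co-citation analysis, citation context analysis, and noun phrase parsing. The sketch below illustrates only the flavour of the last two steps, pulling multi-word candidates out of citation contexts and ranking them by frequency; the example contexts, the stopword list, and the crude run-of-content-words chunker are invented stand-ins for the proper noun phrase parser used in the study.

```python
from collections import Counter
import re

# Toy citation contexts (invented); the paper works on real contexts harvested
# around citations in periodontology articles.
contexts = [
    "Probing depth and clinical attachment level were measured at six sites [12].",
    "Clinical attachment level is the standard outcome in periodontal therapy [3, 7].",
    "Smoking is a major risk factor for chronic periodontitis [7].",
]

STOP = {"and", "the", "is", "in", "for", "a", "at", "were", "of"}

def candidate_phrases(text):
    """Very crude stand-in for noun phrase parsing: contiguous runs of
    non-stopword tokens, lowercased, with citation markers stripped."""
    text = re.sub(r"\[[^\]]*\]", " ", text.lower())   # drop [12]-style markers
    tokens = re.findall(r"[a-z]+", text)
    run, phrases = [], []
    for tok in tokens:
        if tok in STOP:
            if len(run) > 1:
                phrases.append(" ".join(run))
            run = []
        else:
            run.append(tok)
    if len(run) > 1:
        phrases.append(" ".join(run))
    return phrases

counts = Counter(p for c in contexts for p in candidate_phrases(c))
# Frequency across contexts acts as a simple filter for candidate thesaurus terms.
for phrase, n in counts.most_common(5):
    print(n, phrase)
```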
  3. Rahmstorf, G.: Information retrieval using conceptual representations of phrases (1994) 0.22
    0.21893436 = product of:
      0.26272124 = sum of:
        0.015386774 = weight(_text_:und in 7862) [ClassicSimilarity], result of:
          0.015386774 = score(doc=7862,freq=2.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.14692576 = fieldWeight in 7862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=7862)
        0.07342099 = weight(_text_:anwendung in 7862) [ClassicSimilarity], result of:
          0.07342099 = score(doc=7862,freq=2.0), product of:
            0.22876309 = queryWeight, product of:
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.04725067 = queryNorm
            0.3209477 = fieldWeight in 7862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.046875 = fieldNorm(doc=7862)
        0.02402186 = weight(_text_:des in 7862) [ClassicSimilarity], result of:
          0.02402186 = score(doc=7862,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.18358089 = fieldWeight in 7862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.046875 = fieldNorm(doc=7862)
        0.10259437 = weight(_text_:prinzips in 7862) [ClassicSimilarity], result of:
          0.10259437 = score(doc=7862,freq=2.0), product of:
            0.27041927 = queryWeight, product of:
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.04725067 = queryNorm
            0.37939 = fieldWeight in 7862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.046875 = fieldNorm(doc=7862)
        0.047297254 = product of:
          0.09459451 = sum of:
            0.09459451 = weight(_text_:thesaurus in 7862) [ClassicSimilarity], result of:
              0.09459451 = score(doc=7862,freq=4.0), product of:
                0.21834905 = queryWeight, product of:
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.04725067 = queryNorm
                0.4332261 = fieldWeight in 7862, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.046875 = fieldNorm(doc=7862)
          0.5 = coord(1/2)
      0.8333333 = coord(5/6)
    
    Abstract
    The information retrieval problem is described starting from an analysis of the concepts 'user's information request' and 'information offerings of texts'. It is shown that natural language phrases are a more adequate medium for expressing information requests and information offerings than character-string-based query and indexing languages complemented by Boolean operators. The phrases must be represented as concepts to reach a language-invariant level for rule-based relevance analysis. The special type of representation called advanced thesaurus is used for the semantic representation of natural language phrases and for relevance processing. The analysis of the retrieval problem leads to a symmetric system structure.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  4. Tseng, Y.-H.: Automatic thesaurus generation for Chinese documents (2002) 0.20
    0.19605027 = product of:
      0.23526034 = sum of:
        0.012822312 = weight(_text_:und in 5226) [ClassicSimilarity], result of:
          0.012822312 = score(doc=5226,freq=2.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.12243814 = fieldWeight in 5226, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5226)
        0.061184157 = weight(_text_:anwendung in 5226) [ClassicSimilarity], result of:
          0.061184157 = score(doc=5226,freq=2.0), product of:
            0.22876309 = queryWeight, product of:
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.04725067 = queryNorm
            0.2674564 = fieldWeight in 5226, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5226)
        0.020018218 = weight(_text_:des in 5226) [ClassicSimilarity], result of:
          0.020018218 = score(doc=5226,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.15298408 = fieldWeight in 5226, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5226)
        0.08549531 = weight(_text_:prinzips in 5226) [ClassicSimilarity], result of:
          0.08549531 = score(doc=5226,freq=2.0), product of:
            0.27041927 = queryWeight, product of:
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.04725067 = queryNorm
            0.31615835 = fieldWeight in 5226, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.723078 = idf(docFreq=392, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5226)
        0.055740345 = product of:
          0.11148069 = sum of:
            0.11148069 = weight(_text_:thesaurus in 5226) [ClassicSimilarity], result of:
              0.11148069 = score(doc=5226,freq=8.0), product of:
                0.21834905 = queryWeight, product of:
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.04725067 = queryNorm
                0.5105618 = fieldWeight in 5226, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5226)
          0.5 = coord(1/2)
      0.8333333 = coord(5/6)
    
    Abstract
    Tseng constructs a word co-occurrence-based thesaurus by means of the automatic analysis of Chinese text. Words are identified by a longest dictionary match, supplemented by a keyword extraction algorithm that merges back nearby tokens and accepts shorter strings of characters if they occur more often than the longest string. Single-character auxiliary words are a major source of error, but this can be greatly reduced with the use of a 70-character, 2680-word stop list. Extracted terms with their associated document weights are sorted by decreasing frequency, and the top of this list is associated using a Dice coefficient modified to account for the effect of longer documents on the weights of term pairs. Co-occurrence is counted not in the document as a whole but in paragraph- or sentence-sized sections in order to reduce computation time. A window of 29 characters or 11 words was found to be sufficient. A thesaurus was produced from 25,230 Chinese news articles, and judges were asked to review the top 50 terms associated with each of 30 single-word query terms. They determined 69% to be relevant.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
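The association step in the abstract above can be pictured as a Dice coefficient computed over sentence-sized co-occurrence windows. The sketch below uses invented English tokens instead of segmented Chinese text and a plain, unmodified Dice coefficient; Tseng's length adjustment and term weighting are omitted.

```python
from collections import Counter
from itertools import combinations

# Toy "documents" already split into sentence-sized sections and segmented into
# words; Tseng's system does this for Chinese via longest dictionary match.
sections = [
    ["hard", "disk", "capacity"],
    ["hard", "disk", "price"],
    ["disk", "capacity", "price"],
    ["network", "price"],
]

term_freq = Counter(t for sec in sections for t in set(sec))
pair_freq = Counter()
for sec in sections:
    for a, b in combinations(sorted(set(sec)), 2):
        pair_freq[(a, b)] += 1

def dice(a, b):
    """Plain Dice coefficient over section co-occurrence; Tseng additionally
    adjusts the term weights for document length, which is omitted here."""
    return 2 * pair_freq[tuple(sorted((a, b)))] / (term_freq[a] + term_freq[b])

print(dice("hard", "disk"))      # 0.8  -> strongly associated
print(dice("network", "disk"))   # 0.0  -> never co-occur in a section
```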
  5. Sprachtechnologie, mobile Kommunikation und linguistische Ressourcen : Beiträge zur GLDV Tagung 2005 in Bonn (2005) 0.04
    0.044541914 = product of:
      0.08908383 = sum of:
        0.02551608 = weight(_text_:und in 3578) [ClassicSimilarity], result of:
          0.02551608 = score(doc=3578,freq=22.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.24364883 = fieldWeight in 3578, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3578)
        0.036710493 = weight(_text_:anwendung in 3578) [ClassicSimilarity], result of:
          0.036710493 = score(doc=3578,freq=2.0), product of:
            0.22876309 = queryWeight, product of:
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.04725067 = queryNorm
            0.16047385 = fieldWeight in 3578, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8414783 = idf(docFreq=948, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3578)
        0.026857255 = weight(_text_:des in 3578) [ClassicSimilarity], result of:
          0.026857255 = score(doc=3578,freq=10.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.20524967 = fieldWeight in 3578, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3578)
      0.5 = coord(3/6)
    
    Abstract
    Language technology is going mobile. We encounter language technology applications more and more often outside the office or our own four walls. Users control their mobile phones, query databases, or carry out business transactions by spoken language. These areas make eclectic use of linguistic models, above all models that have to be trained on linguistic resources such as word nets or ontologies, but also models of dialogue representation and structure, such as turn taking. This conference volume brings together the contributions to the main programme of the 2005 annual meeting of the Gesellschaft für Linguistische Datenverarbeitung (GLDV), to the workshops GermaNet II and Turn Taking, and the contributions to the GLDV Prize 2005 for the best thesis.
    Content
    INHALT: Chris Biemann/Rainer Osswald: Automatische Erweiterung eines semantikbasierten Lexikons durch Bootstrapping auf großen Korpora - Ernesto William De Luca/Andreas Nürnberger: Supporting Mobile Web Search by Ontology-based Categorization - Rüdiger Gleim: HyGraph - Ein Framework zur Extraktion, Repräsentation und Analyse webbasierter Hypertextstrukturen - Felicitas Haas/Bernhard Schröder: Freges Grundgesetze der Arithmetik: Dokumentbaum und Formelwald - Ulrich Held/Andre Blessing/Bettina Säuberlich/Jürgen Sienel/Horst Rößler/Dieter Kopp: A personalized multimodal news service - Jürgen Hermes/Christoph Benden: Fusion von Annotation und Präprozessierung als Vorschlag zur Behebung des Rohtextproblems - Sonja Hüwel/Britta Wrede/Gerhard Sagerer: Semantisches Parsing mit Frames für robuste multimodale Mensch-Maschine-Kommunikation - Brigitte Krenn/Stefan Evert: Separating the wheat from the chaff - Corpus-driven evaluation of statistical association measures for collocation extraction - Jörn Kreutel: An application-centered Perspective on Multimodal Dialogue Systems - Jonas Kuhn: An Architecture for Parallel Corpus-based Grammar Learning - Thomas Mandl/Rene Schneider/Pia Schnetzler/Christa Womser-Hacker: Evaluierung von Systemen für die Eigennamenerkennung im crosslingualen Information Retrieval - Alexander Mehler/Matthias Dehmer/Rüdiger Gleim: Zur Automatischen Klassifikation von Webgenres - Charlotte Merz/Martin Volk: Requirements for a Parallel Treebank Search Tool - Sally Y.K. Mok: Multilingual Text Retrieval on the Web: The Case of a Cantonese-Dagaare-English Trilingual e-Lexicon -
    Darja Mönke: Ein Parser für natürlichsprachlich formulierte mathematische Beweise - Martin Müller: Ontologien für mathematische Beweistexte - Moritz Neugebauer: The status of functional phonological classification in statistical speech recognition - Uwe Quasthoff: Kookkurrenzanalyse und korpusbasierte Sachgruppenlexikographie - Reinhard Rapp: On the Relationship between Word Frequency and Word Familiarity - Ulrich Schade/Miloslaw Frey/Sebastian Becker: Computerlinguistische Anwendungen zur Verbesserung der Kommunikation zwischen militärischen Einheiten und deren Führungsinformationssystemen - David Schlangen/Thomas Hanneforth/Manfred Stede: Weaving the Semantic Web: Extracting and Representing the Content of Pathology Reports - Thomas Schmidt: Modellbildung und Modellierungsparadigmen in der computergestützten Korpuslinguistik - Sabine Schröder/Martina Ziefle: Semantic transparency of cellular phone menus - Thorsten Trippel/Thierry Declerck/Ulrich Held: Standardisierung von Sprachressourcen: Der aktuelle Stand - Charlotte Wollermann: Evaluation der audiovisuellen Kongruenz bei der multimodalen Sprachsynthese - Claudia Kunze/Lothar Lemnitzer: Anwendungen des GermaNet II: Einleitung - Claudia Kunze/Lothar Lemnitzer: Die Zukunft der Wortnetze oder die Wortnetze der Zukunft - ein Roadmap-Beitrag -
    Karel Pala: The Balkanet Experience - Peter M. Kruse/Andre Nauloks/Dietmar Rösner/Manuela Kunze: Clever Search: A WordNet Based Wrapper for Internet Search Engines - Rosmary Stegmann/Wolfgang Woerndl: Using GermaNet to Generate Individual Customer Profiles - Ingo Glöckner/Sven Hartrumpf/Rainer Osswald: From GermaNet Glosses to Formal Meaning Postulates -Aljoscha Burchardt/ Katrin Erk/Anette Frank: A WordNet Detour to FrameNet - Daniel Naber: OpenThesaurus: ein offenes deutsches Wortnetz - Anke Holler/Wolfgang Grund/Heinrich Petith: Maschinelle Generierung assoziativer Termnetze für die Dokumentensuche - Stefan Bordag/Hans Friedrich Witschel/Thomas Wittig: Evaluation of Lexical Acquisition Algorithms - Iryna Gurevych/Hendrik Niederlich: Computing Semantic Relatedness of GermaNet Concepts - Roland Hausser: Turn-taking als kognitive Grundmechanik der Datenbanksemantik - Rodolfo Delmonte: Parsing Overlaps - Melanie Twiggs: Behandlung des Passivs im Rahmen der Datenbanksemantik- Sandra Hohmann: Intention und Interaktion - Anmerkungen zur Relevanz der Benutzerabsicht - Doris Helfenbein: Verwendung von Pronomina im Sprecher- und Hörmodus - Bayan Abu Shawar/Eric Atwell: Modelling turn-taking in a corpus-trained chatbot - Barbara März: Die Koordination in der Datenbanksemantik - Jens Edlund/Mattias Heldner/Joakim Gustafsson: Utterance segmentation and turn-taking in spoken dialogue systems - Ekaterina Buyko: Numerische Repräsentation von Textkorpora für Wissensextraktion - Bernhard Fisseni: ProofML - eine Annotationssprache für natürlichsprachliche mathematische Beweise - Iryna Schenk: Auflösung der Pronomen mit Nicht-NP-Antezedenten in spontansprachlichen Dialogen - Stephan Schwiebert: Entwurf eines agentengestützten Systems zur Paradigmenbildung - Ingmar Steiner: On the analysis of speech rhythm through acoustic parameters - Hans Friedrich Witschel: Text, Wörter, Morpheme - Möglichkeiten einer automatischen Terminologie-Extraktion.
    Series
    Sprache, Sprechen und Computer. Bd. 8
  6. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.04
    0.04094973 = product of:
      0.24569836 = sum of:
        0.24569836 = sum of:
          0.15607297 = weight(_text_:thesaurus in 4506) [ClassicSimilarity], result of:
            0.15607297 = score(doc=4506,freq=2.0), product of:
              0.21834905 = queryWeight, product of:
                4.6210785 = idf(docFreq=1182, maxDocs=44218)
                0.04725067 = queryNorm
              0.7147866 = fieldWeight in 4506, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.6210785 = idf(docFreq=1182, maxDocs=44218)
                0.109375 = fieldNorm(doc=4506)
          0.08962539 = weight(_text_:22 in 4506) [ClassicSimilarity], result of:
            0.08962539 = score(doc=4506,freq=2.0), product of:
              0.16546379 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04725067 = queryNorm
              0.5416616 = fieldWeight in 4506, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=4506)
      0.16666667 = coord(1/6)
    
    Date
    8.10.2000 11:52:22
  7. Schwarz, C.: THESYS: Thesaurus Syntax System : a fully automatic thesaurus building aid (1988) 0.03
    0.033480946 = product of:
      0.20088567 = sum of:
        0.20088567 = sum of:
          0.15607297 = weight(_text_:thesaurus in 1361) [ClassicSimilarity], result of:
            0.15607297 = score(doc=1361,freq=8.0), product of:
              0.21834905 = queryWeight, product of:
                4.6210785 = idf(docFreq=1182, maxDocs=44218)
                0.04725067 = queryNorm
              0.7147866 = fieldWeight in 1361, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                4.6210785 = idf(docFreq=1182, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1361)
          0.044812694 = weight(_text_:22 in 1361) [ClassicSimilarity], result of:
            0.044812694 = score(doc=1361,freq=2.0), product of:
              0.16546379 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04725067 = queryNorm
              0.2708308 = fieldWeight in 1361, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1361)
      0.16666667 = coord(1/6)
    
    Abstract
    THESYS is based on the natural language processing of free-text databases. It yields statistically evaluated correlations between words of the database. These correlations correspond to traditional thesaurus relations. The person who has to build a thesaurus is thus assisted by the proposals made by THESYS. THESYS is being tested on commercial databases under real world conditions. It is part of a text processing project at Siemens, called TINA (Text-Inhalts-Analyse). Software from TINA is actually being applied and evaluated by the US Department of Commerce for patent search and indexing (REALIST: REtrieval Aids by Linguistics and STatistics)
    Date
    6. 1.1999 10:22:07
  8. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.03
    0.031417347 = product of:
      0.094252035 = sum of:
        0.07504659 = product of:
          0.22513977 = sum of:
            0.22513977 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.22513977 = score(doc=562,freq=2.0), product of:
                0.4005917 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04725067 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.01920544 = product of:
          0.03841088 = sum of:
            0.03841088 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.03841088 = score(doc=562,freq=2.0), product of:
                0.16546379 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04725067 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Content
    Vgl.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
  9. Heyer, G.; Quasthoff, U.; Wittig, T.: Text Mining : Wissensrohstoff Text. Konzepte, Algorithmen, Ergebnisse (2006) 0.03
    0.026345506 = product of:
      0.07903652 = sum of:
        0.047007374 = weight(_text_:und in 5218) [ClassicSimilarity], result of:
          0.047007374 = score(doc=5218,freq=42.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.44886562 = fieldWeight in 5218, product of:
              6.4807405 = tf(freq=42.0), with freq of:
                42.0 = termFreq=42.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=5218)
        0.03202915 = weight(_text_:des in 5218) [ClassicSimilarity], result of:
          0.03202915 = score(doc=5218,freq=8.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.24477452 = fieldWeight in 5218, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.03125 = fieldNorm(doc=5218)
      0.33333334 = coord(2/6)
    
    Abstract
    A large part of the world's knowledge exists as digital text on the Internet or in intranets. Today's search engines exploit this raw material only rudimentarily: they can recognise semantic relationships only to a limited extent. Everyone is waiting for the Semantic Web, in which the creators of texts add the semantics themselves, but that will still take a long time. There is, however, a technology that already makes it possible to analyse and prepare semantic relationships in raw text. The research field of text mining uses statistical and pattern-based methods to extract, process, and exploit knowledge from texts, and thereby lays the foundation for the search engines of the future. This is the first German textbook on this groundbreaking technology: Text Mining: Wissensrohstoff Text - Konzepte, Algorithmen, Ergebnisse. What comes to mind when you hear the word "Stich"? Some think of tennis, others of the card game Skat. Text mining can determine such different contexts automatically and present them as word networks. Which terms most frequently appear to the left and right of the word "Festplatte"? Which word forms and proper names have newly entered the German language since 2001? Text mining answers these and many other questions. Dive into a new, fascinating scientific discipline with this textbook and discover new, previously unknown relationships and perspectives. See how the raw material text turns into knowledge! The textbook is aimed at students as well as practitioners with a focus on computer science, business informatics, and/or linguistics who want to learn about the foundations, methods, and applications of text mining and are looking for ideas for implementing their own applications. It is based on work carried out in recent years in the Natural Language Processing group at the Institute of Computer Science of the University of Leipzig under the direction of Prof. Dr. Heyer. A wealth of practical examples of text mining concepts and algorithms gives the reader a comprehensive and detailed understanding of the foundations and applications of text mining. Topics covered: knowledge and text; foundations of meaning analysis; text databases; language statistics; clustering; pattern analysis; hybrid methods; example applications; appendices on statistics and linguistic foundations. 360 pages, 54 figures, 58 tables, and 95 glossary entries, with a free e-learning course "Schnelleinstieg: Sprachstatistik"; in addition to the book, an online certificate course with mentor and tutor support will shortly be available.
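One of the questions the blurb poses, which words most often appear to the left and right of "Festplatte", reduces to simple neighbour counting over a corpus. The toy sketch below does exactly that for three invented sentences; the book of course derives such word-net statistics from corpora of millions of sentences.

```python
from collections import Counter

# Toy German sentences standing in for a large corpus.
sentences = [
    "die neue festplatte hat eine hohe kapazität",
    "die alte festplatte war defekt",
    "eine externe festplatte hat mehr kapazität",
]

left, right = Counter(), Counter()
for s in sentences:
    tokens = s.split()
    for i, tok in enumerate(tokens):
        if tok == "festplatte":
            if i > 0:
                left[tokens[i - 1]] += 1
            if i + 1 < len(tokens):
                right[tokens[i + 1]] += 1

print(left.most_common(2))   # most frequent left neighbours of "festplatte"
print(right.most_common(2))  # most frequent right neighbours
```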
  10. Babik, W.: Keywords as linguistic tools in information and knowledge organization (2017) 0.03
    0.02517884 = product of:
      0.07553652 = sum of:
        0.035902474 = weight(_text_:und in 3510) [ClassicSimilarity], result of:
          0.035902474 = score(doc=3510,freq=8.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.34282678 = fieldWeight in 3510, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3510)
        0.03963405 = weight(_text_:des in 3510) [ClassicSimilarity], result of:
          0.03963405 = score(doc=3510,freq=4.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.302893 = fieldWeight in 3510, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3510)
      0.33333334 = coord(2/6)
    
    Source
    Theorie, Semantik und Organisation von Wissen: Proceedings der 13. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und dem 13. Internationalen Symposium der Informationswissenschaft der Higher Education Association for Information Science (HI) Potsdam (19.-20.03.2013): 'Theory, Information and Organization of Knowledge' / Proceedings der 14. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und Natural Language & Information Systems (NLDB) Passau (16.06.2015): 'Lexical Resources for Knowledge Organization' / Proceedings des Workshops der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) auf der SEMANTICS Leipzig (1.09.2014): 'Knowledge Organization and Semantic Web' / Proceedings des Workshops der Polnischen und Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) Cottbus (29.-30.09.2011): 'Economics of Knowledge Production and Organization'. Hrsg. von W. Babik, H.P. Ohly u. K. Weber
  11. Yang, C.C.; Luk, J.: Automatic generation of English/Chinese thesaurus based on a parallel corpus in laws (2003) 0.02
    0.022127768 = product of:
      0.1327666 = sum of:
        0.1327666 = sum of:
          0.110360265 = weight(_text_:thesaurus in 1616) [ClassicSimilarity], result of:
            0.110360265 = score(doc=1616,freq=16.0), product of:
              0.21834905 = queryWeight, product of:
                4.6210785 = idf(docFreq=1182, maxDocs=44218)
                0.04725067 = queryNorm
              0.50543046 = fieldWeight in 1616, product of:
                4.0 = tf(freq=16.0), with freq of:
                  16.0 = termFreq=16.0
                4.6210785 = idf(docFreq=1182, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1616)
          0.022406347 = weight(_text_:22 in 1616) [ClassicSimilarity], result of:
            0.022406347 = score(doc=1616,freq=2.0), product of:
              0.16546379 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04725067 = queryNorm
              0.1354154 = fieldWeight in 1616, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1616)
      0.16666667 = coord(1/6)
    
    Abstract
    The information available in languages other than English in the World Wide Web is increasing significantly. According to a report from Computer Economics in 1999, 54% of Internet users are English speakers ("English Will Dominate Web for Only Three More Years," Computer Economics, July 9, 1999, http://www.computereconomics.com/new4/pr/pr990610.html). However, it is predicted that there will be only a 60% increase in Internet users among English speakers versus a 150% growth among non-English speakers over the next five years. By 2005, 57% of Internet users will be non-English speakers. A report by CNN.com in 2000 showed that the number of Internet users in China had increased from 8.9 million to 16.9 million from January to June in 2000 ("Report: China Internet users double to 17 million," CNN.com, July, 2000, http://cnn.org/2000/TECH/computing/07/27/china.internet.reut/index.html). According to Nielsen/NetRatings, there was a dramatic leap from 22.5 million to 56.6 million Internet users from 2001 to 2002. China had become the second largest global at-home Internet population in 2002 (the US's Internet population was 166 million) (Robyn Greenspan, "China Pulls Ahead of Japan," Internet.com, April 22, 2002, http://cyberatias.internet.com/big-picture/geographics/article/0,,5911_1013841,00.html). All of this evidence reveals the importance of cross-lingual research to satisfy the needs of the near future. Digital library research has focused on structural and semantic interoperability in the past. Searching and retrieving objects across variations in protocols, formats and disciplines have been widely explored (Schatz, B., & Chen, H. (1999). Digital libraries: technological advances and social impacts. IEEE Computer, Special Issue on Digital Libraries, February, 32(2), 45-50; Chen, H., Yen, J., & Yang, C.C. (1999). International activities: development of Asian digital libraries. IEEE Computer, Special Issue on Digital Libraries, 32(2), 48-49). However, research on crossing language boundaries, especially across European languages and Oriental languages, is still in its initial stage. In this proposal, we put our focus on cross-lingual semantic interoperability by developing automatic generation of a cross-lingual thesaurus based on an English/Chinese parallel corpus. When searchers encounter retrieval problems, professional librarians usually consult the thesaurus to identify other relevant vocabularies. For the problem of searching across language boundaries, a cross-lingual thesaurus, which is generated by co-occurrence analysis and a Hopfield network, can be used to generate additional semantically relevant terms that cannot be obtained from a dictionary. In particular, the automatically generated cross-lingual thesaurus is able to capture unknown words that do not exist in a dictionary, such as names of persons, organizations, and events. Due to Hong Kong's unique historical background, both English and Chinese are used as official languages in all legal documents. Therefore, English/Chinese cross-lingual information retrieval is critical for applications in the courts and the government. In this paper, we develop an automatic thesaurus by means of a Hopfield network, based on a parallel corpus collected from the Web site of the Department of Justice of the Hong Kong Special Administrative Region (HKSAR) Government. Experiments are conducted to measure the precision and recall of the automatically generated English/Chinese thesaurus. The results show that such a thesaurus is a promising tool for retrieving relevant terms, especially in the language that is not the same as that of the input term. The direct translation of the input term can also be retrieved in most cases.
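A minimal sketch of the co-occurrence step behind such a cross-lingual thesaurus, assuming a handful of invented aligned English/Chinese term lists in place of the real HKSAR parallel corpus. Only one association step is shown; the paper iterates this kind of activation in a Hopfield network until it converges.

```python
from collections import Counter

# Tiny invented stand-in for an aligned English/Chinese legal corpus.
aligned = [
    (["court", "order", "appeal"],        ["法院", "命令", "上訴"]),
    (["court", "judgment"],               ["法院", "判決"]),
    (["appeal", "judgment", "procedure"], ["上訴", "判決", "程序"]),
]

cooc = Counter()
en_freq, zh_freq = Counter(), Counter()
for en_terms, zh_terms in aligned:
    for e in set(en_terms):
        en_freq[e] += 1
        for z in set(zh_terms):
            cooc[(e, z)] += 1
    for z in set(zh_terms):
        zh_freq[z] += 1

def related(term, k=3):
    """One step of co-occurrence-based activation: rank target-language terms
    by a normalised association weight. The actual system runs this kind of
    spreading activation in a Hopfield network until it converges."""
    scores = {z: cooc[(term, z)] / (en_freq[term] * zh_freq[z]) ** 0.5
              for z in zh_freq if cooc[(term, z)] > 0}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:k]

print(related("court"))   # 法院 ranked first, i.e. the direct translation
```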
  12. Feldman, S.: Find what I mean, not what I say : meaning-based search tools (2000) 0.02
    0.021893688 = product of:
      0.06568106 = sum of:
        0.025644625 = weight(_text_:und in 4799) [ClassicSimilarity], result of:
          0.025644625 = score(doc=4799,freq=2.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.24487628 = fieldWeight in 4799, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=4799)
        0.040036436 = weight(_text_:des in 4799) [ClassicSimilarity], result of:
          0.040036436 = score(doc=4799,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.30596817 = fieldWeight in 4799, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.078125 = fieldNorm(doc=4799)
      0.33333334 = coord(2/6)
    
    Abstract
    Report on computational linguistic methods used by various Internet search services
    Content
    With a compilation of addresses and a tabular overview of the linguistic tools in use
  13. Schmolz, H.: Anaphora resolution and text retrieval : a linguistic analysis of hypertexts (2013) 0.02
    0.021893688 = product of:
      0.06568106 = sum of:
        0.025644625 = weight(_text_:und in 1810) [ClassicSimilarity], result of:
          0.025644625 = score(doc=1810,freq=2.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.24487628 = fieldWeight in 1810, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=1810)
        0.040036436 = weight(_text_:des in 1810) [ClassicSimilarity], result of:
          0.040036436 = score(doc=1810,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.30596817 = fieldWeight in 1810, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.078125 = fieldNorm(doc=1810)
      0.33333334 = coord(2/6)
    
    Content
    Winner of the 2014 VFI dissertation award: "A convincing and thorough linguistic and quantitative analysis of a text element that has so far received little attention in information retrieval, based on a large, purpose-built hypertext corpus, including the evaluation of the author's own resolution rules for use in future IR systems."
  14. Salton, G.: Automatic processing of foreign language documents (1985) 0.02
    0.018210873 = product of:
      0.05463262 = sum of:
        0.016014574 = weight(_text_:des in 3650) [ClassicSimilarity], result of:
          0.016014574 = score(doc=3650,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.12238726 = fieldWeight in 3650, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.03125 = fieldNorm(doc=3650)
        0.038618047 = product of:
          0.07723609 = sum of:
            0.07723609 = weight(_text_:thesaurus in 3650) [ClassicSimilarity], result of:
              0.07723609 = score(doc=3650,freq=6.0), product of:
                0.21834905 = queryWeight, product of:
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.04725067 = queryNorm
                0.35372764 = fieldWeight in 3650, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3650)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    The attempt to computerize a process, such as indexing, abstracting, classifying, or retrieving information, begins with an analysis of the process into its intellectual and nonintellectual components. That part of the process which is amenable to computerization is mechanical or algorithmic. What is not is intellectual or creative and requires human intervention. Gerard Salton has been an innovator, experimenter, and promoter in the area of mechanized information systems since the early 1960s. He has been particularly ingenious at analyzing the process of information retrieval into its algorithmic components. He received a doctorate in applied mathematics from Harvard University before moving to the computer science department at Cornell, where he developed a prototype automatic retrieval system called SMART. Working with this system, he and his students contributed for over a decade to our theoretical understanding of the retrieval process. On a more practical level, they have contributed design criteria for operating retrieval systems. The following selection presents one of the early descriptions of the SMART system; it is valuable as it shows the direction automatic retrieval methods were to take beyond simple word-matching techniques. These include various word normalization techniques to improve recall, for instance, the separation of words into stems and affixes; the correlation and clustering, using statistical association measures, of related terms; and the identification, using a concept thesaurus, of synonymous, broader, narrower, and sibling terms. They include, as well, techniques, both linguistic and statistical, to deal with the thorny problem of how to automatically extract from texts index terms that consist of more than one word. They include weighting techniques and various document-request matching algorithms. Significant among the latter are those which produce a retrieval output of citations ranked in relevance order. During the 1970s, Salton and his students went on to further refine these various techniques, particularly the weighting and statistical association measures. Many of their early innovations seem commonplace today. Some of their later techniques are still ahead of their time and await technological developments for implementation. The particular focus of the selection that follows is on the evaluation of a particular component of the SMART system, a multilingual thesaurus. By mapping English-language expressions and their German equivalents to a common concept number, the thesaurus permitted the automatic processing of German-language documents against English-language queries and vice versa. The results of the evaluation, as it turned out, were somewhat inconclusive. However, this SMART experiment suggested in a bold and optimistic way how one might proceed to answer such complex questions as: What is meant by retrieval language compatibility? How is it to be achieved, and how is it to be evaluated?
    Footnote
    Reprint of the original article with commentary by the editors
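The multilingual thesaurus evaluated here maps English expressions and their German equivalents to common concept numbers. The following hypothetical miniature shows how that allows a German document to be matched against an English query in concept space; the concept numbers and word lists are invented for illustration and are not taken from the SMART thesaurus itself.

```python
# Hypothetical miniature of the SMART multilingual thesaurus idea: English and
# German expressions map to shared concept numbers, so queries and documents
# can be compared in concept space regardless of language.
concept_of = {
    "retrieval": 101, "wiedergewinnung": 101,
    "information": 102,
    "document": 103, "dokument": 103,
    "evaluation": 104, "bewertung": 104,
}

def concept_vector(terms):
    vec = {}
    for t in terms:
        c = concept_of.get(t.lower())
        if c is not None:
            vec[c] = vec.get(c, 0) + 1
    return vec

def cosine(u, v):
    dot = sum(u[c] * v.get(c, 0) for c in u)
    norm = (sum(x * x for x in u.values()) * sum(x * x for x in v.values())) ** 0.5
    return dot / norm if norm else 0.0

english_query = ["evaluation", "of", "document", "retrieval"]
german_doc = ["bewertung", "der", "wiedergewinnung", "von", "dokument", "texten"]
print(cosine(concept_vector(english_query), concept_vector(german_doc)))  # 1.0
```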
  15. Zhang, X.: Rough set theory based automatic text categorization (2005) 0.02
    0.01751495 = product of:
      0.052544847 = sum of:
        0.0205157 = weight(_text_:und in 2822) [ClassicSimilarity], result of:
          0.0205157 = score(doc=2822,freq=2.0), product of:
            0.104724824 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.04725067 = queryNorm
            0.19590102 = fieldWeight in 2822, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=2822)
        0.03202915 = weight(_text_:des in 2822) [ClassicSimilarity], result of:
          0.03202915 = score(doc=2822,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.24477452 = fieldWeight in 2822, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.0625 = fieldNorm(doc=2822)
      0.33333334 = coord(2/6)
    
    Abstract
    Der Forschungsbericht "Rough Set Theory Based Automatic Text Categorization and the Handling of Semantic Heterogeneity" von Xueying Zhang ist in Buchform auf Englisch erschienen. Zhang hat in ihrer Arbeit ein Verfahren basierend auf der Rough Set Theory entwickelt, das Beziehungen zwischen Schlagwörtern verschiedener Vokabulare herstellt. Sie war von 2003 bis 2005 Mitarbeiterin des IZ und ist seit Oktober 2005 Associate Professor an der Nanjing University of Science and Technology.
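For readers unfamiliar with rough sets, the sketch below shows the lower/upper approximation idea that such vocabulary-mapping methods build on: documents indexed with vocabulary A partition the collection, and a term from vocabulary B is approximated from below and above by that partition. The document sets and term names are invented; this is not Zhang's actual algorithm.

```python
# Hypothetical illustration of the rough-set idea behind vocabulary mapping.
docs_by_a_term = {            # equivalence classes induced by vocabulary A
    "Informationsrückgewinnung": {1, 2, 3},
    "Textklassifikation": {4, 5},
    "Wissensorganisation": {6, 7, 8},
}
b_term_docs = {1, 2, 3, 4}    # documents indexed with a vocabulary-B term

lower = set().union(*(c for c in docs_by_a_term.values() if c <= b_term_docs))
upper = set().union(*(c for c in docs_by_a_term.values() if c & b_term_docs))

print(sorted(lower))   # [1, 2, 3]       -> certain region of the B-term
print(sorted(upper))   # [1, 2, 3, 4, 5] -> possible region of the B-term
# A-terms whose class lies inside the lower approximation are strong
# candidates for a mapping to the B-term.
```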
  16. Hammwöhner, R.: TransRouter revisited : Decision support in the routing of translation projects (2000) 0.02
    0.016810618 = product of:
      0.05043185 = sum of:
        0.028025504 = weight(_text_:des in 5483) [ClassicSimilarity], result of:
          0.028025504 = score(doc=5483,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.2141777 = fieldWeight in 5483, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5483)
        0.022406347 = product of:
          0.044812694 = sum of:
            0.044812694 = weight(_text_:22 in 5483) [ClassicSimilarity], result of:
              0.044812694 = score(doc=5483,freq=2.0), product of:
                0.16546379 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04725067 = queryNorm
                0.2708308 = fieldWeight in 5483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5483)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Date
    10.12.2000 18:22:35
    Source
    Informationskompetenz - Basiskompetenz in der Informationsgesellschaft: Proceedings des 7. Internationalen Symposiums für Informationswissenschaft (ISI 2000), Hrsg.: G. Knorz u. R. Kuhlen
  17. Grefenstette, G.: Explorations in automatic thesaurus discovery (1994) 0.01
    0.013006082 = product of:
      0.07803649 = sum of:
        0.07803649 = product of:
          0.15607297 = sum of:
            0.15607297 = weight(_text_:thesaurus in 170) [ClassicSimilarity], result of:
              0.15607297 = score(doc=170,freq=8.0), product of:
                0.21834905 = queryWeight, product of:
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.04725067 = queryNorm
                0.7147866 = fieldWeight in 170, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.6210785 = idf(docFreq=1182, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=170)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    Review of various approaches to automatic thesaurus formation and presentation of the SEXTANT system to analyse text and to determine the basic syntactic contexts for words. Presents an automated method for creating a first-draft thesaurus from raw text. It describes the natural language processing steps of tokenization, surface syntactic analysis, and syntactic attribute extraction. From these attributes, word and term similarity is calculated and a thesaurus is created showing important common terms and their relation to each other, common verb-noun pairings, common expressions, and word family members.
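A minimal sketch of the similarity step described above: each word is characterised by the syntactic contexts it occurs in, and words sharing many contexts become thesaurus neighbours. The attribute sets below are invented, and a plain Jaccard coefficient stands in for the weighted similarity measure SEXTANT actually uses.

```python
# Invented syntactic attributes of the kind SEXTANT extracts (relation, partner);
# in the real system these come from surface-syntactic analysis of a large corpus.
attributes = {
    "wine": {("obj-of", "drink"), ("adj", "red"), ("adj", "dry"), ("obj-of", "serve")},
    "beer": {("obj-of", "drink"), ("adj", "cold"), ("obj-of", "serve")},
    "car":  {("obj-of", "drive"), ("adj", "fast"), ("obj-of", "park")},
}

def jaccard(a, b):
    """Unweighted overlap of shared contexts; words sharing many syntactic
    contexts are candidates for thesaurus neighbours."""
    sa, sb = attributes[a], attributes[b]
    return len(sa & sb) / len(sa | sb)

print(jaccard("wine", "beer"))  # 0.4 -> related
print(jaccard("wine", "car"))   # 0.0 -> unrelated
```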
  18. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.01
    0.012507766 = product of:
      0.07504659 = sum of:
        0.07504659 = product of:
          0.22513977 = sum of:
            0.22513977 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.22513977 = score(doc=862,freq=2.0), product of:
                0.4005917 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04725067 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Source
    https://arxiv.org/abs/2212.06721
  19. Notess, G.R.: Up and coming search technologies (2000) 0.01
    0.009341835 = product of:
      0.05605101 = sum of:
        0.05605101 = weight(_text_:des in 5467) [ClassicSimilarity], result of:
          0.05605101 = score(doc=5467,freq=2.0), product of:
            0.13085164 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.04725067 = queryNorm
            0.4283554 = fieldWeight in 5467, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.109375 = fieldNorm(doc=5467)
      0.16666667 = coord(1/6)
    
    Abstract
    Column on trends among Internet search services
  20. Warner, A.J.: Natural language processing (1987) 0.01
    0.008535751 = product of:
      0.051214505 = sum of:
        0.051214505 = product of:
          0.10242901 = sum of:
            0.10242901 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.10242901 = score(doc=337,freq=2.0), product of:
                0.16546379 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04725067 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108

Types

  • a 50
  • m 10
  • s 5
  • p 2
  • x 2
  • el 1