Search (103 results, page 1 of 6)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.08
    Score detail (doc 562): 0.07991685 = coord(3/6) × (0.03912406 [_text_:3a] + 0.11737218 [_text_:2f] + 0.0033374594 [_text_:22]); a worked recomputation of these ClassicSimilarity weights follows this entry.
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
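    The score detail above is Lucene "explain" output. The following is a minimal sketch of how those figures appear to compose under Lucene's ClassicSimilarity (tf-idf), reproducing the numbers reported for doc 562; the function names and layout are illustrative, not part of the search system itself:

        import math

        def idf(doc_freq, max_docs):
            # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1));
            # docFreq=24, maxDocs=44218 gives the 8.478011 reported for "3a"/"2f"
            return 1.0 + math.log(max_docs / (doc_freq + 1))

        def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
            tf = math.sqrt(freq)                 # tf(freq=2.0) = 1.4142135
            i = idf(doc_freq, max_docs)
            query_weight = i * query_norm        # 8.478011 * 0.024633206 = 0.2088406
            field_weight = tf * i * field_norm   # 1.4142135 * 8.478011 * 0.046875 = 0.56201804
            return query_weight * field_weight   # 0.11737218

        w_3a = term_score(2.0, 24, 44218, 0.024633206, 0.046875) / 3    # coord(1/3) -> 0.03912406
        w_2f = term_score(2.0, 24, 44218, 0.024633206, 0.046875)        # 0.11737218
        w_22 = term_score(2.0, 3622, 44218, 0.024633206, 0.046875) / 6  # coord(1/6) -> 0.00333746
        print((w_3a + w_2f + w_22) * 3 / 6)                             # coord(3/6) -> ~0.07991685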
  2. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.05
    Score detail (doc 862): 0.052165415 = coord(2/6) × (0.03912406 [_text_:3a] + 0.11737218 [_text_:2f])
    
    Source
    https://arxiv.org/abs/2212.06721
  3. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.04
    Score detail (doc 563): 0.040236548 = coord(2/6) × (0.11737218 [_text_:2f] + 0.0033374594 [_text_:22])
    
    Content
    A thesis presented to The University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
  4. Deutsche Forschungsgemeinschaft: Stellungnahme des Präsidiums der Deutschen Forschungsgemeinschaft (DFG) zum Einfluss generativer Modelle für die Text- und Bilderstellung auf die Wissenschaften und das Förderhandeln der DFG (2023) 0.02
    Score detail (doc 991): 0.021279894 = coord(1/6) × 0.12767936 [_text_:forschungsgemeinschaft]
    
  5. Working with conceptual structures : contributions to ICCS 2000. 8th International Conference on Conceptual Structures: Logical, Linguistic, and Computational Issues. Darmstadt, August 14-18, 2000 (2000) 0.01
    Score detail (doc 5089): 0.0065831314 = coord(1/6) × 0.039498787 [_text_:forschungsgemeinschaft]
    
    Abstract
    The 8th International Conference on Conceptual Structures - Logical, Linguistic, and Computational Issues (ICCS 2000) brings together a wide range of researchers and practitioners working with conceptual structures. During the last few years, the ICCS conference series has considerably widened its scope on different kinds of conceptual structures, stimulating research across domain boundaries. We hope that this stimulation is further enhanced by ICCS 2000 joining the long tradition of conferences in Darmstadt with extensive, lively discussions. This volume consists of contributions presented at ICCS 2000, complementing the volume "Conceptual Structures: Logical, Linguistic, and Computational Issues" (B. Ganter, G.W. Mineau (Eds.), LNAI 1867, Springer, Berlin-Heidelberg 2000). It contains submissions reviewed by the program committee, and position papers. We wish to express our appreciation to all the authors of submitted papers, to the general chair, the program chair, the editorial board, the program committee, and to the additional reviewers for making ICCS 2000 a valuable contribution in the knowledge processing research field. Special thanks go to the local organizers for making the conference an enjoyable and inspiring event. We are grateful to Darmstadt University of Technology, the Ernst Schröder Center for Conceptual Knowledge Processing, the Center for Interdisciplinary Studies in Technology, the Deutsche Forschungsgemeinschaft, Land Hessen, and NaviCon GmbH for their generous support
  6. Winterschladen, S.; Gurevych, I.: ¬Die perfekte Suchmaschine : Forschungsgruppe entwickelt ein System, das artverwandte Begriffe finden soll (2006) 0.01
    Score detail (doc 5912): 0.0065831314 = coord(1/6) × 0.039498787 [_text_:forschungsgemeinschaft]
    
    Content
    "KÖLNER STADT-ANZEIGER: Frau Gurevych, Sie entwickeln eine Suchmaschine der nächsten Generation? Wie kann man sich diese vorstellen? IRYNA GUREVYCH Jeder kennt die herkömmlichen Suchmaschinen wie Google, Yahoo oder Altavista. Diese sind aber nicht perfekt, weil sie nur nach dem Prinzip der Zeichenerkennung funktionieren. Das steigende Informationsbedürfnis können herkömmliche Suchmaschinen nicht befriedigen. KStA: Wieso nicht? GUREVYCH Nehmen wir mal ein konkretes Beispiel: Sie suchen bei Google nach einem Rezept für einen Kuchen, der aber kein Obst enthalten soll. Keine Suchmaschine der Welt kann bisher sinnvoll solche oder ähnliche Anfragen ausführen. Meistens kommen Tausende von Ergebnissen, in denen der Nutzer die relevanten Informationen wie eine Nadel im Heuhaufen suchen muss. KStA: Und Sie können dieses Problem lösen? GUREVYCH Wir entwickeln eine Suchmaschine, die sich nicht nur auf das System der Zeichenerkennung verlässt, sondern auch linguistische Merkmale nutzt. Unsere Suchmaschine soll also auch artverwandte Begriffe zeigen. KStA: Wie weit sind Sie mit Ihrer Forschung? GUREVYCH Das Projekt ist auf zwei Jahre angelegt. Wir haben vor einem halben Jahr begonnen, haben also noch einen großen Teil vor uns. Trotzdem sind die ersten Zwischenergebnisse schon sehr beachtlich. KStA: Und wann geht die Suchmaschine ins Internet? GUREVYCH Da es sich um ein Projekt der Deutschen Forschungsgemeinschaft handelt, wird die Suchmaschine vorerst nicht veröffentlicht. Wir sehen es als unsere Aufgabe an, Verbesserungsmöglichkeiten durch schlaue Such-Algorithmen mit unseren Forschungsarbeiten nachzuweisen und Fehler der bekannten Suchmaschinen zu beseitigen. Und da sind wir auf einem guten Weg. KStA: Arbeiten Sie auch an einem ganz speziellen Projekt? GUREVYCH Ja, ihre erste Bewährungsprobe muss die neue Technologie auf einem auf den ersten Blick ungewöhnlichen Feld bestehen: Unsere Forschungsgruppe an der Technischen Universität Darmstadt entwickelt derzeit ein neuartiges System zur Unterstützung Jugendlicher bei der Berufsauswahl. Dazu stellt uns die Bundesagentur für Arbeit die Beschreibungen von 5800 Berufen in Deutschland zur Verfügung. KStA: Und was sollen Sie dann mit diesen konkreten Informationen machen? GUREVYCH Jugendliche sollen unsere Suchmaschine mit einem Aufsatz über ihre beruflichen Vorlieben flittern. Das System soll dann eine Suchabfrage starten und mögliche Berufe anhand des Interesses des Jugendlichen heraussuchen. Die persönliche Beratung durch die Bundesagentur für Arbeit kann dadurch auf alternative Angebote ausgeweitet werden. Ein erster Prototyp soll Ende des Jahres bereitstehen. KStA: Es geht also zunächst einmal nicht darum, einen Jobfür den Jugendlichen zu finden, sondern den perfekten Beruf für ihn zu ermitteln? GUREVYCH Ja, anhand der Beschreibung des Jugendlichen startet die Suchmaschine eine semantische Abfrage und sucht den passenden Beruf heraus. KStA: Gab es schon weitere Anfragen seitens der Industrie? GUREVYCH Nein, wir haben bisher noch keine Werbung betrieben. Meine Erfahrung zeigt, dass angesehene Kongresse die beste Plattform sind, um die Ergebnisse zu präsentieren und auf sich aufmerksam zu machen. Einige erste Veröffentlichungen sind bereits unterwegs und werden 2006 noch erscheinen. KStA: Wie sieht denn Ihrer Meinung nach die Suchmaschine der Zukunft aus? GUREVYCH Suchmaschinen werden immer spezieller. Das heißt, dass es etwa in der Medizin, bei den Krankenkassen oder im Sport eigene Suchmaschinen geben wird. 
Außerdem wird die Tendenz verstärkt zu linguistischen Suchmaschinen gehen, die nach artverwandten Begriffen fahnden. Die perfekte Suchmaschine wird wohl eine Kombination aus statistischem und linguistisch-semantischem Suchverhalten sein. Algorithmen, die wir am Fachgebiet Telekooperation an der TU Darmstadt entwickeln, werden für den nächsten qualitativen Sprung bei der Entwicklung der Suchmaschinen von größter Bedeutung sein."
  7. Semantik, Lexikographie und Computeranwendungen : Workshop ... (Bonn) : 1995.01.27-28 (1996) 0.01
    Score detail (doc 190): 0.0057311654 = coord(2/6) × (0.014412279 [_text_:f] + 0.0027812165 [_text_:22])
    
    Classification
    Spr F 510
    Spr F 87 / Lexikographie
    Date
    14. 4.2007 10:04:22
    SBB
    Spr F 510
    Spr F 87 / Lexikographie
  8. Rieger, F.: Lügende Computer (2023) 0.01
    Score detail (doc 912): 0.0053265905 = coord(2/6) × (0.011529824 [_text_:f] + 0.0044499463 [_text_:22])
    
    Date
    16. 3.2023 19:22:55
  9. Schneider, J.W.; Borlund, P.: ¬A bibliometric-based semiautomatic approach to identification of candidate thesaurus terms : parsing and filtering of noun phrases from citation contexts (2005) 0.00
    Score detail (doc 156): 0.004660766 = coord(2/6) × (0.010088596 [_text_:f] + 0.0038937028 [_text_:22])
    
    Date
    8. 3.2007 19:55:22
    Source
    Context: nature, impact and role. 5th International Conference on Conceptions of Library and Information Sciences, CoLIS 2005, Glasgow, UK, June 2005. Ed. by F. Crestani and I. Ruthven
  10. Vichot, F.; Wolinski, F.; Tomeh, J.; Guennou, S.; Dillet, B.; Aydjian, S.: High precision hypertext navigation based on NLP automatic extractions (1997) 0.00
    Score detail (doc 733): 0.004076408 = coord(1/6) × 0.02445845 [_text_:f]
    
  11. Latzer, F.-M.: Yo Computa! (1997) 0.00
    Score detail (doc 6005): 0.0033628652 = coord(1/6) × 0.020177191 [_text_:f]
    
  12. Blanchon, E.: Terminology software : pt.1.2 (1995) 0.00
    Score detail (doc 6408): 0.0033628652 = coord(1/6) × 0.020177191 [_text_:f]
    
    Language
    f
  13. Gonzalo, J.; Verdejo, F.; Peters, C.; Calzolari, N.: Applying EuroWordNet to cross-language text retrieval (1998) 0.00
    Score detail (doc 6445): 0.0033628652 = coord(1/6) × 0.020177191 [_text_:f]
    
  14. Luo, L.; Ju, J.; Li, Y.-F.; Haffari, G.; Xiong, B.; Pan, S.: ChatRule: mining logical rules with large language models for knowledge graph reasoning (2023) 0.00
    Score detail (doc 1171): 0.0033291187 = coord(2/6) × (0.0072061396 [_text_:f] + 0.0027812165 [_text_:22])
    
    Date
    23.11.2023 19:07:22
  15. Hull, D.; Ait-Mokhtar, S.; Chuat, M.; Eisele, A.; Gaussier, E.; Grefenstette, G.; Isabelle, P.; Samuelsson, C.; Segond, F.: Language technologies and patent search and classification (2001) 0.00
    Score detail (doc 6318): 0.002882456 = coord(1/6) × 0.017294735 [_text_:f]
    
  16. Rodriguez, H.; Climent, S.; Vossen, P.; Bloksma, L.; Peters, W.; Alonge, A.; Bertagna, F.; Roventini, A.: ¬The top-down strategy for building EuroWordNet : vocabulary coverage, base concept and top ontology (1998) 0.00
    Score detail (doc 6441): 0.002882456 = coord(1/6) × 0.017294735 [_text_:f]
    
  17. Rötzer, F.: KI-Programm besser als Menschen im Verständnis natürlicher Sprache (2018) 0.00
    Score detail (doc 4217): 0.0026632953 = coord(2/6) × (0.005764912 [_text_:f] + 0.0022249732 [_text_:22])
    
    Date
    22. 1.2018 11:32:44
  18. Wenzel, F.: Semantische Eingrenzung im Freitext-Retrieval auf der Basis morphologischer Segmentierungen (1980) 0.00
    Score detail (doc 2037): 0.0024020467 = coord(1/6) × 0.014412279 [_text_:f]
    
  19. Chibout, K.; Vilnat, A.: Primitive sémantiques, classification des verbes et polysémie (1999) 0.00
    Score detail (doc 6229): 0.0024020467 = coord(1/6) × 0.014412279 [_text_:f]
    
    Language
    f
  20. Rosemblat, G.; Tse, T.; Gemoets, D.: Adapting a monolingual consumer health system for Spanish cross-language information retrieval (2004) 0.00
    Score detail (doc 2673): 0.0024020467 = coord(1/6) × 0.014412279 [_text_:f]
    
    Abstract
    This preliminary study applies a bilingual term list (BTL) approach to cross-language information retrieval (CLIR) in the consumer health domain and compares it to a machine translation (MT) approach. We compiled a Spanish-English BTL of 34,980 medical and general terms. We collected a training set of 466 general health queries from MedlinePlus en español and 488 domain-specific queries from ClinicalTrials.gov translated into Spanish. We submitted the training set queries in English against a test bed of 7,170 ClinicalTrials.gov English documents, and compared MT and BTL against this English monolingual standard. The BTL approach was less effective (F = 0.420) than the MT approach (F = 0.578). A failure analysis of the results led to substitution of BTL dictionary sources and the addition of rudimentary normalisation of plural forms. These changes improved the CLIR effectiveness of the same training set queries (F = 0.474) and yielded comparable results for a new test set of 954 queries (F = 0.484). These results will shape our efforts to support Spanish speakers' needs for consumer health information currently available only in English.
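    The bilingual-term-list approach described in this abstract is, at its core, dictionary-based query translation. The following is a minimal sketch under that reading; the tiny term list, the plural handling and the function name are illustrative placeholders, not the study's actual resources:

        # Dictionary-based (BTL) query translation for cross-language retrieval.
        # Terms missing from the list pass through untranslated.
        es_to_en = {
            "tratamiento": "treatment",
            "tratamientos": "treatment",   # rudimentary plural normalisation folded into the list
            "diabetes": "diabetes",
        }

        def translate_query(spanish_query: str) -> str:
            terms = spanish_query.lower().split()
            return " ".join(es_to_en.get(t, t) for t in terms)

        # The translated English query is then run against the English document
        # collection and scored like any monolingual query.
        print(translate_query("Tratamientos diabetes"))   # -> "treatment diabetes"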

Years

Languages

  • e 70
  • d 25
  • f 6
  • m 2

Types

  • a 82
  • el 9
  • m 9
  • s 8
  • x 3
  • p 2
  • d 1