Search (104 results, page 5 of 6)

  • theme_ss:"Computerlinguistik"
  1. Wright, S.E.: Leveraging terminology resources across application boundaries : accessing resources in future integrated environments (2000) 0.01
    0.00804336 = product of:
      0.01608672 = sum of:
        0.01608672 = product of:
          0.03217344 = sum of:
            0.03217344 = weight(_text_:i in 5528) [ClassicSimilarity], result of:
              0.03217344 = score(doc=5528,freq=2.0), product of:
                0.15441231 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04093939 = queryNorm
                0.20836058 = fieldWeight in 5528, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5528)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The title of this conference, stated in English, is Language Technology for a Dynamic Economy in the Media Age. The question arises as to what media we are dealing with and to what extent we are moving away from the reality of different media to a world in which all sub-categories flow together into a unified stream of information that is constantly rescaled to appear in different hardware configurations. A few years ago, people who were interested in sharing data or getting different electronic "boxes" to talk to each other were focused on two major aspects: 1) developing data conversion technology, and 2) convincing potential users that sharing information was an even remotely interesting option. Although some content "owners" are still reticent about releasing their data, it has become dramatically apparent in the Web environment that a broad range of users does indeed want this technology. Even as researchers struggle with the remaining technical, legal, and ethical impediments that stand in the way of unlimited access to existing multi-platform resources, the future view of the world will no longer be as obsessed with conversion capability as it will be with creating content, with an eye to morphing technologies that will enable the delivery of that content from an open-standards-based format such as XML (eXtensible Markup Language), MPEG (Moving Picture Experts Group), or WAP (Wireless Application Protocol) to a rich variety of display options.
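    The indented trees accompanying each result are Lucene "explain" dumps of the relevance scores, produced by the ClassicSimilarity (TF-IDF) ranking named in each tree. As a reading aid, the following minimal Python sketch (an editorial illustration, not part of any catalog record) reproduces the 0.00804336 score of result 1 from the factors shown, assuming Lucene's classic formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)):

    import math

    def classic_similarity_score(freq, doc_freq, max_docs,
                                 query_norm, field_norm, coords=(0.5, 0.5)):
        """Recompute one term's Lucene ClassicSimilarity explain tree."""
        tf = math.sqrt(freq)                             # 1.4142135 for freq=2.0
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 3.7717297
        query_weight = idf * query_norm                  # 0.15441231 = queryWeight
        field_weight = tf * idf * field_norm             # 0.20836058 = fieldWeight
        score = query_weight * field_weight              # 0.03217344
        for coord in coords:                             # the two coord(1/2) levels
            score *= coord
        return score

    # weight(_text_:i in 5528): prints ~0.00804336, matching result 1 above.
    print(classic_similarity_score(freq=2.0, doc_freq=2765, max_docs=44218,
                                   query_norm=0.04093939, field_norm=0.0390625))

    The same function reproduces the 0.006933403 scores further down the list by substituting doc_freq=3622 (the term "22"); only the idf changes.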
  2. Niemi, T.; Jämsen, J.: A query language for discovering semantic associations, part II : sample queries and query evaluation (2007) 0.01
    0.00804336 = product of:
      0.01608672 = sum of:
        0.01608672 = product of:
          0.03217344 = sum of:
            0.03217344 = weight(_text_:i in 580) [ClassicSimilarity], result of:
              0.03217344 = score(doc=580,freq=2.0), product of:
                0.15441231 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04093939 = queryNorm
                0.20836058 = fieldWeight in 580, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=580)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In our query language introduced in Part I (Journal of the American Society for Information Science and Technology. 58(2007) no.11, S.1559-1568) the user can formulate queries to find out (possibly complex) semantic relationships among entities. In this article we demonstrate the usage of our query language and discuss the new applications that it supports. We categorize several query types and give sample queries. The query types are categorized based on whether the entities specified in a query are known or unknown to the user in advance, and whether text information in documents is utilized. Natural language is used to represent the results of queries in order to facilitate correct interpretation by the user. We discuss briefly the issues related to the prototype implementation of the query language and show that an independent operation like Rho (Sheth et al., 2005; Anyanwu & Sheth, 2002, 2003), which presupposes entities of interest to be known in advance, is exceedingly inefficient in emulating the behavior of our query language. The discussion also covers potential problems, and challenges for future work.
  3. Niemi, T.; Jämsen, J.: A query language for discovering semantic associations, part I : approach and formal definition of query primitives (2007) 0.01
    0.00804336 = product of:
      0.01608672 = sum of:
        0.01608672 = product of:
          0.03217344 = sum of:
            0.03217344 = weight(_text_:i in 591) [ClassicSimilarity], result of:
              0.03217344 = score(doc=591,freq=2.0), product of:
                0.15441231 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04093939 = queryNorm
                0.20836058 = fieldWeight in 591, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=591)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  4. Symonds, M.; Bruza, P.; Zuccon, G.; Koopman, B.; Sitbon, L.; Turner, I.: Automatic query expansion : a structural linguistic perspective (2014) 0.01
    0.00804336 = product of:
      0.01608672 = sum of:
        0.01608672 = product of:
          0.03217344 = sum of:
            0.03217344 = weight(_text_:i in 1338) [ClassicSimilarity], result of:
              0.03217344 = score(doc=1338,freq=2.0), product of:
                0.15441231 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04093939 = queryNorm
                0.20836058 = fieldWeight in 1338, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1338)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  5. Muneer, I.; Sharjeel, M.; Iqbal, M.; Adeel Nawab, R.M.; Rayson, P.: CLEU - A Cross-language English-Urdu corpus and benchmark for text reuse experiments (2019) 0.01
    0.00804336 = product of:
      0.01608672 = sum of:
        0.01608672 = product of:
          0.03217344 = sum of:
            0.03217344 = weight(_text_:i in 5299) [ClassicSimilarity], result of:
              0.03217344 = score(doc=5299,freq=2.0), product of:
                0.15441231 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04093939 = queryNorm
                0.20836058 = fieldWeight in 5299, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5299)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  6. Zhai, X.: ChatGPT user experience : implications for education (2022) 0.01
    0.00804336 = product of:
      0.01608672 = sum of:
        0.01608672 = product of:
          0.03217344 = sum of:
            0.03217344 = weight(_text_:i in 849) [ClassicSimilarity], result of:
              0.03217344 = score(doc=849,freq=2.0), product of:
                0.15441231 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04093939 = queryNorm
                0.20836058 = fieldWeight in 849, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=849)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    ChatGPT, a general-purpose conversational chatbot released on November 30, 2022, by OpenAI, is expected to impact every aspect of society. However, the potential impacts of this NLP tool on education remain unknown. The impact could be enormous, as the capabilities of ChatGPT may drive changes to educational learning goals, learning activities, and assessment and evaluation practices. This study was conducted by piloting ChatGPT to write an academic paper, titled Artificial Intelligence for Education (see Appendix A). The pilot suggests that ChatGPT is able to help researchers write a paper that is coherent, (partially) accurate, informative, and systematic. The writing is extremely efficient (2-3 hours) and requires very limited professional knowledge from the author. Drawing upon the user experience, I reflect on the potential impacts of ChatGPT, as well as similar AI tools, on education. The paper concludes by suggesting that learning goals be adjusted: students should be able to use AI tools to conduct subject-domain tasks, and education should focus on improving students' creativity and critical thinking rather than general skills. To accomplish these learning goals, researchers should design AI-involved learning tasks to engage students in solving real-world problems. ChatGPT also raises the concern that students may outsource assessment tasks to it. The paper therefore concludes that new formats of assessment are needed, focused on the creativity and critical thinking that AI cannot substitute.
  7. Jones, I.; Cunliffe, D.; Tudhope, D.: Natural language processing and knowledge organization systems as an aid to retrieval (2004) 0.01
    0.00796252 = product of:
      0.01592504 = sum of:
        0.01592504 = product of:
          0.03185008 = sum of:
            0.03185008 = weight(_text_:i in 2677) [ClassicSimilarity], result of:
              0.03185008 = score(doc=2677,freq=4.0), product of:
                0.15441231 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04093939 = queryNorm
                0.20626646 = fieldWeight in 2677, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=2677)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    1. Introduction The need for research into the application of linguistic techniques in Information Retrieval (IR) in general, and a similar need in faceted Knowledge Organization Systems (KOS), has been indicated by various authors. Smeaton (1997) points out the inherent limitations of conventional approaches to IR based on "bags of words", mainly difficulties caused by lexical ambiguity in the words concerned, and goes on to suggest the possibility of using Natural Language Processing (NLP) in query formulation. Past experience with a faceted retrieval system highlighted the need for integrating the linguistic perspective in order to fully utilise the potential of a KOS (Tudhope et al., 2002). The present research seeks to address some of these needs by using NLP to improve the efficacy of KOS tools in query and retrieval systems. Syntactic parsing and part-of-speech tagging can substantially reduce lexical ambiguity through homograph disambiguation. Given the two strings "I table the motion" and "I put the motion on the table", for instance, the parser used in this research clearly indicates that 'table' in the first string is a verb, while 'table' in the second string is a noun, a distinction that would be missed in the "bag of words" approach. This syntactic disambiguation enables more precise matching from free text to the controlled vocabulary of a KOS and vice versa. The use of a general linguistic resource, namely Roget's Thesaurus of English Words and Phrases (RTEWP), as an intermediary in this process is investigated. The adaptation of the Link parser (Sleator & Temperley, 1993) to the purposes of the research is reported. The design and implementation of the early practical stages of the project are described, and the results of the initial experiments are presented and evaluated. Applications of the techniques developed are foreseen in the areas of query disambiguation, information retrieval and automatic indexing. The first section of the paper presents a brief review of the literature and relevant current work in the field. The second section reports on the development of algorithms, the construction of data sets, and the theoretical and experimental work undertaken to date. The third section evaluates the results obtained and outlines directions for future research.
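    The homograph example above ("I table the motion" vs. "I put the motion on the table") is easy to try with any off-the-shelf part-of-speech tagger. A minimal sketch using NLTK's default tagger rather than the Link parser used in the paper (the choice of NLTK is an assumption for illustration only):

    import nltk

    # One-time model downloads for the tokenizer and tagger.
    nltk.download("punkt", quiet=True)
    nltk.download("averaged_perceptron_tagger", quiet=True)

    for sentence in ("I table the motion", "I put the motion on the table"):
        print(nltk.pos_tag(nltk.word_tokenize(sentence)))

    # A reasonable tagger should label "table" as a verb (VB/VBP) in the
    # first sentence and as a noun (NN) in the second, which is exactly
    # the distinction the "bag of words" approach misses.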
  8. Hahn, U.: Informationslinguistik II : Einführung in das linguistische Information Retrieval (1985) 0.01
    0.00796252 = product of:
      0.01592504 = sum of:
        0.01592504 = product of:
          0.03185008 = sum of:
            0.03185008 = weight(_text_:i in 3116) [ClassicSimilarity], result of:
              0.03185008 = score(doc=3116,freq=4.0), product of:
                0.15441231 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04093939 = queryNorm
                0.20626646 = fieldWeight in 3116, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3116)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    (1) "Informationslinguistik I: Einfuehrung in das linguistische Information Retrieval" (2) "Informationslinguistik II: linguistische und statistische Verfahren im experimentellen Information Retrieval" (3) "Intelligente Informationssysteme: Verfahren der Kuenstlichen Intelligenz im experimentellen Information Retrieval" Kursabschnitt zu natuerlichsprachlichen Systemen (4) Spezialkurse zum automatischen Uebersetzen, Indexing und Retrieval, Abstracting usf. dienen zur Vertiefung informationslinguistischer Spezialthemen Die Kurse (1) und (3) gehoeren zu dem Pool der Pflichtveranstaltungen aller Studenten des Diplom-Studiengangs Informationswissenschaft, waehrend (2) und (4) lediglich zu den Wahlpflichtveranstaltungen zaehlen, die aber obligatorisch fuer die Studenten des Diplomstudiengangs sind, die ihren Schwerpunkt (z.B. in Form der Diplomarbeit) im Bereich Informationslinguistik suchen - fuer alle anderen Studenten zaehlen diese Kurse zum Zusatz angebot an Lehrveranstaltungen.
    The present script corresponds to the content of the course "Informationslinguistik II" as taught in the summer semesters of 1983 and 1984. Its content was completed in July 1983 and only editorially revised in January 1985. The script was produced under an explicit mandate of the project "Informationsvermittlung", which provided for the development of suitable teaching materials for postgraduate study in information science. Owing to the tight project schedule (1982-84), the script cannot be as fully matured and polished as common standards would require. Unlike the script "Informationslinguistik I" (HAHN 1985), the present script allows either a more method-oriented or a more system-oriented presentation of the information-linguistic concepts of experimental information retrieval (the tight frame of a single summer semester rules out covering both). Where possible, the choice should be made in light of the composition of the course: insofar as the experience gathered so far can be generalized, the more system-oriented presentation has proved more conducive to an understanding of information-linguistic problems and the corresponding solutions for a heterogeneous audience not exclusively interested in specializing in information linguistics. Within this nuance, the script already possesses acceptable substantive stability. Nevertheless, its publication should serve precisely as an invitation for critical comments, annotations, and additions to this curricular draft, and thereby further the disciplinary clarification of information linguistics.
  9. Sienel, J.; Weiss, M.; Laube, M.: Sprachtechnologien für die Informationsgesellschaft des 21. Jahrhunderts (2000) 0.01
    0.006933403 = product of:
      0.013866806 = sum of:
        0.013866806 = product of:
          0.027733613 = sum of:
            0.027733613 = weight(_text_:22 in 5557) [ClassicSimilarity], result of:
              0.027733613 = score(doc=5557,freq=2.0), product of:
                0.14336278 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04093939 = queryNorm
                0.19345059 = fieldWeight in 5557, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5557)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    26.12.2000 13:22:17
  10. Pinker, S.: Wörter und Regeln : Die Natur der Sprache (2000) 0.01
    0.006933403 = product of:
      0.013866806 = sum of:
        0.013866806 = product of:
          0.027733613 = sum of:
            0.027733613 = weight(_text_:22 in 734) [ClassicSimilarity], result of:
              0.027733613 = score(doc=734,freq=2.0), product of:
                0.14336278 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04093939 = queryNorm
                0.19345059 = fieldWeight in 734, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=734)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    19. 7.2002 14:22:31
  11. Semantik, Lexikographie und Computeranwendungen : Workshop ... (Bonn) : 1995.01.27-28 (1996) 0.01
    0.006933403 = product of:
      0.013866806 = sum of:
        0.013866806 = product of:
          0.027733613 = sum of:
            0.027733613 = weight(_text_:22 in 190) [ClassicSimilarity], result of:
              0.027733613 = score(doc=190,freq=2.0), product of:
                0.14336278 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04093939 = queryNorm
                0.19345059 = fieldWeight in 190, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=190)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    14. 4.2007 10:04:22
  12. Fóris, A.: Network theory and terminology (2013) 0.01
    0.006933403 = product of:
      0.013866806 = sum of:
        0.013866806 = product of:
          0.027733613 = sum of:
            0.027733613 = weight(_text_:22 in 1365) [ClassicSimilarity], result of:
              0.027733613 = score(doc=1365,freq=2.0), product of:
                0.14336278 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04093939 = queryNorm
                0.19345059 = fieldWeight in 1365, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1365)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    2. 9.2014 21:22:48
  13. Luo, L.; Ju, J.; Li, Y.-F.; Haffari, G.; Xiong, B.; Pan, S.: ChatRule: mining logical rules with large language models for knowledge graph reasoning (2023) 0.01
    0.006933403 = product of:
      0.013866806 = sum of:
        0.013866806 = product of:
          0.027733613 = sum of:
            0.027733613 = weight(_text_:22 in 1171) [ClassicSimilarity], result of:
              0.027733613 = score(doc=1171,freq=2.0), product of:
                0.14336278 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04093939 = queryNorm
                0.19345059 = fieldWeight in 1171, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1171)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    23.11.2023 19:07:22
  14. Semantic role universals and argument linking : theoretical, typological, and psycholinguistic perspectives (2006) 0.01
    0.0064346883 = product of:
      0.012869377 = sum of:
        0.012869377 = product of:
          0.025738753 = sum of:
            0.025738753 = weight(_text_:i in 3670) [ClassicSimilarity], result of:
              0.025738753 = score(doc=3670,freq=2.0), product of:
                0.15441231 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04093939 = queryNorm
                0.16668847 = fieldWeight in 3670, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3670)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Editor
    Bornkessel, I. et al.
  15. Spitkovsky, V.; Norvig, P.: From words to concepts and back : dictionaries for linking text, entities and ideas (2012) 0.01
    0.0064346883 = product of:
      0.012869377 = sum of:
        0.012869377 = product of:
          0.025738753 = sum of:
            0.025738753 = weight(_text_:i in 337) [ClassicSimilarity], result of:
              0.025738753 = score(doc=337,freq=2.0), product of:
                0.15441231 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04093939 = queryNorm
                0.16668847 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.03125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Human language is both rich and ambiguous. When we hear or read words, we resolve meanings to mental representations, for example recognizing and linking names to the intended persons, locations or organizations. Bridging words and meaning - from turning search queries into relevant results to suggesting targeted keywords for advertisers - is also Google's core competency, and important for many other tasks in information retrieval and natural language processing. We are happy to release a resource, spanning 7,560,141 concepts and 175,100,788 unique text strings, that we hope will help everyone working in these areas. How do we represent concepts? Our approach piggybacks on the unique titles of entries from an encyclopedia, which are mostly proper and common noun phrases. We consider each individual Wikipedia article as representing a concept (an entity or an idea), identified by its URL. Text strings that refer to concepts were collected using the publicly available hypertext of anchors (the text you click on in a web link) that point to each Wikipedia page, thus drawing on the vast link structure of the web. For every English article we harvested the strings associated with its incoming hyperlinks from the rest of Wikipedia, the greater web, and also anchors of parallel, non-English Wikipedia pages. Our dictionaries are cross-lingual, and any concept deemed too fine can be broadened to a desired level of generality using Wikipedia's groupings of articles into hierarchical categories. The data set contains triples, each consisting of (i) text, a short, raw natural language string; (ii) url, a related concept, represented by an English Wikipedia article's canonical location; and (iii) count, an integer indicating the number of times text has been observed connected with the concept's url. Our database thus includes weights that measure degrees of association. For example, the top two entries for football indicate that it is an ambiguous term, which is almost twice as likely to refer to what we in the US call soccer. See also: Spitkovsky, V.I., A.X. Chang: A cross-lingual dictionary for English Wikipedia concepts. In: http://nlp.stanford.edu/pubs/crosswikis.pdf.
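    Since the release is described as (text, url, count) triples, the following minimal sketch shows how such a dictionary could be loaded and queried; the one-triple-per-line, tab-separated file layout is an assumption for illustration, not the documented release format:

    from collections import defaultdict

    def load_dictionary(path):
        """Map each surface string to (concept_url, count) pairs, best first.

        Assumes one tab-separated (text, url, count) triple per line.
        """
        table = defaultdict(list)
        with open(path, encoding="utf-8") as f:
            for line in f:
                text, url, count = line.rstrip("\n").split("\t")
                table[text].append((url, int(count)))
        for candidates in table.values():
            candidates.sort(key=lambda pair: pair[1], reverse=True)
        return table

    def link_probabilities(table, text):
        """Turn the raw association counts into estimates of P(concept | text)."""
        candidates = table.get(text, [])
        total = sum(count for _, count in candidates)
        return [(url, count / total) for url, count in candidates]

    # Per the abstract's example, link_probabilities(table, "football")[0]
    # should surface the association-football (soccer) article with roughly
    # twice the weight of the American-football article.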
  16. Brown, T.B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; Agarwal, S.; Herbert-Voss, A.; Krueger, G.; Henighan, T.; Child, R.; Ramesh, A.; Ziegler, D.M.; Wu, J.; Winter, C.; Hesse, C.; Chen, M.; Sigler, E.; Litwin, M.; Gray, S.; Chess, B.; Clark, J.; Berner, C.; McCandlish, S.; Radford, A.; Sutskever, I.; Amodei, D.: Language models are few-shot learners (2020) 0.01
    0.0064346883 = product of:
      0.012869377 = sum of:
        0.012869377 = product of:
          0.025738753 = sum of:
            0.025738753 = weight(_text_:i in 872) [ClassicSimilarity], result of:
              0.025738753 = score(doc=872,freq=2.0), product of:
                0.15441231 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04093939 = queryNorm
                0.16668847 = fieldWeight in 872, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.03125 = fieldNorm(doc=872)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  17. Winterschladen, S.; Gurevych, I.: Die perfekte Suchmaschine : Forschungsgruppe entwickelt ein System, das artverwandte Begriffe finden soll (2006) 0.01
    0.0056303525 = product of:
      0.011260705 = sum of:
        0.011260705 = product of:
          0.02252141 = sum of:
            0.02252141 = weight(_text_:i in 5912) [ClassicSimilarity], result of:
              0.02252141 = score(doc=5912,freq=2.0), product of:
                0.15441231 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04093939 = queryNorm
                0.14585242 = fieldWeight in 5912, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=5912)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  18. Nagy T., I.: Detecting multiword expressions and named entities in natural language texts (2014) 0.01
    0.0056303525 = product of:
      0.011260705 = sum of:
        0.011260705 = product of:
          0.02252141 = sum of:
            0.02252141 = weight(_text_:i in 1536) [ClassicSimilarity], result of:
              0.02252141 = score(doc=1536,freq=2.0), product of:
                0.15441231 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04093939 = queryNorm
                0.14585242 = fieldWeight in 1536, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1536)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  19. Rötzer, F.: KI-Programm besser als Menschen im Verständnis natürlicher Sprache (2018) 0.01
    0.0055467226 = product of:
      0.011093445 = sum of:
        0.011093445 = product of:
          0.02218689 = sum of:
            0.02218689 = weight(_text_:22 in 4217) [ClassicSimilarity], result of:
              0.02218689 = score(doc=4217,freq=2.0), product of:
                0.14336278 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04093939 = queryNorm
                0.15476047 = fieldWeight in 4217, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4217)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 1.2018 11:32:44
  20. Schürmann, H.: Software scannt Radio- und Fernsehsendungen : Recherche in Nachrichtenarchiven erleichtert (2001) 0.00
    0.0048533822 = product of:
      0.0097067645 = sum of:
        0.0097067645 = product of:
          0.019413529 = sum of:
            0.019413529 = weight(_text_:22 in 5759) [ClassicSimilarity], result of:
              0.019413529 = score(doc=5759,freq=2.0), product of:
                0.14336278 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04093939 = queryNorm
                0.1354154 = fieldWeight in 5759, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=5759)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Handelsblatt. Nr.79 vom 24.4.2001, S.22

Types

  • a 82
  • el 13
  • m 9
  • s 7
  • p 3
  • x 3
  • d 1