Search (164 results, page 1 of 9)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.02
    0.022981133 = product of:
      0.05745283 = sum of:
        0.050935313 = product of:
          0.15280594 = sum of:
            0.15280594 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.15280594 = score(doc=562,freq=2.0), product of:
                0.27188796 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03206978 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.006517519 = product of:
          0.026070075 = sum of:
            0.026070075 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.026070075 = score(doc=562,freq=2.0), product of:
                0.11230291 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03206978 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.25 = coord(1/4)
      0.4 = coord(2/5)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf
    Date
    8. 1.2013 10:22:32
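The relevance score next to each hit comes from Lucene's ClassicSimilarity, decomposed in the explain tree shown for the first result. As a minimal sketch (the helper function name is ours), the arithmetic can be reproduced directly from the tf, idf, queryNorm, fieldNorm and coord values that tree lists:

```python
import math

def classic_sim_term(freq, idf, query_norm, field_norm):
    """One query term's raw contribution under Lucene ClassicSimilarity:
    score(term) = fieldWeight * queryWeight
                = (sqrt(freq) * idf * fieldNorm) * (idf * queryNorm)."""
    tf = math.sqrt(freq)                   # 1.4142135 for freq = 2.0
    query_weight = idf * query_norm        # e.g. 0.27188796 for the "3a" term
    field_weight = tf * idf * field_norm   # e.g. 0.56201804
    return field_weight * query_weight

# Values copied from the explain tree of the first hit (doc 562)
query_norm = 0.03206978
t_3a = classic_sim_term(2.0, 8.478011, query_norm, 0.046875) * (1 / 3)   # coord(1/3)
t_22 = classic_sim_term(2.0, 3.5018296, query_norm, 0.046875) * (1 / 4)  # coord(1/4)
score = (t_3a + t_22) * (2 / 5)            # coord(2/5): 2 of 5 query terms matched
print(score)                               # ≈ 0.022981133, the score listed above
```

The coord factors down-weight documents that match only a fraction of the query terms, which is why a rare term like "3a" (idf 8.478) dominates the total despite the coordination penalties.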
  2. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.01
    Source
    https://arxiv.org/abs/2212.06721
  3. Yang, C.C.; Luk, J.: Automatic generation of English/Chinese thesaurus based on a parallel corpus in laws (2003) 0.01
    Abstract
    The information available in languages other than English on the World Wide Web is increasing significantly. According to a report from Computer Economics in 1999, 54% of Internet users were English speakers ("English Will Dominate Web for Only Three More Years," Computer Economics, July 9, 1999, http://www.computereconomics.com/new4/pr/pr990610.html). However, it was predicted that Internet use would grow only 60% among English speakers versus 150% among non-English speakers over the following five years; by 2005, 57% of Internet users would be non-English speakers. A report by CNN.com in 2000 showed that the number of Internet users in China had increased from 8.9 million to 16.9 million between January and June of 2000 ("Report: China Internet users double to 17 million," CNN.com, July, 2000, http://cnn.org/2000/TECH/computing/07/27/china.internet.reut/index.html). According to Nielsen/NetRatings, there was a dramatic leap from 22.5 million to 56.6 million Internet users from 2001 to 2002, making China the second largest at-home Internet population in the world in 2002 (the US Internet population was 166 million) (Robyn Greenspan, "China Pulls Ahead of Japan," Internet.com, April 22, 2002, http://cyberatlas.internet.com/big-picture/geographics/article/0,,5911_1013841,00.html). All of this evidence reveals the importance of cross-lingual research to satisfy needs in the near future. Digital library research has in the past focused on structural and semantic interoperability. Searching and retrieving objects across variations in protocols, formats and disciplines has been widely explored (Schatz, B., & Chen, H. (1999). Digital libraries: technological advances and social impacts. IEEE Computer, Special Issue on Digital Libraries, February, 32(2), 45-50; Chen, H., Yen, J., & Yang, C.C. (1999). International activities: development of Asian digital libraries. IEEE Computer, Special Issue on Digital Libraries, 32(2), 48-49.).
However, research on crossing language boundaries, especially between European and Oriental languages, is still in its initial stage. In this proposal, we focus on cross-lingual semantic interoperability by developing automatic generation of a cross-lingual thesaurus based on an English/Chinese parallel corpus. When searchers encounter retrieval problems, professional librarians usually consult a thesaurus to identify other relevant vocabulary. For searching across language boundaries, a cross-lingual thesaurus generated by co-occurrence analysis and a Hopfield network can supply additional semantically relevant terms that cannot be obtained from a dictionary. In particular, the automatically generated cross-lingual thesaurus is able to capture unknown words that do not exist in a dictionary, such as names of persons, organizations, and events. Due to Hong Kong's unique historical background, both English and Chinese are used as official languages in all legal documents. English/Chinese cross-lingual information retrieval is therefore critical for applications in the courts and the government. In this paper, we develop an automatic thesaurus with a Hopfield network based on a parallel corpus collected from the Web site of the Department of Justice of the Hong Kong Special Administrative Region (HKSAR) Government. Experiments are conducted to measure the precision and recall of the automatically generated English/Chinese thesaurus. The results show that such a thesaurus is a promising tool for retrieving relevant terms, especially in the language other than that of the input term; the direct translation of the input term can also be retrieved in most cases.
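The abstract's core mechanism, spreading activation over a co-occurrence network in Hopfield style, can be sketched as follows. The vocabulary and all weights here are invented toy values, not data from the paper; in the actual system the weights come from co-occurrence analysis of the HKSAR parallel corpus.

```python
import numpy as np

# Toy vocabulary mixing English and Chinese legal terms (illustrative only)
terms = ["contract", "合約", "breach", "違約", "court"]

# Symmetric term-by-term co-occurrence weights (invented values)
W = np.array([
    [0.0, 0.9, 0.4, 0.3, 0.2],
    [0.9, 0.0, 0.3, 0.5, 0.1],
    [0.4, 0.3, 0.0, 0.8, 0.3],
    [0.3, 0.5, 0.8, 0.0, 0.2],
    [0.2, 0.1, 0.3, 0.2, 0.0],
])

def hopfield_suggest(query_idx, iterations=10, theta=0.1):
    """Spreading activation in Hopfield style: activate the query term,
    propagate activation through the co-occurrence network until it
    settles, and return the other terms above an activation threshold."""
    a = np.zeros(len(terms))
    a[query_idx] = 1.0
    for _ in range(iterations):
        net = W @ a                      # weighted input to every neuron
        a = np.maximum(a, np.tanh(net))  # sigmoid-like transfer, monotone update
    a[query_idx] = 0.0                   # exclude the query term itself
    return [terms[i] for i in np.argsort(-a) if a[i] > theta]

print(hopfield_suggest(terms.index("contract")))
```

Because activation crosses language boundaries through the weight matrix, a query in one language surfaces related terms in both, which is what makes the approach attractive for cross-lingual retrieval.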
  4. Semantik, Lexikographie und Computeranwendungen : Workshop ... (Bonn) : 1995.01.27-28 (1996) 0.01
    Classification
    Spr F 510
    Spr F 87 / Lexikographie
    Date
    14. 4.2007 10:04:22
    SBB
    Spr F 510
    Spr F 87 / Lexikographie
  5. Rieger, F.: Lügende Computer (2023) 0.01
    Date
    16. 3.2023 19:22:55
  6. Schröter, F.; Meyer, U.: Entwicklung sprachlicher Handlungskompetenz in Englisch mit Hilfe eines Multimedia-Sprachlernsystems (2000) 0.01
    Abstract
    Companies increasingly operate globally. For the majority of their employees, this creates the need to master English, the "lingua franca" of worldwide business relations, and to use it effectively, including in its intercultural dimension. "Globalization has made it impossible to act on the free market without foreign-language skills." (Trends in der Personalentwicklung, PEF-Consulting, Vienna) Achieving intercultural communicative competence in the foreign language is the goal of the language-learning system "Sunpower - Communication Strategies in English for Business Purposes", which was developed at the Department of Languages of the Fachhochschule Köln and came onto the market in the spring of this year. The system was created in cooperation between the Department of Languages of the Fachhochschule Köln, an English solar-energy company, a management consulting agency, and the languages department of a London university.
  7. Schneider, J.W.; Borlund, P.: A bibliometric-based semiautomatic approach to identification of candidate thesaurus terms : parsing and filtering of noun phrases from citation contexts (2005) 0.01
    Date
    8. 3.2007 19:55:22
    Source
    Context: nature, impact and role. 5th International Conference on Conceptions of Library and Information Science, CoLIS 2005, Glasgow, UK, June 2005. Ed. by F. Crestani and I. Ruthven
  8. Sünkler, S.; Kerkmann, F.; Schultheiß, S.: Ok Google ... the end of search as we know it : sprachgesteuerte Websuche im Test (2018) 0.01
    Abstract
    Voice-control systems that assist users on spoken command are becoming increasingly popular with the spread of smartphones and speaker systems such as Amazon Echo or Google Home. One of their central applications is searching in web search engines. But how does "googling" work when the user speaks the query instead of typing it? A project team at HAW Hamburg pursued this question and, on behalf of Deutsche Telekom, examined how effectively, efficiently and satisfactorily Google Now, Apple Siri, Microsoft Cortana and Amazon Fire OS perform. The study identified the systems' strengths and weaknesses as well as success criteria for high usability. These findings led to a prototype of an optimal voice web search.
  9. Lorenz, S.: Konzeption und prototypische Realisierung einer begriffsbasierten Texterschließung (2006) 0.01
    Abstract
    This thesis develops an approach that overcomes the fixation on the word and the weaknesses that come with it. It allows information to be extracted on the basis of the concepts represented and thus forms the foundation of content-oriented text analysis. A subsequent prototype implementation serves to verify the design and to gauge and evaluate its possibilities and limits. Work on information extraction is devoted almost exclusively to English, where very good results are achieved, especially for named entities. Results are markedly worse for less regular languages such as German. For this reason, and for practical considerations, in particular the author's familiarity with it, German is the primary object of study. Moving away from a narrow term orientation while emphasizing the represented concepts suggests that not only the particular words but also the particular language becomes secondary. To keep this work within bounds, the investigation of this point concentrates on the difficulties and peculiarities associated with different languages.
    Date
    22. 3.2015 9:17:30
  10. Vichot, F.; Wolinski, F.; Tomeh, J.; Guennou, S.; Dillet, B.; Aydjian, S.: High precision hypertext navigation based on NLP automatic extractions (1997) 0.01
  11. Luo, L.; Ju, J.; Li, Y.-F.; Haffari, G.; Xiong, B.; Pan, S.: ChatRule: mining logical rules with large language models for knowledge graph reasoning (2023) 0.01
    Date
    23.11.2023 19:07:22
  12. Schneider, R.: Web 3.0 ante portas? : Integration von Social Web und Semantic Web (2008) 0.01
    Abstract
    The Internet is changing, and with it its conditions of publication and reception. What opportunities do the two visions of the future currently being discussed in parallel, the Social Web and the Semantic Web, offer? To answer this question, the article examines the foundations of both models in terms of application and technology, but also highlights their shortcomings and the added value of a media-appropriate combination. Using the grammatical online information system grammis as an example, it sketches a strategy for integratively exploiting the respective strengths of both.
    Date
    22. 1.2011 10:38:28
  13. Latzer, F.-M.: Yo Computa! (1997) 0.01
  14. Blanchon, E.: Terminology software : pt.1.2 (1995) 0.01
    Language
    f
  15. Gonzalo, J.; Verdejo, F.; Peters, C.; Calzolari, N.: Applying EuroWordNet to cross-language text retrieval (1998) 0.01
  16. Pinker, S.: Wörter und Regeln : Die Natur der Sprache (2000) 0.01
    Abstract
    How do children learn to speak? What clues do their errors during language acquisition give about the course of the learning process, true to the motto "kids say the darnedest things"? And how do computers help, or why have they so far failed, in simulating the neural networks involved in the complicated fabric of human language? In his new book "Wörter und Regeln" (Words and Rules), the well-known US cognitive scientist Steven Pinker (Der Sprachinstinkt) once again undertakes an excursion into the realm of language that is as informative as it is entertaining. What makes it especially exciting and readable: the professor at the Massachusetts Institute of Technology confidently illuminates both scientific and humanistic aspects. On the one hand he conveys linguistic foundations in the footsteps of Ferdinand de Saussure, such as generative grammar, offers an excursion through the history of language, and devotes a chapter of its own to the "horrors of the German language". On the other hand he also covers the latest imaging methods, which show what happens in the brain during language processing. Pinker's theory, which runs through this puzzle of diverse aspects: language consists at its core of two components, a mental lexicon of remembered words and a mental grammar of combinatorial rules. Concretely, this means we memorize known items and their graded, intersecting features, but we also generate new mental products by applying rules. It is precisely from this, Pinker concludes, that the richness and enormous expressive power of our language arise.
    Date
    19. 7.2002 14:22:31
  17. Rötzer, F.: KI-Programm besser als Menschen im Verständnis natürlicher Sprache (2018) 0.00
    Date
    22. 1.2018 11:32:44
  18. Rösener, C.: ¬Die Stecknadel im Heuhaufen : Natürlichsprachlicher Zugang zu Volltextdatenbanken (2005) 0.00
    0.004627266 = product of:
      0.011568164 = sum of:
        0.007505294 = product of:
          0.022515882 = sum of:
            0.022515882 = weight(_text_:f in 548) [ClassicSimilarity], result of:
              0.022515882 = score(doc=548,freq=2.0), product of:
                0.12782328 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.03206978 = queryNorm
                0.17614852 = fieldWeight in 548, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.03125 = fieldNorm(doc=548)
          0.33333334 = coord(1/3)
        0.0040628705 = product of:
          0.016251482 = sum of:
            0.016251482 = weight(_text_:einer in 548) [ClassicSimilarity], result of:
              0.016251482 = score(doc=548,freq=2.0), product of:
                0.108595535 = queryWeight, product of:
                  3.3862264 = idf(docFreq=4066, maxDocs=44218)
                  0.03206978 = queryNorm
                0.14965148 = fieldWeight in 548, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3862264 = idf(docFreq=4066, maxDocs=44218)
                  0.03125 = fieldNorm(doc=548)
          0.25 = coord(1/4)
      0.4 = coord(2/5)
    
    Abstract
    Curiously, the means that today's information and knowledge society provides for obtaining and exchanging information have at the same time created an ever more acute new problem: it is becoming harder and harder for each individual to select the truly relevant items from the enormous flood of information on offer. This work examines how natural-language interfaces can improve an information seeker's access to full-text databases. The underlying research questions are first treated in detail; the author then describes various approaches to a solution and, using a natural-language interface for the Brockhaus Multimedial 2004, presents their successful implementation.
    Content
    5: Interaction 5.1 Question-answering and dialogue systems: research and projects 5.2 Representation and visualization of knowledge 5.3 The dialogue system within the LeWi project 5.4 Result display and answer presentation in the LeWi context 6: Test environments and results 7: Results and outlook 7.1 Initial situation 7.2 Conclusions 7.3 Outlook Appendix A Excerpts from the coarse and fine classification of the BMM Appendix B MPRO - Formal description of the most important features ... Appendix C Question typology with example sentences (excerpt) Appendix D Semantic features in the morphological lexicon (excerpt) Appendix E Rule examples for question-type assignment Appendix F List of the possible searches in the LeWi dialogue module (excerpt) Appendix G Complete dialogue tree at the start of the project Appendix H Status states for determining follow-up questions (excerpt)
  19. Hull, D.; Ait-Mokhtar, S.; Chuat, M.; Eisele, A.; Gaussier, E.; Grefenstette, G.; Isabelle, P.; Samulesson, C.; Segand, F.: Language technologies and patent search and classification (2001) 0.00
    0.0045031765 = product of:
      0.022515882 = sum of:
        0.022515882 = product of:
          0.06754764 = sum of:
            0.06754764 = weight(_text_:f in 6318) [ClassicSimilarity], result of:
              0.06754764 = score(doc=6318,freq=2.0), product of:
                0.12782328 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.03206978 = queryNorm
                0.52844554 = fieldWeight in 6318, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6318)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
  20. Rodriguez, H.; Climent, S.; Vossen, P.; Bloksma, L.; Peters, W.; Alonge, A.; Bertagna, F.; Roventini, A.: ¬The top-down strategy for building EuroWordNet : vocabulary coverage, base concept and top ontology (1998) 0.00
    0.0045031765 = product of:
      0.022515882 = sum of:
        0.022515882 = product of:
          0.06754764 = sum of:
            0.06754764 = weight(_text_:f in 6441) [ClassicSimilarity], result of:
              0.06754764 = score(doc=6441,freq=2.0), product of:
                0.12782328 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.03206978 = queryNorm
                0.52844554 = fieldWeight in 6441, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6441)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
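The explain trees attached to the entries above follow classic Lucene TF-IDF scoring: fieldWeight = tf(freq) · idf · fieldNorm with tf = sqrt(freq), queryWeight = idf · queryNorm, the per-term score is queryWeight · fieldWeight, and idf = ln(maxDocs / (docFreq + 1)) + 1. A minimal sketch that reproduces the numbers shown for entry 20 (function and variable names are my own, not part of the catalog):

```python
import math

def idf(doc_freq, max_docs):
    # Lucene classic IDF: ln(maxDocs / (docFreq + 1)) + 1
    return math.log(max_docs / (doc_freq + 1)) + 1

def field_weight(freq, idf_value, field_norm):
    # fieldWeight = tf(freq) * idf * fieldNorm, with tf = sqrt(freq)
    return math.sqrt(freq) * idf_value * field_norm

def term_score(freq, idf_value, query_norm, field_norm):
    # score = queryWeight * fieldWeight, with queryWeight = idf * queryNorm
    return (idf_value * query_norm) * field_weight(freq, idf_value, field_norm)

# Values from entry 20: weight(_text_:f in 6441), freq=2.0, fieldNorm=0.09375
i = idf(2232, 44218)                          # ≈ 3.985786
fw = field_weight(2.0, i, 0.09375)            # ≈ 0.52844554
s = term_score(2.0, i, 0.03206978, 0.09375)   # ≈ 0.06754764
print(fw, s)
```

The small discrepancies in the last digits come from the engine computing in single precision; the structure of the calculation matches the explain output term for term.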
    

Years

Languages

  • d 81
  • e 74
  • f 6
  • m 4

Types

  • a 124
  • m 19
  • el 14
  • s 13
  • x 8
  • p 2
  • d 1

Classifications