Search (158 results, page 1 of 8)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.34
    0.3429968 = product of:
      0.41159618 = sum of:
        0.05567471 = product of:
          0.16702412 = sum of:
            0.16702412 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.16702412 = score(doc=562,freq=2.0), product of:
                0.2971864 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03505379 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.16702412 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.16702412 = score(doc=562,freq=2.0), product of:
            0.2971864 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03505379 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.012374603 = product of:
          0.024749206 = sum of:
            0.024749206 = weight(_text_:web in 562) [ClassicSimilarity], result of:
              0.024749206 = score(doc=562,freq=2.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.21634221 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
        0.16702412 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.16702412 = score(doc=562,freq=2.0), product of:
            0.2971864 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03505379 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.009498609 = product of:
          0.028495826 = sum of:
            0.028495826 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.028495826 = score(doc=562,freq=2.0), product of:
                0.1227524 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03505379 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
      0.8333333 = coord(5/6)
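    A quick way to verify the arithmetic: the tree above is Lucene "explain" output for ClassicSimilarity, where each leaf equals queryWeight x fieldWeight, with queryWeight = idf x queryNorm, fieldWeight = tf x idf x fieldNorm, tf = sqrt(termFreq), idf = ln(maxDocs/(docFreq+1)) + 1, and coord(k/n) rescaling a sum by the fraction of matching query clauses. A minimal Python sketch reproducing the 0.16702412 leaf and the total from the figures above (the helper names are ours):

      import math

      def idf(doc_freq, max_docs):
          # Lucene ClassicSimilarity: idf = ln(maxDocs / (docFreq + 1)) + 1
          return math.log(max_docs / (doc_freq + 1)) + 1

      def leaf_score(freq, doc_freq, max_docs, query_norm, field_norm):
          tf = math.sqrt(freq)                       # 1.4142135 for freq=2.0
          term_idf = idf(doc_freq, max_docs)         # 8.478011 for docFreq=24, maxDocs=44218
          query_weight = term_idf * query_norm       # 0.2971864
          field_weight = tf * term_idf * field_norm  # 0.56201804
          return query_weight * field_weight

      print(leaf_score(2.0, 24, 44218, 0.03505379, 0.046875))  # ~0.1670241, the "_text_:2f" leaf
      print(0.41159618 * 5 / 6)                                # ~0.3429968, the sum scaled by coord(5/6)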
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  2. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.25
    0.2455307 = product of:
      0.36829603 = sum of:
        0.16702412 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.16702412 = score(doc=563,freq=2.0), product of:
            0.2971864 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03505379 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.024749206 = product of:
          0.049498413 = sum of:
            0.049498413 = weight(_text_:web in 563) [ClassicSimilarity], result of:
              0.049498413 = score(doc=563,freq=8.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.43268442 = fieldWeight in 563, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=563)
          0.5 = coord(1/2)
        0.16702412 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.16702412 = score(doc=563,freq=2.0), product of:
            0.2971864 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03505379 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.009498609 = product of:
          0.028495826 = sum of:
            0.028495826 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
              0.028495826 = score(doc=563,freq=2.0), product of:
                0.1227524 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03505379 = queryNorm
                0.23214069 = fieldWeight in 563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=563)
          0.33333334 = coord(1/3)
      0.6666667 = coord(4/6)
    
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language and domain independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human written summaries in a large collection of web-pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the aligning process from a training set and focuses on selecting high quality multi-word terms from human written summaries to generate suitable results for web-page summarization.
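    The extraction model summarized above pairs an association ("glue") measure with the LocalMaxs algorithm, which keeps an n-gram as a term candidate when its glue is a local maximum relative to its immediate sub- and super-grams. A schematic Python sketch of that filter follows; the SCP glue used here is a common stand-in, not one of the three measures proposed in the thesis:

      from collections import Counter

      def ngrams(tokens, n):
          return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

      def scp_glue(g, counts):
          # Symmetric conditional probability (fair SCP), a standard glue function
          n, f = len(g), counts[len(g)][g]
          denom = sum(counts[i][g[:i]] * counts[n - i][g[i:]] for i in range(1, n)) / (n - 1)
          return f * f / denom if denom else 0.0

      def local_maxs(tokens, glue, max_n=4):
          # Keep n-grams whose glue is not exceeded by any contained (n-1)-gram
          # or containing (n+1)-gram: the LocalMaxs criterion
          counts = {n: Counter(ngrams(tokens, n)) for n in range(1, max_n + 2)}
          terms = []
          for n in range(2, max_n + 1):
              for g in counts[n]:
                  subs = [s for s in (g[:-1], g[1:]) if len(s) > 1]
                  supers = [s for s in counts[n + 1] if s[:-1] == g or s[1:] == g]
                  if all(glue(g, counts) > glue(s, counts) for s in subs) and \
                     all(glue(g, counts) >= glue(s, counts) for s in supers):
                      terms.append(" ".join(g))
          return terms

      text = "multi word term extraction helps web page summarization and multi word term extraction needs no training data"
      print(local_maxs(text.split(), scp_glue))  # bigram candidates such as 'multi word'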
    Content
    A thesis presented to the University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
  3. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.19
    0.19486147 = product of:
      0.38972294 = sum of:
        0.05567471 = product of:
          0.16702412 = sum of:
            0.16702412 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.16702412 = score(doc=862,freq=2.0), product of:
                0.2971864 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03505379 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
        0.16702412 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.16702412 = score(doc=862,freq=2.0), product of:
            0.2971864 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03505379 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
        0.16702412 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.16702412 = score(doc=862,freq=2.0), product of:
            0.2971864 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03505379 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
      0.5 = coord(3/6)
    
    Source
    https://arxiv.org/abs/2212.06721
  4. Doszkocs, T.E.; Zamora, A.: Dictionary services and spelling aids for Web searching (2004) 0.02
    0.018741453 = product of:
      0.056224357 = sum of:
        0.017861202 = product of:
          0.035722405 = sum of:
            0.035722405 = weight(_text_:web in 2541) [ClassicSimilarity], result of:
              0.035722405 = score(doc=2541,freq=6.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.3122631 = fieldWeight in 2541, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2541)
          0.5 = coord(1/2)
        0.038363155 = product of:
          0.05754473 = sum of:
            0.023962079 = weight(_text_:29 in 2541) [ClassicSimilarity], result of:
              0.023962079 = score(doc=2541,freq=2.0), product of:
                0.12330827 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03505379 = queryNorm
                0.19432661 = fieldWeight in 2541, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2541)
            0.033582654 = weight(_text_:22 in 2541) [ClassicSimilarity], result of:
              0.033582654 = score(doc=2541,freq=4.0), product of:
                0.1227524 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03505379 = queryNorm
                0.27358043 = fieldWeight in 2541, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2541)
          0.6666667 = coord(2/3)
      0.33333334 = coord(2/6)
    
    Abstract
    The Specialized Information Services Division (SIS) of the National Library of Medicine (NLM) provides Web access to more than a dozen scientific databases on toxicology and the environment on TOXNET. Search queries on TOXNET often include misspelled or variant English words, medical and scientific jargon and chemical names. Following the example of search engines like Google and ClinicalTrials.gov, we set out to develop a spelling "suggestion" system for increased recall and precision in TOXNET searching. This paper describes development of dictionary technology that can be used in a variety of applications such as orthographic verification, writing aid, natural language processing, and information storage and retrieval. The design of the technology allows building complex applications using the components developed in the earlier phases of the work in a modular fashion without extensive rewriting of computer code. Since many of the potential applications envisioned for this work have on-line or web-based interfaces, the dictionaries and other computer components must have fast response and must be adaptable to open-ended database vocabularies, including chemical nomenclature. The dictionary vocabulary for this work was derived from SIS and other databases and specialized resources, such as NLM's Unified Medical Language System (UMLS). The resulting technology, A-Z Dictionary (AZdict), has three major constituents: 1) the vocabulary list; 2) the word attributes that define part of speech and morphological relationships between words in the list; and 3) a set of programs that implements the retrieval of words and their attributes and determines similarity between words (ChemSpell). These three components can be used in various applications such as spelling verification, spelling aid, part-of-speech tagging, paraphrasing, and many other natural language processing functions.
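    The spelling "suggestion" behaviour described above can be approximated by ranking dictionary entries by edit distance to the query term. This is a generic sketch under our own assumptions, not the AZdict/ChemSpell implementation:

      def edit_distance(a, b):
          # Classic Levenshtein dynamic programme
          prev = list(range(len(b) + 1))
          for i, ca in enumerate(a, 1):
              cur = [i]
              for j, cb in enumerate(b, 1):
                  cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
              prev = cur
          return prev[-1]

      def suggest(word, vocabulary, max_dist=2, k=5):
          # Return the k closest vocabulary entries within the distance cut-off
          scored = sorted((edit_distance(word, v), v) for v in vocabulary)
          return [v for d, v in scored if d <= max_dist][:k]

      vocab = ["toxicology", "toxin", "dioxin", "taxonomy"]
      print(suggest("toxicolgy", vocab))  # -> ['toxicology']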
    Date
    14. 8.2004 17:22:56
    Source
    Online. 28(2004) no.3, S.22-29
  5. Schneider, R.: Web 3.0 ante portas? : Integration von Social Web und Semantic Web (2008) 0.02
    0.016426176 = product of:
      0.049278524 = sum of:
        0.038196813 = product of:
          0.07639363 = sum of:
            0.07639363 = weight(_text_:web in 4184) [ClassicSimilarity], result of:
              0.07639363 = score(doc=4184,freq=14.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.6677857 = fieldWeight in 4184, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4184)
          0.5 = coord(1/2)
        0.01108171 = product of:
          0.03324513 = sum of:
            0.03324513 = weight(_text_:22 in 4184) [ClassicSimilarity], result of:
              0.03324513 = score(doc=4184,freq=2.0), product of:
                0.1227524 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03505379 = queryNorm
                0.2708308 = fieldWeight in 4184, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4184)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    The Internet as a medium is in flux, and with it its conditions of publication and reception are changing. What opportunities do the two currently and concurrently discussed visions of the future, the Social Web and the Semantic Web, offer? To answer this question, the article examines the foundations of both models with respect to applications and technology, and also sheds light on their shortcomings as well as the added value of a combination appropriate to the medium. Using the grammatical online information system grammis as an example, a strategy for the integrative use of the respective strengths is sketched.
    Date
    22. 1.2011 10:38:28
    Source
    Kommunikation, Partizipation und Wirkungen im Social Web, Band 1. Hrsg.: A. Zerfaß u.a
    Theme
    Semantic Web
  6. Clark, M.; Kim, Y.; Kruschwitz, U.; Song, D.; Albakour, D.; Dignum, S.; Beresi, U.C.; Fasli, M.; De Roeck, A.: Automatically structuring domain knowledge from text : an overview of current research (2012) 0.01
    0.010351777 = product of:
      0.031055331 = sum of:
        0.017500332 = product of:
          0.035000663 = sum of:
            0.035000663 = weight(_text_:web in 2738) [ClassicSimilarity], result of:
              0.035000663 = score(doc=2738,freq=4.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.3059541 = fieldWeight in 2738, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2738)
          0.5 = coord(1/2)
        0.013555 = product of:
          0.040664997 = sum of:
            0.040664997 = weight(_text_:29 in 2738) [ClassicSimilarity], result of:
              0.040664997 = score(doc=2738,freq=4.0), product of:
                0.12330827 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03505379 = queryNorm
                0.3297832 = fieldWeight in 2738, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2738)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    This paper presents an overview of automatic methods for building domain knowledge structures (domain models) from text collections. Applications of domain models have a long history within knowledge engineering and artificial intelligence. In the last couple of decades they have surfaced noticeably as a useful tool within natural language processing, information retrieval and semantic web technology. Inspired by the ubiquitous propagation of domain model structures that are emerging in several research disciplines, we give an overview of the current research landscape and some techniques and approaches. We will also discuss trade-offs between different approaches and point to some recent trends.
    Content
    Contribution to a special issue "Soft Approaches to IA on the Web". Cf.: doi:10.1016/j.ipm.2011.07.002.
    Date
    29. 1.2016 18:29:51
  7. Rötzer, F.: KI-Programm besser als Menschen im Verständnis natürlicher Sprache (2018) 0.01
    0.010210888 = product of:
      0.030632664 = sum of:
        0.024300257 = product of:
          0.048600513 = sum of:
            0.048600513 = weight(_text_:seite in 4217) [ClassicSimilarity], result of:
              0.048600513 = score(doc=4217,freq=2.0), product of:
                0.19633847 = queryWeight, product of:
                  5.601063 = idf(docFreq=443, maxDocs=44218)
                  0.03505379 = queryNorm
                0.24753433 = fieldWeight in 4217, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.601063 = idf(docFreq=443, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4217)
          0.5 = coord(1/2)
        0.0063324063 = product of:
          0.018997218 = sum of:
            0.018997218 = weight(_text_:22 in 4217) [ClassicSimilarity], result of:
              0.018997218 = score(doc=4217,freq=2.0), product of:
                0.1227524 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03505379 = queryNorm
                0.15476047 = fieldWeight in 4217, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4217)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Now things seem to be getting serious. For the first time, an AI program developed by the Chinese Alibaba Group has beaten humans at answering questions and understanding text. The Chinese government wants to make the country the leader in the development of artificial intelligence and has drawn up a national strategy for it. To this end, the Ministry of Science and Technology appointed the Internet companies Baidu, Alibaba and Tencent, together with iFlyTek, as the first national team for developing the next generation of AI technology. Baidu is responsible for developing autonomous vehicles; Alibaba for developing clouds for "city brains" (smart cities that are to adapt to their inhabitants and their environment); Tencent for developing computer vision for medical applications; and iFlyTek for "voice intelligence". The four companies are to build open platforms that other firms and start-ups can use as well. In addition, a technology park for AI development is being built near Beijing for one billion US dollars. This is, of course, not only about civilian applications but also about military ones. The USA still has more AI companies, but China already ranks second, and the Pentagon is worried. China is evidently making rapid progress. At the end of 2017 the AI company iFlyTek, which initially specialized in voice recognition and digital assistants, presented a robot that had passed the written test of the national medical examination. The robot had not only been fed an immense body of knowledge from 53 medical textbooks, 2 million medical records and 400,000 medical texts and reports; it is also said to have taken over clinical experience and case diagnoses from medical experts. Since China suffers from a shortage of doctors, above all in rural areas, it is to be deployed as an assistant that produces a first diagnosis by automatically evaluating patient data and otherwise supports physicians with suggestions.
    Date
    22. 1.2018 11:32:44
  8. Bian, G.-W.; Chen, H.-H.: Cross-language information access to multilingual collections on the Internet (2000) 0.01
    0.008999648 = product of:
      0.02699894 = sum of:
        0.017500332 = product of:
          0.035000663 = sum of:
            0.035000663 = weight(_text_:web in 4436) [ClassicSimilarity], result of:
              0.035000663 = score(doc=4436,freq=4.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.3059541 = fieldWeight in 4436, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4436)
          0.5 = coord(1/2)
        0.009498609 = product of:
          0.028495826 = sum of:
            0.028495826 = weight(_text_:22 in 4436) [ClassicSimilarity], result of:
              0.028495826 = score(doc=4436,freq=2.0), product of:
                0.1227524 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03505379 = queryNorm
                0.23214069 = fieldWeight in 4436, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4436)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    The language barrier is the major problem that people face in searching for, retrieving, and understanding multilingual collections on the Internet. This paper deals with query translation and document translation in a Chinese-English information retrieval system called MTIR. Bilingual dictionary and monolingual corpus-based approaches are adopted to select suitable translated query terms. A machine transliteration algorithm is introduced to resolve proper name searching. We consider several design issues for document translation, including which material is translated, what roles the HTML tags play in translation, what the tradeoff is between speed performance and translation performance, and what form the translated result is presented in. About 100,000 Web pages translated in the last 4 months of 1997 are used for a quantitative study of online and real-time Web page translation.
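    The dictionary-plus-corpus selection of translated query terms can be illustrated schematically: for each source term, choose the dictionary candidate that co-occurs most strongly with the other terms' candidates in a monolingual target corpus. The sketch below is our own simplification; MTIR's actual selection model is not specified in the abstract:

      def translate_query(terms, bilingual_dict, cooccur):
          # Pick, for each source term, the candidate translation that
          # co-occurs most with the other terms' candidate translations
          chosen = []
          for term in terms:
              candidates = bilingual_dict.get(term, [term])  # pass unknown terms through
              best = max(candidates,
                         key=lambda c: sum(cooccur.get(frozenset((c, o)), 0)
                                           for other in terms if other != term
                                           for o in bilingual_dict.get(other, [other])))
              chosen.append(best)
          return chosen

      bi = {"银行": ["bank", "shore"], "利率": ["interest rate"]}
      co = {frozenset(("bank", "interest rate")): 42, frozenset(("shore", "interest rate")): 1}
      print(translate_query(["银行", "利率"], bi, co))  # -> ['bank', 'interest rate']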
    Date
    16. 2.2000 14:22:39
  9. Babik, W.: Keywords as linguistic tools in information and knowledge organization (2017) 0.01
    0.00853978 = product of:
      0.02561934 = sum of:
        0.0144370375 = product of:
          0.028874075 = sum of:
            0.028874075 = weight(_text_:web in 3510) [ClassicSimilarity], result of:
              0.028874075 = score(doc=3510,freq=2.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.25239927 = fieldWeight in 3510, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3510)
          0.5 = coord(1/2)
        0.011182303 = product of:
          0.033546906 = sum of:
            0.033546906 = weight(_text_:29 in 3510) [ClassicSimilarity], result of:
              0.033546906 = score(doc=3510,freq=2.0), product of:
                0.12330827 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03505379 = queryNorm
                0.27205724 = fieldWeight in 3510, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3510)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Source
    Theorie, Semantik und Organisation von Wissen: Proceedings der 13. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und dem 13. Internationalen Symposium der Informationswissenschaft der Higher Education Association for Information Science (HI) Potsdam (19.-20.03.2013): 'Theory, Information and Organization of Knowledge' / Proceedings der 14. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und Natural Language & Information Systems (NLDB) Passau (16.06.2015): 'Lexical Resources for Knowledge Organization' / Proceedings des Workshops der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) auf der SEMANTICS Leipzig (1.09.2014): 'Knowledge Organization and Semantic Web' / Proceedings des Workshops der Polnischen und Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) Cottbus (29.-30.09.2011): 'Economics of Knowledge Production and Organization'. Hrsg. von W. Babik, H.P. Ohly u. K. Weber
  10. Rolland, M.T.: Logotechnik als Grundlage einer vollautomatischen Sprachverarbeitung (1995) 0.01
    0.008100086 = product of:
      0.048600513 = sum of:
        0.048600513 = product of:
          0.09720103 = sum of:
            0.09720103 = weight(_text_:seite in 1313) [ClassicSimilarity], result of:
              0.09720103 = score(doc=1313,freq=2.0), product of:
                0.19633847 = queryWeight, product of:
                  5.601063 = idf(docFreq=443, maxDocs=44218)
                  0.03505379 = queryNorm
                0.49506867 = fieldWeight in 1313, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.601063 = idf(docFreq=443, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1313)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    With the help of Logotechnik, the purely semantics-oriented methods of language processing, it is possible to see through the rules and structural regularities of language and thus make it accessible to fully automatic processing. Semantics here means the mental side of language, which implies its syntax. At the centre of the discussion stands the word, its content, and the language structures this content determines. On the basis of these insights into the structure of language, the design of a dialogue system, specifically a system for knowledge retrieval, is presented. The paper closes with pointers to further possible applications, among which machine translation is of central importance
  11. Melby, A.: Some notes on 'The proper place of men and machines in language translation' (1997) 0.01
    0.0074213385 = product of:
      0.04452803 = sum of:
        0.04452803 = product of:
          0.06679204 = sum of:
            0.033546906 = weight(_text_:29 in 330) [ClassicSimilarity], result of:
              0.033546906 = score(doc=330,freq=2.0), product of:
                0.12330827 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03505379 = queryNorm
                0.27205724 = fieldWeight in 330, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=330)
            0.03324513 = weight(_text_:22 in 330) [ClassicSimilarity], result of:
              0.03324513 = score(doc=330,freq=2.0), product of:
                0.1227524 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03505379 = queryNorm
                0.2708308 = fieldWeight in 330, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=330)
          0.6666667 = coord(2/3)
      0.16666667 = coord(1/6)
    
    Date
    31. 7.1996 9:22:19
    Source
    Machine translation. 12(1997) nos.1/2, S.29-34
  12. Zhang, C.; Zeng, D.; Li, J.; Wang, F.-Y.; Zuo, W.: Sentiment analysis of Chinese documents : from sentence to document level (2009) 0.01
    0.0073198117 = product of:
      0.021959435 = sum of:
        0.012374603 = product of:
          0.024749206 = sum of:
            0.024749206 = weight(_text_:web in 3296) [ClassicSimilarity], result of:
              0.024749206 = score(doc=3296,freq=2.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.21634221 = fieldWeight in 3296, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3296)
          0.5 = coord(1/2)
        0.009584831 = product of:
          0.028754493 = sum of:
            0.028754493 = weight(_text_:29 in 3296) [ClassicSimilarity], result of:
              0.028754493 = score(doc=3296,freq=2.0), product of:
                0.12330827 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03505379 = queryNorm
                0.23319192 = fieldWeight in 3296, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3296)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    User-generated content on the Web has become an extremely valuable source for mining and analyzing user opinions on any topic. Recent years have seen an increasing body of work investigating methods to recognize favorable and unfavorable sentiments toward specific subjects from online text. However, most of these efforts focus on English and there have been very few studies on sentiment analysis of Chinese content. This paper aims to address the unique challenges posed by Chinese sentiment analysis. We propose a rule-based approach including two phases: (1) determining each sentence's sentiment based on word dependency, and (2) aggregating sentences to predict the document sentiment. We report the results of an experimental study comparing our approach with three machine learning-based approaches using two sets of Chinese articles. These results illustrate the effectiveness of our proposed method and its advantages against learning-based approaches.
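    The two-phase design reads naturally as a pipeline: score each sentence, then aggregate to the document. The sketch below is a deliberately simplified stand-in (lexicon lookup with negation flipping, then length-weighted averaging); the paper's word-dependency rules are not reproduced in the abstract:

      def sentence_polarity(sentence, lexicon, negators=frozenset({"not", "no", "never"})):
          # Phase 1 (simplified): sum lexicon polarities, flipping sign after a negator
          score, flip = 0.0, 1
          for tok in sentence.lower().split():
              if tok in negators:
                  flip = -flip
              score += flip * lexicon.get(tok, 0.0)
          return score

      def document_polarity(sentences, lexicon):
          # Phase 2: aggregate sentence scores, weighted by sentence length
          weighted = [(sentence_polarity(s, lexicon), len(s.split())) for s in sentences]
          total = sum(w for _, w in weighted)
          return sum(p * w for p, w in weighted) / total if total else 0.0

      lex = {"good": 1.0, "excellent": 2.0, "bad": -1.0}
      doc = ["The plot is good .", "The acting is not bad ."]
      print(document_polarity(doc, lex))  # > 0, i.e. favorable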
    Date
    2. 2.2010 19:29:56
  13. Yang, C.C.; Luk, J.: Automatic generation of English/Chinese thesaurus based on a parallel corpus in laws (2003) 0.01
    0.006659298 = product of:
      0.019977894 = sum of:
        0.0144370375 = product of:
          0.028874075 = sum of:
            0.028874075 = weight(_text_:web in 1616) [ClassicSimilarity], result of:
              0.028874075 = score(doc=1616,freq=8.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.25239927 = fieldWeight in 1616, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1616)
          0.5 = coord(1/2)
        0.005540855 = product of:
          0.016622566 = sum of:
            0.016622566 = weight(_text_:22 in 1616) [ClassicSimilarity], result of:
              0.016622566 = score(doc=1616,freq=2.0), product of:
                0.1227524 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03505379 = queryNorm
                0.1354154 = fieldWeight in 1616, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1616)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    The information available in languages other than English on the World Wide Web is increasing significantly. According to a report from Computer Economics in 1999, 54% of Internet users are English speakers ("English Will Dominate Web for Only Three More Years," Computer Economics, July 9, 1999, http://www.computereconomics.com/new4/pr/pr990610.html). However, it is predicted that there will be only a 60% increase in Internet users among English speakers versus 150% growth among non-English speakers over the next five years. By 2005, 57% of Internet users will be non-English speakers. A report by CNN.com in 2000 showed that the number of Internet users in China had increased from 8.9 million to 16.9 million between January and June of 2000 ("Report: China Internet users double to 17 million," CNN.com, July, 2000, http://cnn.org/2000/TECH/computing/07/27/china.internet.reut/index.html). According to Nielsen/NetRatings, there was a dramatic leap from 22.5 million to 56.6 million Internet users from 2001 to 2002. China had become the second largest global at-home Internet population in 2002 (the US's Internet population was 166 million) (Robyn Greenspan, "China Pulls Ahead of Japan," Internet.com, April 22, 2002, http://cyberatlas.internet.com/big-picture/geographics/article/0,,5911_1013841,00.html). All of this evidence reveals the importance of cross-lingual research to satisfy needs in the near future. Digital library research has focused on structural and semantic interoperability in the past. Searching and retrieving objects across variations in protocols, formats and disciplines are widely explored (Schatz, B., & Chen, H. (1999). Digital libraries: technological advances and social impacts. IEEE Computer, Special Issue on Digital Libraries, February, 32(2), 45-50.; Chen, H., Yen, J., & Yang, C.C. (1999). International activities: development of Asian digital libraries. IEEE Computer, Special Issue on Digital Libraries, 32(2), 48-49.). However, research on crossing language boundaries, especially between European languages and Oriental languages, is still at an initial stage. In this proposal, we put our focus on cross-lingual semantic interoperability by developing automatic generation of a cross-lingual thesaurus based on an English/Chinese parallel corpus. When searchers encounter retrieval problems, professional librarians usually consult the thesaurus to identify other relevant vocabulary. For the problem of searching across language boundaries, a cross-lingual thesaurus, generated by co-occurrence analysis and a Hopfield network, can be used to generate additional semantically relevant terms that cannot be obtained from a dictionary. In particular, the automatically generated cross-lingual thesaurus is able to capture unknown words that do not exist in a dictionary, such as names of persons, organizations, and events. Due to Hong Kong's unique historical background, both English and Chinese are used as official languages in all legal documents. Therefore, English/Chinese cross-lingual information retrieval is critical for applications in the courts and the government. In this paper, we develop an automatic thesaurus by the Hopfield network based on a parallel corpus collected from the Web site of the Department of Justice of the Hong Kong Special Administrative Region (HKSAR) Government. Experiments are conducted to measure the precision and recall of the automatically generated English/Chinese thesaurus. The results show that such a thesaurus is a promising tool for retrieving relevant terms, especially in a language different from that of the input term; the direct translation of the input term can also be retrieved in most cases.
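    The construction described above has two steps: derive term-term link weights by co-occurrence analysis over the parallel corpus, then run Hopfield-style spreading activation from a query term until the activated set stabilises. The following is a schematic rendering under our own simplifications (document-level co-occurrence, a fixed activation threshold); the paper's network parameters are not given here:

      import math
      from collections import defaultdict

      def cooccurrence_weights(aligned_docs):
          # aligned_docs: one set of terms (English and Chinese mixed) per
          # bilingual document pair; returns normalised link weights
          df, joint = defaultdict(int), defaultdict(int)
          for terms in aligned_docs:
              for t in terms:
                  df[t] += 1
              for a in terms:
                  for b in terms:
                      if a != b:
                          joint[(a, b)] += 1
          return {(a, b): c / math.sqrt(df[a] * df[b]) for (a, b), c in joint.items()}

      def activate(seed, weights, threshold=0.3, max_iter=10):
          # Hopfield-style spreading activation from a seed term
          act = {seed: 1.0}
          for _ in range(max_iter):
              nxt = defaultdict(float)
              for (a, b), w in weights.items():
                  if a in act:
                      nxt[b] = max(nxt[b], act[a] * w)
              nxt[seed] = 1.0
              nxt = {t: v for t, v in nxt.items() if v >= threshold}
              if nxt.keys() == act.keys():
                  break
              act = nxt
          return sorted(act, key=act.get, reverse=True)

      docs = [{"court", "法院", "justice"}, {"court", "法院"}, {"justice", "司法"}]
      print(activate("court", cooccurrence_weights(docs)))  # court, 法院, justice, 司法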
    Footnote
    Part of a special issue: "Web retrieval and mining: A machine learning perspective"
  14. Carter-Sigglow, J.: ¬Die Rolle der Sprache bei der Informationsvermittlung (2001) 0.01
    0.0060750647 = product of:
      0.036450386 = sum of:
        0.036450386 = product of:
          0.07290077 = sum of:
            0.07290077 = weight(_text_:seite in 5882) [ClassicSimilarity], result of:
              0.07290077 = score(doc=5882,freq=2.0), product of:
                0.19633847 = queryWeight, product of:
                  5.601063 = idf(docFreq=443, maxDocs=44218)
                  0.03505379 = queryNorm
                0.3713015 = fieldWeight in 5882, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.601063 = idf(docFreq=443, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5882)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    In the age of the Internet and e-commerce, German information professionals too must offer, and even design, their services in English in order to reach the international community. On the other hand, precisely on the European knowledge market the linguistic identity of the individual nations plays a major role. Information mediators work in this field of tension between globalization and localization, supported by language specialists. One must be aware that every language, including English, widely taken to be international, constitutes a language community. Drawing on current examples, this contribution shows that language must not only be grammatically and terminologically correct; it should also meet the linguistic expectations of the recipients so as not to violate the boundaries of their language world. The role of the language specialists is therefore to make the mediation of information between these worlds frictionless
  15. Rettinger, A.; Schumilin, A.; Thoma, S.; Ell, B.: Learning a cross-lingual semantic representation of relations expressed in text (2015) 0.01
    0.0059537343 = product of:
      0.035722405 = sum of:
        0.035722405 = product of:
          0.07144481 = sum of:
            0.07144481 = weight(_text_:web in 2027) [ClassicSimilarity], result of:
              0.07144481 = score(doc=2027,freq=6.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.6245262 = fieldWeight in 2027, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2027)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Series
    Information Systems and Applications, incl. Internet/Web, and HCI; Bd. 9088
    Source
    The Semantic Web: latest advances and new domains. 12th European Semantic Web Conference, ESWC 2015 Portoroz, Slovenia, May 31 -- June 4, 2015. Proceedings. Eds.: F. Gandon u.a
  16. Kuhlen, R.: Morphologische Relationen durch Reduktionsalgorithmen (1974) 0.01
    0.005271389 = product of:
      0.031628333 = sum of:
        0.031628333 = product of:
          0.09488499 = sum of:
            0.09488499 = weight(_text_:29 in 4251) [ClassicSimilarity], result of:
              0.09488499 = score(doc=4251,freq=4.0), product of:
                0.12330827 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03505379 = queryNorm
                0.7694941 = fieldWeight in 4251, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4251)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    29. 1.2011 14:56:29
  17. Räwel, J.: Automatisierte Kommunikation (2023) 0.01
    0.0050625536 = product of:
      0.03037532 = sum of:
        0.03037532 = product of:
          0.06075064 = sum of:
            0.06075064 = weight(_text_:seite in 909) [ClassicSimilarity], result of:
              0.06075064 = score(doc=909,freq=2.0), product of:
                0.19633847 = queryWeight, product of:
                  5.601063 = idf(docFreq=443, maxDocs=44218)
                  0.03505379 = queryNorm
                0.3094179 = fieldWeight in 909, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.601063 = idf(docFreq=443, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=909)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Source
    https://www.telepolis.de/features/Automatisierte-Kommunikation-7520683.html?seite=all
  18. Barthel, J.; Ciesielski, R.: Regeln zu ChatGPT an Unis oft unklar : KI in der Bildung (2023) 0.00
    0.0046115043 = product of:
      0.027669026 = sum of:
        0.027669026 = product of:
          0.083007075 = sum of:
            0.083007075 = weight(_text_:29 in 925) [ClassicSimilarity], result of:
              0.083007075 = score(doc=925,freq=6.0), product of:
                0.12330827 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03505379 = queryNorm
                0.6731671 = fieldWeight in 925, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.078125 = fieldNorm(doc=925)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    29. 3.2023 13:23:26
    29. 3.2023 13:29:19
  19. Wettler, M.; Rapp, R.; Ferber, R.: Freie Assoziationen und Kontiguitäten von Wörtern in Texten (1993) 0.00
    0.0042599253 = product of:
      0.02555955 = sum of:
        0.02555955 = product of:
          0.07667865 = sum of:
            0.07667865 = weight(_text_:29 in 2140) [ClassicSimilarity], result of:
              0.07667865 = score(doc=2140,freq=2.0), product of:
                0.12330827 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03505379 = queryNorm
                0.6218451 = fieldWeight in 2140, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.125 = fieldNorm(doc=2140)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    4.11.1998 14:30:29
  20. Warner, A.J.: Natural language processing (1987) 0.00
    0.004221604 = product of:
      0.025329625 = sum of:
        0.025329625 = product of:
          0.075988874 = sum of:
            0.075988874 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.075988874 = score(doc=337,freq=2.0), product of:
                0.1227524 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03505379 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108

Languages

  • e 111
  • d 43
  • m 2
  • ru 2

Types

  • a 128
  • el 20
  • m 16
  • s 8
  • x 4
  • p 2
  • d 1
