Search (115 results, page 1 of 6)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.27
    0.26928505 = product of:
      0.3366063 = sum of:
        0.07161157 = product of:
          0.21483469 = sum of:
            0.21483469 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.21483469 = score(doc=562,freq=2.0), product of:
                0.38225585 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.045087915 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.03183365 = weight(_text_:web in 562) [ClassicSimilarity], result of:
          0.03183365 = score(doc=562,freq=2.0), product of:
            0.14714488 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.045087915 = queryNorm
            0.21634221 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.21483469 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.21483469 = score(doc=562,freq=2.0), product of:
            0.38225585 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.045087915 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.01832637 = product of:
          0.03665274 = sum of:
            0.03665274 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.03665274 = score(doc=562,freq=2.0), product of:
                0.1578902 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045087915 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.8 = coord(4/5)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
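    The indented breakdowns above and below are Lucene "explain" trees for ClassicSimilarity (TF-IDF) relevance scoring. As a minimal sketch, assuming only the quantities the tree itself names (queryWeight = idf * queryNorm, fieldWeight = sqrt(tf) * idf * fieldNorm, coord = matched/total clauses), the 0.26928505 shown for result 1 can be reproduced in Python:

      import math

      # Constants copied from the explain tree for doc 562 above.
      QUERY_NORM = 0.045087915
      FIELD_NORM = 0.046875

      def term_score(freq, idf):
          """queryWeight * fieldWeight for one term clause."""
          query_weight = idf * QUERY_NORM                    # idf * queryNorm
          field_weight = math.sqrt(freq) * idf * FIELD_NORM  # tf * idf * fieldNorm
          return query_weight * field_weight

      score = 0.8 * (                       # coord(4/5): 4 of 5 clauses matched
          term_score(2.0, 8.478011) / 3     # "_text_:3a", inner coord(1/3)
          + term_score(2.0, 3.2635105)      # "_text_:web"
          + term_score(2.0, 8.478011)       # "_text_:2f"
          + term_score(2.0, 3.5018296) / 2  # "_text_:22", inner coord(1/2)
      )
      print(f"{score:.8f}")  # ~0.269285, matching the tree up to rounding of the printed constants

    The same recipe, with different idf, tf, fieldNorm and coord values, accounts for every other score tree on this page.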
  2. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.18
    0.17809702 = product of:
      0.29682836 = sum of:
        0.0636673 = weight(_text_:web in 563) [ClassicSimilarity], result of:
          0.0636673 = score(doc=563,freq=8.0), product of:
            0.14714488 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.045087915 = queryNorm
            0.43268442 = fieldWeight in 563, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.21483469 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.21483469 = score(doc=563,freq=2.0), product of:
            0.38225585 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.045087915 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.01832637 = product of:
          0.03665274 = sum of:
            0.03665274 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
              0.03665274 = score(doc=563,freq=2.0), product of:
                0.1578902 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045087915 = queryNorm
                0.23214069 = fieldWeight in 563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=563)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language and domain independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human written summaries in a large collection of web-pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the aligning process from a training set and focuses on selecting high quality multi-word terms from human written summaries to generate suitable results for web-page summarization.
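    A toy sketch of the LocalMaxs selection rule named in the abstract, assuming a generic symmetric-conditional-probability "glue" as the association measure (the thesis's own three measures are not reproduced here): an n-gram is kept as a multi-word term when its glue is not exceeded by its contained or containing n-grams.

      from collections import Counter
      from itertools import islice

      def ngrams(tokens, n):
          return list(zip(*(islice(tokens, i, None) for i in range(n))))

      def glue_scp(gram, freq):
          # Symmetric conditional probability: freq(gram)^2 over the mean of
          # freq(prefix) * freq(suffix) across all binary splits.
          splits = [(gram[:i], gram[i:]) for i in range(1, len(gram))]
          denom = sum(freq[l] * freq[r] for l, r in splits) / len(splits)
          return freq[gram] ** 2 / denom if denom else 0.0

      corpus = ("automatic multi word term extraction for web page "
                "summarization uses multi word term extraction").split()
      freq = Counter()
      for n in range(1, 4):
          freq.update(ngrams(corpus, n))

      terms = []
      for g in set(ngrams(corpus, 2)):  # candidate bigrams
          subs = [s for s in (g[:-1], g[1:]) if len(s) > 1]
          supers = [s for s in set(ngrams(corpus, 3)) if g in (s[:-1], s[1:])]
          if all(glue_scp(g, freq) > glue_scp(s, freq) for s in subs) and \
             all(glue_scp(g, freq) >= glue_scp(s, freq) for s in supers):
              terms.append(" ".join(g))
      print(sorted(terms))  # bigrams whose glue is a local maximum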
    Content
    A thesis presented to the University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
  3. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.11
    0.11457851 = product of:
      0.28644627 = sum of:
        0.07161157 = product of:
          0.21483469 = sum of:
            0.21483469 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.21483469 = score(doc=862,freq=2.0), product of:
                0.38225585 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.045087915 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
        0.21483469 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.21483469 = score(doc=862,freq=2.0), product of:
            0.38225585 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.045087915 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
      0.4 = coord(2/5)
    
    Source
    https://arxiv.org/abs/2212.06721
  4. Yang, C.C.; Luk, J.: Automatic generation of English/Chinese thesaurus based on a parallel corpus in laws (2003) 0.05
    0.049234957 = product of:
      0.08205826 = sum of:
        0.03422862 = weight(_text_:wide in 1616) [ClassicSimilarity], result of:
          0.03422862 = score(doc=1616,freq=2.0), product of:
            0.19977365 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.045087915 = queryNorm
            0.171337 = fieldWeight in 1616, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1616)
        0.03713926 = weight(_text_:web in 1616) [ClassicSimilarity], result of:
          0.03713926 = score(doc=1616,freq=8.0), product of:
            0.14714488 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.045087915 = queryNorm
            0.25239927 = fieldWeight in 1616, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1616)
        0.010690383 = product of:
          0.021380765 = sum of:
            0.021380765 = weight(_text_:22 in 1616) [ClassicSimilarity], result of:
              0.021380765 = score(doc=1616,freq=2.0), product of:
                0.1578902 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045087915 = queryNorm
                0.1354154 = fieldWeight in 1616, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1616)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    The information available in languages other than English on the World Wide Web is increasing significantly. According to a report from Computer Economics in 1999, 54% of Internet users are English speakers ("English Will Dominate Web for Only Three More Years," Computer Economics, July 9, 1999, http://www.computereconomics.com/new4/pr/pr990610.html). However, it is predicted that there will be only a 60% increase in Internet users among English speakers versus a 150% growth among non-English speakers over the next five years. By 2005, 57% of Internet users will be non-English speakers. A report by CNN.com in 2000 showed that the number of Internet users in China had increased from 8.9 million to 16.9 million between January and June of 2000 ("Report: China Internet users double to 17 million," CNN.com, July, 2000, http://cnn.org/2000/TECH/computing/07/27/china.internet.reut/index.html). According to Nielsen/NetRatings, there was a dramatic leap from 22.5 million to 56.6 million Internet users from 2001 to 2002. China had become the second largest at-home Internet population globally in 2002 (the US's Internet population was 166 million) (Robyn Greenspan, "China Pulls Ahead of Japan," Internet.com, April 22, 2002, http://cyberatlas.internet.com/big-picture/geographics/article/0,,5911_1013841,00.html). All of this evidence reveals the importance of cross-lingual research to satisfy needs in the near future. Digital library research has in the past focused on structural and semantic interoperability. Searching and retrieving objects across variations in protocols, formats and disciplines have been widely explored (Schatz, B., & Chen, H. (1999). Digital libraries: technological advances and social impacts. IEEE Computer, Special Issue on Digital Libraries, February, 32(2), 45-50.; Chen, H., Yen, J., & Yang, C.C. (1999). International activities: development of Asian digital libraries. IEEE Computer, Special Issue on Digital Libraries, 32(2), 48-49.). However, research on crossing language boundaries, especially between European and Oriental languages, is still at an initial stage. In this proposal, we focus on cross-lingual semantic interoperability by developing automatic generation of a cross-lingual thesaurus based on an English/Chinese parallel corpus. When searchers encounter retrieval problems, professional librarians usually consult the thesaurus to identify other relevant vocabulary. For the problem of searching across language boundaries, a cross-lingual thesaurus, generated by co-occurrence analysis and a Hopfield network, can be used to suggest additional semantically relevant terms that cannot be obtained from a dictionary. In particular, the automatically generated cross-lingual thesaurus is able to capture unknown words that do not exist in a dictionary, such as names of persons, organizations, and events. Due to Hong Kong's unique historical background, both English and Chinese are used as official languages in all legal documents. Therefore, English/Chinese cross-lingual information retrieval is critical for applications in the courts and the government. In this paper, we develop an automatic thesaurus with a Hopfield network based on a parallel corpus collected from the Web site of the Department of Justice of the Hong Kong Special Administrative Region (HKSAR) Government. Experiments are conducted to measure the precision and recall of the automatically generated English/Chinese thesaurus. The results show that such a thesaurus is a promising tool for retrieving relevant terms, especially in a language different from that of the input term. The direct translation of the input term can also be retrieved in most cases.
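    A minimal sketch of the co-occurrence/Hopfield-network suggestion step described above, assuming a generic spreading-activation formulation (clamp the input term, propagate activation through a symmetric co-occurrence weight matrix with a sigmoid transfer until convergence) and toy weights; the paper's actual weighting and thresholds are not given here.

      import numpy as np

      # Toy symmetric term-term co-occurrence weights; row/column i = terms[i].
      terms = ["court", "judge", "ordinance", "tribunal", "weather"]
      W = np.array([
          [0.0, 0.8, 0.5, 0.7, 0.0],
          [0.8, 0.0, 0.4, 0.6, 0.0],
          [0.5, 0.4, 0.0, 0.3, 0.0],
          [0.7, 0.6, 0.3, 0.0, 0.0],
          [0.0, 0.0, 0.0, 0.0, 0.0],
      ])

      def suggest(seed, iters=50, eps=1e-6):
          a = np.zeros(len(terms))
          a[terms.index(seed)] = 1.0
          for _ in range(iters):
              nxt = 1.0 / (1.0 + np.exp(-(W @ a)))  # sigmoid transfer
              nxt[terms.index(seed)] = 1.0          # keep the input term clamped
              if np.abs(nxt - a).max() < eps:
                  break
              a = nxt
          return sorted(((t, round(float(x), 3)) for t, x in zip(terms, a)
                         if t != seed and x > 0.5), key=lambda p: -p[1])

      print(suggest("court"))  # related terms ranked by activation; "weather" stays out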
    Footnote
    Teil eines Themenheftes: "Web retrieval and mining: A machine learning perspective"
  5. Schneider, R.: Web 3.0 ante portas? : Integration von Social Web und Semantic Web (2008) 0.05
    0.047856808 = product of:
      0.11964202 = sum of:
        0.09826125 = weight(_text_:web in 4184) [ClassicSimilarity], result of:
          0.09826125 = score(doc=4184,freq=14.0), product of:
            0.14714488 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.045087915 = queryNorm
            0.6677857 = fieldWeight in 4184, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4184)
        0.021380765 = product of:
          0.04276153 = sum of:
            0.04276153 = weight(_text_:22 in 4184) [ClassicSimilarity], result of:
              0.04276153 = score(doc=4184,freq=2.0), product of:
                0.1578902 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045087915 = queryNorm
                0.2708308 = fieldWeight in 4184, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4184)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The Internet is changing as a medium, and with it its conditions of publication and reception. What opportunities do the two currently and concurrently debated visions of its future, the Social Web and the Semantic Web, offer? To answer this question, the article examines the foundations of both models with respect to applications and technology, but also highlights their shortcomings and the added value of combining them in a way suited to the medium. Using the grammatical online information system grammis as an example, it sketches a strategy for integrating the strengths of each.
    Date
    22. 1.2011 10:38:28
    Source
    Kommunikation, Partizipation und Wirkungen im Social Web, Band 1. Hrsg.: A. Zerfaß u.a
    Theme
    Semantic Web
  6. Thelwall, M.; Price, L.: Language evolution and the spread of ideas on the Web : a procedure for identifying emergent hybrid word (2006) 0.05
    0.045526054 = product of:
      0.11381513 = sum of:
        0.058677625 = weight(_text_:wide in 5896) [ClassicSimilarity], result of:
          0.058677625 = score(doc=5896,freq=2.0), product of:
            0.19977365 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.045087915 = queryNorm
            0.29372054 = fieldWeight in 5896, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=5896)
        0.055137504 = weight(_text_:web in 5896) [ClassicSimilarity], result of:
          0.055137504 = score(doc=5896,freq=6.0), product of:
            0.14714488 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.045087915 = queryNorm
            0.37471575 = fieldWeight in 5896, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=5896)
      0.4 = coord(2/5)
    
    Abstract
    Word usage is of interest to linguists for its own sake as well as to social scientists and others who seek to track the spread of ideas, for example, in public debates over political decisions. The historical evolution of language can be analyzed with the tools of corpus linguistics through evolving corpora and the Web. But word usage statistics can only be gathered for known words. In this article, techniques are described and tested for identifying new words from the Web, focusing on the case when the words are related to a topic and have a hybrid form with a common sequence of letters. The results highlight the need to employ a combination of search techniques and show the wide potential of hybrid word family investigations in linguistics and social science.
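    A small sketch of the candidate-identification step described in the abstract, assuming a regex scan over Web-derived text and a known-word list as stand-ins for the article's search-engine-based procedure: find unseen words that contain the common letter sequence of a hybrid word family.

      import re

      known = {"blog", "sphere", "blogger", "web", "log"}  # toy dictionary
      stem = "blog"  # shared letter sequence of the hybrid word family

      text = """The blogosphere reacted quickly; warblogs and photoblogs
      appeared, and one blogger coined 'blogerati' for the elite."""

      candidates = {w.lower() for w in re.findall(r"[A-Za-z]+", text)}
      hybrids = sorted(w for w in candidates
                       if stem in w and w not in known and len(w) > len(stem))
      print(hybrids)  # ['blogerati', 'blogosphere', 'photoblogs', 'warblogs']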
  7. Chowdhury, G.G.: Natural language processing (2002) 0.04
    0.036204513 = product of:
      0.09051128 = sum of:
        0.058677625 = weight(_text_:wide in 4284) [ClassicSimilarity], result of:
          0.058677625 = score(doc=4284,freq=2.0), product of:
            0.19977365 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.045087915 = queryNorm
            0.29372054 = fieldWeight in 4284, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=4284)
        0.03183365 = weight(_text_:web in 4284) [ClassicSimilarity], result of:
          0.03183365 = score(doc=4284,freq=2.0), product of:
            0.14714488 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.045087915 = queryNorm
            0.21634221 = fieldWeight in 4284, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4284)
      0.4 = coord(2/5)
    
    Abstract
    Natural Language Processing (NLP) is an area of research and application that explores how computers can be used to understand and manipulate natural language text or speech to do useful things. NLP researchers aim to gather knowledge on how human beings understand and use language so that appropriate tools and techniques can be developed to make computer systems understand and manipulate natural languages to perform desired tasks. The foundations of NLP lie in a number of disciplines, namely, computer and information sciences, linguistics, mathematics, electrical and electronic engineering, artificial intelligence and robotics, and psychology. Applications of NLP include a number of fields of study, such as machine translation, natural language text processing and summarization, user interfaces, multilingual and cross-language information retrieval (CLIR), speech recognition, artificial intelligence, and expert systems. One important application area that is relatively new and has not been covered in previous ARIST chapters on NLP relates to the proliferation of the World Wide Web and digital libraries.
  8. Symonds, M.; Bruza, P.; Zuccon, G.; Koopman, B.; Sitbon, L.; Turner, I.: Automatic query expansion : a structural linguistic perspective (2014) 0.03
    0.030170426 = product of:
      0.075426064 = sum of:
        0.048898023 = weight(_text_:wide in 1338) [ClassicSimilarity], result of:
          0.048898023 = score(doc=1338,freq=2.0), product of:
            0.19977365 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.045087915 = queryNorm
            0.24476713 = fieldWeight in 1338, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1338)
        0.026528042 = weight(_text_:web in 1338) [ClassicSimilarity], result of:
          0.026528042 = score(doc=1338,freq=2.0), product of:
            0.14714488 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.045087915 = queryNorm
            0.18028519 = fieldWeight in 1338, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1338)
      0.4 = coord(2/5)
    
    Abstract
    A user's query is considered to be an imprecise description of their information need. Automatic query expansion is the process of reformulating the original query with the goal of improving retrieval effectiveness. Many successful query expansion techniques model syntagmatic associations, inferring that two terms co-occur more often than by chance in natural language. However, structural linguistics relies on both syntagmatic and paradigmatic associations to deduce the meaning of a word. Given the success of dependency-based approaches to query expansion and the reliance on word meanings in the query formulation process, we argue that modeling both syntagmatic and paradigmatic information in the query expansion process improves retrieval effectiveness. This article develops and evaluates a new query expansion technique that is based on a formal, corpus-based model of word meaning that models syntagmatic and paradigmatic associations. We demonstrate that when sufficient statistical information exists, as in the case of longer queries, including paradigmatic information alone provides significant improvements in retrieval effectiveness across a wide variety of data sets. More generally, when our new query expansion approach is applied to large-scale web retrieval it demonstrates significant improvements in retrieval effectiveness over a strong baseline system based on a commercial search engine.
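    A compact sketch contrasting the two association types, under simplified definitions (syntagmatic: direct co-occurrence counts; paradigmatic: cosine similarity of co-occurrence vectors, i.e. shared neighbourhoods); the article's formal model of word meaning is richer than this.

      from collections import Counter
      import math

      docs = [
          "doctor treats patient in hospital",
          "nurse treats patient in clinic",
          "doctor works in hospital",
          "nurse works in clinic",
      ]
      vocab = sorted({w for d in docs for w in d.split()})
      cooc = Counter()
      for d in docs:
          ws = d.split()
          for i, w in enumerate(ws):
              for v in ws[:i] + ws[i + 1:]:
                  cooc[(w, v)] += 1

      def syntagmatic(a, b):
          return cooc[(a, b)]  # raw co-occurrence; a PMI-style variant is typical

      def paradigmatic(a, b):
          va = [cooc[(a, w)] for w in vocab if w not in (a, b)]
          vb = [cooc[(b, w)] for w in vocab if w not in (a, b)]
          num = sum(x * y for x, y in zip(va, vb))
          den = math.sqrt(sum(x * x for x in va)) * math.sqrt(sum(y * y for y in vb))
          return num / den if den else 0.0

      print(syntagmatic("doctor", "hospital"))          # high: direct co-occurrence
      print(round(paradigmatic("doctor", "nurse"), 2))  # high: substitutable terms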
  9. Doszkocs, T.E.; Zamora, A.: Dictionary services and spelling aids for Web searching (2004) 0.03
    0.0270183 = product of:
      0.06754575 = sum of:
        0.045947917 = weight(_text_:web in 2541) [ClassicSimilarity], result of:
          0.045947917 = score(doc=2541,freq=6.0), product of:
            0.14714488 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.045087915 = queryNorm
            0.3122631 = fieldWeight in 2541, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2541)
        0.021597834 = product of:
          0.04319567 = sum of:
            0.04319567 = weight(_text_:22 in 2541) [ClassicSimilarity], result of:
              0.04319567 = score(doc=2541,freq=4.0), product of:
                0.1578902 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045087915 = queryNorm
                0.27358043 = fieldWeight in 2541, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2541)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The Specialized Information Services Division (SIS) of the National Library of Medicine (NLM) provides Web access to more than a dozen scientific databases on toxicology and the environment on TOXNET. Search queries on TOXNET often include misspelled or variant English words, medical and scientific jargon, and chemical names. Following the example of search engines like Google and ClinicalTrials.gov, we set out to develop a spelling "suggestion" system for increased recall and precision in TOXNET searching. This paper describes the development of dictionary technology that can be used in a variety of applications, such as orthographic verification, writing aids, natural language processing, and information storage and retrieval. The design of the technology allows building complex applications using the components developed in the earlier phases of the work in a modular fashion, without extensive rewriting of computer code. Since many of the potential applications envisioned for this work have on-line or web-based interfaces, the dictionaries and other computer components must have fast response and must be adaptable to open-ended database vocabularies, including chemical nomenclature. The dictionary vocabulary for this work was derived from SIS and other databases and specialized resources, such as NLM's Unified Medical Language System (UMLS). The resulting technology, A-Z Dictionary (AZdict), has three major constituents: 1) the vocabulary list, 2) the word attributes that define part of speech and morphological relationships between words in the list, and 3) a set of programs that implements the retrieval of words and their attributes and determines similarity between words (ChemSpell). These three components can be used in various applications such as spelling verification, spelling aid, part-of-speech tagging, paraphrasing, and many other natural language processing functions.
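    A minimal sketch of the spelling-suggestion idea, assuming a generic string-similarity ranking over the dictionary vocabulary; the actual ChemSpell similarity measure and AZdict word attributes are not reproduced here.

      import difflib

      vocabulary = ["toxicology", "benzene", "toluene", "arsenic", "dioxin"]

      def suggest(word, n=3):
          # difflib ranks dictionary entries by a similarity ratio; a production
          # system would add phonetic and chemical-nomenclature heuristics.
          return difflib.get_close_matches(word.lower(), vocabulary, n=n, cutoff=0.6)

      print(suggest("toxocology"))  # ['toxicology']
      print(suggest("benzine"))     # ['benzene']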
    Date
    14. 8.2004 17:22:56
    Source
    Online. 28(2004) no.3, S.22-29
  10. Bian, G.-W.; Chen, H.-H.: Cross-language information access to multilingual collections on the Internet (2000) 0.03
    0.025338382 = product of:
      0.063345954 = sum of:
        0.045019582 = weight(_text_:web in 4436) [ClassicSimilarity], result of:
          0.045019582 = score(doc=4436,freq=4.0), product of:
            0.14714488 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.045087915 = queryNorm
            0.3059541 = fieldWeight in 4436, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4436)
        0.01832637 = product of:
          0.03665274 = sum of:
            0.03665274 = weight(_text_:22 in 4436) [ClassicSimilarity], result of:
              0.03665274 = score(doc=4436,freq=2.0), product of:
                0.1578902 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045087915 = queryNorm
                0.23214069 = fieldWeight in 4436, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4436)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The language barrier is the major problem that people face in searching for, retrieving, and understanding multilingual collections on the Internet. This paper deals with query translation and document translation in a Chinese-English information retrieval system called MTIR. Bilingual dictionary and monolingual corpus-based approaches are adopted to select suitable translated query terms. A machine transliteration algorithm is introduced to resolve proper name searching. We consider several design issues for document translation, including which material is translated, what roles the HTML tags play in translation, what the tradeoff is between speed performance and translation performance, and in what form the translated result is presented. About 100,000 Web pages translated in the last four months of 1997 are used for a quantitative study of online and real-time Web page translation.
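    A small sketch of dictionary-based query translation with corpus-based disambiguation as described above, assuming a toy bilingual dictionary and toy co-occurrence counts (MTIR's actual resources are not shown): the candidate combination whose terms co-occur most in the target-language corpus wins.

      from itertools import product

      # Toy bilingual dictionary: each source query term -> English candidates.
      dictionary = {
          "yinhang": ["bank", "shore"],        # romanized source terms, illustrative
          "lilv": ["interest rate"],
      }

      # Toy monolingual co-occurrence counts from an English corpus.
      cooc = {("bank", "interest rate"): 57, ("shore", "interest rate"): 1}

      def translate(query_terms):
          best, best_score = None, -1
          for combo in product(*(dictionary[t] for t in query_terms)):
              score = sum(cooc.get((a, b), 0) + cooc.get((b, a), 0)
                          for i, a in enumerate(combo) for b in combo[i + 1:])
              if score > best_score:
                  best, best_score = combo, score
          return best

      print(translate(["yinhang", "lilv"]))  # ('bank', 'interest rate')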
    Date
    16. 2.2000 14:22:39
  11. Weiß, E.-M.: ChatGPT soll es richten : Microsoft baut KI in Suchmaschine Bing ein (2023) 0.02
    0.02492866 = product of:
      0.124643296 = sum of:
        0.124643296 = product of:
          0.24928659 = sum of:
            0.24928659 = weight(_text_:suchmaschine in 866) [ClassicSimilarity], result of:
              0.24928659 = score(doc=866,freq=10.0), product of:
                0.25493854 = queryWeight, product of:
                  5.6542544 = idf(docFreq=420, maxDocs=44218)
                  0.045087915 = queryNorm
                0.9778302 = fieldWeight in 866, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  5.6542544 = idf(docFreq=420, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=866)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    ChatGPT, the artificial intelligence of the moment, was developed by OpenAI. And OpenAI has received substantial backing from Microsoft in the past. Now it is time to profit: the AI is to be built into the Bing search engine, which means direct competition for Google's search algorithms and intelligence. Bing has not been particularly successful there so far. As "The Information" reports, citing two insiders, Microsoft plans to integrate ChatGPT into its Bing search engine. The new, intelligent search could be available as early as March. At its in-house Ignite conference, Microsoft had previously announced the integration of the image generator DALL·E 2 into its search engine, without a concrete launch date, however. Asked directly, ChatGPT itself does not yet confirm its future role, but it is aware of the potential benefits.
    Source
    https://www.heise.de/news/ChatGPT-soll-es-richten-Microsoft-baut-KI-in-Suchmaschine-Bing-ein-7447837.html
  12. Artemenko, O.; Shramko, M.: Entwicklung eines Werkzeugs zur Sprachidentifikation in mono- und multilingualen Texten (2005) 0.02
    0.024196018 = product of:
      0.060490042 = sum of:
        0.03422862 = weight(_text_:wide in 572) [ClassicSimilarity], result of:
          0.03422862 = score(doc=572,freq=2.0), product of:
            0.19977365 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.045087915 = queryNorm
            0.171337 = fieldWeight in 572, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.02734375 = fieldNorm(doc=572)
        0.026261423 = weight(_text_:web in 572) [ClassicSimilarity], result of:
          0.026261423 = score(doc=572,freq=4.0), product of:
            0.14714488 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.045087915 = queryNorm
            0.17847323 = fieldWeight in 572, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=572)
      0.4 = coord(2/5)
    
    Abstract
    With the spread of the Internet, the number of documents available on the World Wide Web keeps growing. Guaranteeing Internet users efficient access to the information they want is becoming a major challenge for the modern information society. A multitude of tools is already in use to help users find their way through the growing flood of information. However, the enormous amount of unstructured and distributed information is not the only difficulty to be overcome in developing such tools. The increasing multilinguality of Web content creates a need for language identification software that identifies the language(s) of electronic documents for targeted further processing. Such language identifiers can be used effectively, for example, in multilingual information retrieval, since processes of automatic index construction such as stemming or stopword extraction build on the results of language identification. This thesis presents the new system "LangIdent" for language identification of electronic text documents, intended primarily for teaching and research at the University of Hildesheim. "LangIdent" contains a selection of common algorithms for monolingual language identification that the user can select and configure interactively. In addition, a new algorithm was implemented that makes it possible to identify the languages in which a multilingual document is written. The identification is not limited to listing the languages found; rather, the text is split into monolingual segments, each annotated with the identified language.
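    A minimal sketch of the character-n-gram ranking approach such monolingual identifiers commonly build on (in the spirit of Cavnar and Trenkle's out-of-place measure, here simplified); LangIdent's own algorithm selection and multilingual segmentation are not reproduced.

      from collections import Counter

      def profile(text, n=3, top=300):
          text = f"  {text.lower()}  "
          grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
          return [g for g, _ in grams.most_common(top)]

      training = {
          "de": "der die das und ist nicht mit von sich auf für werden",
          "en": "the and is not with of itself on for to be that this",
      }
      profiles = {lang: profile(txt) for lang, txt in training.items()}

      def identify(doc):
          doc_prof = profile(doc)
          def distance(lang):
              # Simplified out-of-place measure: sum of reference ranks,
              # with a maximum penalty for trigrams missing from the profile.
              ranks = {g: r for r, g in enumerate(profiles[lang])}
              return sum(ranks.get(g, len(ranks)) for g in doc_prof)
          return min(profiles, key=distance)

      print(identify("die Informationen werden verarbeitet"))  # de
      print(identify("the information is being processed"))    # en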
  13. Yang, C.C.; Li, K.W.: Automatic construction of English/Chinese parallel corpora (2003) 0.02
    0.02413634 = product of:
      0.06034085 = sum of:
        0.03911842 = weight(_text_:wide in 1683) [ClassicSimilarity], result of:
          0.03911842 = score(doc=1683,freq=2.0), product of:
            0.19977365 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.045087915 = queryNorm
            0.1958137 = fieldWeight in 1683, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=1683)
        0.021222433 = weight(_text_:web in 1683) [ClassicSimilarity], result of:
          0.021222433 = score(doc=1683,freq=2.0), product of:
            0.14714488 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.045087915 = queryNorm
            0.14422815 = fieldWeight in 1683, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=1683)
      0.4 = coord(2/5)
    
    Abstract
    As the demand for global information increases significantly, multilingual corpora have become a valuable linguistic resource for applications in cross-lingual information retrieval and natural language processing. In order to cross the boundaries that exist between different languages, dictionaries are the most typical tools. However, the general-purpose dictionary is less sensitive to both genre and domain. It is also impractical to manually construct tailored bilingual dictionaries or sophisticated multilingual thesauri for large applications. Corpus-based approaches, which do not have the limitations of dictionaries, provide a statistical translation model with which to cross the language boundary. There are many domain-specific parallel or comparable corpora that are employed in machine translation and cross-lingual information retrieval. Most of these are corpora between Indo-European languages, such as English/French and English/Spanish. The Asian/Indo-European corpus, especially the English/Chinese corpus, is relatively sparse. The objective of the present research is to construct an English/Chinese parallel corpus automatically from the World Wide Web. In this paper, an alignment method is presented which is based on dynamic programming to identify the one-to-one Chinese and English title pairs. The method includes alignment at title level, word level and character level. The longest common subsequence (LCS) is applied to find the most reliable Chinese translation of an English word. As one word in a language may translate into two or more words repetitively in another language, the edit operation, deletion, is used to resolve redundancy. A score function is then proposed to determine the optimal title pairs. Experiments have been conducted to investigate the performance of the proposed method using the daily press release articles by the Hong Kong SAR government as the test bed. The precision of the result is 0.998 while the recall is 0.806. The release articles and speech articles, published by Hongkong & Shanghai Banking Corporation Limited, are also used to test our method; the precision is 1.00, and the recall is 0.948.
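    A short sketch of the longest-common-subsequence (LCS) step named above, assuming standard dynamic programming and plain string similarity; the paper's full method aligns at title, word and character level and adds a score function over candidate pairs.

      def lcs_len(a, b):
          # Classic O(len(a) * len(b)) dynamic program.
          dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
          for i, ca in enumerate(a, 1):
              for j, cb in enumerate(b, 1):
                  dp[i][j] = dp[i-1][j-1] + 1 if ca == cb else max(dp[i-1][j], dp[i][j-1])
          return dp[-1][-1]

      def similarity(a, b):
          # Normalized LCS as a crude alignment score between two titles.
          return 2 * lcs_len(a, b) / (len(a) + len(b))

      english = "press release on interest rate"
      candidates = ["press release interest rates", "weather report for hong kong"]
      print(max(candidates, key=lambda c: similarity(english, c)))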
  14. Working with conceptual structures : contributions to ICCS 2000. 8th International Conference on Conceptual Structures: Logical, Linguistic, and Computational Issues. Darmstadt, August 14-18, 2000 (2000) 0.02
    0.0211193 = product of:
      0.05279825 = sum of:
        0.03422862 = weight(_text_:wide in 5089) [ClassicSimilarity], result of:
          0.03422862 = score(doc=5089,freq=2.0), product of:
            0.19977365 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.045087915 = queryNorm
            0.171337 = fieldWeight in 5089, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.02734375 = fieldNorm(doc=5089)
        0.01856963 = weight(_text_:web in 5089) [ClassicSimilarity], result of:
          0.01856963 = score(doc=5089,freq=2.0), product of:
            0.14714488 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.045087915 = queryNorm
            0.12619963 = fieldWeight in 5089, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=5089)
      0.4 = coord(2/5)
    
    Abstract
    The 8th International Conference on Conceptual Structures - Logical, Linguistic, and Computational Issues (ICCS 2000) brings together a wide range of researchers and practitioners working with conceptual structures. During the last few years, the ICCS conference series has considerably widened its scope on different kinds of conceptual structures, stimulating research across domain boundaries. We hope that this stimulation is further enhanced by ICCS 2000 joining the long tradition of conferences in Darmstadt with extensive, lively discussions. This volume consists of contributions presented at ICCS 2000, complementing the volume "Conceptual Structures: Logical, Linguistic, and Computational Issues" (B. Ganter, G.W. Mineau (Eds.), LNAI 1867, Springer, Berlin-Heidelberg 2000). It contains submissions reviewed by the program committee, and position papers. We wish to express our appreciation to all the authors of submitted papers, to the general chair, the program chair, the editorial board, the program committee, and to the additional reviewers for making ICCS 2000 a valuable contribution in the knowledge processing research field. Special thanks go to the local organizers for making the conference an enjoyable and inspiring event. We are grateful to Darmstadt University of Technology, the Ernst Schröder Center for Conceptual Knowledge Processing, the Center for Interdisciplinary Studies in Technology, the Deutsche Forschungsgemeinschaft, Land Hessen, and NaviCon GmbH for their generous support
    Content
    Concepts & Language: Knowledge organization by procedures of natural language processing. A case study using the method GABEK (J. Zelger, J. Gadner) - Computer aided narrative analysis using conceptual graphs (H. Schärfe, P. Øhrstrøm) - Pragmatic representation of argumentative text: a challenge for the conceptual graph approach (H. Irandoust, B. Moulin) - Conceptual graphs as a knowledge representation core in a complex language learning environment (G. Angelova, A. Nenkova, S. Boycheva, T. Nikolov) - Conceptual Modeling and Ontologies: Relationships and actions in conceptual categories (Ch. Landauer, K.L. Bellman) - Concept approximations for formal concept analysis (J. Saquer, J.S. Deogun) - Faceted information representation (U. Priß) - Simple concept graphs with universal quantifiers (J. Tappe) - A framework for comparing methods for using or reusing multiple ontologies in an application (J. van Zyl, D. Corbett) - Designing task/method knowledge-based systems with conceptual graphs (M. Leclère, F. Trichet, Ch. Choquet) - A logical ontology (J. Farkas, J. Sarbo) - Algorithms and Tools: Fast concept analysis (Ch. Lindig) - A framework for conceptual graph unification (D. Corbett) - Visual CP representation of knowledge (H.D. Pfeiffer, R.T. Hartley) - Maximal isojoin for representing software textual specifications and detecting semantic anomalies (Th. Charnois) - Troika: using grids, lattices and graphs in knowledge acquisition (H.S. Delugach, B.E. Lampkin) - Open world theorem prover for conceptual graphs (J.E. Heaton, P. Kocura) - NetCare: a practical conceptual graphs software tool (S. Polovina, D. Strang) - CGWorld - a web based workbench for conceptual graphs management and applications (P. Dobrev, K. Toutanova) - Position papers: The edition project: Peirce's existential graphs (R. Müller) - Mining association rules using formal concept analysis (N. Pasquier) - Contextual logic summary (R. Wille) - Information channels and conceptual scaling (K.E. Wolff) - Spatial concepts - a rule exploration (S. Rudolph) - The TEXT-TO-ONTO learning environment (A. Mädche, St. Staab) - Controlling the semantics of metadata on audio-visual documents using ontologies (Th. Dechilly, B. Bachimont) - Building the ontological foundations of a terminology from natural language to conceptual graphs with Ribosome, a knowledge extraction system (Ch. Jacquelinet, A. Burgun) - CharGer: some lessons learned and new directions (H.S. Delugach) - Knowledge management using conceptual graphs (W.K. Pun)
  15. Winterschladen, S.; Gurevych, I.: ¬Die perfekte Suchmaschine : Forschungsgruppe entwickelt ein System, das artverwandte Begriffe finden soll (2006) 0.02
    0.01848759 = product of:
      0.092437945 = sum of:
        0.092437945 = product of:
          0.18487589 = sum of:
            0.18487589 = weight(_text_:suchmaschine in 5912) [ClassicSimilarity], result of:
              0.18487589 = score(doc=5912,freq=22.0), product of:
                0.25493854 = queryWeight, product of:
                  5.6542544 = idf(docFreq=420, maxDocs=44218)
                  0.045087915 = queryNorm
                0.72517824 = fieldWeight in 5912, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  5.6542544 = idf(docFreq=420, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=5912)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Content
    "KÖLNER STADT-ANZEIGER: Frau Gurevych, Sie entwickeln eine Suchmaschine der nächsten Generation? Wie kann man sich diese vorstellen? IRYNA GUREVYCH Jeder kennt die herkömmlichen Suchmaschinen wie Google, Yahoo oder Altavista. Diese sind aber nicht perfekt, weil sie nur nach dem Prinzip der Zeichenerkennung funktionieren. Das steigende Informationsbedürfnis können herkömmliche Suchmaschinen nicht befriedigen. KStA: Wieso nicht? GUREVYCH Nehmen wir mal ein konkretes Beispiel: Sie suchen bei Google nach einem Rezept für einen Kuchen, der aber kein Obst enthalten soll. Keine Suchmaschine der Welt kann bisher sinnvoll solche oder ähnliche Anfragen ausführen. Meistens kommen Tausende von Ergebnissen, in denen der Nutzer die relevanten Informationen wie eine Nadel im Heuhaufen suchen muss. KStA: Und Sie können dieses Problem lösen? GUREVYCH Wir entwickeln eine Suchmaschine, die sich nicht nur auf das System der Zeichenerkennung verlässt, sondern auch linguistische Merkmale nutzt. Unsere Suchmaschine soll also auch artverwandte Begriffe zeigen. KStA: Wie weit sind Sie mit Ihrer Forschung? GUREVYCH Das Projekt ist auf zwei Jahre angelegt. Wir haben vor einem halben Jahr begonnen, haben also noch einen großen Teil vor uns. Trotzdem sind die ersten Zwischenergebnisse schon sehr beachtlich. KStA: Und wann geht die Suchmaschine ins Internet? GUREVYCH Da es sich um ein Projekt der Deutschen Forschungsgemeinschaft handelt, wird die Suchmaschine vorerst nicht veröffentlicht. Wir sehen es als unsere Aufgabe an, Verbesserungsmöglichkeiten durch schlaue Such-Algorithmen mit unseren Forschungsarbeiten nachzuweisen und Fehler der bekannten Suchmaschinen zu beseitigen. Und da sind wir auf einem guten Weg. KStA: Arbeiten Sie auch an einem ganz speziellen Projekt? GUREVYCH Ja, ihre erste Bewährungsprobe muss die neue Technologie auf einem auf den ersten Blick ungewöhnlichen Feld bestehen: Unsere Forschungsgruppe an der Technischen Universität Darmstadt entwickelt derzeit ein neuartiges System zur Unterstützung Jugendlicher bei der Berufsauswahl. Dazu stellt uns die Bundesagentur für Arbeit die Beschreibungen von 5800 Berufen in Deutschland zur Verfügung. KStA: Und was sollen Sie dann mit diesen konkreten Informationen machen? GUREVYCH Jugendliche sollen unsere Suchmaschine mit einem Aufsatz über ihre beruflichen Vorlieben flittern. Das System soll dann eine Suchabfrage starten und mögliche Berufe anhand des Interesses des Jugendlichen heraussuchen. Die persönliche Beratung durch die Bundesagentur für Arbeit kann dadurch auf alternative Angebote ausgeweitet werden. Ein erster Prototyp soll Ende des Jahres bereitstehen. KStA: Es geht also zunächst einmal nicht darum, einen Jobfür den Jugendlichen zu finden, sondern den perfekten Beruf für ihn zu ermitteln? GUREVYCH Ja, anhand der Beschreibung des Jugendlichen startet die Suchmaschine eine semantische Abfrage und sucht den passenden Beruf heraus. KStA: Gab es schon weitere Anfragen seitens der Industrie? GUREVYCH Nein, wir haben bisher noch keine Werbung betrieben. Meine Erfahrung zeigt, dass angesehene Kongresse die beste Plattform sind, um die Ergebnisse zu präsentieren und auf sich aufmerksam zu machen. Einige erste Veröffentlichungen sind bereits unterwegs und werden 2006 noch erscheinen. KStA: Wie sieht denn Ihrer Meinung nach die Suchmaschine der Zukunft aus? GUREVYCH Suchmaschinen werden immer spezieller. Das heißt, dass es etwa in der Medizin, bei den Krankenkassen oder im Sport eigene Suchmaschinen geben wird. 
Außerdem wird die Tendenz verstärkt zu linguistischen Suchmaschinen gehen, die nach artverwandten Begriffen fahnden. Die perfekte Suchmaschine wird wohl eine Kombination aus statistischem und linguistisch-semantischem Suchverhalten sein. Algorithmen, die wir am Fachgebiet Telekooperation an der TU Darmstadt entwickeln, werden für den nächsten qualitativen Sprung bei der Entwicklung der Suchmaschinen von größter Bedeutung sein."
  16. Rettinger, A.; Schumilin, A.; Thoma, S.; Ell, B.: Learning a cross-lingual semantic representation of relations expressed in text (2015) 0.02
    0.018379167 = product of:
      0.09189583 = sum of:
        0.09189583 = weight(_text_:web in 2027) [ClassicSimilarity], result of:
          0.09189583 = score(doc=2027,freq=6.0), product of:
            0.14714488 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.045087915 = queryNorm
            0.6245262 = fieldWeight in 2027, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.078125 = fieldNorm(doc=2027)
      0.2 = coord(1/5)
    
    Series
    Information Systems and Applications, incl. Internet/Web, and HCI; Bd. 9088
    Source
    The Semantic Web: latest advances and new domains. 12th European Semantic Web Conference, ESWC 2015 Portoroz, Slovenia, May 31 -- June 4, 2015. Proceedings. Eds.: F. Gandon u.a
  17. Radford, A.; Narasimhan, K.; Salimans, T.; Sutskever, I.: Improving language understanding by Generative Pre-Training 0.02
    0.01659654 = product of:
      0.082982704 = sum of:
        0.082982704 = weight(_text_:wide in 870) [ClassicSimilarity], result of:
          0.082982704 = score(doc=870,freq=4.0), product of:
            0.19977365 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.045087915 = queryNorm
            0.4153836 = fieldWeight in 870, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=870)
      0.2 = coord(1/5)
    
    Abstract
    Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant, labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to perform adequately. We demonstrate that large gains on these tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon the state of the art in 9 out of the 12 tasks studied. For instance, we achieve absolute improvements of 8.9% on commonsense reasoning (Stories Cloze Test), 5.7% on question answering (RACE), and 1.5% on textual entailment (MultiNLI).
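    A brief sketch of the task-aware input transformations the abstract refers to: structured inputs are flattened into one token sequence with delimiter tokens, so the pre-trained model needs no architectural change. The token names here are illustrative placeholders, not the paper's actual vocabulary entries.

      START, DELIM, EXTRACT = "<s>", "<$>", "<e>"  # illustrative special tokens

      def entailment_input(premise, hypothesis):
          # Textual entailment: premise and hypothesis joined by a delimiter.
          return f"{START} {premise} {DELIM} {hypothesis} {EXTRACT}"

      def choice_inputs(context, answers):
          # Multiple choice (e.g. RACE): one sequence per candidate answer;
          # each is scored by a linear head on the final extract-token state.
          return [f"{START} {context} {DELIM} {a} {EXTRACT}" for a in answers]

      print(entailment_input("A man inspects a uniform.", "The man is sleeping."))
      for seq in choice_inputs("Why did she leave?", ["rain", "work"]):
          print(seq)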
  18. Lobo, S.: ¬Das Ende von Google, wie wir es kannten : Bessere Treffer durch ChatGPT (2022) 0.02
    0.015926337 = product of:
      0.07963168 = sum of:
        0.07963168 = product of:
          0.15926336 = sum of:
            0.15926336 = weight(_text_:suchmaschine in 852) [ClassicSimilarity], result of:
              0.15926336 = score(doc=852,freq=2.0), product of:
                0.25493854 = queryWeight, product of:
                  5.6542544 = idf(docFreq=420, maxDocs=44218)
                  0.045087915 = queryNorm
                0.62471277 = fieldWeight in 852, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.6542544 = idf(docFreq=420, maxDocs=44218)
                  0.078125 = fieldNorm(doc=852)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Red alert at the world's largest search engine: with ChatGPT and artificial intelligence, a new era could be beginning.
  19. Granitzer, M.: Statistische Verfahren der Textanalyse (2006) 0.01
    0.012865419 = product of:
      0.06432709 = sum of:
        0.06432709 = weight(_text_:web in 5809) [ClassicSimilarity], result of:
          0.06432709 = score(doc=5809,freq=6.0), product of:
            0.14714488 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.045087915 = queryNorm
            0.43716836 = fieldWeight in 5809, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5809)
      0.2 = coord(1/5)
    
    Abstract
    This article gives an overview of statistical methods of text analysis in the context of the Semantic Web. As an introduction, methods and common techniques for preprocessing texts, such as stemming or part-of-speech tagging, are discussed. The representations thus introduced serve as the basis for statistical feature analyses as well as for more advanced techniques such as information extraction and machine learning. These specific techniques are presented in overview, with the most important aspects relating to the Semantic Web treated in detail. The article closes with the application of the presented techniques to the creation and maintenance of ontologies and with pointers to further reading.
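    A tiny sketch of the preprocessing chain the overview begins with (tokenization, stopword removal, stemming, term-frequency features), assuming a toy suffix-stripping stemmer rather than a real Porter/Snowball implementation.

      import re
      from collections import Counter

      STOPWORDS = {"the", "a", "of", "and", "is", "in", "to"}

      def tokenize(text):
          return re.findall(r"[a-zäöüß]+", text.lower())

      def stem(token):
          # Toy suffix stripping; real systems use Porter/Snowball stemmers.
          for suffix in ("ings", "ing", "es", "s"):
              if token.endswith(suffix) and len(token) > len(suffix) + 2:
                  return token[: -len(suffix)]
          return token

      def features(text):
          # Bag-of-stems term frequencies, the input to statistical analysis.
          return Counter(stem(t) for t in tokenize(text) if t not in STOPWORDS)

      print(features("Stemming and tagging of texts is the basis of statistical text analysis"))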
    Source
    Semantic Web: Wege zur vernetzten Wissensgesellschaft. Hrsg.: T. Pellegrini, u. A. Blumauer
    Theme
    Semantic Web
  20. Nhongkai, S.N.; Bentz, H.-J.: Bilinguale Suche mittels Konzeptnetzen (2006) 0.01
    0.012741068 = product of:
      0.06370534 = sum of:
        0.06370534 = product of:
          0.12741068 = sum of:
            0.12741068 = weight(_text_:suchmaschine in 3914) [ClassicSimilarity], result of:
              0.12741068 = score(doc=3914,freq=2.0), product of:
                0.25493854 = queryWeight, product of:
                  5.6542544 = idf(docFreq=420, maxDocs=44218)
                  0.045087915 = queryNorm
                0.4997702 = fieldWeight in 3914, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.6542544 = idf(docFreq=420, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3914)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    A new method for full-text search in bilingual text collections is presented and tested on a parallel text corpus (English-German). The bridge is provided by matching word clusters derived from a co-occurrence analysis, supplied by the novel search engine SENTRAX (Essente Extractor Engine). These clusters represent concepts that occur in both text collections. The hypothesis is that retrieval by means of such structural comparisons is feasible and effective.
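    A toy sketch of the bridging idea, assuming a sentence-aligned parallel corpus and treating a term's co-occurrence neighbours as its concept cluster; SENTRAX's actual extraction engine is not shown.

      from collections import Counter

      # Toy parallel corpus: aligned English/German sentence pairs.
      parallel = [
          ("interest rate rises", "zins steigt"),
          ("bank sets interest rate", "bank legt zins fest"),
          ("river bank floods", "flussufer wird überschwemmt"),
      ]

      def cluster(term, side):
          # Co-occurrence cluster: terms sharing sentences with `term` on one side.
          c = Counter()
          for pair in parallel:
              words = pair[side].split()
              if term in words:
                  c.update(w for w in words if w != term)
          return c

      def bridge(term):
          # Terms on the other side of sentences containing `term`:
          # candidate translations and related concepts.
          c = Counter()
          for en, de in parallel:
              if term in en.split():
                  c.update(de.split())
          return c.most_common()

      print(cluster("bank", 0))   # English-side cluster around "bank"
      print(bridge("interest"))   # German candidates: "zins" ranks first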

Years

Languages

  • e 85
  • d 30
  • m 2

Types

  • a 88
  • el 15
  • m 14
  • s 8
  • p 3
  • x 3
  • d 1

Classifications