Search (129 results, page 1 of 7)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.31
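The number after each hit is a Lucene ClassicSimilarity relevance score: every matching term contributes tf times idf squared, scaled by the field norm and query norm, and the sum is multiplied by a coordination factor. A minimal sketch of the per-term weighting (function and parameter names are illustrative, not Lucene's API):

```python
import math

def idf(doc_freq: int, max_docs: int) -> float:
    # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_weight(freq: float, doc_freq: int, max_docs: int,
                field_norm: float, query_norm: float) -> float:
    # queryWeight = idf * queryNorm
    # fieldWeight = sqrt(freq) * idf * fieldNorm
    # term score  = queryWeight * fieldWeight = tf * idf**2 * fieldNorm * queryNorm
    tf = math.sqrt(freq)
    return tf * idf(doc_freq, max_docs) ** 2 * field_norm * query_norm
```

For example, a term occurring twice in a field (freq=2.0) with docFreq=24 over 44218 documents, fieldNorm=0.046875 and queryNorm=0.040055543 weighs about 0.1909, matching the term weights reported for the top hits.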
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  2. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.27
    
    Abstract
    This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges (summary and question answering) prompt ChatGPT to produce original content (98-99%) from a single text entry and from sequential questions originally posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and a simple grammatical set for understanding the writing mechanics of chatbots, evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
    Source
    https://arxiv.org/abs/2212.06721
  3. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.23
    
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language and domain independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human written summaries in a large collection of web-pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the aligning process from a training set and focuses on selecting high quality multi-word terms from human written summaries to generate suitable results for web-page summarization.
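The thesis's three association measures are not spelled out in the abstract; as a generic stand-in for the idea, candidate bigrams can be ranked by pointwise mutual information (an illustrative measure, not one of the thesis's own):

```python
import math
from collections import Counter

def score_bigrams(tokens, min_count=2):
    """Rank adjacent word pairs by pointwise mutual information:
    PMI(x, y) = log2( P(x, y) / (P(x) * P(y)) )."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n_uni = len(tokens)
    n_bi = max(len(tokens) - 1, 1)
    scores = {}
    for (x, y), c in bigrams.items():
        if c < min_count:          # drop rare, unreliable candidates
            continue
        p_xy = c / n_bi
        p_x, p_y = unigrams[x] / n_uni, unigrams[y] / n_uni
        scores[(x, y)] = math.log2(p_xy / (p_x * p_y))
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

An algorithm like LocalMaxs would then keep only those candidates whose association score is a local maximum relative to their sub- and super-terms.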
    Content
    A thesis presented to the University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
  4. Yang, C.C.; Luk, J.: Automatic generation of English/Chinese thesaurus based on a parallel corpus in laws (2003) 0.05
    
    Abstract
    The information available in languages other than English on the World Wide Web is increasing significantly. According to a report from Computer Economics in 1999, 54% of Internet users are English speakers ("English Will Dominate Web for Only Three More Years," Computer Economics, July 9, 1999, http://www.computereconomics.com/new4/pr/pr990610.html). However, it is predicted that there will be only a 60% increase in Internet users among English speakers versus 150% growth among non-English speakers over the next five years. By 2005, 57% of Internet users will be non-English speakers. A report by CNN.com in 2000 showed that the number of Internet users in China had increased from 8.9 million to 16.9 million between January and June of 2000 ("Report: China Internet users double to 17 million," CNN.com, July, 2000, http://cnn.org/2000/TECH/computing/07/27/china.internet.reut/index.html). According to Nielsen/NetRatings, there was a dramatic leap from 22.5 million to 56.6 million Internet users from 2001 to 2002. China had become the second-largest global at-home Internet population in 2002 (the US Internet population was 166 million) (Robyn Greenspan, "China Pulls Ahead of Japan," Internet.com, April 22, 2002, http://cyberatlas.internet.com/big-picture/geographics/article/0,,5911_1013841,00.html). All of this evidence reveals the importance of cross-lingual research to satisfy needs in the near future. Digital library research has focused on structural and semantic interoperability in the past. Searching and retrieving objects across variations in protocols, formats and disciplines have been widely explored (Schatz, B., & Chen, H. (1999). Digital libraries: technological advances and social impacts. IEEE Computer, Special Issue on Digital Libraries, February, 32(2), 45-50; Chen, H., Yen, J., & Yang, C.C. (1999). International activities: development of Asian digital libraries. IEEE Computer, Special Issue on Digital Libraries, 32(2), 48-49.).
However, research in crossing language boundaries, especially between European and Oriental languages, is still at an initial stage. In this proposal, we focus on cross-lingual semantic interoperability by developing automatic generation of a cross-lingual thesaurus based on an English/Chinese parallel corpus. When searchers encounter retrieval problems, professional librarians usually consult the thesaurus to identify other relevant vocabulary. For the problem of searching across language boundaries, a cross-lingual thesaurus, generated by co-occurrence analysis and a Hopfield network, can be used to generate additional semantically relevant terms that cannot be obtained from a dictionary. In particular, the automatically generated cross-lingual thesaurus is able to capture unknown words that do not exist in a dictionary, such as names of persons, organizations, and events. Due to Hong Kong's unique historical background, both English and Chinese are used as official languages in all legal documents. Therefore, English/Chinese cross-lingual information retrieval is critical for applications in the courts and the government. In this paper, we develop an automatic thesaurus using a Hopfield network based on a parallel corpus collected from the Web site of the Department of Justice of the Hong Kong Special Administrative Region (HKSAR) Government. Experiments are conducted to measure the precision and recall of the automatically generated English/Chinese thesaurus. The results show that such a thesaurus is a promising tool for retrieving relevant terms, especially in a language different from that of the input term. The direct translation of the input term can also be retrieved in most cases.
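The co-occurrence analysis step can be sketched as an asymmetric association weight between terms that share documents. This is a simplified stand-in, not the authors' exact formula, and the Hopfield-network spreading activation that follows it is omitted:

```python
from collections import defaultdict

def cooccurrence_weights(docs):
    """docs: list of sets of terms (one set per document).
    Returns asymmetric weights w[a][b] = df(a, b) / df(a): the fraction
    of documents containing term a that also contain term b."""
    df = defaultdict(int)   # document frequency of each term
    co = defaultdict(int)   # joint document frequency of ordered pairs
    for terms in docs:
        for a in terms:
            df[a] += 1
            for b in terms:
                if a != b:
                    co[(a, b)] += 1
    w = defaultdict(dict)
    for (a, b), c in co.items():
        w[a][b] = c / df[a]
    return w
```

For a cross-lingual thesaurus, each "document" would be an aligned English/Chinese pair, so that English terms acquire weighted links to co-occurring Chinese terms.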
    Footnote
    Teil eines Themenheftes: "Web retrieval and mining: A machine learning perspective"
  5. Chowdhury, G.G.: Natural language processing (2002) 0.04
    
    Abstract
    Natural Language Processing (NLP) is an area of research and application that explores how computers can be used to understand and manipulate natural language text or speech to do useful things. NLP researchers aim to gather knowledge on how human beings understand and use language so that appropriate tools and techniques can be developed to make computer systems understand and manipulate natural languages to perform desired tasks. The foundations of NLP lie in a number of disciplines, namely, computer and information sciences, linguistics, mathematics, electrical and electronic engineering, artificial intelligence and robotics, and psychology. Applications of NLP include a number of fields of study, such as machine translation, natural language text processing and summarization, user interfaces, multilingual and cross-language information retrieval (CLIR), speech recognition, artificial intelligence, and expert systems. One important application area that is relatively new and has not been covered in previous ARIST chapters on NLP relates to the proliferation of the World Wide Web and digital libraries.
  6. Yang, C.C.; Li, K.W.: Automatic construction of English/Chinese parallel corpora (2003) 0.03
    
    Abstract
    As the demand for global information increases significantly, multilingual corpora have become a valuable linguistic resource for applications in cross-lingual information retrieval and natural language processing. In order to cross the boundaries that exist between different languages, dictionaries are the most typical tools. However, the general-purpose dictionary is less sensitive to both genre and domain. It is also impractical to manually construct tailored bilingual dictionaries or sophisticated multilingual thesauri for large applications. Corpus-based approaches, which do not have the limitations of dictionaries, provide a statistical translation model with which to cross the language boundary. There are many domain-specific parallel or comparable corpora that are employed in machine translation and cross-lingual information retrieval. Most of these are corpora between Indo-European languages, such as English/French and English/Spanish. The Asian/Indo-European corpus, especially the English/Chinese corpus, is relatively sparse. The objective of the present research is to construct an English/Chinese parallel corpus automatically from the World Wide Web. In this paper, an alignment method is presented which is based on dynamic programming to identify the one-to-one Chinese and English title pairs. The method includes alignment at title level, word level and character level. The longest common subsequence (LCS) is applied to find the most reliable Chinese translation of an English word. Because a single word in one language may translate into two or more repeated words in another, the edit operation deletion is used to resolve redundancy. A score function is then proposed to determine the optimal title pairs. Experiments have been conducted to investigate the performance of the proposed method using the daily press release articles of the Hong Kong SAR government as the test bed. The precision of the result is 0.998 while the recall is 0.806.
The release articles and speech articles published by the Hongkong & Shanghai Banking Corporation Limited are also used to test our method; the precision is 1.00 and the recall is 0.948.
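The longest common subsequence is the textbook dynamic program; a sketch of how it could pick the most reliable candidate translation (the length normalisation is an assumption for illustration, not the authors' score function):

```python
def lcs_length(a: str, b: str) -> int:
    """Length of the longest common subsequence of two sequences,
    via the standard O(len(a) * len(b)) dynamic program."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def best_translation(word: str, candidates: list[str]) -> str:
    # Pick the candidate sharing the longest common subsequence with
    # the word, normalised by candidate length to penalise long strings.
    return max(candidates, key=lambda c: lcs_length(word, c) / max(len(c), 1))
```

In the paper's setting the sequences compared would be aligned title pairs and their word- and character-level segments rather than raw strings.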
  7. Artemenko, O.; Shramko, M.: Entwicklung eines Werkzeugs zur Sprachidentifikation in mono- und multilingualen Texten (2005) 0.03
    
    Abstract
    With the spread of the Internet, the number of documents available on the World Wide Web keeps growing. Guaranteeing Internet users efficient access to the information they want is becoming a major challenge for the modern information society. A variety of tools is already in use to help users find their way through the growing flood of information. However, the enormous volume of unstructured and distributed information is not the only difficulty to be overcome in developing tools of this kind. The increasing multilinguality of web content creates a need for language-identification software that determines the language(s) of electronic documents for targeted further processing. Such language identifiers can, for example, be used effectively in multilingual information retrieval, since processes of automatic index construction such as stemming, stop-word extraction, etc. build on the results of language identification. This thesis presents the new system "LangIdent" for language identification of electronic text documents, intended primarily for teaching and research at the University of Hildesheim. "LangIdent" contains a selection of common algorithms for monolingual language identification, which the user can select and configure interactively. In addition, a new algorithm was implemented in the system that enables identification of the languages in which a multilingual document is written. The identification is not limited to listing the languages found; rather, the text is split into monolingual segments, each annotated with the identified language.
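A common family of monolingual language-identification algorithms of the kind such a system offers is rank-order comparison of character n-gram profiles (Cavnar-Trenkle style); a compact sketch, with profile sizes and padding chosen for illustration:

```python
from collections import Counter

def profile(text: str, n: int = 3, size: int = 300):
    """Map the most frequent character n-grams of a text to their rank."""
    padded = f"_{text.lower()}_"
    grams = Counter(padded[i:i + n] for i in range(len(padded) - n + 1))
    return {g: rank for rank, (g, _) in enumerate(grams.most_common(size))}

def identify(text: str, profiles: dict) -> str:
    """Return the language whose training profile has the smallest
    total out-of-place (rank difference) distance to the text."""
    doc = profile(text)
    def distance(lang_profile):
        penalty = len(lang_profile)  # cost for n-grams unseen in training
        return sum(abs(r - lang_profile.get(g, penalty)) for g, r in doc.items())
    return min(profiles, key=lambda lang: distance(profiles[lang]))
```

Multilingual identification, as described in the abstract, would additionally slide this classifier over the text and merge adjacent windows labelled with the same language into monolingual segments.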
  8. Schneider, R.: Web 3.0 ante portas? : Integration von Social Web und Semantic Web (2008) 0.03
    
    Abstract
    The Internet as a medium is changing, and with it its conditions of publication and reception. What opportunities do the two currently debated visions of the future, the Social Web and the Semantic Web, offer? To answer this question, the article examines the foundations of both models in terms of applications and technology, but also highlights their shortcomings as well as the added value of a combination appropriate to the medium. Using the grammatical online information system grammis as an example, a strategy for integratively exploiting the respective strengths is sketched.
    Date
    22. 1.2011 10:38:28
    Source
    Kommunikation, Partizipation und Wirkungen im Social Web, Band 1. Hrsg.: A. Zerfaß u.a.
    Theme
    Semantic Web
  9. Working with conceptual structures : contributions to ICCS 2000. 8th International Conference on Conceptual Structures: Logical, Linguistic, and Computational Issues. Darmstadt, August 14-18, 2000 (2000) 0.03
    
    Abstract
    The 8th International Conference on Conceptual Structures - Logical, Linguistic, and Computational Issues (ICCS 2000) brings together a wide range of researchers and practitioners working with conceptual structures. During the last few years, the ICCS conference series has considerably widened its scope on different kinds of conceptual structures, stimulating research across domain boundaries. We hope that this stimulation is further enhanced by ICCS 2000 joining the long tradition of conferences in Darmstadt with extensive, lively discussions. This volume consists of contributions presented at ICCS 2000, complementing the volume "Conceptual Structures: Logical, Linguistic, and Computational Issues" (B. Ganter, G.W. Mineau (Eds.), LNAI 1867, Springer, Berlin-Heidelberg 2000). It contains submissions reviewed by the program committee, and position papers. We wish to express our appreciation to all the authors of submitted papers, to the general chair, the program chair, the editorial board, the program committee, and to the additional reviewers for making ICCS 2000 a valuable contribution in the knowledge processing research field. Special thanks go to the local organizers for making the conference an enjoyable and inspiring event. We are grateful to Darmstadt University of Technology, the Ernst Schröder Center for Conceptual Knowledge Processing, the Center for Interdisciplinary Studies in Technology, the Deutsche Forschungsgemeinschaft, Land Hessen, and NaviCon GmbH for their generous support
    Content
    Concepts & Language: Knowledge organization by procedures of natural language processing. A case study using the method GABEK (J. Zelger, J. Gadner) - Computer aided narrative analysis using conceptual graphs (H. Schärfe, P. Øhrstrøm) - Pragmatic representation of argumentative text: a challenge for the conceptual graph approach (H. Irandoust, B. Moulin) - Conceptual graphs as a knowledge representation core in a complex language learning environment (G. Angelova, A. Nenkova, S. Boycheva, T. Nikolov) - Conceptual Modeling and Ontologies: Relationships and actions in conceptual categories (Ch. Landauer, K.L. Bellman) - Concept approximations for formal concept analysis (J. Saquer, J.S. Deogun) - Faceted information representation (U. Priß) - Simple concept graphs with universal quantifiers (J. Tappe) - A framework for comparing methods for using or reusing multiple ontologies in an application (J. van Zyl, D. Corbett) - Designing task/method knowledge-based systems with conceptual graphs (M. Leclère, F. Trichet, Ch. Choquet) - A logical ontology (J. Farkas, J. Sarbo) - Algorithms and Tools: Fast concept analysis (Ch. Lindig) - A framework for conceptual graph unification (D. Corbett) - Visual CP representation of knowledge (H.D. Pfeiffer, R.T. Hartley) - Maximal isojoin for representing software textual specifications and detecting semantic anomalies (Th. Charnois) - Troika: using grids, lattices and graphs in knowledge acquisition (H.S. Delugach, B.E. Lampkin) - Open world theorem prover for conceptual graphs (J.E. Heaton, P. Kocura) - NetCare: a practical conceptual graphs software tool (S. Polovina, D. Strang) - CGWorld - a web based workbench for conceptual graphs management and applications (P. Dobrev, K. Toutanova) - Position papers: The edition project: Peirce's existential graphs (R. Müller) - Mining association rules using formal concept analysis (N. Pasquier) - Contextual logic summary (R. Wille) - Information channels and conceptual scaling (K.E. Wolff) - Spatial concepts - a rule exploration (S. Rudolph) - The TEXT-TO-ONTO learning environment (A. Mädche, St. Staab) - Controlling the semantics of metadata on audio-visual documents using ontologies (Th. Dechilly, B. Bachimont) - Building the ontological foundations of a terminology from natural language to conceptual graphs with Ribosome, a knowledge extraction system (Ch. Jacquelinet, A. Burgun) - CharGer: some lessons learned and new directions (H.S. Delugach) - Knowledge management using conceptual graphs (W.K. Pun)
  10. Thelwall, M.; Price, L.: Language evolution and the spread of ideas on the Web : a procedure for identifying emergent hybrid words (2006) 0.03
    Abstract
    Word usage is of interest to linguists for its own sake as well as to social scientists and others who seek to track the spread of ideas, for example, in public debates over political decisions. The historical evolution of language can be analyzed with the tools of corpus linguistics through evolving corpora and the Web. But word usage statistics can only be gathered for known words. In this article, techniques are described and tested for identifying new words from the Web, focusing on the case when the words are related to a topic and have a hybrid form with a common sequence of letters. The results highlight the need to employ a combination of search techniques and show the wide potential of hybrid word family investigations in linguistics and social science.
  11. Semantic role universals and argument linking : theoretical, typological, and psycholinguistic perspectives (2006) 0.02
    Abstract
    The concept of semantic roles has been central to linguistic theory for many decades. More specifically, the assumption of such representations as mediators in the correspondence between a linguistic form and its associated meaning has helped to address a number of critical issues related to grammatical phenomena. Furthermore, in addition to featuring in all major theories of grammar, semantic (or 'thematic') roles have been referred to extensively within a wide range of other linguistic subdisciplines, including language typology and psycho-/neurolinguistics. This volume brings together insights from these different perspectives and thereby, for the first time, seeks to build upon the obvious potential for cross-fertilisation between hitherto autonomous approaches to a common theme. To this end, a view on semantic roles is adopted that goes beyond the mere assumption of generalised roles, but also focuses on their hierarchical organisation. The book is thus centred around the interdisciplinary examination of how these hierarchical dependencies subserve argument linking - both in terms of linguistic theory and with respect to real-time language processing - and how they interact with other information types in this process. Furthermore, the contributions examine the interaction between the role hierarchy and the conceptual content of (generalised) semantic roles and investigate their cross-linguistic applicability and psychological reality, as well as their explanatory potential in accounting for phenomena in the domain of language disorders. In bridging the gap between different disciplines, the book provides a valuable overview of current thought on semantic roles and argument linking, and may further serve as a point of departure for future interdisciplinary research in this area. As such, it will be of interest to scientists and advanced students in all domains of linguistics and cognitive science.
    RSWK
    Thematische Relation / Aufsatzsammlung (BVB)
    Subject
    Thematische Relation / Aufsatzsammlung (BVB)
  12. Informationslinguistische Texterschließung (1986) 0.02
    RSWK
    Information Retrieval / Aufsatzsammlung (DNB)
    Automatische Sprachanalyse / Morphologie / Aufsatzsammlung (SBB / GBV)
    Automatische Sprachanalyse / Morphologie <Linguistik> / Aufsatzsammlung (DNB)
    Linguistische Datenverarbeitung / Linguistik / Aufsatzsammlung (SWB)
    Linguistik / Information Retrieval / Aufsatzsammlung (SWB / BVB)
    Linguistische Datenverarbeitung / Textanalyse / Aufsatzsammlung (BVB)
    Subject
    Information Retrieval / Aufsatzsammlung (DNB)
    Automatische Sprachanalyse / Morphologie / Aufsatzsammlung (SBB / GBV)
    Automatische Sprachanalyse / Morphologie <Linguistik> / Aufsatzsammlung (DNB)
    Linguistische Datenverarbeitung / Linguistik / Aufsatzsammlung (SWB)
    Linguistik / Information Retrieval / Aufsatzsammlung (SWB / BVB)
    Linguistische Datenverarbeitung / Textanalyse / Aufsatzsammlung (BVB)
  13. Semantik, Lexikographie und Computeranwendungen : Workshop ... (Bonn) : 1995.01.27-28 (1996) 0.02
    Date
    14. 4.2007 10:04:22
    RSWK
    Computer / Anwendung / Computerunterstützte Lexikographie / Aufsatzsammlung
    Subject
    Computer / Anwendung / Computerunterstützte Lexikographie / Aufsatzsammlung
  14. Information und Sprache : Beiträge zu Informationswissenschaft, Computerlinguistik, Bibliothekswesen und verwandten Fächern. Festschrift für Harald H. Zimmermann. Herausgegeben von Ilse Harms, Heinz-Dirk Luckhardt und Hans W. Giessen (2006) 0.02
    Content
    Inhalt: Information und Sprache und mehr - eine Einleitung - Information und Kommunikation Wolf Rauch: Auch Information ist eine Tochter der Zeit Winfried Lenders: Information und kulturelles Gedächtnis Rainer Hammwöhner: Anmerkungen zur Grundlegung der Informationsethik Hans W. Giessen: Ehrwürdig stille Informationen Gernot Wersig: Vereinheitlichte Medientheorie und ihre Sicht auf das Internet Johann Haller, Anja Rütten: Informationswissenschaft und Translationswissenschaft: Spielarten oder Schwestern? Rainer Kuhlen: In Richtung Summarizing für Diskurse in K3 Werner Schweibenz: Sprache, Information und Bedeutung im Museum. Narrative Vermittlung durch Storytelling - Sprache und Computer, insbesondere Information Retrieval und Automatische Indexierung Manfred Thiel: Bedingt wahrscheinliche Syntaxbäume Jürgen Krause: Shell Model, Semantic Web and Web Information Retrieval Elisabeth Niggemann: Wer suchet, der findet? Verbesserung der inhaltlichen Suchmöglichkeiten im Informationssystem Der Deutschen Bibliothek Christa Womser-Hacker: Zur Rolle von Eigennamen im Cross-Language Information Retrieval Klaus-Dirk Schmitz: Wörterbuch, Thesaurus, Terminologie, Ontologie. Was tragen Terminologiewissenschaft und Informationswissenschaft zur Wissensordnung bei?
    Footnote
    "In Thesauri, Semantische Netze, Frames, Topic Maps, Taxonomien, Ontologien - begriffliche Verwirrung oder konzeptionelle Vielfalt? (pp. 139-151), Jiri Panyr (Munich/Saarbrücken) gives a readable and useful overview of the semantic representation forms named in the title of his contribution, forms which are applied again and again - often imprecisely or even incorrectly - in connection with the Internet and especially with the proposed Semantic Web. His remarks on the fashionable term ontology in particular show that it must not be used carelessly as a quasi-synonym of thesaurus or classification. Panyr's contribution is, incidentally, thematically related to that of K.-D. Schmitz (Cologne), Wörterbuch, Thesaurus, Terminologie, Ontologie (pp. 129-137). Apart from its uninspired title Wer suchet, der findet? (pp. 107-118) - fortunately supplied with the subtitle Verbesserung der inhaltlichen Suchmöglichkeiten im Informationssystem Der Deutschen Bibliothek - this article by Elisabeth Niggemann (Frankfurt am Main) is admittedly not a scholarly one, but it is certainly the most practical, most readable and, from a librarian's point of view, most interesting in the book. Niggemann surveys the subject indexing of the bibliographic data of the DDB, which has meanwhile become the Deutsche Nationalbibliothek, and gives a status report and an outlook on current and planned improvements to subject searching. These include the broad use of an automatic indexing procedure (MILOS/IDX) as well as activities in the classification area (DDC), the linking of national subject heading systems (the MACS project), and work on cross-concordances (CARMEN) and on approaches to handling heterogeneity.
    This "commitment" to improving the subject indexing of the national online information system, declared from a central quarter, fills the Phaeacian observer, accustomed to little more than timidity and indifference, with respect and wistful envy.
    Two further contributions also deal with automatic indexing. Indexieren mit AUTINDEX by H.-D. Mass (Saarbrücken) is unfortunately brief and written without didactic ambition, so that one cannot really picture how this system works. The workshop report Automatische Indexierung des Reallexikons zur deutschen Kunstgeschichte by K. Lepsky (Cologne) is clearer, showing which problems and steps arise in the digitization, indexing and web presentation of the full texts of a large specialized reference work. Further interesting contributions deal, for example, with summarizing services within an e-learning project (R. Kuhlen), with the shell model and the Semantic Web (J. Krause; for reasons not explained, in English), and with the accreditation/evaluation of higher-education teaching and research in Great Britain (T. Seeger). All in all, this is a worthy Festschrift, one that will certainly have pleased its honoree. For special collections in information science and for larger libraries the volume is in any case an enrichment. One drop of bitterness, though: although Information und Sprache is a visually appealing book, closer inspection unfortunately reveals all too many misprints, faulty word divisions, uncorrected grammatical errors, and inconsistencies in italics and punctuation. Copy editors and proofreaders, one must painfully note once again, are a dying profession."
    RSWK
    Informations- und Dokumentationswissenschaft / Aufsatzsammlung
    Information Retrieval / Aufsatzsammlung
    Automatische Indexierung / Aufsatzsammlung
    Linguistische Datenverarbeitung / Aufsatzsammlung
    Subject
    Informations- und Dokumentationswissenschaft / Aufsatzsammlung
    Information Retrieval / Aufsatzsammlung
    Automatische Indexierung / Aufsatzsammlung
    Linguistische Datenverarbeitung / Aufsatzsammlung
  15. Wright, S.E.: Leveraging terminology resources across application boundaries : accessing resources in future integrated environments (2000) 0.02
    Abstract
    The title for this conference, stated in English, is Language Technology for a Dynamic Economy in the Media Age. The question arises as to what media we are dealing with and to what extent we are moving away from the reality of different media to a world in which all sub-categories flow together into a unified stream of information that is constantly reshaped to appear in different hardware configurations. A few years ago, people who were interested in sharing data or getting different electronic "boxes" to talk to each other were focused on two major aspects: 1) developing data conversion technology, and 2) convincing potential users that sharing information was an even remotely interesting option. Although some content "owners" are still reticent about releasing their data, it has become dramatically apparent in the Web environment that a broad range of users does indeed want this technology. Even as researchers struggle with the remaining technical, legal, and ethical impediments that stand in the way of unlimited access to existing multi-platform resources, the future view of the world will no longer be as obsessed with conversion capability as it will be with creating content, with an eye to morphing technologies that will enable the delivery of that content from an open-standards-based format such as XML (eXtensible Markup Language), MPEG (Moving Picture Experts Group), or WAP (Wireless Application Protocol) to a rich variety of display options.
  16. Symonds, M.; Bruza, P.; Zuccon, G.; Koopman, B.; Sitbon, L.; Turner, I.: Automatic query expansion : a structural linguistic perspective (2014) 0.02
    Abstract
    A user's query is considered to be an imprecise description of their information need. Automatic query expansion is the process of reformulating the original query with the goal of improving retrieval effectiveness. Many successful query expansion techniques model syntagmatic associations that infer two terms co-occur more often than by chance in natural language. However, structural linguistics relies on both syntagmatic and paradigmatic associations to deduce the meaning of a word. Given the success of dependency-based approaches to query expansion and the reliance on word meanings in the query formulation process, we argue that modeling both syntagmatic and paradigmatic information in the query expansion process improves retrieval effectiveness. This article develops and evaluates a new query expansion technique that is based on a formal, corpus-based model of word meaning that models syntagmatic and paradigmatic associations. We demonstrate that when sufficient statistical information exists, as in the case of longer queries, including paradigmatic information alone provides significant improvements in retrieval effectiveness across a wide variety of data sets. More generally, when our new query expansion approach is applied to large-scale web retrieval it demonstrates significant improvements in retrieval effectiveness over a strong baseline system, based on a commercial search engine.
  17. Schwarz, C.: THESYS: Thesaurus Syntax System : a fully automatic thesaurus building aid (1988) 0.02
    Abstract
    THESYS is based on the natural language processing of free-text databases. It yields statistically evaluated correlations between words of the database. These correlations correspond to traditional thesaurus relations. The person who has to build a thesaurus is thus assisted by the proposals made by THESYS. THESYS is being tested on commercial databases under real world conditions. It is part of a text processing project at Siemens, called TINA (Text-Inhalts-Analyse). Software from TINA is actually being applied and evaluated by the US Department of Commerce for patent search and indexing (REALIST: REtrieval Aids by Linguistics and STatistics)
    Date
    6. 1.1999 10:22:07
  18. Olsen, K.A.; Williams, J.G.: Spelling and grammar checking using the Web as a text repository (2004) 0.02
    Abstract
    Natural languages are both complex and dynamic. They are in part formalized through dictionaries and grammar. Dictionaries attempt to provide definitions and examples of various usages for all the words in a language. Grammar, on the other hand, is the system of rules that defines the structure of a language and is concerned with the correct use and application of the language in speaking or writing. The fact that these two mechanisms lag behind the language as currently used is not a serious problem for those living in a language culture and talking their native language. However, the correct choice of words, expressions, and word relationships is much more difficult when speaking or writing in a foreign language. The basics of the grammar of a language may have been learned in school decades ago, and even then there were always several choices for the correct expression for an idea, fact, opinion, or emotion. Although many different parts of speech and their relationships can make for difficult language decisions, prepositions tend to be problematic for nonnative speakers of English, and, in reality, prepositions are a major problem in most languages. Does a speaker or writer say "in the West Coast" or "on the West Coast," or perhaps "at the West Coast"? In Norwegian, we are "in" a city, but "at" a place. But the distinction between cities and places is vague. To be absolutely correct, one really has to learn the right preposition for every single place. A simplistic way of resolving these language issues is to ask a native speaker. But even native speakers may disagree about the right choice of words. If there is disagreement, then one will have to ask more than one native speaker, treat his/her response as a vote for a particular choice, and perhaps choose the majority choice as the best possible alternative. In real life, such a procedure may be impossible or impractical, but in the electronic world, as we shall see, this is quite easy to achieve. 
Using the vast text repository of the Web, we may get a significant voting base for even the most detailed and distinct phrases. We shall start by introducing a set of examples to present our idea of using the text repository of the Web to aid in making the best word selection, especially for the use of prepositions. Then we will present a more general discussion of the possibilities and limitations of using the Web as an aid for correct writing.
  19. Doszkocs, T.E.; Zamora, A.: Dictionary services and spelling aids for Web searching (2004) 0.02
    Abstract
    The Specialized Information Services Division (SIS) of the National Library of Medicine (NLM) provides Web access to more than a dozen scientific databases on toxicology and the environment on TOXNET. Search queries on TOXNET often include misspelled or variant English words, medical and scientific jargon and chemical names. Following the example of search engines like Google and ClinicalTrials.gov, we set out to develop a spelling "suggestion" system for increased recall and precision in TOXNET searching. This paper describes development of dictionary technology that can be used in a variety of applications such as orthographic verification, writing aid, natural language processing, and information storage and retrieval. The design of the technology allows building complex applications using the components developed in the earlier phases of the work in a modular fashion without extensive rewriting of computer code. Since many of the potential applications envisioned for this work have on-line or web-based interfaces, the dictionaries and other computer components must have fast response, and must be adaptable to open-ended database vocabularies, including chemical nomenclature. The dictionary vocabulary for this work was derived from SIS and other databases and specialized resources, such as NLM's Unified Medical Language Systems (UMLS). The resulting technology, A-Z Dictionary (AZdict), has three major constituents: 1) the vocabulary list, 2) the word attributes that define part of speech and morphological relationships between words in the list, and 3) a set of programs that implements the retrieval of words and their attributes, and determines similarity between words (ChemSpell). These three components can be used in various applications such as spelling verification, spelling aid, part-of-speech tagging, paraphrasing, and many other natural language processing functions.
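    The core of a spelling "suggestion" system of the kind this abstract describes is a word-similarity measure over a dictionary vocabulary. A minimal sketch, assuming a plain Levenshtein edit distance (the vocabulary and function names are illustrative, not the actual AZdict/ChemSpell implementation):

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming (one row at a time)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def suggest(word, vocabulary, limit=3):
    """Rank dictionary entries by closeness to a (mis)spelled query."""
    return sorted(vocabulary, key=lambda v: edit_distance(word, v))[:limit]

# Illustrative toy vocabulary; a real system draws on SIS/UMLS resources.
vocab = ["toxicology", "toluene", "toxin", "benzene"]
print(suggest("toxicolgy", vocab))  # "toxicology" ranks first
```

    A production system for chemical nomenclature would need phonetic and segment-aware matching on top of raw edit distance, which is the kind of specialization the paper attributes to ChemSpell.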
    Date
    14. 8.2004 17:22:56
    Source
    Online. 28(2004) no.3, S.22-29
  20. Bian, G.-W.; Chen, H.-H.: Cross-language information access to multilingual collections on the Internet (2000) 0.01
    Abstract
    The language barrier is the major problem that people face in searching for, retrieving, and understanding multilingual collections on the Internet. This paper deals with query translation and document translation in a Chinese-English information retrieval system called MTIR. Bilingual dictionary and monolingual corpus-based approaches are adopted to select suitable translated query terms. A machine transliteration algorithm is introduced to resolve proper name searching. We consider several design issues for document translation, including which material is translated, what roles the HTML tags play in translation, what the tradeoff is between the speed performance and the translation performance, and what form the translated result is presented in. About 100,000 Web pages translated in the last 4 months of 1997 are used for a quantitative study of online and real-time Web page translation.
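    The first approach this abstract mentions, dictionary-based query translation with corpus-based candidate selection, can be sketched as follows. The bilingual dictionary and the frequency-as-disambiguation step are illustrative assumptions, not MTIR's actual data or algorithm:

```python
# Each source term maps to a list of candidate English translations.
bilingual_dict = {
    "資訊": ["information", "data"],
    "檢索": ["retrieval", "search"],
}

# A monolingual target-language corpus; raw frequency stands in here for
# the corpus-based selection the paper describes.
target_corpus = "information retrieval systems rank documents by relevance"

def translate_query(terms):
    """Pick, for each source term, the candidate translation that is
    most frequent in the target-language corpus."""
    translated = []
    for term in terms:
        candidates = bilingual_dict.get(term, [term])  # pass through unknowns
        best = max(candidates, key=lambda c: target_corpus.count(c))
        translated.append(best)
    return translated

print(translate_query(["資訊", "檢索"]))  # ['information', 'retrieval']
```

    Proper names that are absent from the dictionary are exactly where this scheme fails, which motivates the machine transliteration algorithm the paper introduces.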
    Date
    16. 2.2000 14:22:39

Languages

  • e 102
  • d 27
  • m 2

Types

  • a 93
  • el 21
  • m 19
  • s 11
  • p 4
  • x 4
  • d 1
