Search (104 results, page 1 of 6)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.08
    0.07722542 = product of:
      0.11583812 = sum of:
        0.09223415 = product of:
          0.27670243 = sum of:
            0.27670243 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.27670243 = score(doc=562,freq=2.0), product of:
                0.4923373 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05807226 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.023603968 = product of:
          0.047207937 = sum of:
            0.047207937 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.047207937 = score(doc=562,freq=2.0), product of:
                0.20335917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05807226 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
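    The tree above is Lucene ClassicSimilarity "explain" output: each matching term contributes (idf x queryNorm) x (tf x idf x fieldNorm), with tf = sqrt(termFreq), and coord factors scale the score when only some query terms match. A minimal sketch re-deriving the top-level score from the numbers shown (function names are illustrative, not the Lucene API):

        import math

        def term_score(idf, query_norm, freq, field_norm):
            query_weight = idf * query_norm                    # queryWeight = idf * queryNorm
            field_weight = math.sqrt(freq) * idf * field_norm  # fieldWeight = tf * idf * fieldNorm
            return query_weight * field_weight

        # weight(_text_:3a in 562): idf=8.478011, freq=2.0, fieldNorm=0.046875, coord(1/3)
        s_3a = term_score(8.478011, 0.05807226, 2.0, 0.046875) * (1 / 3)
        # weight(_text_:22 in 562): idf=3.5018296, freq=2.0, fieldNorm=0.046875, coord(1/2)
        s_22 = term_score(3.5018296, 0.05807226, 2.0, 0.046875) * (1 / 2)

        print((s_3a + s_22) * (2 / 3))   # outer coord(2/3) -> ~0.07722542, as reported above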
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  2. Al-Sughaiyer, I.A.; Al-Kharashi, I.A.: Arabic morphological analysis techniques : a comprehensive survey (2004) 0.05
    0.048905186 = product of:
      0.14671555 = sum of:
        0.14671555 = weight(_text_:systematic in 2206) [ClassicSimilarity], result of:
          0.14671555 = score(doc=2206,freq=2.0), product of:
            0.33191046 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.05807226 = queryNorm
            0.44203353 = fieldWeight in 2206, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2206)
      0.33333334 = coord(1/3)
    
    Abstract
    After several decades of heavy research activity on English stemmers, Arabic morphological analysis techniques have become a popular area of research. The Arabic language is one of the Semitic languages; it exhibits a very systematic but complex morphological structure based on root-pattern schemes. As a consequence, a survey of such techniques is all the more necessary. The aim of this paper is to summarize and organize the information available in the literature in an attempt to motivate researchers to look into these techniques and try to develop more advanced ones. This paper introduces, classifies, and surveys Arabic morphological analysis techniques. Furthermore, conclusions, open areas, and future directions are provided at the end.
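    To make "root-pattern schemes" concrete: Semitic morphology interleaves a consonantal root with a templatic pattern. A toy sketch in transliteration (root and glosses are textbook examples, not taken from the survey):

        # "1", "2", "3" mark the slots for the three root consonants.
        def apply_pattern(root, pattern):
            c1, c2, c3 = root
            return pattern.replace("1", c1).replace("2", c2).replace("3", c3)

        root = ("k", "t", "b")              # k-t-b, the root associated with writing
        for pattern, gloss in [("1a2a3a", "he wrote (kataba)"),
                               ("1i2a3", "book (kitab)"),
                               ("ma12a3", "office (maktab)")]:
            print(apply_pattern(root, pattern), "->", gloss)

    Morphological analysis is the inverse, and much harder, problem: recovering root and pattern from an inflected surface form. The ambiguity of that inversion is what the surveyed techniques address.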
  3. Kuo, J.-S.; Li, H.; Yang, Y.-K.: Active learning for constructing transliteration lexicons from the Web (2008) 0.05
    0.048905186 = product of:
      0.14671555 = sum of:
        0.14671555 = weight(_text_:systematic in 1345) [ClassicSimilarity], result of:
          0.14671555 = score(doc=1345,freq=2.0), product of:
            0.33191046 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.05807226 = queryNorm
            0.44203353 = fieldWeight in 1345, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1345)
      0.33333334 = coord(1/3)
    
    Abstract
    This article presents an adaptive learning framework for Phonetic Similarity Modeling (PSM) that supports the automatic construction of transliteration lexicons. The learning algorithm starts with minimum prior knowledge about machine transliteration and acquires knowledge iteratively from the Web. We study the unsupervised learning and the active learning strategies that minimize human supervision in terms of data labeling. The learning process refines the PSM and constructs a transliteration lexicon at the same time. We evaluate the proposed PSM and its learning algorithm through a series of systematic experiments, which show that the proposed framework is reliably effective on two independent databases.
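    The iterative refinement loop described above needs a similarity score to refine. A minimal stand-in, assuming plain string similarity over romanized forms in place of the paper's trained probabilistic PSM (candidate strings are invented):

        from difflib import SequenceMatcher

        def phonetic_similarity(source, candidate):
            # Crude surrogate for a Phonetic Similarity Model: surface string
            # similarity; a real PSM learns phoneme-level transliteration probabilities.
            return SequenceMatcher(None, source, candidate).ratio()

        # Rank candidate transliterations of "Clinton" mined from web text
        for c in sorted(["kelindun", "kelidun", "kalinton", "beijing"],
                        key=lambda c: -phonetic_similarity("klinton", c)):
            print(c, round(phonetic_similarity("klinton", c), 2))

    In the active-learning setting, the candidates the current model is least certain about are the ones routed to a human for labeling.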
  4. Schwarz, C.: THESYS: Thesaurus Syntax System : a fully automatic thesaurus building aid (1988) 0.04
    0.040294997 = product of:
      0.120884985 = sum of:
        0.120884985 = sum of:
          0.065809056 = weight(_text_:indexing in 1361) [ClassicSimilarity], result of:
            0.065809056 = score(doc=1361,freq=2.0), product of:
              0.22229293 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.05807226 = queryNorm
              0.29604656 = fieldWeight in 1361, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1361)
          0.05507593 = weight(_text_:22 in 1361) [ClassicSimilarity], result of:
            0.05507593 = score(doc=1361,freq=2.0), product of:
              0.20335917 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05807226 = queryNorm
              0.2708308 = fieldWeight in 1361, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1361)
      0.33333334 = coord(1/3)
    
    Abstract
    THESYS is based on the natural language processing of free-text databases. It yields statistically evaluated correlations between words of the database; these correlations correspond to traditional thesaurus relations. The person who has to build a thesaurus is thus assisted by the proposals THESYS makes (a toy illustration of such co-occurrence statistics follows this record). THESYS is being tested on commercial databases under real-world conditions. It is part of a text processing project at Siemens called TINA (Text-Inhalts-Analyse, i.e. text content analysis). Software from TINA is currently being applied and evaluated by the US Department of Commerce for patent search and indexing (REALIST: REtrieval Aids by Linguistics and STatistics).
    Date
    6. 1.1999 10:22:07
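    A minimal sketch of the statistical core described in the abstract above: count how often word pairs co-occur within documents and keep the pairs whose association beats chance. Pointwise mutual information serves as the association measure here (the abstract does not name THESYS's actual statistic; the toy corpus is invented):

        import math
        from collections import Counter
        from itertools import combinations

        docs = [{"thesaurus", "term", "relation"},
                {"thesaurus", "term", "index"},
                {"patent", "search", "index"}]

        n = len(docs)
        word_freq = Counter(w for d in docs for w in d)
        pair_freq = Counter(p for d in docs for p in combinations(sorted(d), 2))

        # Pointwise mutual information; positive values suggest a thesaurus relation
        for (a, b), f in pair_freq.items():
            pmi = math.log2((f / n) / ((word_freq[a] / n) * (word_freq[b] / n)))
            if pmi > 0:
                print(f"{a} <-> {b}: {pmi:.2f}")

    A human thesaurus builder would then review these proposals, which is exactly the assistance role the abstract describes.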
  5. Zhai, X.: ChatGPT user experience : implications for education (2022) 0.03
    0.034932278 = product of:
      0.104796834 = sum of:
        0.104796834 = weight(_text_:systematic in 849) [ClassicSimilarity], result of:
          0.104796834 = score(doc=849,freq=2.0), product of:
            0.33191046 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.05807226 = queryNorm
            0.31573826 = fieldWeight in 849, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=849)
      0.33333334 = coord(1/3)
    
    Abstract
    ChatGPT, a general-purpose conversational chatbot released on November 30, 2022, by OpenAI, is expected to impact every aspect of society. However, the potential impacts of this NLP tool on education remain unknown. The impact could be enormous, as the capacity of ChatGPT may drive changes to educational learning goals, learning activities, and assessment and evaluation practices. This study was conducted by piloting ChatGPT to write an academic paper, titled Artificial Intelligence for Education (see Appendix A). The piloting result suggests that ChatGPT is able to help researchers write a paper that is coherent, (partially) accurate, informative, and systematic. The writing is extremely efficient (2-3 hours) and involves very limited professional knowledge from the author. Drawing upon the user experience, I reflect on the potential impacts of ChatGPT, as well as similar AI tools, on education. The paper concludes by suggesting adjusted learning goals: students should be able to use AI tools to conduct subject-domain tasks, and education should focus on improving students' creativity and critical thinking rather than general skills. To accomplish these learning goals, researchers should design AI-involved learning tasks to engage students in solving real-world problems. ChatGPT also raises the concern that students may outsource assessment tasks to it, so new formats of assessment are needed that focus on the creativity and critical thinking AI cannot substitute.
  6. Rahmstorf, G.: Concept structures for large vocabularies (1998) 0.03
    0.034538567 = product of:
      0.1036157 = sum of:
        0.1036157 = sum of:
          0.056407765 = weight(_text_:indexing in 75) [ClassicSimilarity], result of:
            0.056407765 = score(doc=75,freq=2.0), product of:
              0.22229293 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.05807226 = queryNorm
              0.2537542 = fieldWeight in 75, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.046875 = fieldNorm(doc=75)
          0.047207937 = weight(_text_:22 in 75) [ClassicSimilarity], result of:
            0.047207937 = score(doc=75,freq=2.0), product of:
              0.20335917 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05807226 = queryNorm
              0.23214069 = fieldWeight in 75, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=75)
      0.33333334 = coord(1/3)
    
    Abstract
    A technology is described which supports the acquisition, visualisation and manipulation of large vocabularies with associated structures. It is used for dictionary production, terminology data bases, thesauri, library classification systems etc. Essential features of the technology are a lexicographic user interface, variable word description, unlimited list of word readings, a concept language, automatic transformations of formulas into graphic structures, structure manipulation operations and retransformation into formulas. The concept language includes notations for undefined concepts. The structure of defined concepts can be constructed interactively. The technology supports the generation of large vocabularies with structures representing word senses. Concept structures and ordering systems for indexing and retrieval can be constructed separately and connected by associating relations.
    Date
    30.12.2001 19:01:22
  7. Needham, R.M.; Sparck Jones, K.: Keywords and clumps (1985) 0.03
    0.030905299 = product of:
      0.04635795 = sum of:
        0.029905684 = product of:
          0.08971705 = sum of:
            0.08971705 = weight(_text_:objects in 3645) [ClassicSimilarity], result of:
              0.08971705 = score(doc=3645,freq=4.0), product of:
                0.3086582 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.05807226 = queryNorm
                0.29066795 = fieldWeight in 3645, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3645)
          0.33333334 = coord(1/3)
        0.016452264 = product of:
          0.032904528 = sum of:
            0.032904528 = weight(_text_:indexing in 3645) [ClassicSimilarity], result of:
              0.032904528 = score(doc=3645,freq=2.0), product of:
                0.22229293 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.05807226 = queryNorm
                0.14802328 = fieldWeight in 3645, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3645)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The selection that follows was chosen as it represents "a very early paper on the possibilities allowed by computers in documentation." In the early 1960s computers were being used to provide simple automatic indexing systems wherein keywords were extracted from documents. The problem with such systems was that they lacked vocabulary control; thus documents related in subject matter were not always collocated in retrieval. To improve retrieval by improving recall is the raison d'être of vocabulary control tools such as classifications and thesauri. The question arose whether it was possible, by automatic means, to construct classes of terms which, when substituted one for another, could be used to improve retrieval performance. One of the first theoretical approaches to this question was initiated by R. M. Needham and Karen Sparck Jones at the Cambridge Language Research Institute in England. The question was later pursued using experimental methodologies by Sparck Jones, who, as a Senior Research Associate in the Computer Laboratory at the University of Cambridge, has devoted her life's work to research in information retrieval and automatic natural language processing. Based on the principles of numerical taxonomy, automatic classification techniques start from the premise that two objects are similar to the degree that they share attributes in common. When these two objects are keywords, their similarity is measured in terms of the number of documents they index in common. Step 1 in automatic classification is to compute mathematically the degree to which two terms are similar. Step 2 is to group together those terms that are "most similar" to each other, forming equivalence classes of intersubstitutable terms. The technique for forming such classes varies and is the factor that characteristically distinguishes different approaches to automatic classification. The technique used by Needham and Sparck Jones, that of clumping, is described in the selection that follows. Questions that must be asked are whether the use of automatically generated classes really does improve retrieval performance and whether there is a true economic advantage in substituting mechanical for manual labor. Several years after her work with clumping, Sparck Jones was to observe that while it was not wholly satisfactory in itself, it was valuable in that it stimulated research into automatic classification. To this it might be added that it was valuable in that it introduced to library/information science the methods of numerical taxonomy, thus stimulating us to think again about the fundamental nature and purpose of classification. In this connection it might be useful to review how automatically derived classes differ from those of manually constructed classifications: 1) the manner of their derivation is purely a posteriori, the ultimate operationalization of the principle of literary warrant; 2) the relationship between members forming such classes is essentially statistical; the members of a given class are similar to each other not because they possess the class-defining characteristic but by virtue of sharing a family resemblance; and finally, 3) automatically derived classes are not related meaningfully one to another, that is, they are not ordered in traditional hierarchical and precedence relationships.
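    A minimal sketch of steps 1 and 2 as characterized above, with keyword similarity measured as the number of documents two terms index in common and clumps grown by a greedy threshold merge (the actual clumping criterion is more refined; the postings are invented):

        # Step 1: keyword similarity = number of documents two terms index in common.
        # Step 2: greedily merge sufficiently similar terms into classes ("clumps").
        postings = {
            "boat":   {1, 2, 3},
            "ship":   {2, 3, 4},
            "vessel": {3, 4},
            "tax":    {5, 6},
            "levy":   {5, 6, 7},
        }

        def sim(a, b):
            return len(postings[a] & postings[b])

        unassigned = list(postings)
        clumps = []
        while unassigned:
            clump = {unassigned.pop()}
            grew = True
            while grew:                      # keep absorbing terms until the clump is stable
                grew = False
                for t in unassigned[:]:
                    if any(sim(t, m) >= 2 for m in clump):   # threshold: 2 shared documents
                        clump.add(t)
                        unassigned.remove(t)
                        grew = True
            clumps.append(clump)
        print(clumps)   # -> [{'tax', 'levy'}, {'boat', 'ship', 'vessel'}] (set order may vary)

    The members of each clump are intersubstitutable in the statistical sense described above: they resemble each other through shared document attributes, not through a defining characteristic.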
  8. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.03
    0.030744717 = product of:
      0.09223415 = sum of:
        0.09223415 = product of:
          0.27670243 = sum of:
            0.27670243 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.27670243 = score(doc=862,freq=2.0), product of:
                0.4923373 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05807226 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
    https://arxiv.org/abs/2212.06721
  9. Yang, C.C.; Luk, J.: Automatic generation of English/Chinese thesaurus based on a parallel corpus in laws (2003) 0.02
    0.023276996 = product of:
      0.034915492 = sum of:
        0.021146512 = product of:
          0.06343953 = sum of:
            0.06343953 = weight(_text_:objects in 1616) [ClassicSimilarity], result of:
              0.06343953 = score(doc=1616,freq=2.0), product of:
                0.3086582 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.05807226 = queryNorm
                0.20553327 = fieldWeight in 1616, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1616)
          0.33333334 = coord(1/3)
        0.013768982 = product of:
          0.027537964 = sum of:
            0.027537964 = weight(_text_:22 in 1616) [ClassicSimilarity], result of:
              0.027537964 = score(doc=1616,freq=2.0), product of:
                0.20335917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05807226 = queryNorm
                0.1354154 = fieldWeight in 1616, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1616)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The information available in languages other than English in the World Wide Web is increasing significantly. According to a report from Computer Economics in 1999, 54% of Internet users are English speakers ("English Will Dominate Web for Only Three More Years," Computer Economics, July 9, 1999, http://www.computereconomics.com/new4/pr/pr990610.html). However, it is predicted that there will be only a 60% increase in Internet users among English speakers versus 150% growth among non-English speakers over the next five years. By 2005, 57% of Internet users will be non-English speakers. A report by CNN.com in 2000 showed that the number of Internet users in China had increased from 8.9 million to 16.9 million from January to June in 2000 ("Report: China Internet users double to 17 million," CNN.com, July, 2000, http://cnn.org/2000/TECH/computing/07/27/china.internet.reut/index.html). According to Nielsen/NetRatings, there was a dramatic leap from 22.5 million to 56.6 million Internet users from 2001 to 2002. China had become the second largest global at-home Internet population in 2002 (the US's Internet population was 166 million) (Robyn Greenspan, "China Pulls Ahead of Japan," Internet.com, April 22, 2002, http://cyberatlas.internet.com/big-picture/geographics/article/0,,5911_1013841,00.html). All of this evidence reveals the importance of cross-lingual research to satisfy the needs of the near future. Digital library research has focused on structural and semantic interoperability in the past. Searching and retrieving objects across variations in protocols, formats and disciplines have been widely explored (Schatz, B., & Chen, H. (1999). Digital libraries: technological advances and social impacts. IEEE Computer, Special Issue on Digital Libraries, February, 32(2), 45-50; Chen, H., Yen, J., & Yang, C.C. (1999). International activities: development of Asian digital libraries. IEEE Computer, Special Issue on Digital Libraries, 32(2), 48-49). However, research on crossing language boundaries, especially between European and Oriental languages, is still at an initial stage. In this proposal, we put our focus on cross-lingual semantic interoperability by developing automatic generation of a cross-lingual thesaurus based on an English/Chinese parallel corpus. When searchers encounter retrieval problems, professional librarians usually consult the thesaurus to identify other relevant vocabulary. For the problem of searching across language boundaries, a cross-lingual thesaurus, generated by co-occurrence analysis and a Hopfield network, can be used to suggest additional semantically relevant terms that cannot be obtained from a dictionary. In particular, the automatically generated cross-lingual thesaurus is able to capture unknown words that do not exist in a dictionary, such as names of persons, organizations, and events. Due to Hong Kong's unique historical background, both English and Chinese are used as official languages in all legal documents. Therefore, English/Chinese cross-lingual information retrieval is critical for applications in the courts and the government. In this paper, we develop an automatic thesaurus by the Hopfield network based on a parallel corpus collected from the Web site of the Department of Justice of the Hong Kong Special Administrative Region (HKSAR) Government. Experiments are conducted to measure the precision and recall of the automatically generated English/Chinese thesaurus. The results show that such a thesaurus is a promising tool for retrieving relevant terms, especially in a language different from that of the input term. A direct translation of the input term can also be retrieved in most cases.
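    A minimal sketch of the Hopfield-style expansion described above: activate the input term, spread activation through a term-term co-occurrence weight matrix until the state settles, then read off the most active terms as thesaurus suggestions (terms and weights are invented; the paper's network is trained on the HKSAR parallel corpus):

        import numpy as np

        terms = ["court", "judge", "fayuan", "faguan"]  # EN terms plus pinyin stand-ins
        W = np.array([[0.0, 0.6, 0.8, 0.3],             # symmetric co-occurrence weights
                      [0.6, 0.0, 0.3, 0.8],
                      [0.8, 0.3, 0.0, 0.5],
                      [0.3, 0.8, 0.5, 0.0]])

        a = np.array([1.0, 0.0, 0.0, 0.0])              # activate the input term "court"
        for _ in range(50):                             # iterate until the state settles
            nxt = np.tanh(W @ a)                        # sigmoid-like transfer function
            if np.allclose(nxt, a, atol=1e-6):
                break
            a = nxt

        for t, act in sorted(zip(terms, a), key=lambda x: -x[1]):
            print(t, round(float(act), 3))

    Terms in the other language that co-occur strongly with the input term end up highly activated, which is how the network surfaces cross-lingual suggestions that a dictionary would miss.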
  10. Lustig, G.: Das Projekt WAI : Wörterbuchentwicklung für automatisches Indexing (1982) 0.02
    0.021936353 = product of:
      0.065809056 = sum of:
        0.065809056 = product of:
          0.13161811 = sum of:
            0.13161811 = weight(_text_:indexing in 33) [ClassicSimilarity], result of:
              0.13161811 = score(doc=33,freq=2.0), product of:
                0.22229293 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.05807226 = queryNorm
                0.5920931 = fieldWeight in 33, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.109375 = fieldNorm(doc=33)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  11. Warner, A.J.: The role of linguistic analysis in full-text retrieval (1994) 0.02
    0.021936353 = product of:
      0.065809056 = sum of:
        0.065809056 = product of:
          0.13161811 = sum of:
            0.13161811 = weight(_text_:indexing in 2992) [ClassicSimilarity], result of:
              0.13161811 = score(doc=2992,freq=2.0), product of:
                0.22229293 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.05807226 = queryNorm
                0.5920931 = fieldWeight in 2992, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2992)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Challenges in indexing electronic text and images. Ed.: R. Fidel et al
  12. Chou, C.; Chu, T.: An analysis of BERT (NLP) for assisted subject indexing for Project Gutenberg (2022) 0.02
    0.021936353 = product of:
      0.065809056 = sum of:
        0.065809056 = product of:
          0.13161811 = sum of:
            0.13161811 = weight(_text_:indexing in 1139) [ClassicSimilarity], result of:
              0.13161811 = score(doc=1139,freq=8.0), product of:
                0.22229293 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.05807226 = queryNorm
                0.5920931 = fieldWeight in 1139, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1139)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    In light of AI (Artificial Intelligence) and NLP (Natural language processing) technologies, this article examines the feasibility of using AI/NLP models to enhance the subject indexing of digital resources. While BERT (Bidirectional Encoder Representations from Transformers) models are widely used in scholarly communities, the authors assess whether BERT models can be used in machine-assisted indexing in the Project Gutenberg collection, through suggesting Library of Congress subject headings filtered by certain Library of Congress Classification subclass labels. The findings of this study are informative for further research on BERT models to assist with automatic subject indexing for digital library collections.
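    A minimal sketch of one way such machine-assisted subject suggestion can be wired up, using sentence embeddings and cosine similarity (the model name and the toy heading list are assumptions; the authors' actual BERT pipeline and LCC subclass filtering are not reproduced here):

        # pip install sentence-transformers
        from sentence_transformers import SentenceTransformer, util

        model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed general-purpose model

        headings = [                                      # toy stand-ins for LCSH candidates
            "Whaling -- Fiction",
            "Sea stories",
            "Railroads -- History",
        ]
        book_text = "The narrator joins a whaling voyage hunting a great white whale."

        h_emb = model.encode(headings, convert_to_tensor=True)
        b_emb = model.encode(book_text, convert_to_tensor=True)
        scores = util.cos_sim(b_emb, h_emb)[0]            # cosine similarity per heading

        for h, s in sorted(zip(headings, scores), key=lambda x: -float(x[1])):
            print(round(float(s), 3), h)

    A cataloger would review the ranked list rather than accept it blindly, which is the "assisted" in assisted subject indexing.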
  13. Garfield, E.: The relationship between mechanical indexing, structural linguistics and information retrieval (1992) 0.02
    0.021711357 = product of:
      0.06513407 = sum of:
        0.06513407 = product of:
          0.13026814 = sum of:
            0.13026814 = weight(_text_:indexing in 3632) [ClassicSimilarity], result of:
              0.13026814 = score(doc=3632,freq=6.0), product of:
                0.22229293 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.05807226 = queryNorm
                0.5860202 = fieldWeight in 3632, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3632)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    It is possible to locate over 60% of the indexing terms used in the Current List of Medical Literature by analysing the titles of the articles. Citation indexes contain 'noise' and lack many pertinent citations. Mechanical indexing or analysis of text must begin with some linguistic technique. Discusses Harris's methods of structural linguistics, discourse analysis and transformational analysis. Provides 3 examples with references, abstracts and index entries.
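    A minimal sketch of the title-analysis idea quantified above: match a controlled vocabulary against article titles and count how many human-assigned index terms the titles alone would have yielded (vocabulary, titles, and assignments are invented for illustration):

        vocabulary = {"penicillin", "streptomycin", "tuberculosis", "therapy"}

        records = [  # (title, index terms actually assigned by human indexers)
            ("Penicillin therapy in resistant cases", {"penicillin", "therapy"}),
            ("Streptomycin and pulmonary tuberculosis", {"streptomycin", "tuberculosis"}),
            ("A survey of antibiotic use", {"penicillin", "streptomycin"}),
        ]

        found = assigned = 0
        for title, terms in records:
            title_words = set(title.lower().replace(",", "").split())
            found += len(terms & vocabulary & title_words)   # terms recoverable from title
            assigned += len(terms)

        print(f"{found}/{assigned} index terms recoverable from titles "
              f"({100 * found / assigned:.0f}%)")

    On this toy data the rate is 67%, echoing the "over 60%" figure reported for the Current List of Medical Literature.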
  14. Warner, A.J.: Natural language processing (1987) 0.02
    0.020981308 = product of:
      0.06294392 = sum of:
        0.06294392 = product of:
          0.12588784 = sum of:
            0.12588784 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.12588784 = score(doc=337,freq=2.0), product of:
                0.20335917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05807226 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  15. Sokirko, A.V.: Programmnaya realizatsiya russkogo obshchesemanticheskogo slovarya (1997) 0.02
    0.020139536 = product of:
      0.060418606 = sum of:
        0.060418606 = product of:
          0.18125582 = sum of:
            0.18125582 = weight(_text_:objects in 2258) [ClassicSimilarity], result of:
              0.18125582 = score(doc=2258,freq=2.0), product of:
                0.3086582 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.05807226 = queryNorm
                0.58723795 = fieldWeight in 2258, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2258)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Discusses the Delphi 2 for Windows software, which has been used for the development of the Russian Semantic Dictionary ROSS. Although not a relational database as such, Delphi actively uses standard objects of relational databases.
  16. Smeaton, A.F.: Progress in the application of natural language processing to information retrieval tasks (1992) 0.02
    0.018802589 = product of:
      0.056407765 = sum of:
        0.056407765 = product of:
          0.11281553 = sum of:
            0.11281553 = weight(_text_:indexing in 7080) [ClassicSimilarity], result of:
              0.11281553 = score(doc=7080,freq=2.0), product of:
                0.22229293 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.05807226 = queryNorm
                0.5075084 = fieldWeight in 7080, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.09375 = fieldNorm(doc=7080)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Account of recent developments in automatic and semi-automatic text indexing as well as in the generation of thesauri, text retrieval, abstracting and summarization.
  17. Hagn-Meincke, L.L.: Sprogspil pa tvaers : sprogfilosofiske teoriers betydning for indeksering og emnesogning (1999) 0.02
    0.018802589 = product of:
      0.056407765 = sum of:
        0.056407765 = product of:
          0.11281553 = sum of:
            0.11281553 = weight(_text_:indexing in 4643) [ClassicSimilarity], result of:
              0.11281553 = score(doc=4643,freq=2.0), product of:
                0.22229293 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.05807226 = queryNorm
                0.5075084 = fieldWeight in 4643, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4643)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Footnote
    Translation of the title: Language-game interferences: the importance of linguistic theories for indexing and subject searching.
  18. Godby, C.J.; Reighart, R.R.: The WordSmith Indexing System (2001) 0.02
    0.018802589 = product of:
      0.056407765 = sum of:
        0.056407765 = product of:
          0.11281553 = sum of:
            0.11281553 = weight(_text_:indexing in 1063) [ClassicSimilarity], result of:
              0.11281553 = score(doc=1063,freq=2.0), product of:
                0.22229293 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.05807226 = queryNorm
                0.5075084 = fieldWeight in 1063, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1063)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  19. Wright, L.W.; Nardini, H.K.G.; Aronson, A.R.; Rindflesch, T.C.: Hierarchical concept indexing of full-text documents in the Unified Medical Language System Information Sources Map (1999) 0.02
    0.018802589 = product of:
      0.056407765 = sum of:
        0.056407765 = product of:
          0.11281553 = sum of:
            0.11281553 = weight(_text_:indexing in 2111) [ClassicSimilarity], result of:
              0.11281553 = score(doc=2111,freq=8.0), product of:
                0.22229293 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.05807226 = queryNorm
                0.5075084 = fieldWeight in 2111, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2111)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Full-text documents are a vital and rapidly growing part of online biomedical information. A single large document can contain as much information as a small database, but normally lacks the tight structure and consistent indexing of a database. Retrieval systems will often miss highly relevant parts of a document if the document as a whole appears irrelevant. Access to full-text information is further complicated by the need to search separately many disparate information resources. This research explores how these problems can be addressed by the combined use of 2 techniques: 1) natural language processing for automatic concept-based indexing of full text, and 2) methods for exploiting the structure and hierarchy of full-text documents. We describe methods for applying these techniques to a large collection of full-text documents drawn from the Health Services / Technology Assessment Text (HSTAT) database at the NLM and examine how this hierarchical concept indexing can assist both document- and source-level retrieval in the context of NLM's Information Sources Map project.
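    A minimal sketch of the second technique, exploiting document hierarchy: credit a concept found in a section to that section at full weight and to its ancestors at reduced weight, so a relevant subsection stays retrievable even when the whole document looks irrelevant (the keyword table stands in for real concept extraction such as UMLS mapping; paths and concept IDs are invented):

        # Concepts found in a section are also credited, at reduced weight,
        # to every ancestor node on the section's path.
        concept_table = {"hypertension": "C0020538", "aspirin": "C0004057"}

        sections = [
            ("report/ch1", "Management of hypertension in adults"),
            ("report/ch2/sec1", "Low-dose aspirin for prevention"),
        ]

        index = {}  # (node, concept id) -> weight
        for path, text in sections:
            hits = [cui for word, cui in concept_table.items() if word in text.lower()]
            parts = path.split("/")
            for cui in hits:
                for depth in range(len(parts), 0, -1):
                    node = "/".join(parts[:depth])
                    weight = 1.0 if depth == len(parts) else 0.5  # damp ancestor credit
                    index[(node, cui)] = index.get((node, cui), 0) + weight

        for (node, cui), w in sorted(index.items()):
            print(node, cui, w)

    Queries can then match at whichever level of the hierarchy carries the most weight, supporting both the document- and source-level retrieval mentioned above.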
  20. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatic generated word hierarchies (1996) 0.02
    0.018358644 = product of:
      0.05507593 = sum of:
        0.05507593 = product of:
          0.11015186 = sum of:
            0.11015186 = weight(_text_:22 in 3164) [ClassicSimilarity], result of:
              0.11015186 = score(doc=3164,freq=2.0), product of:
                0.20335917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05807226 = queryNorm
                0.5416616 = fieldWeight in 3164, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3164)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248

Languages

  • e 79
  • d 21
  • da 1
  • f 1
  • m 1
  • ru 1

Types

  • a 83
  • el 9
  • m 9
  • s 4
  • p 3
  • x 2
  • b 1
  • d 1
