Search (500 results, page 24 of 25)

  • language_ss:"e"
  • theme_ss:"Computerlinguistik"
  1. Zhai, X.: ChatGPT user experience : implications for education (2022) 0.00
    Abstract
    ChatGPT, a general-purpose conversation chatbot released on November 30, 2022, by OpenAI, is expected to impact every aspect of society. However, the potential impacts of this NLP tool on education remain unknown. Such impact can be enormous, as the capacity of ChatGPT may drive changes to educational learning goals, learning activities, and assessment and evaluation practices. This study was conducted by piloting ChatGPT to write an academic paper, titled Artificial Intelligence for Education (see Appendix A). The piloting result suggests that ChatGPT is able to help researchers write a paper that is coherent, (partially) accurate, informative, and systematic. The writing is extremely efficient (2-3 hours) and involves very limited professional knowledge from the author. Drawing upon the user experience, I reflect on the potential impacts of ChatGPT, as well as similar AI tools, on education. The paper concludes by suggesting adjusted learning goals: students should be able to use AI tools to conduct subject-domain tasks, and education should focus on improving students' creativity and critical thinking rather than general skills. To accomplish the learning goals, researchers should design AI-involved learning tasks to engage students in solving real-world problems. ChatGPT also raises concerns that students may outsource assessment tasks. This paper concludes that new formats of assessment are needed that focus on the creativity and critical thinking that AI cannot substitute.
  2. Tao, J.; Zhou, L.; Hickey, K.: Making sense of the black-boxes : toward interpretable text classification using deep learning models (2023) 0.00
    Abstract
    Text classification is a common task in data science. Despite the superior performance of deep learning based models in various text classification tasks, their black-box nature poses significant challenges for wide adoption. The knowledge-to-action framework emphasizes several principles concerning the application and use of knowledge, such as ease-of-use, customization, and feedback. With the guidance of the above principles and the properties of interpretable machine learning, we identify the design requirements for and propose an interpretable deep learning (IDeL) based framework for text classification models. IDeL comprises three main components: feature penetration, instance aggregation, and feature perturbation. We evaluate our implementation of the framework with two distinct case studies: fake news detection and social question categorization. The experimental results provide evidence for the efficacy of IDeL components in enhancing the interpretability of text classification models. Moreover, the findings are generalizable across binary and multi-label, multi-class classification problems. The proposed IDeL framework introduces a unique iField perspective for building trusted models in data science by improving the transparency of and access to advanced black-box models.
    Type
    a
  3. French, J.C.; Powell, A.L.; Schulman, E.: Using clustering strategies for creating authority files (2000) 0.00
    Abstract
    As more online databases are integrated into digital libraries, the issue of quality control of the data becomes increasingly important, especially as it relates to the effective retrieval of information. Authority work, the need to discover and reconcile variant forms of strings in bibliographical entries, will become more critical in the future. Spelling variants, misspellings, and transliteration differences will all increase the difficulty of retrieving information. We investigate a number of approximate string matching techniques that have traditionally been used to help with this problem. We then introduce the notion of approximate word matching and show how it can be used to improve detection and categorization of variant forms. We demonstrate the utility of these approaches using data from the Astrophysics Data System and show how we can reduce the human effort involved in the creation of authority files.
    Type
    a
  4. Mustafa el Hadi, W.: Automatic term recognition & extraction tools : examining the new interfaces and their effective communication role in LSP discourse (1998) 0.00
    Abstract
    In this paper we will discuss the possibility of reorienting NLP (Natural Language Processing) systems towards the extraction not only of terms and their semantic relations, but also towards a variety of other uses: the storage, accessing and retrieval of Language for Special Purposes (LSP) lexical combinations; and the provision of contexts and other information on terms through the integration of more interfaces to terminological databases, term-managing systems and existing NLP systems. The aim of making such interfaces available is to increase the efficiency of the systems and improve terminology-oriented text analysis. Since automatic term extraction is the backbone of many applications such as machine translation (MT), indexing, technical writing, thesaurus construction and knowledge representation, developments in this area will have a significant impact.
    Type
    a
  5. Sidhom, S.; Hassoun, M.: Morpho-syntactic parsing for a text mining environment : An NP recognition model for knowledge visualization and information retrieval (2002) 0.00
    Type
    a
  6. L'Homme, D.; L'Homme, M.-C.; Lemay, C.: Benchmarking the performance of two Part-of-Speech (POS) taggers for terminological purposes (2002) 0.00
    Abstract
    Part-of-Speech (POS) taggers are used in an increasing number of terminology applications. However, terminologists do not know exactly how they perform on specialized texts, since most POS taggers have been trained on "general" corpora, that is, corpora containing all sorts of undifferentiated texts. In this article, we evaluate the performance of two POS taggers on French and English medical texts. The taggers are TnT (a statistical tagger developed at Saarland University (Brants 2000)) and WinBrill (the Windows version of the tagger initially developed by Eric Brill (1992)). Ten extracts from medical texts were submitted to the taggers and the outputs scanned manually. Results pertain to the accuracy of tagging in terms of correctly and incorrectly tagged words. We also study the handling of unknown words from different viewpoints.
    Type
    a
  7. Martínez, F.; Martín, M.T.; Rivas, V.M.; Díaz, M.C.; Ureña, L.A.: Using neural networks for multiword recognition in IR (2003) 0.00
    Abstract
    In this paper, a supervised neural network has been used to classify pairs of terms as being multiwords or non-multiwords. Classification is based on the values yielded by different estimators, currently available in the literature, used as inputs for the neural network. Lists of multiwords and non-multiwords have been built to train the net. Afterward, many other pairs of terms have been classified using the trained net. The results of this classification have been used to perform information retrieval tasks. Experiments show that detecting multiwords results in better performance of the IR methods.
    Type
    a
  8. Fautsch, C.; Savoy, J.: Algorithmic stemmers or morphological analysis? : an evaluation (2009) 0.00
    Abstract
    It is important in information retrieval (IR), information extraction, or classification tasks that morphologically related forms are conflated under the same stem (using a stemmer) or lemma (using a morphological analyzer). To achieve this for the English language, algorithmic stemming or various morphological analysis approaches have been suggested. Based on Cross-Language Evaluation Forum test collections containing 284 queries and various IR models, this article evaluates these word-normalization proposals. Stemming improves the mean average precision significantly, by around 7%, while performance differences are not significant when comparing various algorithmic stemmers, or algorithmic stemmers and morphological analysis. Accounting for thesaurus class numbers during indexing does not modify overall retrieval performance. Finally, we demonstrate that including a stop word list, even one containing only around 10 terms, might significantly improve retrieval performance, depending on the IR model.
    Type
    a
  9. Zhang, C.; Zeng, D.; Li, J.; Wang, F.-Y.; Zuo, W.: Sentiment analysis of Chinese documents : from sentence to document level (2009) 0.00
    Abstract
    User-generated content on the Web has become an extremely valuable source for mining and analyzing user opinions on any topic. Recent years have seen an increasing body of work investigating methods to recognize favorable and unfavorable sentiments toward specific subjects from online text. However, most of these efforts focus on English and there have been very few studies on sentiment analysis of Chinese content. This paper aims to address the unique challenges posed by Chinese sentiment analysis. We propose a rule-based approach including two phases: (1) determining each sentence's sentiment based on word dependency, and (2) aggregating sentences to predict the document sentiment. We report the results of an experimental study comparing our approach with three machine learning-based approaches using two sets of Chinese articles. These results illustrate the effectiveness of our proposed method and its advantages against learning-based approaches.
    Type
    a
  10. Farrús, M.; Costa-jussà, M.R.; Popović, M.: Study and correlation analysis of linguistic, perceptual, and automatic machine translation evaluations (2012) 0.00
    Abstract
    Evaluation of machine translation output is an important task. Various human evaluation techniques as well as automatic metrics have been proposed and investigated in the last decade. However, very few evaluation methods take the linguistic aspect into account. In this article, we use an objective evaluation method for machine translation output that classifies all translation errors into one of the five following linguistic levels: orthographic, morphological, lexical, semantic, and syntactic. Linguistic guidelines for the target language are required, and human evaluators use them to classify the output errors. The experiments are performed on English-to-Catalan and Spanish-to-Catalan translation outputs generated by four different systems: 2 rule-based and 2 statistical. All translations are evaluated using the 3 following methods: a standard human perceptual evaluation method, several widely used automatic metrics, and the human linguistic evaluation. Pearson and Spearman correlation coefficients between the linguistic, perceptual, and automatic results are then calculated, showing that the semantic level correlates significantly with both perceptual evaluation and automatic metrics.
    Type
    a
  11. Xinglin, L.: Automatic summarization method based on compound word recognition (2015) 0.00
    Abstract
    After analyzing the main methods of automatic summarization in use today, we find that they all ignore the weight of unknown words in the sentence. To overcome this problem, a method for automatic summarization based on compound word recognition is proposed. In this method, compound words in the text are first identified and the word segmentation corrected. Then, a keyword set is extracted from the Chinese documents and the sentence weights are calculated according to the weights of the keyword set. Because the weights of compound words are calculated with a separate weighting formula, a corresponding total weight is determined for each sentence. Finally, the sentences with the highest weights are selected by percentage and output in their original order to make up the summary. Experiments conducted on the HIT IR-Lab Text Summarization Corpus show that the proposed method achieves a precision of 76.51%, which suggests that the method is applicable to automatic summarization and performs well.
    Type
    a
  12. Costa-jussà, M.R.: How much hybridization does machine translation need? (2015) 0.00
    Abstract
    Rule-based and corpus-based machine translation (MT) have coexisted for more than 20 years. Recently, boundaries between the two paradigms have narrowed and hybrid approaches are gaining interest from both academia and businesses. However, since hybrid approaches involve the multidisciplinary interaction of linguists, computer scientists, engineers, and information specialists, understandably a number of issues exist. While statistical methods currently dominate research work in MT, most commercial MT systems are technically hybrid systems. The research community should investigate the benefits and questions surrounding the hybridization of MT systems more actively. This paper discusses various issues related to hybrid MT, including its origins, architectures, achievements, and frustrations experienced in the community. It can be said that both rule-based and corpus-based MT systems have benefited from hybridization when effectively integrated. In fact, many of the current rule/corpus-based MT approaches are already hybridized, since they do include statistics/rules at some point.
    Type
    a
  13. Fernández, R.T.; Losada, D.E.: Effective sentence retrieval based on query-independent evidence (2012) 0.00
    Abstract
    In this paper we propose an effective sentence retrieval method that consists of incorporating query-independent features into standard sentence retrieval models. To meet this aim, we apply a formal methodology and consider different query-independent features. In particular, we show that opinion-based features are promising. Opinion mining is an increasingly important research topic, but little is known about how to improve retrieval algorithms with opinion-based components. In this respect, we consider here different kinds of opinion-based features to act as query-independent evidence and study whether this incorporation improves retrieval performance. On the other hand, information needs are usually related to people, locations or organizations. We hypothesize here that using these named entities as query-independent features may also improve the sentence relevance estimation. Finally, the length of the retrieval unit has been shown to be an important component in different retrieval scenarios. We therefore include length-based features in our study. Our evaluation demonstrates that, either in isolation or in combination, these query-independent features help to substantially improve the performance of state-of-the-art sentence retrieval methods.
    Type
    a
  14. Shree, P.: ¬The journey of Open AI GPT models (2020) 0.00
    Abstract
    Generative Pre-trained Transformer (GPT) models by OpenAI have taken the natural language processing (NLP) community by storm by introducing very powerful language models. These models can perform various NLP tasks like question answering, textual entailment, and text summarisation without any supervised training. These language models need very few to no examples to understand the tasks and perform as well as or even better than state-of-the-art models trained in a supervised fashion. In this article we cover the journey of these models and see how they have evolved over a period of two years. 1. Discussion of the GPT-1 paper (Improving Language Understanding by Generative Pre-training). 2. Discussion of the GPT-2 paper (Language Models are Unsupervised Multitask Learners) and its subsequent improvements over GPT-1. 3. Discussion of the GPT-3 paper (Language Models are Few-Shot Learners) and the improvements which have made it one of the most powerful models NLP has seen to date. This article assumes familiarity with the basics of NLP terminology and the transformer architecture.
    Type
    a
  15. Garfield, E.: ¬The relationship between mechanical indexing, structural linguistics and information retrieval (1992) 0.00
    Type
    a
  16. Derrington, S.: MT - myth, muddle or reality? (1994) 0.00
    Type
    a
  17. Paice, C.D.: Method for evaluation of stemming algorithms based on error counting (1996) 0.00
    Type
    a
  18. Mustafa el Hadi, W.: ¬The contribution of terminology to the theoretical conception of classificatory languages and document indexing (1990) 0.00
    Type
    a
  19. Addison, E.R.; Wilson, H.D.; Feder, J.: ¬The impact of plain English searching on end users (1993) 0.00
    Type
    a
  20. Conceptual structures : theory, tools and applications. 6th International Conference on Conceptual Structures, ICCS'98, Montpellier, France, August, 10-12, 1998, Proceedings (1998) 0.00
    Abstract
    This book constitutes the refereed proceedings of the 6th International Conference on Conceptual Structures, ICCS'98, held in Montpellier, France, in August 1998. The 20 revised full papers and 10 research reports presented were carefully selected from a total of 66 submissions; also included are three invited contributions. The volume is divided into topical sections on knowledge representation and knowledge engineering, tools, conceptual graphs and other models, relationships with logics, algorithms and complexity, natural language processing, and applications.

Types

  • a 433
  • el 45
  • m 33
  • s 19
  • p 7
  • x 5
  • pat 1
  • r 1
