Search (72 results, page 1 of 4)

  • theme_ss:"Computerlinguistik"
  1. Schneider, J.W.; Borlund, P.: A bibliometric-based semiautomatic approach to identification of candidate thesaurus terms : parsing and filtering of noun phrases from citation contexts (2005) 0.13
    0.12917596 = product of:
      0.19376393 = sum of:
        0.17005529 = weight(_text_:citation in 156) [ClassicSimilarity], result of:
          0.17005529 = score(doc=156,freq=8.0), product of:
            0.23445003 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.04999695 = queryNorm
            0.725337 = fieldWeight in 156, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0546875 = fieldNorm(doc=156)
        0.023708638 = product of:
          0.047417276 = sum of:
            0.047417276 = weight(_text_:22 in 156) [ClassicSimilarity], result of:
              0.047417276 = score(doc=156,freq=2.0), product of:
                0.1750808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04999695 = queryNorm
                0.2708308 = fieldWeight in 156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=156)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The present study investigates the ability of a bibliometric based semi-automatic method to select candidate thesaurus terms from citation contexts. The method consists of document co-citation analysis, citation context analysis, and noun phrase parsing. The investigation is carried out within the specialty area of periodontology. The results clearly demonstrate that the method is able to select important candidate thesaurus terms within the chosen specialty area.
    Date
    8. 3.2007 19:55:22
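The nested score explanation above follows Lucene's ClassicSimilarity (TF-IDF) model: tf is the square root of the raw term frequency, idf is 1 + ln(maxDocs / (docFreq + 1)), fieldWeight = tf · idf · fieldNorm, and each clause score is queryWeight · fieldWeight. A minimal sketch reproducing the `citation` clause of result 1 (doc 156), taking `queryNorm` as the given constant from the explanation:

```python
import math

def idf(doc_freq, max_docs):
    # Lucene ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def field_weight(freq, doc_freq, max_docs, field_norm):
    # tf(freq) = sqrt(freq); fieldWeight = tf * idf * fieldNorm
    return math.sqrt(freq) * idf(doc_freq, max_docs) * field_norm

QUERY_NORM = 0.04999695  # constant taken from the explanation above

idf_citation = idf(1104, 44218)                  # ~4.6892867
query_weight = idf_citation * QUERY_NORM         # ~0.23445003
fw = field_weight(8.0, 1104, 44218, 0.0546875)   # ~0.725337
clause_score = query_weight * fw                 # ~0.17005529
```

The document total (0.12917596) is then the sum of the two clause scores multiplied by the coordination factor coord(2/3), as the outer lines of the explanation show.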
  2. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.07
    0.06648674 = product of:
      0.099730104 = sum of:
        0.079408415 = product of:
          0.23822524 = sum of:
            0.23822524 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.23822524 = score(doc=562,freq=2.0), product of:
                0.4238747 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04999695 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.02032169 = product of:
          0.04064338 = sum of:
            0.04064338 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.04064338 = score(doc=562,freq=2.0), product of:
                0.1750808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04999695 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  3. Levin, M.; Krawczyk, S.; Bethard, S.; Jurafsky, D.: Citation-based bootstrapping for large-scale author disambiguation (2012) 0.05
    0.045268476 = product of:
      0.13580543 = sum of:
        0.13580543 = weight(_text_:citation in 246) [ClassicSimilarity], result of:
          0.13580543 = score(doc=246,freq=10.0), product of:
            0.23445003 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.04999695 = queryNorm
            0.57925105 = fieldWeight in 246, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=246)
      0.33333334 = coord(1/3)
    
    Abstract
    We present a new, two-stage, self-supervised algorithm for author disambiguation in large bibliographic databases. In the first "bootstrap" stage, a collection of high-precision features is used to bootstrap a training set with positive and negative examples of coreferring authors. A supervised feature-based classifier is then trained on the bootstrap clusters and used to cluster the authors in a larger unlabeled dataset. Our self-supervised approach shares the advantages of unsupervised approaches (no need for expensive hand labels) as well as supervised approaches (a rich set of features that can be discriminatively trained). The algorithm disambiguates 54,000,000 author instances in Thomson Reuters' Web of Knowledge with B³ F1 of .807. We analyze parameters and features, particularly those from citation networks, which have not been deeply investigated in author disambiguation. The most important citation feature is self-citation, which can be approximated without expensive extraction of the full network. For the supervised stage, the minor improvement due to other citation features (increasing F1 from .748 to .767) suggests they may not be worth the trouble of extracting from databases that don't already have them. A lean feature set without expensive abstract and title features performs 130 times faster with about equal F1.
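The B³ (B-cubed) F1 reported above is a standard clustering metric: for each item, precision and recall are computed between its predicted cluster and its gold cluster, then averaged over all items. A generic sketch of the metric (an illustration, not the authors' implementation):

```python
def b_cubed_f1(predicted, gold):
    """B-cubed F1 for two clusterings, each given as an item -> cluster-label dict."""
    def members(assign):
        # Map each item to the set of items sharing its cluster label
        clusters = {}
        for item, label in assign.items():
            clusters.setdefault(label, set()).add(item)
        return {item: clusters[label] for item, label in assign.items()}

    pred, true = members(predicted), members(gold)
    items = list(gold)
    # Per-item precision/recall against the gold cluster, averaged
    precision = sum(len(pred[i] & true[i]) / len(pred[i]) for i in items) / len(items)
    recall = sum(len(pred[i] & true[i]) / len(true[i]) for i in items) / len(items)
    return 2 * precision * recall / (precision + recall)
```

Over-merging two distinct authors into one cluster, for instance, lowers B³ precision while recall stays at 1.0.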
  4. Godby, J.: WordSmith research project bridges gap between tokens and indexes (1998) 0.04
    0.041952223 = product of:
      0.12585667 = sum of:
        0.12585667 = sum of:
          0.07843939 = weight(_text_:reports in 4729) [ClassicSimilarity], result of:
            0.07843939 = score(doc=4729,freq=2.0), product of:
              0.2251839 = queryWeight, product of:
                4.503953 = idf(docFreq=1329, maxDocs=44218)
                0.04999695 = queryNorm
              0.34833482 = fieldWeight in 4729, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.503953 = idf(docFreq=1329, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4729)
          0.047417276 = weight(_text_:22 in 4729) [ClassicSimilarity], result of:
            0.047417276 = score(doc=4729,freq=2.0), product of:
              0.1750808 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04999695 = queryNorm
              0.2708308 = fieldWeight in 4729, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4729)
      0.33333334 = coord(1/3)
    
    Abstract
    Reports on an OCLC natural language processing research project to develop methods for identifying terminology in unstructured electronic text, especially material associated with new cultural trends and emerging subjects. Current OCLC production software can only identify single words as indexable terms in full text documents, thus a major goal of the WordSmith project is to develop software that can automatically identify and intelligently organize phrases for uses in database indexes. By analyzing user terminology from local newspapers in the USA, the latest cultural trends and technical developments as well as personal and geographic names have been drawm out. Notes that this new vocabulary can also be mapped into reference works
    Source
    OCLC newsletter. 1998, no.234, Jul/Aug, S.22-24
  5. Radev, D.R.; Joseph, M.T.; Gibson, B.; Muthukrishnan, P.: A bibliometric and network analysis of the field of computational linguistics (2016) 0.04
    0.040082417 = product of:
      0.12024725 = sum of:
        0.12024725 = weight(_text_:citation in 2764) [ClassicSimilarity], result of:
          0.12024725 = score(doc=2764,freq=4.0), product of:
            0.23445003 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.04999695 = queryNorm
            0.51289076 = fieldWeight in 2764, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2764)
      0.33333334 = coord(1/3)
    
    Abstract
    The ACL Anthology is a large collection of research papers in computational linguistics. Citation data were obtained using text extraction from a collection of PDF files with significant manual postprocessing performed to clean up the results. Manual annotation of the references was then performed to complete the citation network. We analyzed the networks of paper citations, author citations, and author collaborations in an attempt to identify the most central papers and authors. The analysis includes general network statistics, PageRank, metrics across publication years and venues, the impact factor and h-index, as well as other measures.
  6. Garfield, E.: The relationship between mechanical indexing, structural linguistics and information retrieval (1992) 0.03
    0.032391485 = product of:
      0.09717445 = sum of:
        0.09717445 = weight(_text_:citation in 3632) [ClassicSimilarity], result of:
          0.09717445 = score(doc=3632,freq=2.0), product of:
            0.23445003 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.04999695 = queryNorm
            0.4144783 = fieldWeight in 3632, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0625 = fieldNorm(doc=3632)
      0.33333334 = coord(1/3)
    
    Abstract
    It is possible to locate over 60% of indexing terms used in the Current List of Medical Literature by analysing the titles of the articles. Citation indexes contain 'noise' and lack many pertinent citations. Mechanical indexing or analysis of text must begin with some linguistic technique. Discusses Harris' methods of structural linguistics, discourse analysis and transformational analysis. Provides 3 examples with references, abstracts and index entries
  7. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.03
    0.026469473 = product of:
      0.079408415 = sum of:
        0.079408415 = product of:
          0.23822524 = sum of:
            0.23822524 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.23822524 = score(doc=862,freq=2.0), product of:
                0.4238747 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04999695 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
    https://arxiv.org/abs/2212.06721
  8. Ibekwe-SanJuan, F.; SanJuan, E.: From term variants to research topics (2002) 0.02
    0.020244677 = product of:
      0.06073403 = sum of:
        0.06073403 = weight(_text_:citation in 1853) [ClassicSimilarity], result of:
          0.06073403 = score(doc=1853,freq=2.0), product of:
            0.23445003 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.04999695 = queryNorm
            0.25904894 = fieldWeight in 1853, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1853)
      0.33333334 = coord(1/3)
    
    Abstract
    In a scientific and technological watch (STW) task, an expert user needs to survey the evolution of research topics in his area of specialisation in order to detect interesting changes. The majority of methods proposing evaluation metrics (bibliometrics and scientometrics studies) for STW rely solely on statistical data analysis methods (co-citation analysis, co-word analysis). Such methods usually work on structured databases where the units of analysis (words, keywords) are already attributed to documents by human indexers. The advent of huge amounts of unstructured textual data has rendered necessary the integration of natural language processing (NLP) techniques to first extract meaningful units from texts. We propose a method for STW which is NLP-oriented. The method not only analyses texts linguistically in order to extract terms from them, but also uses linguistic relations (syntactic variations) as the basis for clustering. Terms and variation relations are formalised as weighted di-graphs which the clustering algorithm, CPCL (Classification by Preferential Clustered Link), will seek to reduce in order to produce classes. These classes ideally represent the research topics present in the corpus. The results of the classification are subjected to validation by an expert in STW.
  9. Chen, L.; Fang, H.: An automatic method for extracting innovative ideas based on the Scopus® database (2019) 0.02
    0.020244677 = product of:
      0.06073403 = sum of:
        0.06073403 = weight(_text_:citation in 5310) [ClassicSimilarity], result of:
          0.06073403 = score(doc=5310,freq=2.0), product of:
            0.23445003 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.04999695 = queryNorm
            0.25904894 = fieldWeight in 5310, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5310)
      0.33333334 = coord(1/3)
    
    Abstract
    The novelty of knowledge claims in a research paper can be considered an evaluation criterion for papers to supplement citations. To provide a foundation for research evaluation from the perspective of innovativeness, we propose an automatic approach for extracting innovative ideas from the abstracts of technology and engineering papers. The approach extracts N-grams as candidates based on part-of-speech tagging and determines whether they are novel by checking the Scopus® database to determine whether they had ever been presented previously. Moreover, we discussed the distributions of innovative ideas in different abstract structures. To improve the performance by excluding noisy N-grams, a list of stopwords and a list of research description characteristics were developed. We selected abstracts of articles published from 2011 to 2017 with the topic of semantic analysis as the experimental texts. Excluding noisy N-grams, considering the distribution of innovative ideas in abstracts, and suitably combining N-grams can effectively improve the performance of automatic innovative idea extraction. Unlike co-word and co-citation analysis, innovative-idea extraction aims to identify the differences in a paper from all previously published papers.
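The candidate-extraction step described in the abstract can be sketched generically. In this toy illustration the part-of-speech filter is approximated by a boundary-stopword rule and the Scopus® novelty lookup by membership in a local set of known phrases; both are assumptions for illustration, not the authors' pipeline:

```python
def candidate_ngrams(tokens, n_range=(2, 3),
                     stopwords=frozenset({"the", "a", "of", "and", "in"})):
    """Extract n-gram candidates, dropping any that start or end with a stopword."""
    out = []
    for n in range(n_range[0], n_range[1] + 1):
        for i in range(len(tokens) - n + 1):
            gram = tokens[i:i + n]
            if gram[0].lower() in stopwords or gram[-1].lower() in stopwords:
                continue  # noisy candidate: stopword at a phrase boundary
            out.append(" ".join(gram))
    return out

def novel_candidates(tokens, known_phrases):
    """Keep only candidates not already present in the reference phrase set."""
    return [g for g in candidate_ngrams(tokens) if g.lower() not in known_phrases]
```

In the paper's setting the reference set is the full Scopus® record base rather than a local set, and candidates are additionally filtered by abstract structure.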
  10. Soni, S.; Lerman, K.; Eisenstein, J.: Follow the leader : documents on the leading edge of semantic change get more citations (2021) 0.02
    0.020244677 = product of:
      0.06073403 = sum of:
        0.06073403 = weight(_text_:citation in 169) [ClassicSimilarity], result of:
          0.06073403 = score(doc=169,freq=2.0), product of:
            0.23445003 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.04999695 = queryNorm
            0.25904894 = fieldWeight in 169, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=169)
      0.33333334 = coord(1/3)
    
    Abstract
    Diachronic word embeddings, vector representations of words over time, offer remarkable insights into the evolution of language and provide a tool for quantifying sociocultural change from text documents. Prior work has used such embeddings to identify shifts in the meaning of individual words. However, simply knowing that a word has changed in meaning is insufficient to identify the instances of word usage that convey the historical meaning or the newer meaning. In this study, we link diachronic word embeddings to documents, by situating those documents as leaders or laggards with respect to ongoing semantic changes. Specifically, we propose a novel method to quantify the degree of semantic progressiveness in each word usage, and then show how these usages can be aggregated to obtain scores for each document. We analyze two large collections of documents, representing legal opinions and scientific articles. Documents that are scored as semantically progressive receive a larger number of citations, indicating that they are especially influential. Our work thus provides a new technique for identifying lexical semantic leaders and demonstrates a new link between progressive use of language and influence in a citation network.
  11. Warner, A.J.: Natural language processing (1987) 0.02
    0.018063724 = product of:
      0.054191172 = sum of:
        0.054191172 = product of:
          0.108382344 = sum of:
            0.108382344 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.108382344 = score(doc=337,freq=2.0), product of:
                0.1750808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04999695 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  12. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatic generated word hierarchies (1996) 0.02
    0.015805759 = product of:
      0.047417276 = sum of:
        0.047417276 = product of:
          0.09483455 = sum of:
            0.09483455 = weight(_text_:22 in 3164) [ClassicSimilarity], result of:
              0.09483455 = score(doc=3164,freq=2.0), product of:
                0.1750808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04999695 = queryNorm
                0.5416616 = fieldWeight in 3164, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3164)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  13. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.02
    0.015805759 = product of:
      0.047417276 = sum of:
        0.047417276 = product of:
          0.09483455 = sum of:
            0.09483455 = weight(_text_:22 in 4506) [ClassicSimilarity], result of:
              0.09483455 = score(doc=4506,freq=2.0), product of:
                0.1750808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04999695 = queryNorm
                0.5416616 = fieldWeight in 4506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4506)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    8.10.2000 11:52:22
  14. Somers, H.: Example-based machine translation : Review article (1999) 0.02
    0.015805759 = product of:
      0.047417276 = sum of:
        0.047417276 = product of:
          0.09483455 = sum of:
            0.09483455 = weight(_text_:22 in 6672) [ClassicSimilarity], result of:
              0.09483455 = score(doc=6672,freq=2.0), product of:
                0.1750808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04999695 = queryNorm
                0.5416616 = fieldWeight in 6672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6672)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    31. 7.1996 9:22:19
  15. New tools for human translators (1997) 0.02
    0.015805759 = product of:
      0.047417276 = sum of:
        0.047417276 = product of:
          0.09483455 = sum of:
            0.09483455 = weight(_text_:22 in 1179) [ClassicSimilarity], result of:
              0.09483455 = score(doc=1179,freq=2.0), product of:
                0.1750808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04999695 = queryNorm
                0.5416616 = fieldWeight in 1179, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1179)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    31. 7.1996 9:22:19
  16. Baayen, R.H.; Lieber, H.: Word frequency distributions and lexical semantics (1997) 0.02
    0.015805759 = product of:
      0.047417276 = sum of:
        0.047417276 = product of:
          0.09483455 = sum of:
            0.09483455 = weight(_text_:22 in 3117) [ClassicSimilarity], result of:
              0.09483455 = score(doc=3117,freq=2.0), product of:
                0.1750808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04999695 = queryNorm
                0.5416616 = fieldWeight in 3117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3117)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    28. 2.1999 10:48:22
  17. Der Student aus dem Computer (2023) 0.02
    0.015805759 = product of:
      0.047417276 = sum of:
        0.047417276 = product of:
          0.09483455 = sum of:
            0.09483455 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
              0.09483455 = score(doc=1079,freq=2.0), product of:
                0.1750808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04999695 = queryNorm
                0.5416616 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1079)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    27. 1.2023 16:22:55
  18. Sumbatyan, M.A.; Khazagerov, G.G.: Tipy russkikh omoform i ikh avtomaticheskoe razvedenie [Types of Russian homoforms and their automatic differentiation] (1997) 0.01
    0.014940838 = product of:
      0.044822514 = sum of:
        0.044822514 = product of:
          0.08964503 = sum of:
            0.08964503 = weight(_text_:reports in 2259) [ClassicSimilarity], result of:
              0.08964503 = score(doc=2259,freq=2.0), product of:
                0.2251839 = queryWeight, product of:
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.04999695 = queryNorm
                0.39809695 = fieldWeight in 2259, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2259)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Reports on the development of an algorithm which has been used to compile a comprehensive dictionary of Russian homonyms, i.e. words with several meanings. The word 'lay' can serve as an example of an English homonym: it is either a verb in its own right (to lay) or the preterite of the verb 'to lie'. The compiled dictionary has been used to identify the existing individual types of homonyms.
  19. Pritchard-Schoch, T.: Comparing natural language retrieval : Win & Freestyle (1995) 0.01
    0.014940838 = product of:
      0.044822514 = sum of:
        0.044822514 = product of:
          0.08964503 = sum of:
            0.08964503 = weight(_text_:reports in 2546) [ClassicSimilarity], result of:
              0.08964503 = score(doc=2546,freq=2.0), product of:
                0.2251839 = queryWeight, product of:
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.04999695 = queryNorm
                0.39809695 = fieldWeight in 2546, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2546)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Reports on a comparison of 2 natural language interfaces to full-text legal databases: WIN for access to WESTLAW databases and FREESTYLE for access to the LEXIS database. 30 legal issues in natural language queries were presented to identical libraries in both systems. The top 20 ranked documents from each search were analyzed and reviewed for relevance to the legal issue.
  20. Conceptual structures : theory, tools and applications. 6th International Conference on Conceptual Structures, ICCS'98, Montpellier, France, August, 10-12, 1998, Proceedings (1998) 0.01
    0.014940838 = product of:
      0.044822514 = sum of:
        0.044822514 = product of:
          0.08964503 = sum of:
            0.08964503 = weight(_text_:reports in 1378) [ClassicSimilarity], result of:
              0.08964503 = score(doc=1378,freq=2.0), product of:
                0.2251839 = queryWeight, product of:
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.04999695 = queryNorm
                0.39809695 = fieldWeight in 1378, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1378)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This book constitutes the refereed proceedings of the 6th International Conference on Conceptual Structures, ICCS'98, held in Montpellier, France, in August 1998. The 20 revised full papers and 10 research reports presented were carefully selected from a total of 66 submissions; also included are three invited contributions. The volume is divided into topical sections on knowledge representation and knowledge engineering, tools, conceptual graphs and other models, relationships with logics, algorithms and complexity, natural language processing, and applications.

Languages

  • e 55
  • d 17
  • m 1
  • ru 1

Types

  • a 56
  • el 6
  • m 6
  • s 6
  • p 2
  • x 2
  • d 1
  • r 1