Search (139 results, page 1 of 7)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.06
    0.064629115 = sum of:
      0.052651923 = product of:
        0.2106077 = sum of:
          0.2106077 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.2106077 = score(doc=562,freq=2.0), product of:
              0.37473476 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.044200785 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.25 = coord(1/4)
      0.011977192 = product of:
        0.035931576 = sum of:
          0.035931576 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.035931576 = score(doc=562,freq=2.0), product of:
              0.15478362 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.044200785 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8.1.2013 10:22:32
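    The indented number trees under each hit are Lucene ClassicSimilarity "explain" output: every leaf score is queryWeight × fieldWeight, scaled by a coord factor. As a minimal sketch (assuming Lucene's classic tf-idf formulas; the function name `classic_similarity` is invented here), the top leaf of result 1 can be reproduced:

    ```python
    import math

    def classic_similarity(raw_tf, doc_freq, max_docs, query_norm, field_norm, coord):
        """Reproduce one leaf of a Lucene ClassicSimilarity explain tree:
        score = queryWeight * fieldWeight * coord, where
          idf         = 1 + ln(maxDocs / (docFreq + 1))
          queryWeight = idf * queryNorm
          fieldWeight = sqrt(tf) * idf * fieldNorm
        """
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))
        query_weight = idf * query_norm
        field_weight = math.sqrt(raw_tf) * idf * field_norm
        return query_weight * field_weight * coord

    # Values taken from the "_text_:3a" leaf of result 1 (doc 562):
    score = classic_similarity(raw_tf=2.0, doc_freq=24, max_docs=44218,
                               query_norm=0.044200785, field_norm=0.046875,
                               coord=0.25)
    print(f"{score:.9f}")  # ≈ 0.052651923, matching the explain output above
    ```

    The same formula with the other leaves' docFreq, fieldNorm, and coord values reproduces each partial score in the trees below.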
  2. Doval, Y.; Gómez-Rodríguez, C.: Comparing neural- and N-gram-based language models for word segmentation (2019) 0.04
    0.04024851 = product of:
      0.08049702 = sum of:
        0.08049702 = product of:
          0.120745525 = sum of:
            0.056549463 = weight(_text_:y in 4675) [ClassicSimilarity], result of:
              0.056549463 = score(doc=4675,freq=2.0), product of:
                0.21271187 = queryWeight, product of:
                  4.8124003 = idf(docFreq=976, maxDocs=44218)
                  0.044200785 = queryNorm
                0.26585007 = fieldWeight in 4675, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8124003 = idf(docFreq=976, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4675)
            0.06419606 = weight(_text_:n in 4675) [ClassicSimilarity], result of:
              0.06419606 = score(doc=4675,freq=4.0), product of:
                0.19057861 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.044200785 = queryNorm
                0.33684817 = fieldWeight in 4675, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4675)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    Word segmentation is the task of inserting or deleting word boundary characters in order to separate character sequences that correspond to words in some language. In this article we propose an approach based on a beam search algorithm and a language model working at the byte/character level, the latter component implemented either as an n-gram model or a recurrent neural network. The resulting system analyzes the text input with no word boundaries one token at a time, which can be a character or a byte, and uses the information gathered by the language model to determine if a boundary must be placed in the current position or not. Our aim is to use this system in a preprocessing step for a microtext normalization system. This means that it needs to effectively cope with the data sparsity present on this kind of texts. We also strove to surpass the performance of two readily available word segmentation systems: The well-known and accessible Word Breaker by Microsoft, and the Python module WordSegment by Grant Jenks. The results show that we have met our objectives, and we hope to continue to improve both the precision and the efficiency of our system in the future.
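    The boundary-insertion idea in this abstract can be illustrated with a toy beam search. This is a sketch only: the paper scores hypotheses with a character/byte-level n-gram or neural language model, whereas the word-unigram vocabulary and probabilities below are invented for illustration.

    ```python
    import math

    # Toy word-unigram "language model"; probabilities are invented.
    WORD_LOGP = {
        "this": math.log(0.03), "is": math.log(0.05),
        "a": math.log(0.06), "test": math.log(0.02),
    }
    OOV_LOGP = math.log(1e-8)  # heavy penalty for unknown chunks

    def segment(text, beam_width=8, max_word_len=10):
        """Insert word boundaries by beam search: keep the beam_width
        best-scoring segmentations of every prefix of `text`."""
        beams = {0: [(0.0, [])]}  # prefix length -> (log-score, words)
        for end in range(1, len(text) + 1):
            cands = []
            for start in range(max(0, end - max_word_len), end):
                word = text[start:end]
                step = WORD_LOGP.get(word, OOV_LOGP)
                for score, words in beams[start]:
                    cands.append((score + step, words + [word]))
            cands.sort(key=lambda c: c[0], reverse=True)
            beams[end] = cands[:beam_width]
        return beams[len(text)][0][1]

    print(segment("thisisatest"))  # → ['this', 'is', 'a', 'test']
    ```

    Replacing the unigram lookup with a left-to-right character-level model, as the authors do, lets the same search decide boundary placement one token at a time.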
  3. SIGIR'92 : Proceedings of the 15th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (1992) 0.03
    0.034378495 = product of:
      0.06875699 = sum of:
        0.06875699 = product of:
          0.10313548 = sum of:
            0.039584626 = weight(_text_:y in 6671) [ClassicSimilarity], result of:
              0.039584626 = score(doc=6671,freq=2.0), product of:
                0.21271187 = queryWeight, product of:
                  4.8124003 = idf(docFreq=976, maxDocs=44218)
                  0.044200785 = queryNorm
                0.18609504 = fieldWeight in 6671, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8124003 = idf(docFreq=976, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=6671)
            0.06355085 = weight(_text_:n in 6671) [ClassicSimilarity], result of:
              0.06355085 = score(doc=6671,freq=8.0), product of:
                0.19057861 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.044200785 = queryNorm
                0.33346266 = fieldWeight in 6671, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=6671)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Content
    HARMAN, D.: Relevance feedback revisited; AALBERSBERG, I.J.: Incremental relevance feedback; TAGUE-SUTCLIFFE, J.: Measuring the informativeness of a retrieval process; LEWIS, D.D.: An evaluation of phrasal and clustered representations on a text categorization task; BLOSSEVILLE, M.J., G. HÉBRAIL, M.G. MONTEIL and N. PÉNOT: Automatic document classification: natural language processing, statistical analysis, and expert system techniques used together; MASAND, B., G. LINOFF and D. WALTZ: Classifying news stories using memory based reasoning; KEEN, E.M.: Term position ranking: some new test results; CROUCH, C.J. and B. YANG: Experiments in automatic statistical thesaurus construction; GREFENSTETTE, G.: Use of syntactic context to produce term association lists for text retrieval; ANICK, P.G. and R.A. FLYNN: Versioning a full-text information retrieval system; BURKOWSKI, F.J.: Retrieval activities in a database consisting of heterogeneous collections; DEERWESTER, S.C., K. WACLENA and M. LaMAR: A textual object management system; NIE, J.-Y.: Towards a probabilistic modal logic for semantic-based information retrieval; WANG, A.W., S.K.M. WONG and Y.Y. YAO: An analysis of vector space models based on computational geometry; BARTELL, B.T., G.W. COTTRELL and R.K. BELEW: Latent semantic indexing is an optimal special case of multidimensional scaling; GLAVITSCH, U. and P. SCHÄUBLE: A system for retrieving speech documents; MARGULIS, E.L.: N-Poisson document modelling; HESS, M.: An incrementally extensible document retrieval system based on linguistics and logical principles; COOPER, W.S., F.C. GEY and D.P. DABNEY: Probabilistic retrieval based on staged logistic regression; FUHR, N.: Integration of probabilistic fact and text retrieval; CROFT, B., L.A. SMITH and H. TURTLE: A loosely-coupled integration of a text retrieval system and an object-oriented database system; DUMAIS, S.T. and J. NIELSEN: Automating the assignment of submitted manuscripts to reviewers; GOST, M.A. and M. MASOTTI: Design of an OPAC database to permit different subject searching accesses; ROBERTSON, A.M. and P. WILLETT: Searching for historical word forms in a database of 17th century English text using spelling correction methods; FOX, E.A., Q.F. CHEN and L.S. HEATH: A faster algorithm for constructing minimal perfect hash functions; MOFFAT, A. and J. ZOBEL: Parameterised compression for sparse bitmaps; GRANDI, F., P. TIBERIO and P. ZEZULA: Frame-sliced partitioned parallel signature files; ALLEN, B.: Cognitive differences in end user searching of a CD-ROM index; SONNENWALD, D.H.: Developing a theory to guide the process of designing information retrieval systems; CUTTING, D.R., J.O. PEDERSEN, D. KARGER and J.W. TUKEY: Scatter/Gather: a cluster-based approach to browsing large document collections; CHALMERS, M. and P. CHITSON: Bead: Explorations in information visualization; WILLIAMSON, C. and B. SHNEIDERMAN: The dynamic HomeFinder: evaluating dynamic queries in a real-estate information exploring system
    Editor
    Belkin, N.; Ingwersen, P.; Pejtersen, A.M.
  4. Natural language processing (1996) 0.03
    0.030159716 = product of:
      0.06031943 = sum of:
        0.06031943 = product of:
          0.18095829 = sum of:
            0.18095829 = weight(_text_:y in 6824) [ClassicSimilarity], result of:
              0.18095829 = score(doc=6824,freq=2.0), product of:
                0.21271187 = queryWeight, product of:
                  4.8124003 = idf(docFreq=976, maxDocs=44218)
                  0.044200785 = queryNorm
                0.8507202 = fieldWeight in 6824, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8124003 = idf(docFreq=976, maxDocs=44218)
                  0.125 = fieldNorm(doc=6824)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Editor
    Wilks, Y.
  5. Luo, L.; Ju, J.; Li, Y.-F.; Haffari, G.; Xiong, B.; Pan, S.: ChatRule: mining logical rules with large language models for knowledge graph reasoning (2023) 0.03
    0.028830817 = product of:
      0.057661634 = sum of:
        0.057661634 = product of:
          0.08649245 = sum of:
            0.056549463 = weight(_text_:y in 1171) [ClassicSimilarity], result of:
              0.056549463 = score(doc=1171,freq=2.0), product of:
                0.21271187 = queryWeight, product of:
                  4.8124003 = idf(docFreq=976, maxDocs=44218)
                  0.044200785 = queryNorm
                0.26585007 = fieldWeight in 1171, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8124003 = idf(docFreq=976, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1171)
            0.029942982 = weight(_text_:22 in 1171) [ClassicSimilarity], result of:
              0.029942982 = score(doc=1171,freq=2.0), product of:
                0.15478362 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044200785 = queryNorm
                0.19345059 = fieldWeight in 1171, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1171)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    23.11.2023 19:07:22
  6. Agüera y Arcas, B.: Artificial neural networks are making strides towards consciousness (2022) 0.03
    0.026657674 = product of:
      0.05331535 = sum of:
        0.05331535 = product of:
          0.15994604 = sum of:
            0.15994604 = weight(_text_:y in 861) [ClassicSimilarity], result of:
              0.15994604 = score(doc=861,freq=4.0), product of:
                0.21271187 = queryWeight, product of:
                  4.8124003 = idf(docFreq=976, maxDocs=44218)
                  0.044200785 = queryNorm
                0.75193757 = fieldWeight in 861, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.8124003 = idf(docFreq=976, maxDocs=44218)
                  0.078125 = fieldNorm(doc=861)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    ¬The Economist. 2022, [https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas?giftId=89e08696-9884-4670-b164-df58fffdf067]
  7. Guthrie, L.; Pustejovsky, J.; Wilks, Y.; Slator, B.M.: ¬The role of lexicons in natural language processing (1996) 0.03
    0.026389752 = product of:
      0.052779503 = sum of:
        0.052779503 = product of:
          0.1583385 = sum of:
            0.1583385 = weight(_text_:y in 6825) [ClassicSimilarity], result of:
              0.1583385 = score(doc=6825,freq=2.0), product of:
                0.21271187 = queryWeight, product of:
                  4.8124003 = idf(docFreq=976, maxDocs=44218)
                  0.044200785 = queryNorm
                0.7443802 = fieldWeight in 6825, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8124003 = idf(docFreq=976, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6825)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  8. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.03
    0.026325962 = product of:
      0.052651923 = sum of:
        0.052651923 = product of:
          0.2106077 = sum of:
            0.2106077 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.2106077 = score(doc=862,freq=2.0), product of:
                0.37473476 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.044200785 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Source
    https://arxiv.org/abs/2212.06721
  9. Semantik, Lexikographie und Computeranwendungen : Workshop ... (Bonn) : 1995.01.27-28 (1996) 0.03
    0.025112148 = product of:
      0.050224297 = sum of:
        0.050224297 = product of:
          0.07533644 = sum of:
            0.045393463 = weight(_text_:n in 190) [ClassicSimilarity], result of:
              0.045393463 = score(doc=190,freq=2.0), product of:
                0.19057861 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.044200785 = queryNorm
                0.23818761 = fieldWeight in 190, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=190)
            0.029942982 = weight(_text_:22 in 190) [ClassicSimilarity], result of:
              0.029942982 = score(doc=190,freq=2.0), product of:
                0.15478362 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044200785 = queryNorm
                0.19345059 = fieldWeight in 190, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=190)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    14.4.2007 10:04:22
    Editor
    Weber, N.
  10. ISO/DIS 1087-2:1994-09: Terminology work, vocabulary : pt.2: computational aids (1994) 0.02
    0.024209848 = product of:
      0.048419695 = sum of:
        0.048419695 = product of:
          0.14525908 = sum of:
            0.14525908 = weight(_text_:n in 2912) [ClassicSimilarity], result of:
              0.14525908 = score(doc=2912,freq=2.0), product of:
                0.19057861 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.044200785 = queryNorm
                0.76220036 = fieldWeight in 2912, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.125 = fieldNorm(doc=2912)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Type
    n
  11. ISO/TR 12618:1994: Computational aids in terminology : creation and use of terminological databases and text corpora (1994) 0.02
    0.024209848 = product of:
      0.048419695 = sum of:
        0.048419695 = product of:
          0.14525908 = sum of:
            0.14525908 = weight(_text_:n in 2913) [ClassicSimilarity], result of:
              0.14525908 = score(doc=2913,freq=2.0), product of:
                0.19057861 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.044200785 = queryNorm
                0.76220036 = fieldWeight in 2913, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.125 = fieldNorm(doc=2913)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Type
    n
  12. Sager, N.: Natural language information processing (1981) 0.02
    0.024209848 = product of:
      0.048419695 = sum of:
        0.048419695 = product of:
          0.14525908 = sum of:
            0.14525908 = weight(_text_:n in 5313) [ClassicSimilarity], result of:
              0.14525908 = score(doc=5313,freq=2.0), product of:
                0.19057861 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.044200785 = queryNorm
                0.76220036 = fieldWeight in 5313, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.125 = fieldNorm(doc=5313)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  13. Ding, Y.; Chowdhury, G.C.; Foo, S.: Incorporating the results of co-word analyses to increase search variety for information retrieval (2000) 0.02
    0.022619788 = product of:
      0.045239575 = sum of:
        0.045239575 = product of:
          0.13571872 = sum of:
            0.13571872 = weight(_text_:y in 6328) [ClassicSimilarity], result of:
              0.13571872 = score(doc=6328,freq=2.0), product of:
                0.21271187 = queryWeight, product of:
                  4.8124003 = idf(docFreq=976, maxDocs=44218)
                  0.044200785 = queryNorm
                0.6380402 = fieldWeight in 6328, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8124003 = idf(docFreq=976, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6328)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  14. McKevitt, P.; Partridge, D.; Wilks, Y.: Why machines should analyse intention in natural language dialogue (1999) 0.02
    0.022619788 = product of:
      0.045239575 = sum of:
        0.045239575 = product of:
          0.13571872 = sum of:
            0.13571872 = weight(_text_:y in 366) [ClassicSimilarity], result of:
              0.13571872 = score(doc=366,freq=2.0), product of:
                0.21271187 = queryWeight, product of:
                  4.8124003 = idf(docFreq=976, maxDocs=44218)
                  0.044200785 = queryNorm
                0.6380402 = fieldWeight in 366, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8124003 = idf(docFreq=976, maxDocs=44218)
                  0.09375 = fieldNorm(doc=366)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  15. Gonzalo, J.; Verdejo, F.; Peters, C.; Calzolari, N.: Applying EuroWordNet to cross-language text retrieval (1998) 0.02
    0.021183617 = product of:
      0.042367235 = sum of:
        0.042367235 = product of:
          0.1271017 = sum of:
            0.1271017 = weight(_text_:n in 6445) [ClassicSimilarity], result of:
              0.1271017 = score(doc=6445,freq=2.0), product of:
                0.19057861 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.044200785 = queryNorm
                0.6669253 = fieldWeight in 6445, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6445)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  16. Ahmed, F.; Nürnberger, A.: Evaluation of n-gram conflation approaches for Arabic text retrieval (2009) 0.02
    0.020300575 = product of:
      0.04060115 = sum of:
        0.04060115 = product of:
          0.12180345 = sum of:
            0.12180345 = weight(_text_:n in 2941) [ClassicSimilarity], result of:
              0.12180345 = score(doc=2941,freq=10.0), product of:
                0.19057861 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.044200785 = queryNorm
                0.63912445 = fieldWeight in 2941, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2941)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    In this paper we present a language-independent approach for conflation that does not depend on predefined rules or prior knowledge of the target language. The proposed unsupervised method is based on an enhancement of the pure n-gram model that can group related words based on various string-similarity measures, while restricting the search to specific locations of the target word by taking into account the order of n-grams. We show that the method is effective to achieve high score similarities for all word-form variations and reduces the ambiguity, i.e., obtains a higher precision and recall, compared to pure n-gram-based approaches for English, Portuguese, and Arabic. The proposed method is especially suited for conflation approaches in Arabic, since Arabic is a highly inflectional language. Therefore, we present in addition an adaptive user interface for Arabic text retrieval called araSearch. araSearch serves as a metasearch interface to existing search engines. The system is able to extend a query using the proposed conflation approach such that additional results for relevant subwords can be found automatically.
    Object
    n-grams
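    The pure n-gram conflation that this paper enhances can be sketched with a character-bigram Dice similarity. This is an illustrative assumption, not the authors' method: their model additionally restricts matching to specific n-gram positions in the target word.

    ```python
    def ngrams(word, n=2):
        """Character n-grams of `word`, padded with '#' so that word
        boundaries contribute n-grams too."""
        padded = f"#{word}#"
        return {padded[i:i + n] for i in range(len(padded) - n + 1)}

    def dice(a, b, n=2):
        """Dice coefficient over character n-gram sets: 2|A∩B| / (|A| + |B|)."""
        ga, gb = ngrams(a, n), ngrams(b, n)
        return 2 * len(ga & gb) / (len(ga) + len(gb))

    # Inflectional variants overlap heavily; unrelated words barely at all:
    print(round(dice("retrieval", "retrieve"), 3))  # → 0.737
    print(round(dice("retrieval", "language"), 3))  # → 0.0
    ```

    Grouping all word forms whose pairwise similarity exceeds a threshold yields the conflation classes that the paper evaluates for English, Portuguese, and Arabic.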
  17. Hofstadter, D.: Artificial neural networks today are not conscious (2022) 0.02
    0.018849822 = product of:
      0.037699644 = sum of:
        0.037699644 = product of:
          0.11309893 = sum of:
            0.11309893 = weight(_text_:y in 860) [ClassicSimilarity], result of:
              0.11309893 = score(doc=860,freq=2.0), product of:
                0.21271187 = queryWeight, product of:
                  4.8124003 = idf(docFreq=976, maxDocs=44218)
                  0.044200785 = queryNorm
                0.53170013 = fieldWeight in 860, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8124003 = idf(docFreq=976, maxDocs=44218)
                  0.078125 = fieldNorm(doc=860)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Cf. also: Agüera y Arcas, B.: Artificial neural networks are making strides towards consciousness.
  18. Vichot, F.; Wolinksi, F.; Tomeh, J.; Guennou, S.; Dillet, B.; Aydjian, S.: High precision hypertext navigation based on NLP automation extractions (1997) 0.02
    0.018157385 = product of:
      0.03631477 = sum of:
        0.03631477 = product of:
          0.10894431 = sum of:
            0.10894431 = weight(_text_:n in 733) [ClassicSimilarity], result of:
              0.10894431 = score(doc=733,freq=2.0), product of:
                0.19057861 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.044200785 = queryNorm
                0.57165027 = fieldWeight in 733, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.09375 = fieldNorm(doc=733)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    Hypertext - Information Retrieval - Multimedia '97: Theorien, Modelle und Implementierungen integrierter elektronischer Informationssysteme. Proceedings HIM '97. Ed.: N. Fuhr et al.
  19. Alonge, A.; Calzolari, N.; Vossen, P.; Bloksma, L.; Castellon, I.; Marti, M.A.; Peters, W.: ¬The linguistic design of the EuroWordNet database (1998) 0.02
    0.018157385 = product of:
      0.03631477 = sum of:
        0.03631477 = product of:
          0.10894431 = sum of:
            0.10894431 = weight(_text_:n in 6440) [ClassicSimilarity], result of:
              0.10894431 = score(doc=6440,freq=2.0), product of:
                0.19057861 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.044200785 = queryNorm
                0.57165027 = fieldWeight in 6440, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6440)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  20. Figuerola, C.G.; Gomez, R.; Lopez de San Roman, E.: Stemming and n-grams in Spanish : an evaluation of their impact in information retrieval (2000) 0.02
    0.018157385 = product of:
      0.03631477 = sum of:
        0.03631477 = product of:
          0.10894431 = sum of:
            0.10894431 = weight(_text_:n in 6501) [ClassicSimilarity], result of:
              0.10894431 = score(doc=6501,freq=2.0), product of:
                0.19057861 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.044200785 = queryNorm
                0.57165027 = fieldWeight in 6501, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6501)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    

Languages

  • e 111
  • d 22
  • f 3
  • chi 1
  • m 1

Types

  • a 111
  • el 19
  • m 9
  • s 9
  • x 4
  • n 2
  • p 2
  • d 1
