Search (23 results, page 1 of 2)

  • × language_ss:"e"
  • × theme_ss:"Computerlinguistik"
  • × year_i:[2000 TO 2010}
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.07
    0.07284273 = sum of:
      0.054310877 = product of:
        0.21724351 = sum of:
          0.21724351 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.21724351 = score(doc=562,freq=2.0), product of:
              0.38654187 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.045593463 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.25 = coord(1/4)
      0.018531853 = product of:
        0.037063707 = sum of:
          0.037063707 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.037063707 = score(doc=562,freq=2.0), product of:
              0.15966053 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045593463 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
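    The relevance values in this listing are Lucene ClassicSimilarity (TF-IDF) explain output: each matching clause multiplies a query-side weight (idf x queryNorm) by a document-side weight (tf x idf x fieldNorm), and the coord factor scales for the fraction of query clauses that matched. A minimal sketch that recombines the numbers reported for doc 562 above; the helper names are illustrative, not part of Lucene's API:

      import math

      def classic_idf(doc_freq, max_docs):
          # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def clause_score(freq, doc_freq, max_docs, query_norm, field_norm):
          tf = math.sqrt(freq)                    # tf(freq) = sqrt(termFreq)
          idf = classic_idf(doc_freq, max_docs)
          query_weight = idf * query_norm         # "queryWeight" in the explain tree
          field_weight = tf * idf * field_norm    # "fieldWeight" in the explain tree
          return query_weight * field_weight

      query_norm = 0.045593463
      s_3a = clause_score(2.0, 24, 44218, query_norm, 0.046875) * 0.25   # term "3a", coord(1/4)
      s_22 = clause_score(2.0, 3622, 44218, query_norm, 0.046875) * 0.5  # term "22", coord(1/2)
      print(round(s_3a + s_22, 8))  # ~0.0728, the 0.07 shown for result 1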
  2. Hammwöhner, R.: TransRouter revisited : Decision support in the routing of translation projects (2000) 0.05
    0.048942618 = product of:
      0.097885236 = sum of:
        0.097885236 = sum of:
          0.054644246 = weight(_text_:r in 5483) [ClassicSimilarity], result of:
            0.054644246 = score(doc=5483,freq=4.0), product of:
              0.15092614 = queryWeight, product of:
                3.3102584 = idf(docFreq=4387, maxDocs=44218)
                0.045593463 = queryNorm
              0.3620595 = fieldWeight in 5483, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.3102584 = idf(docFreq=4387, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5483)
          0.04324099 = weight(_text_:22 in 5483) [ClassicSimilarity], result of:
            0.04324099 = score(doc=5483,freq=2.0), product of:
              0.15966053 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045593463 = queryNorm
              0.2708308 = fieldWeight in 5483, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5483)
      0.5 = coord(1/2)
    
    Date
    10.12.2000 18:22:35
    Source
    Informationskompetenz - Basiskompetenz in der Informationsgesellschaft: Proceedings des 7. Internationalen Symposiums für Informationswissenschaft (ISI 2000), Hrsg.: G. Knorz u. R. Kuhlen
  3. Bakar, Z.A.; Sembok, T.M.T.; Yusoff, M.: ¬An evaluation of retrieval effectiveness using spelling-correction and string-similarity matching methods on Malay texts (2000) 0.02
    0.020281417 = product of:
      0.040562835 = sum of:
        0.040562835 = product of:
          0.08112567 = sum of:
            0.08112567 = weight(_text_:r in 4804) [ClassicSimilarity], result of:
              0.08112567 = score(doc=4804,freq=12.0), product of:
                0.15092614 = queryWeight, product of:
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.045593463 = queryNorm
                0.537519 = fieldWeight in 4804, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4804)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article evaluates the effectiveness of spelling-correction and string-similarity matching methods in retrieving similar words in a Malay dictionary associated with a set of query words. The spelling-correction techniques used are SPEEDCOP, Soundex, Davidson, Phonic, and Hartlib. Two dynamic-programming methods that measure longest common subsequence and edit-cost distance are used. Several search combinations of query and dictionary words are performed in the experiments, the best being one that stems both query and dictionary words using an existing Malay stemming algorithm. The retrieval effectiveness (E) and retrieved-and-relevant (R&R) mean measures are calculated from a weighted combination of recall and precision values. Results from these experiments are then compared with the available digram string-similarity method. The best R&R and E results are given by using the digram method. Edit-cost distances produce the best E results, and both dynamic-programming methods rank second in finding R&R mean measures.
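    As a rough illustration of the string-similarity measures named in this abstract, the sketch below computes a digram (character-bigram) similarity as a Dice coefficient and a dynamic-programming edit distance. The exact formulations used in the article are not reproduced here, so both functions and the sample Malay word pair are assumptions:

      def digrams(word):
          return {word[i:i + 2] for i in range(len(word) - 1)}

      def digram_similarity(a, b):
          # Dice coefficient over character digrams (one common formulation)
          da, db = digrams(a), digrams(b)
          return 2 * len(da & db) / (len(da) + len(db)) if da and db else 0.0

      def edit_distance(a, b):
          # standard dynamic-programming (Levenshtein) edit distance
          prev = list(range(len(b) + 1))
          for i, ca in enumerate(a, 1):
              cur = [i]
              for j, cb in enumerate(b, 1):
                  cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
              prev = cur
          return prev[-1]

      print(digram_similarity("kucing", "kuching"))  # ~0.73
      print(edit_distance("kucing", "kuching"))      # 1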
  4. Humphreys, K.; Demetriou, G.; Gaizauskas, R.: Bioinformatics applications of information extraction from scientific journal articles (2000) 0.02
    0.01931966 = product of:
      0.03863932 = sum of:
        0.03863932 = product of:
          0.07727864 = sum of:
            0.07727864 = weight(_text_:r in 4545) [ClassicSimilarity], result of:
              0.07727864 = score(doc=4545,freq=2.0), product of:
                0.15092614 = queryWeight, product of:
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.045593463 = queryNorm
                0.51202947 = fieldWeight in 4545, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4545)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  5. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.02
    0.018531853 = product of:
      0.037063707 = sum of:
        0.037063707 = product of:
          0.07412741 = sum of:
            0.07412741 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.07412741 = score(doc=4888,freq=2.0), product of:
                0.15966053 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045593463 = queryNorm
                0.46428138 = fieldWeight in 4888, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4888)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 3.2013 14:56:22
  6. Figuerola, C.G.; Gomez, R.; Lopez de San Roman, E.: Stemming and n-grams in Spanish : an evaluation of their impact in information retrieval (2000) 0.02
    0.016559707 = product of:
      0.033119414 = sum of:
        0.033119414 = product of:
          0.06623883 = sum of:
            0.06623883 = weight(_text_:r in 6501) [ClassicSimilarity], result of:
              0.06623883 = score(doc=6501,freq=2.0), product of:
                0.15092614 = queryWeight, product of:
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.045593463 = queryNorm
                0.4388824 = fieldWeight in 6501, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6501)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  7. Xu, J.; Weischedel, R.; Licuanan, A.: Evaluation of an extraction-based approach to answering definitional questions (2004) 0.01
    0.013799756 = product of:
      0.027599512 = sum of:
        0.027599512 = product of:
          0.055199023 = sum of:
            0.055199023 = weight(_text_:r in 4107) [ClassicSimilarity], result of:
              0.055199023 = score(doc=4107,freq=2.0), product of:
                0.15092614 = queryWeight, product of:
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.045593463 = queryNorm
                0.36573532 = fieldWeight in 4107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4107)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  8. Perera, P.; Witte, R.: ¬A self-learning context-aware lemmatizer for German (2005) 0.01
    0.011039805 = product of:
      0.02207961 = sum of:
        0.02207961 = product of:
          0.04415922 = sum of:
            0.04415922 = weight(_text_:r in 4638) [ClassicSimilarity], result of:
              0.04415922 = score(doc=4638,freq=2.0), product of:
                0.15092614 = queryWeight, product of:
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.045593463 = queryNorm
                0.29258826 = fieldWeight in 4638, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4638)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  9. Doszkocs, T.E.; Zamora, A.: Dictionary services and spelling aids for Web searching (2004) 0.01
    0.010919999 = product of:
      0.021839999 = sum of:
        0.021839999 = product of:
          0.043679997 = sum of:
            0.043679997 = weight(_text_:22 in 2541) [ClassicSimilarity], result of:
              0.043679997 = score(doc=2541,freq=4.0), product of:
                0.15966053 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045593463 = queryNorm
                0.27358043 = fieldWeight in 2541, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2541)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    14. 8.2004 17:22:56
    Source
    Online. 28(2004) no.3, S.22-29
  10. Schneider, J.W.; Borlund, P.: ¬A bibliometric-based semiautomatic approach to identification of candidate thesaurus terms : parsing and filtering of noun phrases from citation contexts (2005) 0.01
    0.010810248 = product of:
      0.021620495 = sum of:
        0.021620495 = product of:
          0.04324099 = sum of:
            0.04324099 = weight(_text_:22 in 156) [ClassicSimilarity], result of:
              0.04324099 = score(doc=156,freq=2.0), product of:
                0.15966053 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045593463 = queryNorm
                0.2708308 = fieldWeight in 156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=156)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    8. 3.2007 19:55:22
  11. Paolillo, J.C.: Linguistics and the information sciences (2009) 0.01
    0.010810248 = product of:
      0.021620495 = sum of:
        0.021620495 = product of:
          0.04324099 = sum of:
            0.04324099 = weight(_text_:22 in 3840) [ClassicSimilarity], result of:
              0.04324099 = score(doc=3840,freq=2.0), product of:
                0.15966053 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045593463 = queryNorm
                0.2708308 = fieldWeight in 3840, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3840)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    27. 8.2011 14:22:33
  12. ¬The semantics of relationships : an interdisciplinary perspective (2002) 0.01
    0.009757902 = product of:
      0.019515803 = sum of:
        0.019515803 = product of:
          0.039031606 = sum of:
            0.039031606 = weight(_text_:r in 1430) [ClassicSimilarity], result of:
              0.039031606 = score(doc=1430,freq=4.0), product of:
                0.15092614 = queryWeight, product of:
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.045593463 = queryNorm
                0.25861394 = fieldWeight in 1430, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1430)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Includes the contributions: Pt.1: Types of relationships: CRUSE, D.A.: Hyponymy and its varieties; FELLBAUM, C.: On the semantics of troponymy; PRIBBENOW, S.: Meronymic relationships: from classical mereology to complex part-whole relations; KHOO, C. et al.: The many facets of cause-effect relation - Pt.2: Relationships in knowledge representation and reasoning: GREEN, R.: Internally-structured conceptual models in cognitive semantics; HOVY, E.: Comparing sets of semantic relations in ontologies; GUARINO, N., C. WELTY: Identity and subsumption; JOUIS, C.: Logic of relationships - Pt.3: Applications of relationships: EVENS, M.: Thesaural relations in information retrieval; KHOO, C., S.H. MYAENG: Identifying semantic relations in text for information retrieval and information extraction; McCRAY, A.T., O. BODENREIDER: A conceptual framework for the biomedical domain; HETZLER, B.: Visual analysis and exploration of relationships
    Editor
    Green, R., C.A. Bean and S.H. Myaeng
  13. Green, R.: Automated identification of frame semantic relational structures (2000) 0.01
    0.00965983 = product of:
      0.01931966 = sum of:
        0.01931966 = product of:
          0.03863932 = sum of:
            0.03863932 = weight(_text_:r in 110) [ClassicSimilarity], result of:
              0.03863932 = score(doc=110,freq=2.0), product of:
                0.15092614 = queryWeight, product of:
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.045593463 = queryNorm
                0.25601473 = fieldWeight in 110, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=110)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  14. Bian, G.-W.; Chen, H.-H.: Cross-language information access to multilingual collections on the Internet (2000) 0.01
    0.009265927 = product of:
      0.018531853 = sum of:
        0.018531853 = product of:
          0.037063707 = sum of:
            0.037063707 = weight(_text_:22 in 4436) [ClassicSimilarity], result of:
              0.037063707 = score(doc=4436,freq=2.0), product of:
                0.15966053 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045593463 = queryNorm
                0.23214069 = fieldWeight in 4436, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4436)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    16. 2.2000 14:22:39
  15. Goller, C.; Löning, J.; Will, T.; Wolff, W.: Automatic document classification : a thorough evaluation of various methods (2000) 0.01
    0.0082798535 = product of:
      0.016559707 = sum of:
        0.016559707 = product of:
          0.033119414 = sum of:
            0.033119414 = weight(_text_:r in 5480) [ClassicSimilarity], result of:
              0.033119414 = score(doc=5480,freq=2.0), product of:
                0.15092614 = queryWeight, product of:
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.045593463 = queryNorm
                0.2194412 = fieldWeight in 5480, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5480)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Informationskompetenz - Basiskompetenz in der Informationsgesellschaft: Proceedings des 7. Internationalen Symposiums für Informationswissenschaft (ISI 2000), Hrsg.: G. Knorz u. R. Kuhlen
  16. Cimiano, P.; Völker, J.; Studer, R.: Ontologies on demand? : a description of the state-of-the-art, applications, challenges and trends for ontology learning from text (2006) 0.01
    0.0082798535 = product of:
      0.016559707 = sum of:
        0.016559707 = product of:
          0.033119414 = sum of:
            0.033119414 = weight(_text_:r in 6014) [ClassicSimilarity], result of:
              0.033119414 = score(doc=6014,freq=2.0), product of:
                0.15092614 = queryWeight, product of:
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.045593463 = queryNorm
                0.2194412 = fieldWeight in 6014, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6014)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  17. Kim, W.; Wilbur, W.J.: Corpus-based statistical screening for content-bearing terms (2001) 0.01
    0.0078063207 = product of:
      0.015612641 = sum of:
        0.015612641 = product of:
          0.031225283 = sum of:
            0.031225283 = weight(_text_:r in 5188) [ClassicSimilarity], result of:
              0.031225283 = score(doc=5188,freq=4.0), product of:
                0.15092614 = queryWeight, product of:
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.045593463 = queryNorm
                0.20689115 = fieldWeight in 5188, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5188)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Kim and Wilbur present three techniques for the algorithmic identification in text of content-bearing terms and phrases intended for human use as entry points or hyperlinks. Using a set of 1,075 terms from MEDLINE evaluated on a zero-to-four, stop-word to definite-content-word scale, they evaluate the ranked lists of their three methods based on their placement of content words in the top ranks. Data consist of the natural language elements of 304,057 MEDLINE records from 1996, and 173,252 Wall Street Journal records from the TIPSTER collection. Phrases are extracted by breaking at punctuation marks and stop words, normalized by lower-casing, replacement of non-alphanumerics with spaces, and the reduction of multiple spaces. In the "strength of context" approach each document is a vector of binary values for each word or word pair. The words or word pairs are removed from all documents, and the Robertson-Sparck Jones relevance weight for each term computed, negative weights replaced with zero, those below a randomness threshold ignored, and the remainder summed for each document, to yield a score for the document and finally to assign to the term the average document score for documents in which it occurred. The average of these word scores is assigned to the original phrase. The "frequency clumping" approach defines a random phrase as one whose distribution among documents is Poisson in character. A p-value, the probability that a phrase frequency of occurrence would be equal to, or less than, Poisson expectations, is computed, and a score assigned which is the negative log of that value. In the "database comparison" approach, if a phrase occurring in a document allows prediction that the document is in MEDLINE rather than in the Wall Street Journal, it is considered to be content-bearing for MEDLINE. The score is computed by dividing the number of occurrences of the term in MEDLINE by occurrences in the Journal, and taking the product of all these values. The one hundred top- and bottom-ranked phrases that occurred in at least 500 documents were collected for each method; the union set had 476 phrases. A second selection was made of two-word phrases occurring each in only three documents, with a union of 599 phrases. A judge then ranked the two sets of terms as to subject specificity on a 0 to 4 scale. Precision was the average subject specificity of the first r ranks, recall the fraction of the subject-specific phrases in the first r ranks, and eleven-point average precision was used as a summary measure. The three methods all move content-bearing terms forward in the lists, as does the use of the sum of the logs of the three methods.
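    A minimal sketch of one plausible reading of the "frequency clumping" score described above: the observed document frequency of a phrase is compared with a Poisson (random-scatter) expectation, and the negative log of the lower-tail probability is taken as the score. The tail direction, the helper names, and the example numbers are assumptions, not values from the article:

      import math

      def poisson_lower_tail(k, mu):
          # P(X <= k) for X ~ Poisson(mu)
          return sum(math.exp(-mu) * mu ** i / math.factorial(i) for i in range(k + 1))

      def clumping_score(observed_docs, expected_docs):
          # a content-bearing phrase "clumps": it occurs in fewer documents
          # than a random (Poisson) scatter of its occurrences would predict,
          # so a small lower-tail p-value yields a large score
          p = poisson_lower_tail(observed_docs, expected_docs)
          return -math.log(max(p, 1e-300))

      print(clumping_score(5, 20.0))  # phrase seen in 5 docs vs. ~20 expected by chance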
  18. Computational linguistics for the new millennium : divergence or synergy? Proceedings of the International Symposium held at the Ruprecht-Karls Universität Heidelberg, 21-22 July 2000. Festschrift in honour of Peter Hellwig on the occasion of his 60th birthday (2002) 0.01
    0.0077216057 = product of:
      0.015443211 = sum of:
        0.015443211 = product of:
          0.030886423 = sum of:
            0.030886423 = weight(_text_:22 in 4900) [ClassicSimilarity], result of:
              0.030886423 = score(doc=4900,freq=2.0), product of:
                0.15966053 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045593463 = queryNorm
                0.19345059 = fieldWeight in 4900, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4900)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  19. Chandrasekar, R.; Bangalore, S.: Glean : using syntactic information in document filtering (2002) 0.01
    0.006899878 = product of:
      0.013799756 = sum of:
        0.013799756 = product of:
          0.027599512 = sum of:
            0.027599512 = weight(_text_:r in 4257) [ClassicSimilarity], result of:
              0.027599512 = score(doc=4257,freq=2.0), product of:
                0.15092614 = queryWeight, product of:
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.045593463 = queryNorm
                0.18286766 = fieldWeight in 4257, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4257)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  20. Kettunen, K.; Kunttu, T.; Järvelin, K.: To stem or lemmatize a highly inflectional language in a probabilistic IR environment? (2005) 0.01
    0.006899878 = product of:
      0.013799756 = sum of:
        0.013799756 = product of:
          0.027599512 = sum of:
            0.027599512 = weight(_text_:r in 4395) [ClassicSimilarity], result of:
              0.027599512 = score(doc=4395,freq=2.0), product of:
                0.15092614 = queryWeight, product of:
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.045593463 = queryNorm
                0.18286766 = fieldWeight in 4395, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3102584 = idf(docFreq=4387, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4395)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - To show that stem generation compares well with lemmatization as a morphological tool for a highly inflectional language for IR purposes in a best-match retrieval system.
    Design/methodology/approach - Effects of three different morphological methods - lemmatization, stemming and stem production - for Finnish are compared in a probabilistic IR environment (INQUERY). Evaluation is done using a four-point relevance scale which is partitioned differently in different test settings.
    Findings - Results show that stem production, a lighter method than morphological lemmatization, compares well with lemmatization in a best-match IR environment. Differences in performance between stem production and lemmatization are small and not statistically significant in most of the tested settings. It is also shown that a hitherto rather neglected method of morphological processing for Finnish, stemming, performs reasonably well, although the stemmer used - a Porter stemmer implementation - is far from optimal for a morphologically complex language like Finnish. In another series of tests, the effects of compound splitting and derivational expansion of queries are tested.
    Practical implications - The usefulness of morphological lemmatization and stem generation for IR purposes can be estimated along many dimensions. At the average P-R level they behave very similarly in a probabilistic IR system, so the choice of method for highly inflectional languages needs to be weighed on other grounds as well.
    Originality/value - Results are achieved using Finnish as an example of a highly inflectional language. The results are of interest for anyone who is interested in processing the morphological variation of a highly inflected language for IR purposes.
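    For readers who want to try the lighter end of the morphology spectrum discussed here, a Snowball stemmer for Finnish is readily available, e.g. in NLTK. This is only an analogy: the study itself used a Porter-style stemmer inside INQUERY, and the words below are illustrative inflected forms, not test data from the article:

      from nltk.stem.snowball import SnowballStemmer

      stemmer = SnowballStemmer("finnish")
      # a few inflected forms of "talo" (house); output depends on the stemmer's rules
      for word in ["talossa", "taloissa", "taloista", "talot"]:
          print(word, "->", stemmer.stem(word))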