Search (4 results, page 1 of 1)

  • author_ss:"Savoy, J."
  1. Savoy, J.; Ndarugendamwo, M.; Vrajitoru, D.: Report on the TREC-4 experiment : combining probabilistic and vector-space schemes (1996) 0.02
    0.019222366 = product of:
      0.03844473 = sum of:
        0.03844473 = product of:
          0.07688946 = sum of:
            0.07688946 = weight(_text_:k in 7574) [ClassicSimilarity], result of:
              0.07688946 = score(doc=7574,freq=2.0), product of:
                0.16245733 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.045509085 = queryNorm
                0.47329018 = fieldWeight in 7574, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.09375 = fieldNorm(doc=7574)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    The Fourth Text Retrieval Conference (TREC-4). Ed.: D.K. Harman
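The ClassicSimilarity explanation tree above can be verified by hand. A minimal Python sketch reproducing the arithmetic, with the leaf values copied from the tree for doc 7574 (tf = sqrt(freq), queryWeight = idf x queryNorm, fieldWeight = tf x idf x fieldNorm, and two coord(1/2) factors):

```python
import math

# Leaf values copied from the explanation tree for doc 7574
query_norm = 0.045509085
idf = 3.569778            # idf(docFreq=3384, maxDocs=44218)
tf = math.sqrt(2.0)       # tf(freq=2.0) ~ 1.4142135
field_norm = 0.09375      # fieldNorm(doc=7574)
coord = 0.5               # coord(1/2), applied at two levels of the tree

query_weight = idf * query_norm           # ~ 0.16245733
field_weight = tf * idf * field_norm      # ~ 0.47329018
term_score = query_weight * field_weight  # ~ 0.07688946
final_score = term_score * coord * coord  # ~ 0.019222366

print(final_score)
```

The same recipe, with fieldNorm 0.0390625 instead of 0.09375, reproduces the 0.008009318 scores of results 2 and 3.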
  2. Savoy, J.: Text representation strategies : an example with the State of the union addresses (2016) 0.01
    0.008009318 = product of:
      0.016018637 = sum of:
        0.016018637 = product of:
          0.032037273 = sum of:
            0.032037273 = weight(_text_:k in 3042) [ClassicSimilarity], result of:
              0.032037273 = score(doc=3042,freq=2.0), product of:
                0.16245733 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.045509085 = queryNorm
                0.19720423 = fieldWeight in 3042, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3042)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Based on State of the Union addresses from 1790 to 2014 (225 speeches delivered by 42 presidents), this paper describes and evaluates different text representation strategies. To determine the most important words of a given text, the term frequencies (tf) or the tf-idf weighting scheme can be applied. Recently, latent Dirichlet allocation (LDA) has been proposed to define the topics included in a corpus. As another strategy, this study proposes to apply a vocabulary specificity measure (Z-score) to determine the most significantly overused word-types or short sequences of them. Our experiments show that the simple term frequency measure is not able to discriminate between specific terms associated with a document or a set of texts. Using the tf-idf or LDA approach, the selection requires some arbitrary decisions. Based on the term-specific measure (Z-score), the term selection has a clear theoretical basis. Moreover, the most significant sentences for each presidency can be determined. As another facet, we can visualize the dynamic evolution of usage of some terms associated with their specificity measures. Finally, this technique can be employed to define the most important lexical leaders introducing terms overused by the k following presidencies.
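A Z-score specificity measure of the kind the abstract describes is commonly defined as the standard-normal deviate of a word's frequency in one part of a corpus under a binomial null model. A minimal sketch of that common form (the paper's exact formulation may differ; the example numbers are invented):

```python
import math

def z_score(freq_in_part, part_size, freq_in_corpus, corpus_size):
    """Standard-normal deviate of a word's count in one corpus part,
    under a binomial null model with the whole-corpus relative
    frequency as the success probability."""
    p = freq_in_corpus / corpus_size      # expected relative frequency
    expected = part_size * p              # expected count in this part
    variance = part_size * p * (1.0 - p)  # binomial variance
    return (freq_in_part - expected) / math.sqrt(variance)

# Toy example: a word used 40 times in a 10,000-token set of speeches,
# but only 100 times in a 1,000,000-token corpus overall.
z = z_score(40, 10_000, 100, 1_000_000)
# A large positive Z marks the word as significantly overused here.
```

Ranking word-types by Z then gives the "most significantly overused" terms with a clear statistical threshold (e.g. |Z| > 2), which is the theoretical basis the abstract contrasts with the arbitrary cut-offs of tf-idf or LDA selection.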
  3. Ikae, C.; Savoy, J.: Gender identification on Twitter (2022) 0.01
    0.008009318 = product of:
      0.016018637 = sum of:
        0.016018637 = product of:
          0.032037273 = sum of:
            0.032037273 = weight(_text_:k in 445) [ClassicSimilarity], result of:
              0.032037273 = score(doc=445,freq=2.0), product of:
                0.16245733 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.045509085 = queryNorm
                0.19720423 = fieldWeight in 445, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=445)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    To determine the gender of a text's author, various feature types have been suggested (e.g., function words, n-grams of letters, etc.), leading to a huge number of stylistic markers. To determine the target category, different machine learning models have been suggested (e.g., logistic regression, decision tree, k-nearest neighbors, support vector machine, naïve Bayes, neural networks, and random forest). In this study, our first objective is to know whether or not the same model always achieves the best effectiveness when considering similar corpora under the same conditions. Thus, based on 7 CLEF-PAN collections, this study analyzes the effectiveness of 10 different classifiers. Our second aim is to propose a 2-stage feature selection to reduce the feature size to a few hundred terms without any significant change in the performance level compared to approaches using all the attributes (increase of around 5% after applying the proposed feature selection). Based on our experiments, neural network or random forest tend, on average, to produce the highest effectiveness. Moreover, empirical evidence indicates that reducing the feature set size to around 300 without penalizing the effectiveness is possible. Finally, based on such reduced feature sizes, an analysis reveals some of the specific terms that clearly discriminate between the 2 genders.
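A 2-stage feature selection of the general shape the abstract describes can be sketched as: first drop rare terms by document frequency, then keep the top-k terms by a class-discrimination score. The function name, the scoring rule (absolute gap in class-conditional relative frequencies), and the toy corpus below are all illustrative, not the paper's exact procedure:

```python
from collections import Counter

def two_stage_selection(docs_by_class, min_df=2, top_k=300):
    """Illustrative 2-stage feature selection for two classes.
    Stage 1: keep terms appearing in at least min_df documents.
    Stage 2: keep the top_k survivors whose class-conditional
    relative frequencies differ the most between the classes."""
    # Stage 1: document frequency over all documents
    df = Counter()
    for docs in docs_by_class.values():
        for doc in docs:
            df.update(set(doc))
    candidates = {t for t, n in df.items() if n >= min_df}

    # Stage 2: class-conditional relative frequencies
    totals, freqs = {}, {}
    for label, docs in docs_by_class.items():
        c = Counter(t for doc in docs for t in doc if t in candidates)
        freqs[label] = c
        totals[label] = sum(c.values()) or 1

    a, b = list(docs_by_class)
    def gap(term):
        return abs(freqs[a][term] / totals[a] - freqs[b][term] / totals[b])
    return sorted(candidates, key=gap, reverse=True)[:top_k]

# Tiny invented corpus: tokenized tweets keyed by gender label
corpus = {
    "f": [["she", "lovely", "day"], ["lovely", "photo", "she"]],
    "m": [["match", "score", "he"], ["he", "match", "today"]],
}
selected = two_stage_selection(corpus, min_df=2, top_k=5)
```

The reduced term set then feeds whichever of the 10 classifiers is being compared; the abstract's point is that a few hundred such terms suffice without hurting effectiveness.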
  4. Savoy, J.: Estimating the probability of an authorship attribution (2016) 0.01
    0.007707316 = product of:
      0.015414632 = sum of:
        0.015414632 = product of:
          0.030829264 = sum of:
            0.030829264 = weight(_text_:22 in 2937) [ClassicSimilarity], result of:
              0.030829264 = score(doc=2937,freq=2.0), product of:
                0.15936506 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045509085 = queryNorm
                0.19345059 = fieldWeight in 2937, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2937)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    7. 5.2016 21:22:27