Search (489 results, page 2 of 25)

  • Filter: language_ss:"e"
  • Filter: theme_ss:"Computerlinguistik"
  1. Greengrass, M.: Conflation methods for searching databases of Latin text (1996) 0.05
    0.049830075 = product of:
      0.09966015 = sum of:
        0.041327372 = weight(_text_:retrieval in 6987) [ClassicSimilarity], result of:
          0.041327372 = score(doc=6987,freq=4.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.33085006 = fieldWeight in 6987, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6987)
        0.029945528 = weight(_text_:use in 6987) [ClassicSimilarity], result of:
          0.029945528 = score(doc=6987,freq=2.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.23682132 = fieldWeight in 6987, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6987)
        0.020662563 = weight(_text_:of in 6987) [ClassicSimilarity], result of:
          0.020662563 = score(doc=6987,freq=14.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.31997898 = fieldWeight in 6987, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6987)
        0.007724685 = product of:
          0.01544937 = sum of:
            0.01544937 = weight(_text_:on in 6987) [ClassicSimilarity], result of:
              0.01544937 = score(doc=6987,freq=2.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.17010231 = fieldWeight in 6987, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6987)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    Describes the results of a project to develop conflation tools for searching databases of Latin text. Reports on the results of a questionnaire sent to 64 users of Latin text retrieval systems. Describes a Latin stemming algorithm that uses a simple longest match with some recoding, but differs from most stemmers in its use of two separate suffix dictionaries for processing query and database words. Describes a retrieval system in which a user inputs the principal components of their search term; these components are stemmed and the resulting stems are matched against the noun-based and verb-based stem dictionaries. Evaluates the system, describing its limitations, and describes a more complex system.
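    The algorithm described above amounts to longest-match suffix stripping with separate suffix tables for query and database words. A minimal Python sketch of that idea follows; the suffix lists, the minimum stem length, and the example words are invented for illustration and are not the dictionaries the project actually built.

      QUERY_SUFFIXES = {"ibus": "", "arum": "", "orum": "", "ae": "", "am": "", "is": "", "um": "", "us": "", "a": ""}
      DB_SUFFIXES = {"ibus": "", "orum": "", "ae": "", "is": "", "us": "", "a": ""}

      def stem(word, suffixes):
          # Strip the longest matching suffix; the dictionary values stand in
          # for the "recoding" step (rewriting the stem after stripping).
          for suffix in sorted(suffixes, key=len, reverse=True):
              if word.endswith(suffix) and len(word) - len(suffix) >= 3:
                  return word[:len(word) - len(suffix)] + suffixes[suffix]
          return word

      # Query words and database words go through different dictionaries.
      print(stem("rosarum", QUERY_SUFFIXES))  # -> 'ros'
      print(stem("rosis", DB_SUFFIXES))       # -> 'ros'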
  2. Litkowski, K.C.: Category development based on semantic principles (1997) 0.05
    0.049800795 = product of:
      0.09960159 = sum of:
        0.029222867 = weight(_text_:retrieval in 1824) [ClassicSimilarity], result of:
          0.029222867 = score(doc=1824,freq=2.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.23394634 = fieldWeight in 1824, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1824)
        0.029945528 = weight(_text_:use in 1824) [ClassicSimilarity], result of:
          0.029945528 = score(doc=1824,freq=2.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.23682132 = fieldWeight in 1824, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1824)
        0.027053645 = weight(_text_:of in 1824) [ClassicSimilarity], result of:
          0.027053645 = score(doc=1824,freq=24.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.41895083 = fieldWeight in 1824, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1824)
        0.013379549 = product of:
          0.026759097 = sum of:
            0.026759097 = weight(_text_:on in 1824) [ClassicSimilarity], result of:
              0.026759097 = score(doc=1824,freq=6.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.29462588 = fieldWeight in 1824, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1824)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    Describes the beginnings of computerized information retrieval and text analysis, particularly from the perspective of the use of thesauri and cataloguing systems. Describes formalisations of linguistic principles in the development of formal grammars and semantics. Presents the principles for category development, based on research in linguistic formalism continuing with ever richer grammars and semantic formalisms. Describes the progress of these formalisms in the examination of the categories used in the Minnesota Contextual Content Analysis approach. Describes current research toward an integration of semantic principles into content analysis abstraction procedures for characterising the category of any text.
    Footnote
    Contribution to a symposium based on presentations made at a panel of the 7th annual Conference of the Social Science Computing Association entitled Possibilities in Computer Content Analysis of Text, Minneapolis, Minnesota, USA, 1996
  3. Jones, D.: Analogical natural language processing (1996) 0.05
    0.04963594 = product of:
      0.1323625 = sum of:
        0.048399284 = weight(_text_:use in 4698) [ClassicSimilarity], result of:
          0.048399284 = score(doc=4698,freq=4.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.3827611 = fieldWeight in 4698, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.0625 = fieldNorm(doc=4698)
        0.012622404 = weight(_text_:of in 4698) [ClassicSimilarity], result of:
          0.012622404 = score(doc=4698,freq=4.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.19546966 = fieldWeight in 4698, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=4698)
        0.071340814 = product of:
          0.14268163 = sum of:
            0.14268163 = weight(_text_:computers in 4698) [ClassicSimilarity], result of:
              0.14268163 = score(doc=4698,freq=4.0), product of:
                0.21710795 = queryWeight, product of:
                  5.257537 = idf(docFreq=625, maxDocs=44218)
                  0.041294612 = queryNorm
                0.6571921 = fieldWeight in 4698, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.257537 = idf(docFreq=625, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4698)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    COMPASS
    Computers / Use of / Natural language
    Subject
    Computers / Use of / Natural language
  4. Herrera-Viedma, E.: Modeling the retrieval process for an information retrieval system using an ordinal fuzzy linguistic approach (2001) 0.05
    0.048779603 = product of:
      0.097559206 = sum of:
        0.036153924 = weight(_text_:retrieval in 5752) [ClassicSimilarity], result of:
          0.036153924 = score(doc=5752,freq=6.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.28943354 = fieldWeight in 5752, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5752)
        0.030249555 = weight(_text_:use in 5752) [ClassicSimilarity], result of:
          0.030249555 = score(doc=5752,freq=4.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.23922569 = fieldWeight in 5752, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5752)
        0.017640345 = weight(_text_:of in 5752) [ClassicSimilarity], result of:
          0.017640345 = score(doc=5752,freq=20.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.27317715 = fieldWeight in 5752, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5752)
        0.013515383 = product of:
          0.027030766 = sum of:
            0.027030766 = weight(_text_:on in 5752) [ClassicSimilarity], result of:
              0.027030766 = score(doc=5752,freq=12.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.29761705 = fieldWeight in 5752, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5752)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    A linguistic model for an Information Retrieval System (IRS) defined using an ordinal fuzzy linguistic approach is proposed. The ordinal fuzzy linguistic approach is presented, and its use for modeling the imprecision and subjectivity that appear in the user-IRS interaction is studied. The user queries and IRS responses are modeled linguistically using the concept of fuzzy linguistic variables. The system accepts Boolean queries whose terms can be weighted simultaneously by means of ordinal linguistic values according to three possible semantics: a symmetrical threshold semantic, a quantitative semantic, and an importance semantic. The first one identifies a new threshold semantic used to express qualitative restrictions on the documents retrieved for a given term. It is monotone increasing in index term weight for the threshold values that are on the right of the mid-value, and decreasing for the threshold values that are on the left of the mid-value. The second one is a new semantic proposal introduced to express quantitative restrictions on the documents retrieved for a term, i.e., restrictions on the number of documents that must be retrieved containing that term. The last one is the usual semantic of relative importance that has an effect when the term is in a Boolean expression. A bottom-up evaluation mechanism of queries is presented that coherently integrates the use of the three semantics and satisfies the separability property. The advantage of this IRS with respect to others is that users can express linguistically different semantic restrictions on the desired documents simultaneously, incorporating more flexibility in the user-IRS interaction
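    The symmetrical threshold semantic described above can be made concrete with a small sketch. The Python below crisply simplifies it to a yes/no match (the paper's matching is fuzzy and graded), and the seven-label scale is an assumption for illustration.

      LABELS = ["none", "very_low", "low", "medium", "high", "very_high", "total"]
      MID = LABELS.index("medium")

      def threshold_match(doc_weight, threshold):
          # Thresholds at or above the mid-value ask for index term weights
          # at least that high; thresholds below it ask for weights at most
          # that low -- the "symmetrical" behaviour the abstract describes.
          d, t = LABELS.index(doc_weight), LABELS.index(threshold)
          return d >= t if t >= MID else d <= t

      print(threshold_match("high", "medium"))   # True: weight >= threshold
      print(threshold_match("very_low", "low"))  # True: weight <= low threshold
      print(threshold_match("high", "low"))      # False under "at most low"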
    Source
    Journal of the American Society for Information Science and Technology. 52(2001) no.6, S.460-475
  5. Needham, R.M.; Sparck Jones, K.: Keywords and clumps (1985) 0.05
    0.048159793 = product of:
      0.096319586 = sum of:
        0.03267216 = weight(_text_:retrieval in 3645) [ClassicSimilarity], result of:
          0.03267216 = score(doc=3645,freq=10.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.26155996 = fieldWeight in 3645, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3645)
        0.014972764 = weight(_text_:use in 3645) [ClassicSimilarity], result of:
          0.014972764 = score(doc=3645,freq=2.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.11841066 = fieldWeight in 3645, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3645)
        0.017463053 = weight(_text_:of in 3645) [ClassicSimilarity], result of:
          0.017463053 = score(doc=3645,freq=40.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.2704316 = fieldWeight in 3645, product of:
              6.3245554 = tf(freq=40.0), with freq of:
                40.0 = termFreq=40.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3645)
        0.031211607 = product of:
          0.062423214 = sum of:
            0.062423214 = weight(_text_:computers in 3645) [ClassicSimilarity], result of:
              0.062423214 = score(doc=3645,freq=4.0), product of:
                0.21710795 = queryWeight, product of:
                  5.257537 = idf(docFreq=625, maxDocs=44218)
                  0.041294612 = queryNorm
                0.28752154 = fieldWeight in 3645, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.257537 = idf(docFreq=625, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3645)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    The selection that follows was chosen as it represents "a very early paper on the possibilities allowed by computers in documentation." In the early 1960s computers were being used to provide simple automatic indexing systems wherein keywords were extracted from documents. The problem with such systems was that they lacked vocabulary control, thus documents related in subject matter were not always collocated in retrieval. To improve retrieval by improving recall is the raison d'être of vocabulary control tools such as classifications and thesauri. The question arose whether it was possible, by automatic means, to construct classes of terms which, when substituted one for another, could be used to improve retrieval performance. One of the first theoretical approaches to this question was initiated by R.M. Needham and Karen Sparck Jones at the Cambridge Language Research Institute in England. The question was later pursued using experimental methodologies by Sparck Jones, who, as a Senior Research Associate in the Computer Laboratory at the University of Cambridge, has devoted her life's work to research in information retrieval and automatic natural language processing. Based on the principles of numerical taxonomy, automatic classification techniques start from the premise that two objects are similar to the degree that they share attributes in common. When these two objects are keywords, their similarity is measured in terms of the number of documents they index in common. Step 1 in automatic classification is to compute mathematically the degree to which two terms are similar. Step 2 is to group together those terms that are "most similar" to each other, forming equivalence classes of intersubstitutable terms. The technique for forming such classes varies and is the factor that characteristically distinguishes different approaches to automatic classification. The technique used by Needham and Sparck Jones, that of clumping, is described in the selection that follows. Questions that must be asked are whether the use of automatically generated classes really does improve retrieval performance and whether there is a true economic advantage in substituting mechanical for manual labor. Several years after her work with clumping, Sparck Jones was to observe that while it was not wholly satisfactory in itself, it was valuable in that it stimulated research into automatic classification. To this it might be added that it was valuable in that it introduced to library/information science the methods of numerical taxonomy, thus stimulating us to think again about the fundamental nature and purpose of classification. In this connection it might be useful to review how automatically derived classes differ from those of manually constructed classifications: 1) the manner of their derivation is purely a posteriori, the ultimate operationalization of the principle of literary warrant; 2) the relationship between members forming such classes is essentially statistical; the members of a given class are similar to each other not because they possess the class-defining characteristic but by virtue of sharing a family resemblance; and finally, 3) automatically derived classes are not related meaningfully one to another, that is, they are not ordered in traditional hierarchical and precedence relationships.
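    Step 1 of the procedure described above is easy to sketch: measure keyword similarity by the documents two terms index in common. The Python below uses a Dice coefficient over invented postings; Step 2, grouping the "most similar" terms into classes, is where clumping proper comes in and is only noted here.

      from itertools import combinations

      # Hypothetical postings: keyword -> set of document ids it indexes.
      postings = {
          "retrieval": {1, 2, 3, 5},
          "search":    {1, 2, 5},
          "indexing":  {2, 3, 4},
      }

      def similarity(t1, t2):
          # Similarity as a function of the number of documents the two
          # keywords index in common (Dice coefficient).
          a, b = postings[t1], postings[t2]
          return 2 * len(a & b) / (len(a) + len(b))

      for t1, t2 in combinations(postings, 2):
          print(t1, t2, round(similarity(t1, t2), 2))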
    Footnote
    Original in: Journal of documentation 20(1964) no.1, S.5-15.
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
  6. Rindflesch, T.C.; Aronson, A.R.: Semantic processing in information retrieval (1993) 0.05
    0.047558818 = product of:
      0.12682351 = sum of:
        0.06534432 = weight(_text_:retrieval in 4121) [ClassicSimilarity], result of:
          0.06534432 = score(doc=4121,freq=10.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.5231199 = fieldWeight in 4121, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4121)
        0.042349376 = weight(_text_:use in 4121) [ClassicSimilarity], result of:
          0.042349376 = score(doc=4121,freq=4.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.33491597 = fieldWeight in 4121, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4121)
        0.019129815 = weight(_text_:of in 4121) [ClassicSimilarity], result of:
          0.019129815 = score(doc=4121,freq=12.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.29624295 = fieldWeight in 4121, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4121)
      0.375 = coord(3/8)
    
    Abstract
    Intuition suggests that one way to enhance the information retrieval process would be the use of phrases to characterize the contents of text. A number of researchers, however, have noted that phrases alone do not improve retrieval effectiveness. In this paper we briefly review the use of phrases in information retrieval and then suggest extensions to this paradigm using semantic information. We claim that semantic processing, which can be viewed as expressing relations between the concepts represented by phrases, will in fact enhance retrieval effectiveness. The availability of the UMLS® domain model, which we exploit extensively, significantly contributes to the feasibility of this processing.
  7. Liddy, E.D.: Natural language processing for information retrieval and knowledge discovery (1998) 0.05
    0.046961453 = product of:
      0.093922906 = sum of:
        0.041327372 = weight(_text_:retrieval in 2345) [ClassicSimilarity], result of:
          0.041327372 = score(doc=2345,freq=4.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.33085006 = fieldWeight in 2345, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2345)
        0.022089208 = weight(_text_:of in 2345) [ClassicSimilarity], result of:
          0.022089208 = score(doc=2345,freq=16.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.34207192 = fieldWeight in 2345, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2345)
        0.010924355 = product of:
          0.02184871 = sum of:
            0.02184871 = weight(_text_:on in 2345) [ClassicSimilarity], result of:
              0.02184871 = score(doc=2345,freq=4.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.24056101 = fieldWeight in 2345, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2345)
          0.5 = coord(1/2)
        0.019581974 = product of:
          0.039163947 = sum of:
            0.039163947 = weight(_text_:22 in 2345) [ClassicSimilarity], result of:
              0.039163947 = score(doc=2345,freq=2.0), product of:
                0.1446067 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041294612 = queryNorm
                0.2708308 = fieldWeight in 2345, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2345)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    Natural language processing (NLP) is a powerful technology for the vital tasks of information retrieval (IR) and knowledge discovery (KD) which, in turn, feed the visualization systems of the present and future and enable knowledge workers to focus more of their time on the vital tasks of analysis and prediction
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : Illinois University at Urbana-Champaign, Graduate School of Library and Information Science
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  8. Argamon, S.; Whitelaw, C.; Chase, P.; Hota, S.R.; Garg, N.; Levitan, S.: Stylistic text classification using functional lexical features (2007) 0.05
    0.046456493 = product of:
      0.09291299 = sum of:
        0.025048172 = weight(_text_:retrieval in 280) [ClassicSimilarity], result of:
          0.025048172 = score(doc=280,freq=2.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.20052543 = fieldWeight in 280, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=280)
        0.036299463 = weight(_text_:use in 280) [ClassicSimilarity], result of:
          0.036299463 = score(doc=280,freq=4.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.2870708 = fieldWeight in 280, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.046875 = fieldNorm(doc=280)
        0.022201622 = weight(_text_:of in 280) [ClassicSimilarity], result of:
          0.022201622 = score(doc=280,freq=22.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.34381276 = fieldWeight in 280, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=280)
        0.009363732 = product of:
          0.018727465 = sum of:
            0.018727465 = weight(_text_:on in 280) [ClassicSimilarity], result of:
              0.018727465 = score(doc=280,freq=4.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.20619515 = fieldWeight in 280, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.046875 = fieldNorm(doc=280)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    Most text analysis and retrieval work to date has focused on the topic of a text; that is, what it is about. However, a text also contains much useful information in its style, or how it is written. This includes information about its author, its purpose, feelings it is meant to evoke, and more. This article develops a new type of lexical feature for use in stylistic text classification, based on taxonomies of various semantic functions of certain choice words or phrases. We demonstrate the usefulness of such features for the stylistic text classification tasks of determining author identity and nationality, the gender of literary characters, a text's sentiment (positive/negative evaluation), and the rhetorical character of scientific journal articles. We further show how the use of functional features aids in gaining insight about stylistic differences among different kinds of texts.
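    The feature type described above can be illustrated briefly: map choice words to semantic function classes and count the classes rather than the words. The mini-taxonomy below is invented for illustration; the article derives its taxonomies systematically.

      from collections import Counter

      TAXONOMY = {  # word -> functional class (illustrative only)
          "must": "modality.obligation",  "may": "modality.permission",
          "clearly": "comment.assertive", "perhaps": "comment.tentative",
      }

      def functional_features(tokens):
          # Count taxonomy classes, not surface words: two texts using
          # different words of the same function get similar features.
          return Counter(TAXONOMY[t] for t in tokens if t in TAXONOMY)

      print(functional_features("perhaps we must act , and clearly soon".split()))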
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.6, S.802-822
  9. Belbachir, F.; Boughanem, M.: Using language models to improve opinion detection (2018) 0.05
    0.04610965 = product of:
      0.0922193 = sum of:
        0.040903494 = weight(_text_:retrieval in 5044) [ClassicSimilarity], result of:
          0.040903494 = score(doc=5044,freq=12.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.32745665 = fieldWeight in 5044, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=5044)
        0.02963839 = weight(_text_:use in 5044) [ClassicSimilarity], result of:
          0.02963839 = score(doc=5044,freq=6.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.23439234 = fieldWeight in 5044, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.03125 = fieldNorm(doc=5044)
        0.011807178 = weight(_text_:of in 5044) [ClassicSimilarity], result of:
          0.011807178 = score(doc=5044,freq=14.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.18284513 = fieldWeight in 5044, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=5044)
        0.009870241 = product of:
          0.019740483 = sum of:
            0.019740483 = weight(_text_:on in 5044) [ClassicSimilarity], result of:
              0.019740483 = score(doc=5044,freq=10.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.21734878 = fieldWeight in 5044, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5044)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    Opinion mining is one of the most important research tasks in the information retrieval research community. With the huge volume of opinionated data available on the Web, approaches must be developed to differentiate opinion from fact. In this paper, we present a lexicon-based approach for opinion retrieval. Generally, opinion retrieval consists of two stages: relevance to the query and opinion detection. In our work, we focus on the second stage, which itself focuses on detecting opinionated documents. We compare the document to be analyzed with opinionated sources that contain subjective information. We hypothesize that a document with a strong similarity to opinionated sources is more likely to be opinionated itself. Typical lexicon-based approaches treat and choose their opinion sources according to their test collection, then calculate the opinion score based on the frequency of subjective terms in the document. In our work, we use different open opinion collections without any specific treatment and consider them as a reference collection. We then use language models to determine opinion scores. The analysis document and reference collection are represented by different language models (i.e., Dirichlet, Jelinek-Mercer and two-stage models). These language models are generally used in information retrieval to represent the relationship between documents and queries. However, in our study, we modify these language models to represent opinionated documents. We carry out several experiments using Text REtrieval Conference (TREC) Blogs 06 as our analysis collection and the Internet Movie Database (IMDB), Multi-Perspective Question Answering (MPQA) and CHESLY as our reference collections. To improve opinion detection, we study the impact of using different language models to represent the document and reference collection alongside different combinations of opinion and retrieval scores. We then use this data to deduce the best opinion detection models. Using the best models, our approach improves on the best baseline of TREC Blog (baseline4) by 30%.
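    The scoring idea in this abstract can be sketched in a few lines: estimate a language model from an opinionated reference collection and score a document by the likelihood that model assigns to it. The sketch below uses Jelinek-Mercer smoothing with toy corpora and an arbitrary lambda; the paper also evaluates Dirichlet and two-stage models.

      from collections import Counter
      import math

      # Invented toy corpora; the paper uses IMDB, MPQA and CHESLY.
      reference = "great terrible love hate wonderful awful opinion".split()
      background = "the movie was released in cinemas last year".split() + reference

      ref_counts, bg_counts = Counter(reference), Counter(background)
      ref_total, bg_total = sum(ref_counts.values()), sum(bg_counts.values())
      LAMBDA = 0.7  # interpolation weight, chosen arbitrarily here

      def p_jm(term):
          # Jelinek-Mercer: mix the reference model with an add-one-smoothed
          # background model so unseen terms keep some probability mass.
          ref = LAMBDA * ref_counts[term] / ref_total
          bg = (1 - LAMBDA) * (bg_counts[term] + 1) / (bg_total + len(bg_counts))
          return ref + bg

      def opinion_score(doc_tokens):
          # Higher log-likelihood = closer to the opinionated reference.
          return sum(math.log(p_jm(t)) for t in doc_tokens)

      print(opinion_score("i love this wonderful movie".split()))
      print(opinion_score("the movie was released last year".split()))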
  10. Beitzel, S.M.; Jensen, E.C.; Chowdhury, A.; Grossman, D.; Frieder, O.; Goharian, N.: Fusion of effective retrieval strategies in the same information retrieval system (2004) 0.05
    0.04597036 = product of:
      0.12258763 = sum of:
        0.07920927 = weight(_text_:retrieval in 2502) [ClassicSimilarity], result of:
          0.07920927 = score(doc=2502,freq=20.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.63411707 = fieldWeight in 2502, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=2502)
        0.025667597 = weight(_text_:use in 2502) [ClassicSimilarity], result of:
          0.025667597 = score(doc=2502,freq=2.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.20298971 = fieldWeight in 2502, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.046875 = fieldNorm(doc=2502)
        0.017710768 = weight(_text_:of in 2502) [ClassicSimilarity], result of:
          0.017710768 = score(doc=2502,freq=14.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.2742677 = fieldWeight in 2502, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2502)
      0.375 = coord(3/8)
    
    Abstract
    Prior efforts have shown that under certain situations retrieval effectiveness may be improved via the use of data fusion techniques. Although these improvements have been observed from the fusion of result sets from several distinct information retrieval systems, it has often been thought that fusing different document retrieval strategies in a single information retrieval system will lead to similar improvements. In this study, we show that this is not the case. We hold constant systemic differences such as parsing, stemming, phrase processing, and relevance feedback, and fuse result sets generated from highly effective retrieval strategies in the same information retrieval system. From this, we show that data fusion of highly effective retrieval strategies alone shows little or no improvement in retrieval effectiveness. Furthermore, we present a detailed analysis of the performance of modern data fusion approaches, and demonstrate the reasons why they do not perform well when applied to this problem. Detailed results and analyses are included to support our conclusions.
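    For readers unfamiliar with the fusion rules at issue, the sketch below shows two classic ones, CombSUM and CombMNZ, over invented result sets; the study's own experimental setup is considerably more involved.

      # Two result sets from different retrieval strategies in one system;
      # scores are assumed already normalized to [0, 1].
      runs = [
          {"d1": 0.9, "d2": 0.4, "d3": 0.1},
          {"d1": 0.7, "d3": 0.6},
      ]

      def comb_sum(doc):
          # CombSUM: add the scores a document earns across all runs.
          return sum(run.get(doc, 0.0) for run in runs)

      def comb_mnz(doc):
          # CombMNZ: CombSUM scaled by the number of runs retrieving the doc.
          hits = sum(1 for run in runs if doc in run)
          return hits * comb_sum(doc)

      docs = {d for run in runs for d in run}
      for d in sorted(docs, key=comb_mnz, reverse=True):
          print(d, round(comb_mnz(d), 2))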
    Source
    Journal of the American Society for Information Science and Technology. 55(2004) no.10, S.859-868
  11. Pollitt, A.S.; Ellis, G.: Multilingual access to document databases (1993) 0.05
    0.04500523 = product of:
      0.09001046 = sum of:
        0.04338471 = weight(_text_:retrieval in 1302) [ClassicSimilarity], result of:
          0.04338471 = score(doc=1302,freq=6.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.34732026 = fieldWeight in 1302, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=1302)
        0.025667597 = weight(_text_:use in 1302) [ClassicSimilarity], result of:
          0.025667597 = score(doc=1302,freq=2.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.20298971 = fieldWeight in 1302, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.046875 = fieldNorm(doc=1302)
        0.011594418 = weight(_text_:of in 1302) [ClassicSimilarity], result of:
          0.011594418 = score(doc=1302,freq=6.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.17955035 = fieldWeight in 1302, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=1302)
        0.009363732 = product of:
          0.018727465 = sum of:
            0.018727465 = weight(_text_:on in 1302) [ClassicSimilarity], result of:
              0.018727465 = score(doc=1302,freq=4.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.20619515 = fieldWeight in 1302, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1302)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    This paper examines the reasons why approaches to facilitate document retrieval that apply AI (Artificial Intelligence) or Expert Systems techniques, relying on so-called "natural language" query statements from the end-user, will result in sub-optimal solutions. It does so by reflecting on the nature of language and the fundamental problems in document retrieval. Support is given to the work of thesaurus builders and indexers, with illustrations of how their work may be utilised in a generally applicable computer-based document retrieval system using Multilingual MenUSE software. The EuroMenUSE interface providing multilingual document access to EPOQUE, the European Parliament's Online Query System, is described.
    Source
    Information as a Global Commodity - Communication, Processing and Use (CAIS/ACSI '93) : 21st Annual Conference Canadian Association for Information Science, Antigonish, Nova Scotia, Canada. July 1993
  12. Chowdhury, A.; McCabe, M.C.: Improving information retrieval systems using part of speech tagging (1993) 0.04
    0.044440318 = product of:
      0.088880636 = sum of:
        0.035423465 = weight(_text_:retrieval in 1061) [ClassicSimilarity], result of:
          0.035423465 = score(doc=1061,freq=4.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.2835858 = fieldWeight in 1061, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=1061)
        0.025667597 = weight(_text_:use in 1061) [ClassicSimilarity], result of:
          0.025667597 = score(doc=1061,freq=2.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.20298971 = fieldWeight in 1061, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.046875 = fieldNorm(doc=1061)
        0.021168415 = weight(_text_:of in 1061) [ClassicSimilarity], result of:
          0.021168415 = score(doc=1061,freq=20.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.32781258 = fieldWeight in 1061, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=1061)
        0.006621159 = product of:
          0.013242318 = sum of:
            0.013242318 = weight(_text_:on in 1061) [ClassicSimilarity], result of:
              0.013242318 = score(doc=1061,freq=2.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.14580199 = fieldWeight in 1061, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1061)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    The object of Information Retrieval is to retrieve all relevant documents for a user query and only those relevant documents. Much research has focused on achieving this objective with little regard for storage overhead or performance. In this paper we evaluate the use of Part of Speech Tagging to improve the index storage overhead and the general speed of the system, with only a minimal reduction in precision/recall measurements. We tagged 500 MB of the Los Angeles Times 1990 and 1989 document collection provided by TREC for parts of speech. We then experimented to find the most relevant parts of speech to index. We show that 90% of precision/recall is achieved with 40% of the document collection's terms. We also show that this is an improvement in overhead with only a 1% reduction in precision/recall.
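    The indexing idea is straightforward to sketch: tag the text and admit only the most content-bearing parts of speech into the index. The Python below keeps nouns via NLTK (assuming NLTK and its tokenizer and tagger data are installed); which tags are actually worth keeping is exactly what the paper determines experimentally.

      import nltk  # assumes the 'punkt' and tagger models are downloaded

      def index_terms(text):
          # Keep only nouns (Penn Treebank 'NN*' tags) as index terms.
          tagged = nltk.pos_tag(nltk.word_tokenize(text))
          return [word.lower() for word, tag in tagged if tag.startswith("NN")]

      print(index_terms("The tagger prunes function words from the index."))
      # e.g. ['tagger', 'function', 'words', 'index']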
  13. SIGIR'92 : Proceedings of the 15th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (1992) 0.04
    0.044256803 = product of:
      0.088513605 = sum of:
        0.048460644 = weight(_text_:retrieval in 6671) [ClassicSimilarity], result of:
          0.048460644 = score(doc=6671,freq=22.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.3879561 = fieldWeight in 6671, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02734375 = fieldNorm(doc=6671)
        0.014972764 = weight(_text_:use in 6671) [ClassicSimilarity], result of:
          0.014972764 = score(doc=6671,freq=2.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.11841066 = fieldWeight in 6671, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.02734375 = fieldNorm(doc=6671)
        0.015619429 = weight(_text_:of in 6671) [ClassicSimilarity], result of:
          0.015619429 = score(doc=6671,freq=32.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.24188137 = fieldWeight in 6671, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.02734375 = fieldNorm(doc=6671)
        0.009460769 = product of:
          0.018921537 = sum of:
            0.018921537 = weight(_text_:on in 6671) [ClassicSimilarity], result of:
              0.018921537 = score(doc=6671,freq=12.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.20833194 = fieldWeight in 6671, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=6671)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    The conference was organized by the Royal School of Librarianship in Copenhagen and was held in cooperation with AICA-GLIR (Italy), BCS-IRSG (UK), DD (Denmark), GI (Germany), and INRIA (France). It had support from Apple Computer, Denmark. The volume contains the 32 papers and reports on the two panel sessions, moderated by W.B. Croft and R. Kovetz, respectively.
    Content
    HARMAN, D.: Relevance feedback revisited; AALBERSBERG, I.J.: Incremental relevance feedback; TAGUE-SUTCLIFFE, J.: Measuring the informativeness of a retrieval process; LEWIS, D.D.: An evaluation of phrasal and clustered representations on a text categorization task; BLOSSEVILLE, M.J., G. HÉBRAIL, M.G. MONTEIL u. N. PÉNOT: Automatic document classification: natural language processing, statistical analysis, and expert system techniques used together; MASAND, B., G. LINOFF u. D. WALTZ: Classifying news stories using memory based reasoning; KEEN, E.M.: Term position ranking: some new test results; CROUCH, C.J. u. B. YANG: Experiments in automatic statistical thesaurus construction; GREFENSTETTE, G.: Use of syntactic context to produce term association lists for text retrieval; ANICK, P.G. u. R.A. FLYNN: Versioning a full-text information retrieval system; BURKOWSKI, F.J.: Retrieval activities in a database consisting of heterogeneous collections; DEERWESTER, S.C., K. WACLENA u. M. LaMAR: A textual object management system; NIE, J.-Y.: Towards a probabilistic modal logic for semantic-based information retrieval; WANG, A.W., S.K.M. WONG u. Y.Y. YAO: An analysis of vector space models based on computational geometry; BARTELL, B.T., G.W. COTTRELL u. R.K. BELEW: Latent semantic indexing is an optimal special case of multidimensional scaling; GLAVITSCH, U. u. P. SCHÄUBLE: A system for retrieving speech documents; MARGULIS, E.L.: N-Poisson document modelling; HESS, M.: An incrementally extensible document retrieval system based on linguistics and logical principles; COOPER, W.S., F.C. GEY u. D.P. DABNEY: Probabilistic retrieval based on staged logistic regression; FUHR, N.: Integration of probabilistic fact and text retrieval; CROFT, B., L.A. SMITH u. H. TURTLE: A loosely-coupled integration of a text retrieval system and an object-oriented database system; DUMAIS, S.T. u. J. NIELSEN: Automating the assignment of submitted manuscripts to reviewers; GOST, M.A. u. M. MASOTTI: Design of an OPAC database to permit different subject searching accesses; ROBERTSON, A.M. u. P. WILLETT: Searching for historical word forms in a database of 17th century English text using spelling correction methods; FOX, E.A., Q.F. CHEN u. L.S. HEATH: A faster algorithm for constructing minimal perfect hash functions; MOFFAT, A. u. J. ZOBEL: Parameterised compression for sparse bitmaps; GRANDI, F., P. TIBERIO u. P. ZEZULA: Frame-sliced partitioned parallel signature files; ALLEN, B.: Cognitive differences in end user searching of a CD-ROM index; SONNENWALD, D.H.: Developing a theory to guide the process of designing information retrieval systems; CUTTING, D.R., J.O. PEDERSEN, D. KARGER u. J.W. TUKEY: Scatter/Gather: a cluster-based approach to browsing large document collections; CHALMERS, M. u. P. CHITSON: Bead: Explorations in information visualization; WILLIAMSON, C. u. B. SHNEIDERMAN: The dynamic HomeFinder: evaluating dynamic queries in a real-estate information exploring system
  14. Renouf, A.: Sticking to the text : a corpus linguist's view of language (1993) 0.04
    0.043777823 = product of:
      0.08755565 = sum of:
        0.029222867 = weight(_text_:retrieval in 2314) [ClassicSimilarity], result of:
          0.029222867 = score(doc=2314,freq=2.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.23394634 = fieldWeight in 2314, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2314)
        0.029945528 = weight(_text_:use in 2314) [ClassicSimilarity], result of:
          0.029945528 = score(doc=2314,freq=2.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.23682132 = fieldWeight in 2314, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2314)
        0.020662563 = weight(_text_:of in 2314) [ClassicSimilarity], result of:
          0.020662563 = score(doc=2314,freq=14.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.31997898 = fieldWeight in 2314, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2314)
        0.007724685 = product of:
          0.01544937 = sum of:
            0.01544937 = weight(_text_:on in 2314) [ClassicSimilarity], result of:
              0.01544937 = score(doc=2314,freq=2.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.17010231 = fieldWeight in 2314, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2314)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    Corpus linguistics is the study of large, computer-held bodies of text. Some corpus linguists are concerned with language description for its own sake. On the corpus-linguistic continuum, the study of raw ASCII text is situated at one end, and the study of heavily pre-coded text at the other. Discusses the use of word frequency to identify changes in the lexicon; word repetition and word positioning in automatic abstracting; and word clusters in automatic text retrieval. Compares the machine extract with manual abstracts. Abstractors and indexers may find themselves taking the original wording of the text more into account as the focus moves towards the electronic medium and away from the hard copy.
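    The frequency-based "machine extract" mentioned above can be sketched simply: score sentences by the summed corpus frequency of their content words and keep the top scorers. The stoplist, scoring rule, and sentences below are invented for illustration.

      from collections import Counter

      STOP = {"the", "a", "of", "and", "to", "in", "is", "was"}  # toy stoplist

      def machine_extract(sentences, k=1):
          # Score each sentence by how much frequent lexis it repeats.
          words = [w.strip(".,") for s in sentences
                   for w in s.lower().split() if w not in STOP]
          freq = Counter(words)
          def score(s):
              return sum(freq[w.strip(".,")] for w in s.lower().split()
                         if w not in STOP)
          return sorted(sentences, key=score, reverse=True)[:k]

      print(machine_extract([
          "Corpus linguistics studies large bodies of text.",
          "Text corpora support the study of the lexicon.",
          "The weather was unremarkable.",
      ]))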
  15. Herrera-Viedma, E.; Cordón, O.; Herrera, J.C.; Luque, M.: ¬An IRS based on multi-granular linguistic information (2003) 0.04
    0.043425888 = product of:
      0.086851776 = sum of:
        0.035423465 = weight(_text_:retrieval in 2740) [ClassicSimilarity], result of:
          0.035423465 = score(doc=2740,freq=4.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.2835858 = fieldWeight in 2740, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=2740)
        0.025667597 = weight(_text_:use in 2740) [ClassicSimilarity], result of:
          0.025667597 = score(doc=2740,freq=2.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.20298971 = fieldWeight in 2740, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.046875 = fieldNorm(doc=2740)
        0.016396983 = weight(_text_:of in 2740) [ClassicSimilarity], result of:
          0.016396983 = score(doc=2740,freq=12.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.25392252 = fieldWeight in 2740, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2740)
        0.009363732 = product of:
          0.018727465 = sum of:
            0.018727465 = weight(_text_:on in 2740) [ClassicSimilarity], result of:
              0.018727465 = score(doc=2740,freq=4.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.20619515 = fieldWeight in 2740, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2740)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    An information retrieval system (IRS) based on fuzzy multi-granular linguistic information is proposed. The system has an evaluation method to process multi-granular linguistic information, in such a way that the inputs to the IRS are represented in a different linguistic domain than the outputs. The system accepts Boolean queries whose terms are weighted by means of the ordinal linguistic values represented by the linguistic variable "Importance" assessed on a label set S. The system evaluates the weighted queries according to a threshold semantic and obtains the linguistic retrieval status values (RSV) of documents, represented by a linguistic variable "Relevance" expressed in a different label set S'. The advantage of this linguistic IRS with respect to others is that the use of multi-granular linguistic information facilitates and improves the IRS-user interaction.
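    One step the abstract presupposes is moving between label sets of different granularity. The proportional mapping below is a deliberately naive stand-in for the paper's evaluation method, with invented label sets.

      S_IN  = ["low", "medium", "high"]                   # query-side labels
      S_OUT = ["none", "low", "medium", "high", "total"]  # relevance labels

      def translate(label, src, dst):
          # Map the label's relative position in the source scale onto the
          # nearest label of the destination scale.
          pos = src.index(label) / (len(src) - 1)
          return dst[round(pos * (len(dst) - 1))]

      print(translate("medium", S_IN, S_OUT))  # -> 'medium'
      print(translate("high", S_IN, S_OUT))    # -> 'total' (endpoint to endpoint)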
    Source
    Challenges in knowledge representation and organization for the 21st century: Integration of knowledge across boundaries. Proceedings of the 7th ISKO International Conference Granada, Spain, July 10-13, 2002. Ed.: M. López-Huertas
  16. Abu-Salem, H.; Al-Omari, M.; Evens, M.W.: Stemming methodologies over individual query words for an Arabic information retrieval system (1999) 0.04
    0.043147296 = product of:
      0.08629459 = sum of:
        0.04174695 = weight(_text_:retrieval in 3672) [ClassicSimilarity], result of:
          0.04174695 = score(doc=3672,freq=8.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.33420905 = fieldWeight in 3672, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3672)
        0.021389665 = weight(_text_:use in 3672) [ClassicSimilarity], result of:
          0.021389665 = score(doc=3672,freq=2.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.1691581 = fieldWeight in 3672, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3672)
        0.017640345 = weight(_text_:of in 3672) [ClassicSimilarity], result of:
          0.017640345 = score(doc=3672,freq=20.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.27317715 = fieldWeight in 3672, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3672)
        0.0055176322 = product of:
          0.0110352645 = sum of:
            0.0110352645 = weight(_text_:on in 3672) [ClassicSimilarity], result of:
              0.0110352645 = score(doc=3672,freq=2.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.121501654 = fieldWeight in 3672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3672)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    Stemming is one of the most important factors that affect the performance of information retrieval systems. This article investigates how to improve the performance of an Arabic information retrieval system by choosing the retrieval method for each individual word of a query depending on the importance of the WORD, the STEM, or the ROOT of that query term in the database. This method, called Mixed Stemming, computes term importance using a weighting scheme that combines the Term Frequency (TF) and the Inverse Document Frequency (IDF), called TFxIDF. An extended version of the Arabic IRS is designed, implemented, and evaluated to reduce the number of irrelevant documents retrieved. The results of the experiment suggest that the proposed method outperforms the Word index method using the TFxIDF weighting scheme. It also outperforms the Stem and Root index methods using the Binary weighting scheme, but does not outperform either of them using the TFxIDF weighting scheme
    Source
    Journal of the American Society for Information Science. 50(1999) no.6, S.524-529
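
    As a rough illustration of the per-term selection entry 16 describes, the sketch below picks, for each query word, whichever of its WORD, STEM, or ROOT forms carries the highest TFxIDF importance in a toy collection. The stem() and root() functions are crude placeholders, not real Arabic morphology, and the collection is invented:

      # Hedged sketch of the "mixed stemming" idea: index each query term by
      # the representation (WORD, STEM, or ROOT) with the highest TFxIDF
      # importance. stem() and root() are stand-ins for morphological analysis.
      import math
      from collections import Counter

      def stem(w):  return w[:4]      # placeholder for an Arabic light stemmer
      def root(w):  return w[:3]      # placeholder for root extraction

      def tfidf(form, docs):
          """Collection-level importance: total term frequency x inverse
          document frequency, over docs given as Counters."""
          df = sum(1 for d in docs if form in d)
          if df == 0:
              return 0.0
          tf = sum(d[form] for d in docs if form in d)
          return tf * math.log(len(docs) / df)

      def choose_index_form(word, docs_word, docs_stem, docs_root):
          candidates = {
              "WORD": tfidf(word, docs_word),
              "STEM": tfidf(stem(word), docs_stem),
              "ROOT": tfidf(root(word), docs_root),
          }
          return max(candidates, key=candidates.get)

      # Toy collection, held as Counters over each representation.
      texts = [["katabat", "kitab"], ["kitab", "maktaba"], ["kataba"]]
      docs_word = [Counter(t) for t in texts]
      docs_stem = [Counter(stem(w) for w in t) for t in texts]
      docs_root = [Counter(root(w) for w in t) for t in texts]

      print(choose_index_form("kitab", docs_word, docs_stem, docs_root))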
  17. Zhou, L.; Zhang, D.: NLPIR: a theoretical framework for applying Natural Language Processing to information retrieval (2003) 0.04
    0.042711493 = product of:
      0.085422985 = sum of:
        0.035423465 = weight(_text_:retrieval in 5148) [ClassicSimilarity], result of:
          0.035423465 = score(doc=5148,freq=4.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.2835858 = fieldWeight in 5148, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=5148)
        0.025667597 = weight(_text_:use in 5148) [ClassicSimilarity], result of:
          0.025667597 = score(doc=5148,freq=2.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.20298971 = fieldWeight in 5148, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.046875 = fieldNorm(doc=5148)
        0.017710768 = weight(_text_:of in 5148) [ClassicSimilarity], result of:
          0.017710768 = score(doc=5148,freq=14.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.2742677 = fieldWeight in 5148, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=5148)
        0.006621159 = product of:
          0.013242318 = sum of:
            0.013242318 = weight(_text_:on in 5148) [ClassicSimilarity], result of:
              0.013242318 = score(doc=5148,freq=2.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.14580199 = fieldWeight in 5148, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5148)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    Zhou and Zhang believe that, for the potential of natural language processing (NLP) to be reached in information retrieval, a framework for guiding the effort should be in place. They provide a graphic model that identifies different levels of natural language processing effort during the query-document matching process. A direct matching approach uses little NLP, an expansion approach with thesauri a little more, but an extraction approach will often use a variety of NLP techniques as well as statistical methods. A transformation approach, which creates intermediate representations of documents and queries, is a step higher in NLP usage, and a uniform approach, which relies on a body of knowledge beyond that of the documents and queries to provide inference and sense-making prior to matching, would require maximum NLP effort.
    Source
    Journal of the American Society for Information Science and Technology. 54(2003) no.2, S.115-123
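
    The two lowest rungs of the framework in entry 17 are easy to contrast concretely. A minimal sketch, with an invented two-entry thesaurus and simple overlap scoring (both are illustrative assumptions, not the paper's apparatus):

      # Hedged sketch: direct term matching versus thesaurus-based query
      # expansion, the two least NLP-intensive levels of the NLPIR framework.
      THESAURUS = {"car": ["automobile", "vehicle"], "retrieval": ["search"]}

      def direct_match(query_terms, doc_terms):
          """Direct approach: overlap between raw query and document terms."""
          return len(set(query_terms) & set(doc_terms))

      def expanded_match(query_terms, doc_terms):
          """Expansion approach: add thesaurus synonyms before matching."""
          expanded = set(query_terms)
          for t in query_terms:
              expanded.update(THESAURUS.get(t, []))
          return len(expanded & set(doc_terms))

      doc = ["automobile", "search", "engine"]
      q = ["car", "retrieval"]
      print(direct_match(q, doc), expanded_match(q, doc))   # -> 0 2

    The direct matcher misses both concepts on vocabulary mismatch alone, which is exactly the gap each successive level of the framework spends more NLP effort to close.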
  18. Gachot, D.A.; Lange, E.; Yang, J.: ¬The SYSTRAN NLP browser : an application of machine translation technology in cross-language information retrieval (1998) 0.04
    0.042524934 = product of:
      0.11339982 = sum of:
        0.08676942 = weight(_text_:retrieval in 6213) [ClassicSimilarity], result of:
          0.08676942 = score(doc=6213,freq=6.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.6946405 = fieldWeight in 6213, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=6213)
        0.013388081 = weight(_text_:of in 6213) [ClassicSimilarity], result of:
          0.013388081 = score(doc=6213,freq=2.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.20732689 = fieldWeight in 6213, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.09375 = fieldNorm(doc=6213)
        0.013242318 = product of:
          0.026484637 = sum of:
            0.026484637 = weight(_text_:on in 6213) [ClassicSimilarity], result of:
              0.026484637 = score(doc=6213,freq=2.0), product of:
                0.090823986 = queryWeight, product of:
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.041294612 = queryNorm
                0.29160398 = fieldWeight in 6213, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.199415 = idf(docFreq=13325, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6213)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Series
    The Kluwer International series on information retrieval
    Source
    Cross-language information retrieval. Ed.: G. Grefenstette
  19. Armstrong, G.: Computer-assisted literary analysis using TACT, a text-retrieval program (1996) 0.04
    0.04242603 = product of:
      0.11313608 = sum of:
        0.047231287 = weight(_text_:retrieval in 5690) [ClassicSimilarity], result of:
          0.047231287 = score(doc=5690,freq=4.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.37811437 = fieldWeight in 5690, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=5690)
        0.0154592255 = weight(_text_:of in 5690) [ClassicSimilarity], result of:
          0.0154592255 = score(doc=5690,freq=6.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.23940048 = fieldWeight in 5690, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=5690)
        0.050445575 = product of:
          0.10089115 = sum of:
            0.10089115 = weight(_text_:computers in 5690) [ClassicSimilarity], result of:
              0.10089115 = score(doc=5690,freq=2.0), product of:
                0.21710795 = queryWeight, product of:
                  5.257537 = idf(docFreq=625, maxDocs=44218)
                  0.041294612 = queryNorm
                0.464705 = fieldWeight in 5690, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.257537 = idf(docFreq=625, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5690)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    TACT is a text retrieval program which allows the user to work with a text in order to discover the location of lexical units and internal patterns. In literary analysis it permits investigation of the text's stylistic, grammatical and lexical features, as well as acting as a computerized concordance. Describes how it has been integrated into the teaching of Italian literature at Edinburgh University, Scotland
    Source
    Computers and texts. 1996, no.11, S.8-11
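
    The concordance function entry 19 attributes to TACT is essentially a keyword-in-context (KWIC) listing. A minimal generic sketch of that operation (this is a reimplementation of the idea, not TACT's own code; the Dante line is just sample input):

      # Hedged sketch of a keyword-in-context (KWIC) concordance: locate each
      # occurrence of a lexical unit and show it with surrounding context.
      def kwic(text, keyword, width=3):
          """Return KWIC lines with `width` words on each side of the hit."""
          words = text.split()
          lines = []
          for i, w in enumerate(words):
              if w.lower().strip(".,;:") == keyword.lower():
                  left = " ".join(words[max(0, i - width):i])
                  right = " ".join(words[i + 1:i + 1 + width])
                  lines.append(f"{left:>30} [{w}] {right}")
          return lines

      sample = ("Nel mezzo del cammin di nostra vita mi ritrovai "
                "per una selva oscura che la diritta via era smarrita.")
      for line in kwic(sample, "selva"):
          print(line)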
  20. Chowdhury, G.G.: Natural language processing and information retrieval : pt.1: basic issues; pt.2: major applications (1991) 0.04
    0.04236569 = product of:
      0.11297517 = sum of:
        0.059039105 = weight(_text_:retrieval in 3313) [ClassicSimilarity], result of:
          0.059039105 = score(doc=3313,freq=4.0), product of:
            0.124912694 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.041294612 = queryNorm
            0.47264296 = fieldWeight in 3313, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.078125 = fieldNorm(doc=3313)
        0.04277933 = weight(_text_:use in 3313) [ClassicSimilarity], result of:
          0.04277933 = score(doc=3313,freq=2.0), product of:
            0.12644777 = queryWeight, product of:
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.041294612 = queryNorm
            0.3383162 = fieldWeight in 3313, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0620887 = idf(docFreq=5623, maxDocs=44218)
              0.078125 = fieldNorm(doc=3313)
        0.011156735 = weight(_text_:of in 3313) [ClassicSimilarity], result of:
          0.011156735 = score(doc=3313,freq=2.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.17277241 = fieldWeight in 3313, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=3313)
      0.375 = coord(3/8)
    
    Abstract
    Reviews the basic issues and procedures involved in natural language processing of textual material for final use in information retrieval. Covers: natural language processing; natural language understanding; syntactic and semantic analysis; parsing; knowledge bases and knowledge representation
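
    The relevance figure attached to each result above is a Lucene ClassicSimilarity explanation tree: each matching term contributes score = queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = sqrt(termFreq) x idf x fieldNorm, and the document score is the coordination factor coord(matched/total) times the sum of these contributions. A minimal sketch that reproduces the "of" branch of entry 15 (doc 2740) from the constants shown in its tree:

      # Hedged sketch reproducing one branch of the ClassicSimilarity
      # explanations above (entry 15, doc 2740, term "of"). The constants are
      # read off the tree itself; only the formula structure is asserted here.
      import math

      def term_score(freq, idf, query_norm, field_norm):
          query_weight = idf * query_norm                      # idf x queryNorm
          field_weight = math.sqrt(freq) * idf * field_norm    # tf x idf x fieldNorm
          return query_weight * field_weight

      idf, query_norm, field_norm = 1.5637573, 0.041294612, 0.046875
      print(term_score(12.0, idf, query_norm, field_norm))   # ~0.016396983

      # The document score is coord(matched/total) times the sum of the term
      # scores, e.g. the 0.5 = coord(4/8) factor closing each tree above.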

Languages

Types

  • a 418
  • el 44
  • m 36
  • s 18
  • p 7
  • x 6
  • b 1
  • n 1
  • r 1

Subjects

Classifications