Search (18418 results, page 1 of 921)

  • Filter: language_ss:"e"
  1. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.57
    0.5686467 = product of:
      0.7108084 = sum of:
        0.032562904 = product of:
          0.097688705 = sum of:
            0.097688705 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
              0.097688705 = score(doc=5820,freq=2.0), product of:
                0.2607266 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.030753274 = queryNorm
                0.3746787 = fieldWeight in 5820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.33333334 = coord(1/3)
        0.13815269 = weight(_text_:2f in 5820) [ClassicSimilarity], result of:
          0.13815269 = score(doc=5820,freq=4.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.5298757 = fieldWeight in 5820, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
        0.13815269 = weight(_text_:2f in 5820) [ClassicSimilarity], result of:
          0.13815269 = score(doc=5820,freq=4.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.5298757 = fieldWeight in 5820, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
        0.011846555 = weight(_text_:information in 5820) [ClassicSimilarity], result of:
          0.011846555 = score(doc=5820,freq=16.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.21943474 = fieldWeight in 5820, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
        0.024872115 = weight(_text_:retrieval in 5820) [ClassicSimilarity], result of:
          0.024872115 = score(doc=5820,freq=8.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.26736724 = fieldWeight in 5820, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
        0.13815269 = weight(_text_:2f in 5820) [ClassicSimilarity], result of:
          0.13815269 = score(doc=5820,freq=4.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.5298757 = fieldWeight in 5820, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
        0.088916 = weight(_text_:ranking in 5820) [ClassicSimilarity], result of:
          0.088916 = score(doc=5820,freq=10.0), product of:
            0.16634533 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.030753274 = queryNorm
            0.5345266 = fieldWeight in 5820, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
        0.13815269 = weight(_text_:2f in 5820) [ClassicSimilarity], result of:
          0.13815269 = score(doc=5820,freq=4.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.5298757 = fieldWeight in 5820, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
      0.8 = coord(8/10)
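    The indented blocks under each hit are Lucene "explain" output for the classic TF-IDF similarity (ClassicSimilarity): every leaf score is queryWeight × fieldWeight, where queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm with tf = sqrt(freq), and coord() rescales by the fraction of query clauses the document matches. A minimal Python sketch that reproduces the first leaf above (term "3a" in doc 5820) from the numbers shown:

```python
import math

# Values copied from the first explain leaf above (term "3a" in doc 5820).
freq = 2.0               # termFreq of "3a" in the field
idf = 8.478011           # idf(docFreq=24, maxDocs=44218)
query_norm = 0.030753274
field_norm = 0.03125     # per-field length normalization

# ClassicSimilarity's idf = 1 + ln(maxDocs / (docFreq + 1)) reproduces 8.478011:
print(1 + math.log(44218 / (24 + 1)))

tf = math.sqrt(freq)                        # 1.4142135 = tf(freq=2.0)
query_weight = idf * query_norm             # 0.2607266   = queryWeight
field_weight = tf * idf * field_norm        # 0.3746787   = fieldWeight
leaf_score = query_weight * field_weight    # 0.097688705 = weight(_text_:3a ...)
clause_score = leaf_score * (1.0 / 3.0)     # 0.032562904, after the coord(1/3) factor

print(leaf_score, clause_score)
```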
    
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words is only a shallow text understanding; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge based text representations, which embed the external and carefully curated information from knowledge bases, and provide richer and structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations while taking their uncertainties into account. Finally, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity graph representations for texts, and to rank documents using their structured representations. This dissertation overcomes the limitation of word-based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards the new generation of intelligent, semantic, and structured information retrieval.
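    The bag-of-entities idea in the abstract, where documents are represented by their automatic entity annotations and ranking happens in the entity space, can be illustrated with a toy overlap score. The entity identifiers and the scoring function below are illustrative assumptions, not the dissertation's actual model:

```python
from collections import Counter

def bag_of_entities_score(query_entities, doc_annotations):
    """Toy signal: how often a document's automatic entity annotations
    mention the entities associated with the query."""
    bag = Counter(doc_annotations)
    return sum(bag[e] for e in query_entities)

# Hypothetical entity annotations for two documents.
query = ["Information_retrieval", "Knowledge_base"]
docs = {
    "d1": ["Information_retrieval", "Bag-of-words_model", "Information_retrieval"],
    "d2": ["Knowledge_base", "Entity_linking", "Information_retrieval", "Knowledge_base"],
}

ranking = sorted(docs, key=lambda d: bag_of_entities_score(query, docs[d]), reverse=True)
print(ranking)  # ['d2', 'd1'] -- documents are compared in the entity space, not the word space
```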
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  2. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.53
    0.52914715 = product of:
      1.0582943 = sum of:
        0.08140726 = product of:
          0.24422176 = sum of:
            0.24422176 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.24422176 = score(doc=1826,freq=2.0), product of:
                0.2607266 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.030753274 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
        0.24422176 = weight(_text_:2f in 1826) [ClassicSimilarity], result of:
          0.24422176 = score(doc=1826,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
        0.24422176 = weight(_text_:2f in 1826) [ClassicSimilarity], result of:
          0.24422176 = score(doc=1826,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
        0.24422176 = weight(_text_:2f in 1826) [ClassicSimilarity], result of:
          0.24422176 = score(doc=1826,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
        0.24422176 = weight(_text_:2f in 1826) [ClassicSimilarity], result of:
          0.24422176 = score(doc=1826,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.5 = coord(5/10)
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  3. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.46
    0.45703867 = product of:
      0.7617311 = sum of:
        0.05698508 = product of:
          0.17095524 = sum of:
            0.17095524 = weight(_text_:3a in 306) [ClassicSimilarity], result of:
              0.17095524 = score(doc=306,freq=2.0), product of:
                0.2607266 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.030753274 = queryNorm
                0.65568775 = fieldWeight in 306, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=306)
          0.33333334 = coord(1/3)
        0.17095524 = weight(_text_:2f in 306) [ClassicSimilarity], result of:
          0.17095524 = score(doc=306,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.65568775 = fieldWeight in 306, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0546875 = fieldNorm(doc=306)
        0.17095524 = weight(_text_:2f in 306) [ClassicSimilarity], result of:
          0.17095524 = score(doc=306,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.65568775 = fieldWeight in 306, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0546875 = fieldNorm(doc=306)
        0.17095524 = weight(_text_:2f in 306) [ClassicSimilarity], result of:
          0.17095524 = score(doc=306,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.65568775 = fieldWeight in 306, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0546875 = fieldNorm(doc=306)
        0.17095524 = weight(_text_:2f in 306) [ClassicSimilarity], result of:
          0.17095524 = score(doc=306,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.65568775 = fieldWeight in 306, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0546875 = fieldNorm(doc=306)
        0.020925045 = product of:
          0.04185009 = sum of:
            0.04185009 = weight(_text_:evaluation in 306) [ClassicSimilarity], result of:
              0.04185009 = score(doc=306,freq=2.0), product of:
                0.12900078 = queryWeight, product of:
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.030753274 = queryNorm
                0.32441732 = fieldWeight in 306, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=306)
          0.5 = coord(1/2)
      0.6 = coord(6/10)
    
    Abstract
    Although service-oriented architectures go a long way toward providing interoperability in distributed, heterogeneous environments, managing semantic differences in such environments remains a challenge. We give an overview of the issue of semantic interoperability (integration), provide a semantic characterization of services, and discuss the role of ontologies. Then we analyze four basic models of semantic interoperability that differ with respect to their mapping between service descriptions and ontologies and with respect to where the evaluation of the integration logic is performed. We also provide some guidelines for selecting one of the possible interoperability models.
    Content
    Cf.: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5386707.
  4. Li, L.; Shang, Y.; Zhang, W.: Improvement of HITS-based algorithms on Web documents 0.44
    0.43888023 = product of:
      0.87776047 = sum of:
        0.048844352 = product of:
          0.14653306 = sum of:
            0.14653306 = weight(_text_:3a in 2514) [ClassicSimilarity], result of:
              0.14653306 = score(doc=2514,freq=2.0), product of:
                0.2607266 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.030753274 = queryNorm
                0.56201804 = fieldWeight in 2514, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2514)
          0.33333334 = coord(1/3)
        0.20722903 = weight(_text_:2f in 2514) [ClassicSimilarity], result of:
          0.20722903 = score(doc=2514,freq=4.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.7948135 = fieldWeight in 2514, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2514)
        0.20722903 = weight(_text_:2f in 2514) [ClassicSimilarity], result of:
          0.20722903 = score(doc=2514,freq=4.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.7948135 = fieldWeight in 2514, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2514)
        0.20722903 = weight(_text_:2f in 2514) [ClassicSimilarity], result of:
          0.20722903 = score(doc=2514,freq=4.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.7948135 = fieldWeight in 2514, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2514)
        0.20722903 = weight(_text_:2f in 2514) [ClassicSimilarity], result of:
          0.20722903 = score(doc=2514,freq=4.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.7948135 = fieldWeight in 2514, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2514)
      0.5 = coord(5/10)
    
    Content
    Cf.: http://delab.csd.auth.gr/~dimitris/courses/ir_spring06/page_rank_computing/p527-li.pdf. Cf. also: http://www2002.org/CDROM/refereed/643/.
  5. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.44
    0.43649817 = product of:
      0.62356883 = sum of:
        0.14653306 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.14653306 = score(doc=563,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.14653306 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.14653306 = score(doc=563,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.0062825847 = weight(_text_:information in 563) [ClassicSimilarity], result of:
          0.0062825847 = score(doc=563,freq=2.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.116372846 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.018654086 = weight(_text_:retrieval in 563) [ClassicSimilarity], result of:
          0.018654086 = score(doc=563,freq=2.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.20052543 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.14653306 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.14653306 = score(doc=563,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.14653306 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.14653306 = score(doc=563,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.012499932 = product of:
          0.024999864 = sum of:
            0.024999864 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
              0.024999864 = score(doc=563,freq=2.0), product of:
                0.107692726 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.030753274 = queryNorm
                0.23214069 = fieldWeight in 563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=563)
          0.5 = coord(1/2)
      0.7 = coord(7/10)
    
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language and domain independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human written summaries in a large collection of web-pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the aligning process from a training set and focuses on selecting high quality multi-word terms from human written summaries to generate suitable results for web-page summarization.
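    As a rough illustration of combining an association ("glue") measure with a LocalMaxs-style local-maximum test, the sketch below scores bigrams with a simplified symmetric-conditional-probability glue and keeps those whose glue is not exceeded by any containing trigram. The thesis's three association measures differ from this, and all names and the sample text are illustrative:

```python
from collections import Counter
from itertools import islice

def ngrams(tokens, n):
    return list(zip(*(islice(tokens, i, None) for i in range(n))))

def glue(gram, counts, total):
    """Simplified SCP-style glue: p(gram)^2 over the average product of the
    probabilities of its binary splits."""
    p = counts[len(gram)][gram] / total
    splits = [(gram[:i], gram[i:]) for i in range(1, len(gram))]
    denom = sum((counts[len(l)][l] / total) * (counts[len(r)][r] / total)
                for l, r in splits) / len(splits)
    return p * p / denom

def local_max_bigrams(tokens):
    """Keep bigrams whose glue is a local maximum w.r.t. the trigrams containing them."""
    total = len(tokens)
    counts = {n: Counter(ngrams(tokens, n)) for n in (1, 2, 3)}
    kept = []
    for bg in counts[2]:
        g2 = glue(bg, counts, total)
        supers = [tg for tg in counts[3] if tg[:2] == bg or tg[1:] == bg]
        if all(g2 > glue(tg, counts, total) for tg in supers):
            kept.append((" ".join(bg), round(g2, 3)))
    return sorted(kept, key=lambda x: -x[1])

tokens = "multi word term extraction finds multi word terms in documents".split()
print(local_max_bigrams(tokens))  # e.g. ('multi word', 1.0) survives as a candidate term
```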
    Content
    A thesis presented to the University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
  6. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.42
    0.42331773 = product of:
      0.84663546 = sum of:
        0.06512581 = product of:
          0.19537741 = sum of:
            0.19537741 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.19537741 = score(doc=230,freq=2.0), product of:
                0.2607266 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.030753274 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
        0.19537741 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.19537741 = score(doc=230,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
        0.19537741 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.19537741 = score(doc=230,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
        0.19537741 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.19537741 = score(doc=230,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
        0.19537741 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.19537741 = score(doc=230,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
      0.5 = coord(5/10)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  7. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.40
    0.39945948 = product of:
      0.49932435 = sum of:
        0.032562904 = product of:
          0.097688705 = sum of:
            0.097688705 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.097688705 = score(doc=701,freq=2.0), product of:
                0.2607266 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.030753274 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
        0.097688705 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.097688705 = score(doc=701,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
        0.097688705 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.097688705 = score(doc=701,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
        0.012565169 = weight(_text_:information in 701) [ClassicSimilarity], result of:
          0.012565169 = score(doc=701,freq=18.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.23274568 = fieldWeight in 701, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
        0.046531465 = weight(_text_:retrieval in 701) [ClassicSimilarity], result of:
          0.046531465 = score(doc=701,freq=28.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.5001983 = fieldWeight in 701, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
        0.097688705 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.097688705 = score(doc=701,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
        0.097688705 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.097688705 = score(doc=701,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
        0.01690999 = product of:
          0.03381998 = sum of:
            0.03381998 = weight(_text_:evaluation in 701) [ClassicSimilarity], result of:
              0.03381998 = score(doc=701,freq=4.0), product of:
                0.12900078 = queryWeight, product of:
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.030753274 = queryNorm
                0.2621688 = fieldWeight in 701, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.5 = coord(1/2)
      0.8 = coord(8/10)
    
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Because of their purely syntactical nature, traditional information retrieval approaches did not succeed in treating content itself (i.e. its meaning, and not its representation). This leads to a very low usefulness of the results of a retrieval process for a user's task at hand. In the last ten years ontologies have emerged from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the very ambiguous nature of the retrieval process, in which a user, due to unfamiliarity with the underlying repository and/or query syntax, only approximates his information need in a query, implies a necessity to include the user in the retrieval process more actively in order to close the gap between the meaning of the content and the meaning of a user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with a user in order to conceptually interpret the meaning of his query, whereas the underlying domain ontology drives the conceptualisation process. In that way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between a user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact to a multidimensional, context-dependent structure, strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics automatically emerges from the results. Our evaluation studies have shown that the ability to conceptualize a user's information need in the right manner and to interpret the retrieval results accordingly is a key issue for realizing much more meaningful information retrieval systems.
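    The ontology-driven refinement loop described in the abstract can be caricatured with a toy domain ontology: query terms are mapped to concepts, and narrower concepts are offered as refinement suggestions. The ontology, synonym table, and function below are illustrative assumptions, not the Librarian Agent's actual logic:

```python
# Toy domain ontology: concept -> narrower concepts (names are illustrative).
ontology = {
    "information retrieval": ["query answering", "ranking", "indexing"],
    "query answering": ["cooperative query answering"],
}
synonyms = {"ir": "information retrieval"}

def refine_query(terms, max_depth=2):
    """Map query terms to ontology concepts and collect narrower concepts
    as refinement suggestions (a toy stand-in for ontology-driven refinement)."""
    suggestions = []
    for term in terms:
        concept = synonyms.get(term, term)
        frontier, depth = [concept], 0
        while frontier and depth < max_depth:
            frontier = [n for c in frontier for n in ontology.get(c, [])]
            suggestions.extend(frontier)
            depth += 1
    return list(dict.fromkeys(suggestions))  # de-duplicate, keep order

print(refine_query(["ir"]))
# ['query answering', 'ranking', 'indexing', 'cooperative query answering']
```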
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  8. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.39
    0.38848594 = product of:
      0.64747655 = sum of:
        0.048844352 = product of:
          0.14653306 = sum of:
            0.14653306 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.14653306 = score(doc=562,freq=2.0), product of:
                0.2607266 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.030753274 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.14653306 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.14653306 = score(doc=562,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.14653306 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.14653306 = score(doc=562,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.14653306 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.14653306 = score(doc=562,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.14653306 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.14653306 = score(doc=562,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.012499932 = product of:
          0.024999864 = sum of:
            0.024999864 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.024999864 = score(doc=562,freq=2.0), product of:
                0.107692726 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.030753274 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.6 = coord(6/10)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  9. Mas, S.; Marleau, Y.: Proposition of a faceted classification model to support corporate information organization and digital records management (2009) 0.39
    0.38631693 = product of:
      0.64386153 = sum of:
        0.048844352 = product of:
          0.14653306 = sum of:
            0.14653306 = weight(_text_:3a in 2918) [ClassicSimilarity], result of:
              0.14653306 = score(doc=2918,freq=2.0), product of:
                0.2607266 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.030753274 = queryNorm
                0.56201804 = fieldWeight in 2918, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2918)
          0.33333334 = coord(1/3)
        0.14653306 = weight(_text_:2f in 2918) [ClassicSimilarity], result of:
          0.14653306 = score(doc=2918,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.56201804 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
        0.14653306 = weight(_text_:2f in 2918) [ClassicSimilarity], result of:
          0.14653306 = score(doc=2918,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.56201804 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
        0.008884916 = weight(_text_:information in 2918) [ClassicSimilarity], result of:
          0.008884916 = score(doc=2918,freq=4.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.16457605 = fieldWeight in 2918, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
        0.14653306 = weight(_text_:2f in 2918) [ClassicSimilarity], result of:
          0.14653306 = score(doc=2918,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.56201804 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
        0.14653306 = weight(_text_:2f in 2918) [ClassicSimilarity], result of:
          0.14653306 = score(doc=2918,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.56201804 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
      0.6 = coord(6/10)
    
    Abstract
    The employees of an organization often use a personal hierarchical classification scheme to organize digital documents that are stored on their own workstations. As this may make it hard for other employees to retrieve these documents, there is a risk that the organization will lose track of needed documentation. Furthermore, the inherent boundaries of such a hierarchical structure require making arbitrary decisions about which specific criteria the classification will be based on (for instance, the administrative activity or the document type, although a document can have several attributes and require classification in several classes). A faceted classification model to support corporate information organization is proposed. Partially based on Ranganathan's facets theory, this model aims not only to standardize the organization of digital documents, but also to simplify the management of a document throughout its life cycle for both individuals and organizations, while ensuring compliance with regulatory and policy requirements.
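    A minimal sketch of the contrast between a single hierarchical folder path and a faceted record: each document carries independent facet values and can be retrieved by any combination of them. The facet names and values are invented for illustration, not the authors' actual scheme:

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    title: str
    facets: dict = field(default_factory=dict)   # facet name -> value

records = [
    Record("Budget approval memo", {"activity": "budgeting", "doc_type": "memo"}),
    Record("Hiring policy",        {"activity": "human resources", "doc_type": "policy"}),
]

def find(records, **criteria):
    """Retrieve records matching any combination of facet values,
    instead of a single pre-chosen folder hierarchy."""
    return [r.title for r in records
            if all(r.facets.get(k) == v for k, v in criteria.items())]

print(find(records, doc_type="memo"))             # ['Budget approval memo']
print(find(records, activity="human resources"))  # ['Hiring policy']
```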
    Footnote
    Cf.: http://ieeexplore.ieee.org/iel5/4755313/4755314/04755480.pdf?arnumber=4755480.
  10. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.38
    0.38475552 = product of:
      0.6412592 = sum of:
        0.048844352 = product of:
          0.14653306 = sum of:
            0.14653306 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.14653306 = score(doc=400,freq=2.0), product of:
                0.2607266 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.030753274 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
        0.14653306 = weight(_text_:2f in 400) [ClassicSimilarity], result of:
          0.14653306 = score(doc=400,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.56201804 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
        0.14653306 = weight(_text_:2f in 400) [ClassicSimilarity], result of:
          0.14653306 = score(doc=400,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.56201804 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
        0.0062825847 = weight(_text_:information in 400) [ClassicSimilarity], result of:
          0.0062825847 = score(doc=400,freq=2.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.116372846 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
        0.14653306 = weight(_text_:2f in 400) [ClassicSimilarity], result of:
          0.14653306 = score(doc=400,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.56201804 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
        0.14653306 = weight(_text_:2f in 400) [ClassicSimilarity], result of:
          0.14653306 = score(doc=400,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.56201804 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
      0.6 = coord(6/10)
    
    Abstract
    On a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values forming a group of child concepts. We call these attributes facets: classification has a few facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, the faceted relations are parent-to-child links, whereas the hypernym relation is a multi-hop, i.e., ancestor-to-descendant, link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations from a data science corpus, and we propose a hierarchy growth algorithm to infer the parent-child links from the three types of relationships. It resolves conflicts by maintaining the acyclic structure of a hierarchy.
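    The acyclicity constraint mentioned at the end of the abstract can be shown with a small sketch: a candidate parent-to-child link is added only if the child cannot already reach the parent. This illustrates the constraint only, not the authors' full hierarchy growth algorithm, and the concept names are invented:

```python
def reachable(graph, start, target):
    """Depth-first check: can `target` be reached from `start` via parent->child edges?"""
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        if node == target:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, ()))
    return False

def grow_hierarchy(candidate_edges):
    """Add candidate parent->child links, skipping any edge that would close a cycle."""
    graph = {}
    for parent, child in candidate_edges:
        if reachable(graph, child, parent):   # adding this edge would create a cycle
            continue                          # conflict: keep the hierarchy acyclic
        graph.setdefault(parent, []).append(child)
    return graph

edges = [("classification", "svm"), ("classification", "knn"),
         ("machine learning", "classification"), ("svm", "machine learning")]
print(grow_hierarchy(edges))
# The last edge is skipped: svm -> machine learning would create a cycle.
```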
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
  11. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.32
    0.32193077 = product of:
      0.53655124 = sum of:
        0.04070363 = product of:
          0.12211088 = sum of:
            0.12211088 = weight(_text_:3a in 4997) [ClassicSimilarity], result of:
              0.12211088 = score(doc=4997,freq=2.0), product of:
                0.2607266 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.030753274 = queryNorm
                0.46834838 = fieldWeight in 4997, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4997)
          0.33333334 = coord(1/3)
        0.12211088 = weight(_text_:2f in 4997) [ClassicSimilarity], result of:
          0.12211088 = score(doc=4997,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.46834838 = fieldWeight in 4997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4997)
        0.12211088 = weight(_text_:2f in 4997) [ClassicSimilarity], result of:
          0.12211088 = score(doc=4997,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.46834838 = fieldWeight in 4997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4997)
        0.007404097 = weight(_text_:information in 4997) [ClassicSimilarity], result of:
          0.007404097 = score(doc=4997,freq=4.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.13714671 = fieldWeight in 4997, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4997)
        0.12211088 = weight(_text_:2f in 4997) [ClassicSimilarity], result of:
          0.12211088 = score(doc=4997,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.46834838 = fieldWeight in 4997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4997)
        0.12211088 = weight(_text_:2f in 4997) [ClassicSimilarity], result of:
          0.12211088 = score(doc=4997,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.46834838 = fieldWeight in 4997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4997)
      0.6 = coord(6/10)
    
    Content
    PhD dissertation at the International Doctorate School in Information and Communication Technology. Cf.: https://core.ac.uk/download/pdf/150083013.pdf.
    Imprint
    Trento : University / Department of information engineering and computer science
  12. Malsburg, C. von der: ¬The correlation theory of brain function (1981) 0.32
    0.3206296 = product of:
      0.53438264 = sum of:
        0.04070363 = product of:
          0.12211088 = sum of:
            0.12211088 = weight(_text_:3a in 76) [ClassicSimilarity], result of:
              0.12211088 = score(doc=76,freq=2.0), product of:
                0.2607266 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.030753274 = queryNorm
                0.46834838 = fieldWeight in 76, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=76)
          0.33333334 = coord(1/3)
        0.12211088 = weight(_text_:2f in 76) [ClassicSimilarity], result of:
          0.12211088 = score(doc=76,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.46834838 = fieldWeight in 76, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=76)
        0.12211088 = weight(_text_:2f in 76) [ClassicSimilarity], result of:
          0.12211088 = score(doc=76,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.46834838 = fieldWeight in 76, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=76)
        0.005235487 = weight(_text_:information in 76) [ClassicSimilarity], result of:
          0.005235487 = score(doc=76,freq=2.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.09697737 = fieldWeight in 76, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=76)
        0.12211088 = weight(_text_:2f in 76) [ClassicSimilarity], result of:
          0.12211088 = score(doc=76,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.46834838 = fieldWeight in 76, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=76)
        0.12211088 = weight(_text_:2f in 76) [ClassicSimilarity], result of:
          0.12211088 = score(doc=76,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.46834838 = fieldWeight in 76, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=76)
      0.6 = coord(6/10)
    
    Source
    http://cogprints.org/1380/1/vdM_correlation.pdf
    Theme
    Information
  13. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.32
    0.3174883 = product of:
      0.6349766 = sum of:
        0.048844352 = product of:
          0.14653306 = sum of:
            0.14653306 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.14653306 = score(doc=862,freq=2.0), product of:
                0.2607266 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.030753274 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
        0.14653306 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.14653306 = score(doc=862,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
        0.14653306 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.14653306 = score(doc=862,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
        0.14653306 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.14653306 = score(doc=862,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
        0.14653306 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.14653306 = score(doc=862,freq=2.0), product of:
            0.2607266 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030753274 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
      0.5 = coord(5/10)
    
    Source
    https://arxiv.org/abs/2212.06721
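
    The indented breakdowns attached to each result above are Lucene "explain" output for the classic TF-IDF similarity (the [ClassicSimilarity] label). Reading them bottom-up: each term clause contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm, and the summed clause scores are multiplied by the coordination factor coord(matched clauses / total clauses). A minimal Python sketch, using only the numbers printed in entry 13's breakdown and assuming exactly this formula, reproduces its 0.32 score:

      import math

      # Values copied from the score explanation of entry 13 (doc 862) above.
      query_norm = 0.030753274
      idf        = 8.478011          # idf(docFreq=24, maxDocs=44218)
      field_norm = 0.046875          # fieldNorm(doc=862)
      tf         = math.sqrt(2.0)    # tf(freq=2.0) = sqrt(termFreq) = 1.4142135

      query_weight = idf * query_norm             # 0.2607266
      field_weight = tf * idf * field_norm        # 0.56201804
      term_score   = query_weight * field_weight  # 0.14653306 per matching clause

      # The first clause is itself coord-scaled by 1/3; four further clauses
      # contribute the full term_score each. Five of the ten query clauses
      # matched, so the sum is finally multiplied by coord(5/10) = 0.5.
      total = (term_score / 3.0 + 4 * term_score) * 0.5
      print(round(total, 7))                      # 0.3174883, shown as 0.32 above

    In this scheme fieldNorm folds together field-length normalization and any index-time boost, which is why shorter records score higher for the same term frequencies.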
  14. Fan, W.; Gordon, M.D.; Pathak, P.: ¬A generic ranking function discovery framework by genetic programming for information retrieval (2004) 0.10
    0.099404015 = product of:
      0.24851003 = sum of:
        0.012695382 = weight(_text_:information in 2554) [ClassicSimilarity], result of:
          0.012695382 = score(doc=2554,freq=6.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.23515764 = fieldWeight in 2554, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2554)
        0.03077767 = weight(_text_:retrieval in 2554) [ClassicSimilarity], result of:
          0.03077767 = score(doc=2554,freq=4.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.33085006 = fieldWeight in 2554, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2554)
        0.18411194 = weight(_text_:ranking in 2554) [ClassicSimilarity], result of:
          0.18411194 = score(doc=2554,freq=14.0), product of:
            0.16634533 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.030753274 = queryNorm
            1.1068056 = fieldWeight in 2554, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2554)
        0.020925045 = product of:
          0.04185009 = sum of:
            0.04185009 = weight(_text_:evaluation in 2554) [ClassicSimilarity], result of:
              0.04185009 = score(doc=2554,freq=2.0), product of:
                0.12900078 = queryWeight, product of:
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.030753274 = queryNorm
                0.32441732 = fieldWeight in 2554, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2554)
          0.5 = coord(1/2)
      0.4 = coord(4/10)
    
    Abstract
    Ranking functions play a substantial role in the performance of information retrieval (IR) systems and search engines. Although there are many ranking functions available in the IR literature, various empirical evaluation studies show that ranking functions do not perform consistently well across different contexts (queries, collections, users). Moreover, it is often difficult and very expensive for human beings to design optimal ranking functions that work well in all these contexts. In this paper, we propose a novel ranking function discovery framework based on Genetic Programming and show through various experiments how this new framework helps automate the ranking function design/discovery process.
    Source
    Information processing and management. 40(2004) no.4, S.587-602
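
    The abstract above describes evolving ranking functions with genetic programming rather than designing them by hand. A toy sketch of that kind of setup, with hypothetical features (tf, idf, dl), random expression trees, a precision-at-3 fitness and crossover-only reproduction, none of which are the paper's actual operators or features:

      import random

      FEATURES = ["tf", "idf", "dl"]                 # hypothetical per-document features
      OPS = {"+": lambda a, b: a + b,
             "*": lambda a, b: a * b,
             "/": lambda a, b: a / b if b else 0.0}  # protected division

      def random_tree(depth=3):
          # A candidate ranking function is an expression tree over the features.
          if depth == 0 or random.random() < 0.3:
              return random.choice(FEATURES)
          return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

      def evaluate(tree, feats):
          if isinstance(tree, str):
              return feats[tree]
          op, left, right = tree
          return OPS[op](evaluate(left, feats), evaluate(right, feats))

      def fitness(tree, judged):
          # judged: (features, relevance) pairs for one query; fitness here is
          # precision at 3 after sorting the documents by the candidate function.
          ranked = sorted(judged, key=lambda d: evaluate(tree, d[0]), reverse=True)
          return sum(rel for _, rel in ranked[:3]) / 3.0

      def crossover(a, b):
          # Keep a's operator and left child, take b's right child.
          if isinstance(a, tuple) and isinstance(b, tuple):
              return (a[0], a[1], b[2])
          return a

      random.seed(0)
      docs = [({"tf": random.random() * 3, "idf": random.random() * 8,
                "dl": random.random() * 100 + 1}, random.choice([0, 1]))
              for _ in range(6)]                     # synthetic judged documents

      population = [random_tree() for _ in range(20)]
      for _ in range(10):
          population.sort(key=lambda t: fitness(t, docs), reverse=True)
          parents = population[:10]
          children = [crossover(random.choice(parents), random.choice(parents))
                      for _ in range(10)]
          population = parents + children

      best = max(population, key=lambda t: fitness(t, docs))
      print("best precision@3:", fitness(best, docs))

    In the actual framework the fitness would be a full retrieval measure over training queries, and the tree grammar, selection and mutation operators are considerably richer.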
  15. Back, J.: ¬An evaluation of relevancy ranking techniques used by Internet search engines (2000) 0.09
    0.088760436 = product of:
      0.2958681 = sum of:
        0.014659365 = weight(_text_:information in 3445) [ClassicSimilarity], result of:
          0.014659365 = score(doc=3445,freq=2.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.27153665 = fieldWeight in 3445, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=3445)
        0.13917555 = weight(_text_:ranking in 3445) [ClassicSimilarity], result of:
          0.13917555 = score(doc=3445,freq=2.0), product of:
            0.16634533 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.030753274 = queryNorm
            0.8366664 = fieldWeight in 3445, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.109375 = fieldNorm(doc=3445)
        0.14203319 = sum of:
          0.08370018 = weight(_text_:evaluation in 3445) [ClassicSimilarity], result of:
            0.08370018 = score(doc=3445,freq=2.0), product of:
              0.12900078 = queryWeight, product of:
                4.1947007 = idf(docFreq=1811, maxDocs=44218)
                0.030753274 = queryNorm
              0.64883465 = fieldWeight in 3445, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.1947007 = idf(docFreq=1811, maxDocs=44218)
                0.109375 = fieldNorm(doc=3445)
          0.058333017 = weight(_text_:22 in 3445) [ClassicSimilarity], result of:
            0.058333017 = score(doc=3445,freq=2.0), product of:
              0.107692726 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.030753274 = queryNorm
              0.5416616 = fieldWeight in 3445, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=3445)
      0.3 = coord(3/10)
    
    Date
    25. 8.2005 17:42:22
    Source
    Library and information research news. 24(2000) no.77, S.30-34
  16. Ro, J.S.: ¬An evaluation of the applicability of ranking algorithms to improve the effectiveness of full-text retrieval : 1. On the effectiveness of full-text retrieval (1988) 0.09
    0.08819669 = product of:
      0.22049172 = sum of:
        0.0125651695 = weight(_text_:information in 4030) [ClassicSimilarity], result of:
          0.0125651695 = score(doc=4030,freq=2.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.23274569 = fieldWeight in 4030, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=4030)
        0.052761722 = weight(_text_:retrieval in 4030) [ClassicSimilarity], result of:
          0.052761722 = score(doc=4030,freq=4.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.5671716 = fieldWeight in 4030, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=4030)
        0.119293325 = weight(_text_:ranking in 4030) [ClassicSimilarity], result of:
          0.119293325 = score(doc=4030,freq=2.0), product of:
            0.16634533 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.030753274 = queryNorm
            0.71714264 = fieldWeight in 4030, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.09375 = fieldNorm(doc=4030)
        0.035871506 = product of:
          0.07174301 = sum of:
            0.07174301 = weight(_text_:evaluation in 4030) [ClassicSimilarity], result of:
              0.07174301 = score(doc=4030,freq=2.0), product of:
                0.12900078 = queryWeight, product of:
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.030753274 = queryNorm
                0.556144 = fieldWeight in 4030, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4030)
          0.5 = coord(1/2)
      0.4 = coord(4/10)
    
    Source
    Journal of the American Society for Information Science. 39(1988), S.73-78
  17. Efthimiadis, E.N.: User choices : a new yardstick for the evaluation of ranking algorithms for interactive query expansion (1995) 0.09
    0.08784055 = product of:
      0.21960138 = sum of:
        0.005235487 = weight(_text_:information in 5697) [ClassicSimilarity], result of:
          0.005235487 = score(doc=5697,freq=2.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.09697737 = fieldWeight in 5697, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5697)
        0.015545071 = weight(_text_:retrieval in 5697) [ClassicSimilarity], result of:
          0.015545071 = score(doc=5697,freq=2.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.16710453 = fieldWeight in 5697, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5697)
        0.111145 = weight(_text_:ranking in 5697) [ClassicSimilarity], result of:
          0.111145 = score(doc=5697,freq=10.0), product of:
            0.16634533 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.030753274 = queryNorm
            0.66815823 = fieldWeight in 5697, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5697)
        0.087675825 = sum of:
          0.06684261 = weight(_text_:evaluation in 5697) [ClassicSimilarity], result of:
            0.06684261 = score(doc=5697,freq=10.0), product of:
              0.12900078 = queryWeight, product of:
                4.1947007 = idf(docFreq=1811, maxDocs=44218)
                0.030753274 = queryNorm
              0.5181566 = fieldWeight in 5697, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                4.1947007 = idf(docFreq=1811, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5697)
          0.02083322 = weight(_text_:22 in 5697) [ClassicSimilarity], result of:
            0.02083322 = score(doc=5697,freq=2.0), product of:
              0.107692726 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.030753274 = queryNorm
              0.19345059 = fieldWeight in 5697, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5697)
      0.4 = coord(4/10)
    
    Abstract
    The performance of 8 ranking algorithms was evaluated with respect to their effectiveness in ranking terms for query expansion. The evaluation was conducted within an investigation of interactive query expansion and relevance feedback in a real operational environment. The study focuses on the identification of algorithms that most effectively take cognizance of user preferences. User choices (i.e. the terms selected by the searchers for the query expansion search) provided the yardstick for the evaluation of the 8 ranking algorithms. This methodology introduces a user-oriented approach to evaluating ranking algorithms for query expansion, in contrast to the standard, system-oriented approaches. Similarities in the performance of the 8 algorithms and the ways these algorithms rank terms were the main focus of this evaluation. The findings demonstrate that the r-lohi, wpq, enim, and porter algorithms have similar performance in bringing good terms to the top of a ranked list of terms for query expansion. However, further evaluation of the algorithms in different (e.g. full text) environments is needed before these results can be generalized beyond the context of the present study.
    Date
    22. 2.1996 13:14:10
    Source
    Information processing and management. 31(1995) no.4, S.605-620
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
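
    The abstract above evaluates term-ranking algorithms against the expansion terms the searchers actually chose. A small illustrative sketch of that yardstick idea, with made-up term lists and algorithm names (the paper's algorithms, such as wpq or porter, and its measures are not reproduced here):

      # How well does each algorithm bring the user-chosen terms to the top?
      user_choices = {"indexing", "thesaurus", "retrieval"}

      algorithm_rankings = {
          "algo_a": ["retrieval", "catalog", "indexing", "thesaurus", "metadata"],
          "algo_b": ["metadata", "catalog", "retrieval", "indexing", "thesaurus"],
      }

      def recall_at_k(ranked_terms, chosen, k=3):
          # Fraction of the user's chosen terms that appear in the top k of the list.
          return len(set(ranked_terms[:k]) & chosen) / len(chosen)

      for name, ranking in algorithm_rankings.items():
          print(name, round(recall_at_k(ranking, user_choices), 2))
      # algo_a places two of the three chosen terms in its top 3, algo_b only one.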
  18. Lewandowski, D.; Spree, U.: Ranking of Wikipedia articles in search engines revisited : fair ranking for reasonable quality? (2011) 0.08
    0.07868701 = product of:
      0.19671753 = sum of:
        0.007404097 = weight(_text_:information in 444) [ClassicSimilarity], result of:
          0.007404097 = score(doc=444,freq=4.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.13714671 = fieldWeight in 444, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=444)
        0.015545071 = weight(_text_:retrieval in 444) [ClassicSimilarity], result of:
          0.015545071 = score(doc=444,freq=2.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.16710453 = fieldWeight in 444, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=444)
        0.08609255 = weight(_text_:ranking in 444) [ClassicSimilarity], result of:
          0.08609255 = score(doc=444,freq=6.0), product of:
            0.16634533 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.030753274 = queryNorm
            0.51755315 = fieldWeight in 444, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=444)
        0.087675825 = sum of:
          0.06684261 = weight(_text_:evaluation in 444) [ClassicSimilarity], result of:
            0.06684261 = score(doc=444,freq=10.0), product of:
              0.12900078 = queryWeight, product of:
                4.1947007 = idf(docFreq=1811, maxDocs=44218)
                0.030753274 = queryNorm
              0.5181566 = fieldWeight in 444, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                4.1947007 = idf(docFreq=1811, maxDocs=44218)
                0.0390625 = fieldNorm(doc=444)
          0.02083322 = weight(_text_:22 in 444) [ClassicSimilarity], result of:
            0.02083322 = score(doc=444,freq=2.0), product of:
              0.107692726 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.030753274 = queryNorm
              0.19345059 = fieldWeight in 444, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=444)
      0.4 = coord(4/10)
    
    Abstract
    This paper aims to review the fiercely discussed question of whether the ranking of Wikipedia articles in search engines is justified by the quality of the articles. After an overview of current research on information quality in Wikipedia, a summary of the extended discussion on the quality of encyclopedic entries in general is given. On this basis, a heuristic method for evaluating Wikipedia entries is developed and applied to Wikipedia articles that scored highly in a search engine retrieval effectiveness test, and the results are compared with the relevance judgments of jurors. In all search engines tested, Wikipedia results are unanimously judged better by the jurors than other results on the corresponding results position. Relevance judgments often roughly correspond with the results from the heuristic evaluation. Cases in which high relevance judgments are not in accordance with the comparatively low score from the heuristic evaluation are interpreted as an indicator of a high degree of trust in Wikipedia. One of the systemic shortcomings of Wikipedia lies in its necessarily incoherent user model. A further tuning of the suggested criteria catalog, for instance a different weighting of the supplied criteria, could serve as a starting point for a user-model-differentiated evaluation of Wikipedia articles. Approved methods of quality evaluation of reference works are applied to Wikipedia articles and integrated with the question of search engine evaluation.
    Date
    30. 9.2012 19:27:22
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.1, S.117-132
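
    The abstract above sketches a heuristic evaluation of Wikipedia entries against a criteria catalog and suggests that different weightings of the criteria could yield a more differentiated evaluation. Purely as an illustration of that kind of weighted scoring, with invented criteria, weights and scores (the authors' actual catalog is not reproduced here):

      # Weighted mean over per-criterion scores, each on a 0-1 scale.
      criteria_weights = {"accuracy": 0.4, "references": 0.3,
                          "currency": 0.2, "readability": 0.1}

      def heuristic_score(criterion_scores, weights):
          return sum(weights[c] * criterion_scores[c] for c in weights)

      article = {"accuracy": 0.9, "references": 0.6, "currency": 0.8, "readability": 0.7}
      print(round(heuristic_score(article, criteria_weights), 2))   # 0.77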
  19. Qin, T.; Zhang, X.-D.; Tsai, M.-F.; Wang, D.-S.; Liu, T.-Y.; Li, H.: Query-level loss functions for information retrieval (2008) 0.08
    0.07711129 = product of:
      0.19277821 = sum of:
        0.0090681305 = weight(_text_:information in 2066) [ClassicSimilarity], result of:
          0.0090681305 = score(doc=2066,freq=6.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.16796975 = fieldWeight in 2066, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2066)
        0.02198405 = weight(_text_:retrieval in 2066) [ClassicSimilarity], result of:
          0.02198405 = score(doc=2066,freq=4.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.23632148 = fieldWeight in 2066, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2066)
        0.14058854 = weight(_text_:ranking in 2066) [ClassicSimilarity], result of:
          0.14058854 = score(doc=2066,freq=16.0), product of:
            0.16634533 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.030753274 = queryNorm
            0.8451607 = fieldWeight in 2066, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2066)
        0.021137487 = product of:
          0.042274974 = sum of:
            0.042274974 = weight(_text_:evaluation in 2066) [ClassicSimilarity], result of:
              0.042274974 = score(doc=2066,freq=4.0), product of:
                0.12900078 = queryWeight, product of:
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.030753274 = queryNorm
                0.327711 = fieldWeight in 2066, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2066)
          0.5 = coord(1/2)
      0.4 = coord(4/10)
    
    Abstract
    Many machine learning technologies such as support vector machines, boosting, and neural networks have been applied to the ranking problem in information retrieval. However, since the methods were not originally developed for this task, their loss functions do not directly link to the criteria used in the evaluation of ranking. Specifically, the loss functions are defined on the level of documents or document pairs, whereas the evaluation criteria are defined on the level of queries. Therefore, minimizing the loss functions does not necessarily imply enhancing ranking performance. To solve this problem, we propose using query-level loss functions in learning ranking functions. We discuss the basic properties that a query-level loss function should have and propose a query-level loss function based on the cosine similarity between a ranking list and the corresponding ground truth. We further design a coordinate descent algorithm, referred to as RankCosine, which utilizes the proposed loss function to create a generalized additive ranking model. We also discuss whether the loss functions of existing ranking algorithms can be extended to query level. Experimental results on the datasets of the TREC web track, OHSUMED, and a commercial web search engine show that with the proposed query-level loss function we can significantly improve ranking accuracy. Furthermore, we found that it is difficult to extend document-level loss functions to query-level loss functions.
    Source
    Information processing and management. 44(2008) no.2, S.838-855
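
    The abstract above argues for loss functions defined per query rather than per document or document pair, and proposes one based on the cosine between a ranking list and its ground truth. A minimal sketch of that kind of query-level cosine loss (the exact RankCosine definition and its coordinate-descent training are in the paper; this only shows the shape of the objective):

      import math

      def cosine(u, v):
          dot = sum(a * b for a, b in zip(u, v))
          norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
          return dot / norms if norms else 0.0

      def query_level_loss(scores, labels):
          # One loss value per query: 0 when the model's score vector points in the
          # same direction as the relevance labels, approaching 1 when it does not.
          return 1.0 - cosine(scores, labels)

      # Two queries with different numbers of documents; because the loss is
      # aggregated per query, a query with many documents does not dominate.
      queries = [
          ([2.0, 1.0, 0.1], [1.0, 1.0, 0.0]),
          ([0.3, 1.5],      [0.0, 1.0]),
      ]
      print(sum(query_level_loss(s, y) for s, y in queries) / len(queries))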
  20. Ravana, S.D.; Rajagopal, P.; Balakrishnan, V.: Ranking retrieval systems using pseudo relevance judgments (2015) 0.07
    0.07191082 = product of:
      0.17977704 = sum of:
        0.007404097 = weight(_text_:information in 2591) [ClassicSimilarity], result of:
          0.007404097 = score(doc=2591,freq=4.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.13714671 = fieldWeight in 2591, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2591)
        0.026924854 = weight(_text_:retrieval in 2591) [ClassicSimilarity], result of:
          0.026924854 = score(doc=2591,freq=6.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.28943354 = fieldWeight in 2591, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2591)
        0.08609255 = weight(_text_:ranking in 2591) [ClassicSimilarity], result of:
          0.08609255 = score(doc=2591,freq=6.0), product of:
            0.16634533 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.030753274 = queryNorm
            0.51755315 = fieldWeight in 2591, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2591)
        0.059355542 = sum of:
          0.02989292 = weight(_text_:evaluation in 2591) [ClassicSimilarity], result of:
            0.02989292 = score(doc=2591,freq=2.0), product of:
              0.12900078 = queryWeight, product of:
                4.1947007 = idf(docFreq=1811, maxDocs=44218)
                0.030753274 = queryNorm
              0.23172665 = fieldWeight in 2591, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.1947007 = idf(docFreq=1811, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2591)
          0.029462622 = weight(_text_:22 in 2591) [ClassicSimilarity], result of:
            0.029462622 = score(doc=2591,freq=4.0), product of:
              0.107692726 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.030753274 = queryNorm
              0.27358043 = fieldWeight in 2591, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2591)
      0.4 = coord(4/10)
    
    Abstract
    Purpose: In a system-based approach, replicating the web would require large test collections, and judging the relevance of every document per topic through human assessors is infeasible. Because of the large number of documents that require judgment, human assessors may also introduce errors through disagreements. The paper aims to discuss these issues.
    Design/methodology/approach: This study explores exponential variation and document ranking methods that generate a reliable set of relevance judgments (pseudo relevance judgments) to reduce human effort. These methods overcome the problem of judging large numbers of documents while avoiding human disagreement errors during the judgment process. The study utilizes two key factors to generate the alternate methods: the number of occurrences of each document per topic across all system runs, and document rankings.
    Findings: The effectiveness of the proposed method is evaluated using the correlation coefficient between system rankings by mean average precision under the original Text REtrieval Conference (TREC) relevance judgments and under the pseudo relevance judgments. The results suggest that the proposed document ranking method with a pool depth of 100 could be a reliable alternative that reduces the human effort and disagreement errors involved in generating TREC-like relevance judgments.
    Originality/value: The simple methods proposed in this study improve the correlation coefficient when generating alternate relevance judgments without human assessors, contributing to information retrieval evaluation.
    Date
    20. 1.2015 18:30:22
    18. 9.2018 18:22:56
    Source
    Aslib journal of information management. 67(2015) no.6, S.700-714
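
    The abstract above builds relevance judgments without assessors from two signals: how many system runs retrieve a document for a topic and where they rank it. A toy sketch of the occurrence-based variant, with invented run data and thresholds (the paper's pool depth of 100 and its ranking-based method are not reproduced here):

      from collections import Counter

      POOL_DEPTH = 3       # only the top-k documents of each run enter the pool
      VOTE_THRESHOLD = 2   # seen in at least this many runs => pseudo-relevant

      runs = {             # ranked results of three systems for one topic
          "run_a": ["d1", "d2", "d3", "d7"],
          "run_b": ["d2", "d1", "d4", "d9"],
          "run_c": ["d5", "d2", "d1", "d3"],
      }

      votes = Counter()
      for ranked in runs.values():
          votes.update(ranked[:POOL_DEPTH])

      pseudo_qrels = {doc for doc, n in votes.items() if n >= VOTE_THRESHOLD}
      print(sorted(pseudo_qrels))    # ['d1', 'd2'] for these toy runs

    System rankings by mean average precision under such pseudo judgments can then be correlated with rankings under the official TREC judgments, which is how the paper assesses reliability.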

Types

  • a 16177
  • m 1265
  • s 699
  • el 606
  • r 138
  • b 71
  • i 53
  • x 38
  • n 36
  • p 16
  • d 15
  • ? 12
  • h 2
  • A 1
  • EL 1
  • l 1
  • pat 1
