Search (4741 results, page 1 of 238)

  • language_ss:"e"
  1. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.46
    0.4615261 = coord(4/6) × ("3a" 0.0809 + "2f" 0.2427 + "2f" 0.2427 + "propose" 0.1259)
    
    Abstract
    In this lecture I intend to challenge those who uphold a monist or even a dualist view of the universe; and I will propose, instead, a pluralist view. I will propose a view of the universe that recognizes at least three different but interacting sub-universes.
    Source
     https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
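     The relevance figure on each result line is a Lucene ClassicSimilarity (TF-IDF) score: a coordination factor times a sum of per-term weights. As a minimal Python sketch of how one such weight arises, here is the computation behind the 0.2427 weight of term "2f" in doc 230 (result 1); the queryNorm value is taken from the engine's output rather than recomputed.

       import math

       # Lucene ClassicSimilarity per-term weight; the values reported by the
       # engine for term "2f" in doc 230 are shown in the comments.
       def term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
           idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # 8.478011
           tf = math.sqrt(freq)                               # 1.4142135
           query_weight = idf * query_norm                    # 0.32392493
           field_weight = tf * idf * field_norm               # 0.7493574
           return query_weight * field_weight

       w = term_weight(freq=2.0, doc_freq=24, max_docs=44218,
                       query_norm=0.038207654, field_norm=0.0625)
       print(w)  # ~0.24273555; the result score is coord(4/6) times the summed weights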
  2. Li, L.; Shang, Y.; Zhang, W.: Improvement of HITS-based algorithms on Web documents 0.43
    0.4282504 = coord(4/6) × ("3a" 0.0607 + "2f" 0.2575 + "2f" 0.2575 + "propose" 0.0668)
    
    Abstract
     In this paper, we present two ways to improve the precision of HITS-based algorithms on Web documents. First, by analyzing the limitations of current HITS-based algorithms, we propose a new weighted HITS-based method that assigns appropriate weights to in-links of root documents. Then, we combine content analysis with HITS-based algorithms and study the effects of four representative relevance scoring methods, VSM, Okapi, TLS, and CDR, using a set of broad topic queries. Our experimental results show that our weighted HITS-based method performs significantly better than Bharat's improved HITS algorithm. When we combine our weighted HITS-based method or Bharat's HITS algorithm with any of the four relevance scoring methods, the combined methods are only marginally better than our weighted HITS-based method. Among the four relevance scoring methods, there is no significant quality difference when they are combined with a HITS-based algorithm.
    Content
     Cf.: http://delab.csd.auth.gr/~dimitris/courses/ir_spring06/page_rank_computing/p527-li.pdf. Cf. also: http://www2002.org/CDROM/refereed/643/.
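     For context, the base algorithm the paper modifies is standard HITS. A minimal, unweighted sketch follows; the authors' actual contribution, weighting the in-links of root documents, is not reproduced here.

       def hits(links, n_iter=50):
           """links: dict mapping a page to the pages it links to."""
           pages = set(links) | {q for ts in links.values() for q in ts}
           hub = {p: 1.0 for p in pages}
           auth = {p: 1.0 for p in pages}
           for _ in range(n_iter):
               # authority: sum of hub scores of the pages linking in
               auth = {p: sum(hub[q] for q in pages if p in links.get(q, ()))
                       for p in pages}
               norm = sum(a * a for a in auth.values()) ** 0.5 or 1.0
               auth = {p: a / norm for p, a in auth.items()}
               # hub: sum of authority scores of the pages linked out to
               hub = {p: sum(auth[q] for q in links.get(p, ())) for p in pages}
               norm = sum(h * h for h in hub.values()) ** 0.5 or 1.0
               hub = {p: h / norm for p, h in hub.items()}
           return hub, auth

       hub, auth = hits({"a": ["b", "c"], "b": ["c"], "c": ["a"]})
       print(max(auth, key=auth.get))  # "c": linked to by both "a" and "b"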
  3. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.42
    0.41826025 = coord(5/6) × ("3a" 0.0607 + "2f" 0.1821 + "2f" 0.1821 + "propose" 0.0668 + "22" 0.0104)
    
    Abstract
    Document representations for text classification are typically based on the classical Bag-Of-Words paradigm. This approach comes with deficiencies that motivate the integration of features on a higher semantic level than single words. In this paper we propose an enhancement of the classical document representation through concepts extracted from background knowledge. Boosting is used for actual classification. Experimental evaluations on two well known text corpora support our approach through consistent improvement of the results.
    Content
     Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
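     A hedged sketch of the document representation the abstract describes: bag-of-words features extended with concept features drawn from background knowledge, classified by a boosted ensemble of weak learners. The toy concept map and the scikit-learn calls are illustrative assumptions, not the authors' setup.

       from sklearn.ensemble import AdaBoostClassifier
       from sklearn.feature_extraction.text import CountVectorizer

       # Toy background knowledge mapping words to broader concepts
       # (illustrative only; the paper uses a real ontology).
       CONCEPTS = {"dog": "animal", "cat": "animal", "car": "vehicle", "bus": "vehicle"}

       def add_concepts(text):
           # Append one pseudo-token per matched concept, so concepts become
           # extra features alongside the plain bag-of-words terms.
           extra = ["concept_" + CONCEPTS[w] for w in text.split() if w in CONCEPTS]
           return text + " " + " ".join(extra)

       docs = ["the dog chased the cat", "the bus passed the car"]
       labels = [0, 1]

       vec = CountVectorizer()
       X = vec.fit_transform([add_concepts(d) for d in docs])
       clf = AdaBoostClassifier(n_estimators=50).fit(X, labels)  # boosted stumps
       print(clf.predict(vec.transform([add_concepts("a cat and a dog")])))  # [0]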
  4. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.35
    0.35398936 = coord(3/6) × ("3a" 0.1011 + "2f" 0.3034 + "2f" 0.3034)
    
    Source
     http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  5. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.33
    0.32770604 = coord(4/6) × ("3a" 0.0607 + "2f" 0.1821 + "2f" 0.1821 + "propose" 0.0668)
    
    Abstract
     On a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values forming a group of child concepts. We call these attributes facets: classification has a few facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, the faceted relations are parent-to-child links, whereas the hypernym relation is a multi-hop, i.e., ancestor-to-descendant, link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations from a data science corpus. We then propose a hierarchy growth algorithm to infer the parent-child links from the three types of relationships. It resolves conflicts by maintaining the acyclic structure of a hierarchy.
    Content
     Cf.: https://aclanthology.org/D19-5317.pdf.
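     The acyclicity guard mentioned at the end of the abstract can be sketched as follows: a parent-child link is accepted only if the child cannot already reach the proposed parent. Names and data structures here are illustrative, not the authors' algorithm.

       from collections import defaultdict

       children = defaultdict(set)

       def reaches(a, b):
           # Depth-first search: is b reachable from a via parent-child links?
           stack, seen = [a], set()
           while stack:
               node = stack.pop()
               if node == b:
                   return True
               if node not in seen:
                   seen.add(node)
                   stack.extend(children[node])
           return False

       def add_link(parent, child):
           # Reject the edge if it would close a cycle.
           if reaches(child, parent):
               return False
           children[parent].add(child)
           return True

       add_link("classification", "svm")
       add_link("svm", "kernel")
       print(add_link("kernel", "classification"))  # False: cycle rejected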
  6. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.29
    0.29415226 = coord(4/6) × ("2f" 0.1821 + "2f" 0.1821 + "propose" 0.0668 + "22" 0.0104)
    
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language and domain independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human written summaries in a large collection of web-pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the aligning process from a training set and focuses on selecting high quality multi-word terms from human written summaries to generate suitable results for web-page summarization.
    Content
     A thesis presented to the University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
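     A minimal sketch of the kind of word-association scoring such extraction builds on, using plain pointwise mutual information (PMI) over adjacent word pairs; the thesis's own three measures and the LocalMaxs algorithm are not reproduced.

       import math
       from collections import Counter

       def pmi_bigrams(tokens):
           # Pointwise mutual information of adjacent word pairs; high PMI
           # suggests the pair behaves like a single multi-word term.
           unigrams = Counter(tokens)
           bigrams = Counter(zip(tokens, tokens[1:]))
           n = len(tokens)
           return {pair: math.log((f / (n - 1)) /
                                  ((unigrams[pair[0]] / n) * (unigrams[pair[1]] / n)))
                   for pair, f in bigrams.items()}

       text = "information retrieval systems rank documents information retrieval works"
       scores = pmi_bigrams(text.split())
       for pair, s in sorted(scores.items(), key=lambda kv: -kv[1]):
           print(pair, round(s, 2))  # ("information", "retrieval") scores highest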
  7. Mas, S.; Marleau, Y.: Proposition of a faceted classification model to support corporate information organization and digital records management (2009) 0.29
    0.2901563 = coord(4/6) × ("3a" 0.0607 + "2f" 0.1821 + "2f" 0.1821 + "29" 0.0104)
    
    Date
    29. 8.2009 21:15:48
    Footnote
     Cf.: http://ieeexplore.ieee.org/iel5/4755313/4755314/04755480.pdf?arnumber=4755480.
  8. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.29
    0.2855003 = coord(4/6) × ("3a" 0.0405 + "2f" 0.1716 + "2f" 0.1716 + "propose" 0.0445)
    
    Abstract
     This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word-based and entity-based representations together with their uncertainties considered. Finally, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity graph representations for texts, and to rank documents using their structure representations. This dissertation overcomes the limitation of word-based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards the new generation of intelligent, semantic, and structured information retrieval.
    Content
     Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  9. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.27
    0.2730884 = coord(4/6) × ("3a" 0.0506 + "2f" 0.1517 + "2f" 0.1517 + "propose" 0.0556)
    
    Abstract
     While classifications are heavily used to categorize web content, the evolution of the web foresees a more formal structure - ontology - which can serve this purpose. Ontologies are core artifacts of the Semantic Web which enable machines to use inference rules to conduct automated reasoning on data. Lightweight ontologies bridge the gap between classifications and ontologies. A lightweight ontology (LO) is an ontology representing a backbone taxonomy where the concept of the child node is more specific than the concept of the parent node. Formal lightweight ontologies can be generated from their informal ones. The key applications of formal lightweight ontologies are document classification, semantic search, and data integration. However, these applications suffer from the following problems: the limited disambiguation accuracy of the state-of-the-art NLP tools used in generating formal lightweight ontologies from their informal ones; the lack of background knowledge needed for the formal lightweight ontologies; and the limitation of ontology reuse. In this dissertation, we propose a novel solution to these problems in formal lightweight ontologies; namely, the faceted lightweight ontology (FLO). FLO is a lightweight ontology in which terms, present in each node label, and their concepts, are available in the background knowledge (BK), which is organized as a set of facets. A facet can be defined as a distinctive property of the groups of concepts that can help in differentiating one group from another. Background knowledge can be defined as a subset of a knowledge base, such as WordNet, and often represents a specific domain.
    Content
     PhD dissertation at the International Doctorate School in Information and Communication Technology. Cf.: https://core.ac.uk/download/pdf/150083013.pdf.
  10. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.25
    0.24779256 = coord(3/6) × ("3a" 0.0708 + "2f" 0.2124 + "2f" 0.2124)
    
    Content
     Cf.: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5386707.
  11. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.21
    0.21239361 = coord(3/6) × ("3a" 0.0607 + "2f" 0.1821 + "2f" 0.1821)
    
    Source
     https://arxiv.org/abs/2212.06721
  12. Malsburg, C. von der: ¬The correlation theory of brain function (1981) 0.18
    0.17699468 = coord(3/6) × ("3a" 0.0506 + "2f" 0.1517 + "2f" 0.1517)
    
    Source
     http://cogprints.org/1380/1/vdM_correlation.pdf
  13. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.14
    0.14159574 = coord(3/6) × ("3a" 0.0405 + "2f" 0.1214 + "2f" 0.1214)
    
    Content
     Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  14. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.05
    0.04686617 = coord(2/6) × ("forschung" 0.1199 + "22" 0.0207)
    
    Abstract
     With an overview of problems, methods, the state of research, and the literature.
    Date
    1. 3.2013 14:56:22
  15. Lam, W.; Wong, K.-F.; Wong, C.-Y.: Chinese document indexing based on new partitioned signature file : model and evaluation (2001) 0.03
    0.03495895 = coord(2/6) × ("propose" 0.0944 + "29" 0.0104)
    
    Abstract
     In this article we investigate the use of signature files in Chinese information retrieval systems and propose a new partitioning method for Chinese signature files based on the characteristics of Chinese words. Our partitioning method, called Partitioned Signature File for Chinese (PSFC), offers faster search efficiency than the traditional single signature file approach. We devise a general scheme for controlling the trade-off between the false drop and storage overhead while maintaining the search space reduction in PSFC. An analytical study is presented to support the claims of our method. We also propose two new hashing methods for Chinese signature files so that the signature file will be more suitable for dynamic environments while the retrieval performance is maintained. Furthermore, we have implemented PSFC and the new hashing methods, and we evaluated them using a large-scale real-world Chinese document corpus, namely, the TREC-5 (Text REtrieval Conference) Chinese collection. The experimental results confirm the features of PSFC and demonstrate its superiority over the traditional single signature file method.
    Date
    29. 9.2001 14:01:34
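     A minimal sketch of the superimposed-coding signature file idea that PSFC builds on: each word sets a few hashed bits in a fixed-width signature, and a document is a candidate match when its signature bit-subsumes the query signature (false drops are possible by design). The partitioning and the Chinese-specific hashing methods are not shown.

       import hashlib

       SIG_BITS = 64       # signature width
       BITS_PER_WORD = 3   # bits set per word (superimposed coding)

       def word_mask(word):
           mask = 0
           for i in range(BITS_PER_WORD):
               h = hashlib.md5(f"{word}:{i}".encode()).digest()
               mask |= 1 << (int.from_bytes(h[:4], "big") % SIG_BITS)
           return mask

       def signature(words):
           sig = 0
           for w in words:
               sig |= word_mask(w)  # OR all word masks together
           return sig

       sigs = {"d1": signature(["signature", "file", "chinese"]),
               "d2": signature(["retrieval", "model"])}
       q = word_mask("chinese")
       # Candidates: documents whose signature covers every query bit.
       print([d for d, s in sigs.items() if s & q == q])  # likely ["d1"]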
  16. Liu, X.; Yu, S.; Janssens, F.; Glänzel, W.; Moreau, Y.; Moor, B.de: Weighted hybrid clustering by combining text mining and bibliometrics on a large-scale journal database (2010) 0.03
    0.03495895 = coord(2/6) × ("propose" 0.0944 + "29" 0.0104)
    
    Abstract
    We propose a new hybrid clustering framework to incorporate text mining with bibliometrics in journal set analysis. The framework integrates two different approaches: clustering ensemble and kernel-fusion clustering. To improve the flexibility and the efficiency of processing large-scale data, we propose an information-based weighting scheme to leverage the effect of multiple data sources in hybrid clustering. Three different algorithms are extended by the proposed weighting scheme and they are employed on a large journal set retrieved from the Web of Science (WoS) database. The clustering performance of the proposed algorithms is systematically evaluated using multiple evaluation methods, and they were cross-compared with alternative methods. Experimental results demonstrate that the proposed weighted hybrid clustering strategy is superior to other methods in clustering performance and efficiency. The proposed approach also provides a more refined structural mapping of journal sets, which is useful for monitoring and detecting new trends in different scientific fields.
    Date
    1. 6.2010 9:29:57
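     A hedged sketch of the core move the abstract describes: combine a text-based and a citation-based similarity matrix with weights before clustering. The matrices and weights below are toy values, and the paper's information-based weighting scheme is not implemented.

       import numpy as np
       from sklearn.cluster import SpectralClustering

       rng = np.random.default_rng(0)
       n = 10
       text_sim = rng.random((n, n))          # stand-in for text similarities
       cite_sim = rng.random((n, n))          # stand-in for citation similarities
       text_sim = (text_sim + text_sim.T) / 2  # symmetrize
       cite_sim = (cite_sim + cite_sim.T) / 2
       np.fill_diagonal(text_sim, 1.0)
       np.fill_diagonal(cite_sim, 1.0)

       w_text, w_cite = 0.6, 0.4              # toy weights; the paper learns these
       hybrid = w_text * text_sim + w_cite * cite_sim

       labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                                   random_state=0).fit_predict(hybrid)
       print(labels)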
  17. Wang, Y.; Lee, J.-S.; Choi, I.-C.: Indexing by Latent Dirichlet Allocation and an Ensemble Model (2016) 0.03
    0.03492762 = coord(2/6) × ("propose" 0.0944 + "22" 0.0104)
    
    Abstract
     The contribution of this article is twofold. First, we present Indexing by latent Dirichlet allocation (LDI), an automatic document indexing method. Many ad hoc applications, or their variants with smoothing techniques suggested in LDA-based language modeling, can result in unsatisfactory performance as the document representations do not accurately reflect the concept space. To improve document retrieval performance, we introduce a new definition of document probability vectors in the context of LDA and present a novel scheme for automatic document indexing based on LDA. Second, we propose an Ensemble Model (EnM) for document retrieval. EnM combines basic indexing models by assigning different weights and attempts to uncover the optimal weights that maximize the mean average precision. To solve the optimization problem, we propose an algorithm derived from the boosting method. The results of our computational experiments on benchmark data sets indicate that both of the proposed approaches are viable options for document retrieval.
    Date
    12. 6.2016 21:39:22
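     A minimal sketch of the linear score fusion behind the Ensemble Model, with fixed illustrative weights; the paper derives the weights with a boosting-style algorithm that maximizes mean average precision.

       # Per-document scores from two basic indexing models (made-up numbers).
       lda_scores = {"d1": 0.8, "d2": 0.3, "d3": 0.5}
       tfidf_scores = {"d1": 0.4, "d2": 0.9, "d3": 0.5}
       weights = {"lda": 0.7, "tfidf": 0.3}   # fixed here; learned in the paper

       fused = {d: weights["lda"] * lda_scores[d] + weights["tfidf"] * tfidf_scores[d]
                for d in lda_scores}
       print(sorted(fused, key=fused.get, reverse=True))  # ['d1', 'd3', 'd2']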
  18. Zhao, G.; Wu, J.; Wang, D.; Li, T.: Entity disambiguation to Wikipedia using collective ranking (2016) 0.03
    0.03492762 = coord(2/6) × ("propose" 0.0944 + "22" 0.0104)
    
    Abstract
     Entity disambiguation is a fundamental task of semantic Web annotation. Entity Linking (EL) is an essential procedure in entity disambiguation, which aims to link a mention appearing in a plain text to a structured or semi-structured knowledge base, such as Wikipedia. Existing research on EL usually annotates the mentions in a text one by one and treats entities as independent of each other. However, this might not be true in many application scenarios. For example, if two mentions appear in one text, they are likely to have certain intrinsic relationships. In this paper, we first propose a novel query expansion method for candidate generation that utilizes co-occurrence information about mentions. We further propose a re-ranking model which can be iteratively adjusted based on the prediction in the previous round. Experiments on real-world data demonstrate the effectiveness of our proposed methods for entity disambiguation.
    Date
    24.10.2016 19:22:54
  19. Harej, V.; Zumer, M.: Analysis of FRBR user tasks (2013) 0.03
    0.034319576 = coord(2/6) × ("propose" 0.0890 + "29" 0.0139)
    
    Abstract
     The FRBR, FRAD, and FRSAD models propose user tasks as a way to address and categorize the functions that a catalog should support. The user tasks are not harmonized among these models; to do that, they should first be fully understood and analyzed, especially "select" and "identify." We decided to look at the FRBR user tasks from the perspective of interactive information retrieval (IIR). Several IIR models were reviewed, and Ellis' and Belkin's models were chosen for further analysis and interpretation of the FRBR "select" and "identify" tasks.
    Date
    29. 5.2015 19:13:13
  20. Losada, D.E.; Barreiro, A.: Embedding term similarity and inverse document frequency into a logical model of information retrieval (2003) 0.03
    0.034277808 = coord(2/6) × ("propose" 0.0890 + "22" 0.0138)
    
    Abstract
    We propose a novel approach to incorporate term similarity and inverse document frequency into a logical model of information retrieval. The ability of the logic to handle expressive representations along with the use of such classical notions are promising characteristics for IR systems. The approach proposed here has been efficiently implemented and experiments against test collections are presented.
    Date
    22. 3.2003 19:27:23

Types

  • a 4198
  • m 326
  • s 176
  • el 171
  • b 34
  • r 21
  • i 18
  • x 15
  • p 5
  • n 3
  • ? 1
  • d 1
  • h 1
