Search (9229 results, page 1 of 462)

  Active filter: language_ss:"e"
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.53
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
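The relevance score shown next to each hit comes from Lucene's classic TF-IDF similarity, whose per-term breakdown the engine reports as queryWeight (idf × queryNorm) times fieldWeight (√tf × idf × fieldNorm). A minimal sketch reproducing one term weight from the explain output of the first hit (tf = 2, idf = 8.478011, queryNorm = 0.03622214, fieldNorm = 0.078125):

```python
import math

# Lucene ClassicSimilarity, as reported in explain() output:
#   queryWeight = idf * queryNorm
#   fieldWeight = sqrt(tf) * idf * fieldNorm
#   term score  = queryWeight * fieldWeight
def classic_term_score(tf, idf, query_norm, field_norm):
    query_weight = idf * query_norm
    field_weight = math.sqrt(tf) * idf * field_norm
    return query_weight * field_weight

# Values taken from the engine's explanation of hit 1 (doc 1826, tf=2):
score = classic_term_score(tf=2.0, idf=8.478011,
                           query_norm=0.03622214, field_norm=0.078125)
print(round(score, 7))  # close to the reported 0.2876518
```

The per-hit totals then combine such term weights with coord() factors for the fraction of query clauses matched.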
  2. Li, L.; Shang, Y.; Zhang, W.: Improvement of HITS-based algorithms on Web documents 0.44
    Content
    See: http://delab.csd.auth.gr/~dimitris/courses/ir_spring06/page_rank_computing/p527-li.pdf. See also: http://www2002.org/CDROM/refereed/643/.
  3. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.42
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  4. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.41
    Content
    See: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  5. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.39
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language and domain independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human written summaries in a large collection of web-pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the aligning process from a training set and focuses on selecting high quality multi-word terms from human written summaries to generate suitable results for web-page summarization.
    Content
    A thesis presented to The University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. See: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
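The LocalMaxs algorithm named in the abstract above keeps an n-gram as a multi-word term when its association ("glue") is a local maximum: stronger than the n-grams it contains and than those that contain it. A minimal sketch, assuming SCP glue over raw counts on a toy corpus (the thesis proposes its own association measures, which differ from this):

```python
from collections import Counter

def extract_terms(tokens, max_n=3):
    """Simplified LocalMaxs-style multi-word term extraction.

    An n-gram is kept when its glue strictly exceeds the glue of every
    (n+1)-gram containing it; trigrams must also beat their two
    sub-bigrams.  Glue is SCP: p(ngram)^2 / (p(prefix) * p(suffix)).
    """
    total = len(tokens)
    counts = Counter()
    for n in range(1, max_n + 1):
        counts.update(zip(*(tokens[i:] for i in range(n))))

    def glue(ng):
        p = counts[ng] / total
        return p * p / ((counts[ng[:-1]] / total) * (counts[ng[1:]] / total))

    terms = []
    for ng in {g for g in counts if 1 < len(g) <= max_n}:
        supers = [g for g in counts
                  if len(g) == len(ng) + 1 and (g[:-1] == ng or g[1:] == ng)]
        subs = [ng[:-1], ng[1:]] if len(ng) > 2 else []
        if all(glue(ng) > glue(s) for s in supers + subs):
            terms.append(" ".join(ng))
    return terms

corpus = ("information retrieval systems use information retrieval "
          "models for web page summarization of web page content").split()
print(sorted(extract_terms(corpus)))  # repeated cohesive bigrams survive
```

On this toy input only the bigrams whose glue dominates every surrounding trigram ("information retrieval", "web page") are extracted; one-off word pairs are filtered out.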
  6. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.37
    Content
    See: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5386707.
  7. Mas, S.; Marleau, Y.: Proposition of a faceted classification model to support corporate information organization and digital records management (2009) 0.34
    Footnote
    See: http://ieeexplore.ieee.org/iel5/4755313/4755314/04755480.pdf?arnumber=4755480.
    Source
    System Sciences, 2009. HICSS '09. 42nd Hawaii International Conference
  8. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.33
    Abstract
    The successes of information retrieval (IR) in recent decades were built upon bag-of-words representations. Effective as it is, bag-of-words is only a shallow text understanding; there is a limited amount of information for document ranking in the word space. This dissertation goes beyond words and builds knowledge based text representations, which embed the external and carefully curated information from knowledge bases, and provide richer and structured evidence for more advanced information retrieval systems. This thesis research first builds query representations with entities associated with the query. Entities' descriptions are used by query expansion techniques that enrich the query with explanation terms. Then we present a general framework that represents a query with entities that appear in the query, are retrieved by the query, or frequently show up in the top retrieved documents. A latent space model is developed to jointly learn the connections from query to entities and the ranking of documents, modeling the external evidence from knowledge bases and internal ranking features cooperatively. To further improve the quality of relevant entities, a defining factor of our query representations, we introduce learning to rank to entity search and retrieve better entities from knowledge bases. In the document representation part, this thesis research also moves one step forward with a bag-of-entities model, in which documents are represented by their automatic entity annotations, and the ranking is performed in the entity space.
    This proposal includes plans to improve the quality of relevant entities with a co-learning framework that learns from both entity labels and document labels. We also plan to develop a hybrid ranking system that combines word based and entity based representations together with their uncertainties considered. At last, we plan to enrich the text representations with connections between entities. We propose several ways to infer entity graph representations for texts, and to rank documents using their structure representations. This dissertation overcomes the limitation of word based representations with external and carefully curated information from knowledge bases. We believe this thesis research is a solid start towards the new generation of intelligent, semantic, and structured information retrieval.
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. See: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
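The bag-of-entities model described in the abstract above represents documents by their entity annotations and performs ranking in the entity space rather than the word space. A minimal sketch, assuming a toy dictionary-lookup entity linker and a simple overlap score (hypothetical entity ids; not the dissertation's learned ranking model):

```python
from collections import Counter

# Toy entity linker: surface form -> knowledge-base entity id (hypothetical)
ENTITY_DICT = {
    "information retrieval": "KB:Information_retrieval",
    "knowledge base": "KB:Knowledge_base",
    "query expansion": "KB:Query_expansion",
}

def annotate(text):
    """Return the bag (multiset) of entities linked in the text."""
    text = text.lower()
    return Counter(e for surface, e in ENTITY_DICT.items() if surface in text)

def score(query, doc):
    """Overlap of the query and document entity bags."""
    q, d = annotate(query), annotate(doc)
    return sum(min(q[e], d[e]) for e in q)

docs = [
    "Query expansion enriches the query with terms from a knowledge base.",
    "Bag-of-words retrieval ranks documents in the word space.",
]
query = "knowledge base techniques for query expansion"
ranked = sorted(docs, key=lambda d: score(query, d), reverse=True)
print(ranked[0])  # the document sharing two entities with the query
```

The design point is that matching happens on curated entity ids, so documents can rank highly even when their surface vocabulary differs from the query's.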
  9. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.32
    Content
    See: https://aclanthology.org/D19-5317.pdf.
  10. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.32
    Source
    https://arxiv.org/abs/2212.06721
  11. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.26
    0.26491702 = product of:
      0.46360478 = sum of:
        0.118422605 = product of:
          0.197371 = sum of:
            0.115060724 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.115060724 = score(doc=701,freq=2.0), product of:
                0.3070917 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03622214 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
            0.054806177 = weight(_text_:retrieval in 701) [ClassicSimilarity], result of:
              0.054806177 = score(doc=701,freq=28.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.5001983 = fieldWeight in 701, product of:
                  5.2915025 = tf(freq=28.0), with freq of:
                    28.0 = termFreq=28.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
            0.027504109 = weight(_text_:system in 701) [ClassicSimilarity], result of:
              0.027504109 = score(doc=701,freq=6.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.24108742 = fieldWeight in 701, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.6 = coord(3/5)
        0.115060724 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.115060724 = score(doc=701,freq=2.0), product of:
            0.3070917 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03622214 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
        0.115060724 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.115060724 = score(doc=701,freq=2.0), product of:
            0.3070917 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03622214 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
        0.115060724 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.115060724 = score(doc=701,freq=2.0), product of:
            0.3070917 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03622214 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
      0.5714286 = coord(4/7)
    
    Abstract
    The explosion of possibilities for ubiquitous content production has pushed the information overload problem to a level of complexity that traditional modelling approaches can no longer manage. Because of their purely syntactic nature, traditional information retrieval approaches fail to treat the content itself (i.e. its meaning rather than its representation), which makes the results of a retrieval process of very limited use for a user's task at hand. Over the last ten years, ontologies have developed from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From an information retrieval point of view, ontologies enable a machine-understandable description of content, so that the retrieval process can be driven by the meaning of the content. However, the retrieval process is inherently ambiguous: a user who is unfamiliar with the underlying repository and/or query syntax only approximates his information need in a query. The user must therefore be involved more actively in the retrieval process in order to close the gap between the meaning of the content and the meaning of the query (i.e. the information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to interpret the meaning of his query conceptually, while the underlying domain ontology drives the conceptualisation process. In this way the retrieval process evolves from a query-evaluation process into a highly interactive cooperation between user and retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively.
    Moreover, the notion of the relevance of content for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure strongly influenced by the user's preferences. This cooperation process is realised as the so-called Librarian Agent Query Refinement Process. To clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics emerges automatically from the results. Our evaluation studies have shown that the ability to conceptualise a user's information need correctly and to interpret the retrieval results accordingly is a key issue in realising much more meaningful information retrieval systems.
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  12. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.26
    0.26299593 = product of:
      0.46024287 = sum of:
        0.028765181 = product of:
          0.1438259 = sum of:
            0.1438259 = weight(_text_:3a in 4997) [ClassicSimilarity], result of:
              0.1438259 = score(doc=4997,freq=2.0), product of:
                0.3070917 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03622214 = queryNorm
                0.46834838 = fieldWeight in 4997, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4997)
          0.2 = coord(1/5)
        0.1438259 = weight(_text_:2f in 4997) [ClassicSimilarity], result of:
          0.1438259 = score(doc=4997,freq=2.0), product of:
            0.3070917 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03622214 = queryNorm
            0.46834838 = fieldWeight in 4997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4997)
        0.1438259 = weight(_text_:2f in 4997) [ClassicSimilarity], result of:
          0.1438259 = score(doc=4997,freq=2.0), product of:
            0.3070917 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03622214 = queryNorm
            0.46834838 = fieldWeight in 4997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4997)
        0.1438259 = weight(_text_:2f in 4997) [ClassicSimilarity], result of:
          0.1438259 = score(doc=4997,freq=2.0), product of:
            0.3070917 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03622214 = queryNorm
            0.46834838 = fieldWeight in 4997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4997)
      0.5714286 = coord(4/7)
    
    Content
    PhD Dissertation at International Doctorate School in Information and Communication Technology. Cf.: https://core.ac.uk/download/pdf/150083013.pdf.
  13. Malsburg, C. von der: The correlation theory of brain function (1981) 0.26
    0.26299593 = product of:
      0.46024287 = sum of:
        0.028765181 = product of:
          0.1438259 = sum of:
            0.1438259 = weight(_text_:3a in 76) [ClassicSimilarity], result of:
              0.1438259 = score(doc=76,freq=2.0), product of:
                0.3070917 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03622214 = queryNorm
                0.46834838 = fieldWeight in 76, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=76)
          0.2 = coord(1/5)
        0.1438259 = weight(_text_:2f in 76) [ClassicSimilarity], result of:
          0.1438259 = score(doc=76,freq=2.0), product of:
            0.3070917 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03622214 = queryNorm
            0.46834838 = fieldWeight in 76, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=76)
        0.1438259 = weight(_text_:2f in 76) [ClassicSimilarity], result of:
          0.1438259 = score(doc=76,freq=2.0), product of:
            0.3070917 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03622214 = queryNorm
            0.46834838 = fieldWeight in 76, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=76)
        0.1438259 = weight(_text_:2f in 76) [ClassicSimilarity], result of:
          0.1438259 = score(doc=76,freq=2.0), product of:
            0.3070917 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03622214 = queryNorm
            0.46834838 = fieldWeight in 76, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=76)
      0.5714286 = coord(4/7)
    
    Source
    http://cogprints.org/1380/1/vdM_correlation.pdf
  14. Peters, C.; Braschler, M.; Clough, P.: Multilingual information retrieval : from research to practice (2012) 0.21
    0.21276073 = product of:
      0.37233126 = sum of:
        0.028414998 = product of:
          0.07103749 = sum of:
            0.048580483 = weight(_text_:retrieval in 361) [ClassicSimilarity], result of:
              0.048580483 = score(doc=361,freq=22.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.44337842 = fieldWeight in 361, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03125 = fieldNorm(doc=361)
            0.022457011 = weight(_text_:system in 361) [ClassicSimilarity], result of:
              0.022457011 = score(doc=361,freq=4.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.19684705 = fieldWeight in 361, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03125 = fieldNorm(doc=361)
          0.4 = coord(2/5)
        0.16272044 = weight(_text_:mehrsprachigkeit in 361) [ClassicSimilarity], result of:
          0.16272044 = score(doc=361,freq=4.0), product of:
            0.3070917 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03622214 = queryNorm
            0.5298757 = fieldWeight in 361, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=361)
        0.14093956 = weight(_text_:abfrage in 361) [ClassicSimilarity], result of:
          0.14093956 = score(doc=361,freq=4.0), product of:
            0.28580084 = queryWeight, product of:
              7.890225 = idf(docFreq=44, maxDocs=44218)
              0.03622214 = queryNorm
            0.49313906 = fieldWeight in 361, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.890225 = idf(docFreq=44, maxDocs=44218)
              0.03125 = fieldNorm(doc=361)
        0.040256247 = product of:
          0.080512494 = sum of:
            0.080512494 = weight(_text_:zugriff in 361) [ClassicSimilarity], result of:
              0.080512494 = score(doc=361,freq=4.0), product of:
                0.2160124 = queryWeight, product of:
                  5.963546 = idf(docFreq=308, maxDocs=44218)
                  0.03622214 = queryNorm
                0.3727216 = fieldWeight in 361, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.963546 = idf(docFreq=308, maxDocs=44218)
                  0.03125 = fieldNorm(doc=361)
          0.5 = coord(1/2)
      0.5714286 = coord(4/7)
    
    Abstract
    We are living in a multilingual world, and the diversity of languages used to interact with information access systems has generated a wide variety of challenges to be addressed by computer and information scientists. The growing amount of non-English information accessible globally and the increased worldwide exposure of enterprises also necessitate the adaptation of Information Retrieval (IR) methods to new, multilingual settings. Peters, Braschler and Clough present a comprehensive description of the technologies involved in designing and developing systems for Multilingual Information Retrieval (MLIR). They provide readers with broad coverage of the various issues involved in creating systems to make digitally stored materials accessible regardless of the language(s) they are written in. Details on Cross-Language Information Retrieval (CLIR) are also covered, helping readers to understand how to develop retrieval systems that cross language boundaries. Their work is divided into six chapters and accompanies the reader step by step through the various stages involved in building, using and evaluating MLIR systems. The book concludes with some examples of recent applications that utilise MLIR technologies. Some of the techniques described have recently started to appear in commercial search systems, while others have the potential to be part of future incarnations. The book is intended for graduate students, scholars, and practitioners with a basic understanding of classical text retrieval methods. It offers guidelines and information on all aspects that need to be taken into consideration when building MLIR systems, while avoiding too many 'hands-on details' that could rapidly become obsolete. Thus it bridges the gap between the material covered by most classical IR textbooks and the novel requirements related to the acquisition and dissemination of information in whatever language it is stored.
    Content
    Contents: 1 Introduction 2 Within-Language Information Retrieval 3 Cross-Language Information Retrieval 4 Interaction and User Interfaces 5 Evaluation for Multilingual Information Retrieval Systems 6 Applications of Multilingual Information Access
    RSWK
    Information-Retrieval-System / Mehrsprachigkeit / Abfrage / Zugriff
    Subject
    Information-Retrieval-System / Mehrsprachigkeit / Abfrage / Zugriff
  15. Fensel, D.: Ontologies : a silver bullet for knowledge management and electronic commerce (2004) 0.08
    0.08450886 = product of:
      0.19718733 = sum of:
        0.003661892 = product of:
          0.01830946 = sum of:
            0.01830946 = weight(_text_:retrieval in 1949) [ClassicSimilarity], result of:
              0.01830946 = score(doc=1949,freq=2.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.16710453 = fieldWeight in 1949, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1949)
          0.2 = coord(1/5)
        0.17617446 = weight(_text_:abfrage in 1949) [ClassicSimilarity], result of:
          0.17617446 = score(doc=1949,freq=4.0), product of:
            0.28580084 = queryWeight, product of:
              7.890225 = idf(docFreq=44, maxDocs=44218)
              0.03622214 = queryNorm
            0.61642385 = fieldWeight in 1949, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              7.890225 = idf(docFreq=44, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1949)
        0.017350987 = product of:
          0.034701973 = sum of:
            0.034701973 = weight(_text_:22 in 1949) [ClassicSimilarity], result of:
              0.034701973 = score(doc=1949,freq=4.0), product of:
                0.12684377 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03622214 = queryNorm
                0.27358043 = fieldWeight in 1949, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1949)
          0.5 = coord(1/2)
      0.42857143 = coord(3/7)
    
    Abstract
    The author systematically introduces the notion of ontologies to the non-expert reader and demonstrates in detail how to apply this conceptual framework for improved intranet retrieval of corporate information and knowledge and for enhanced Internet-based electronic commerce. He also describes ontology languages (XML, RDF, and OWL) and ontology tools, and the application of ontologies. In addition to structural improvements, the second edition covers recent developments relating to the Semantic Web, and emerging web-based standard languages.
    Classification
    004.67/8 22
    DDC
    004.67/8 22
    RSWK
    World Wide Web / Datenbanksystem / Abfrage / Inferenz <Künstliche Intelligenz>
    Subject
    World Wide Web / Datenbanksystem / Abfrage / Inferenz <Künstliche Intelligenz>
  16. Weiße, A.: AG Dezimalklassifikation (AG DK) (2002) 0.06
    0.060567256 = product of:
      0.1413236 = sum of:
        0.009158121 = product of:
          0.022895303 = sum of:
            0.010985675 = weight(_text_:retrieval in 1693) [ClassicSimilarity], result of:
              0.010985675 = score(doc=1693,freq=2.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.10026272 = fieldWeight in 1693, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1693)
            0.011909628 = weight(_text_:system in 1693) [ClassicSimilarity], result of:
              0.011909628 = score(doc=1693,freq=2.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.104393914 = fieldWeight in 1693, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1693)
          0.4 = coord(2/5)
        0.074744485 = weight(_text_:abfrage in 1693) [ClassicSimilarity], result of:
          0.074744485 = score(doc=1693,freq=2.0), product of:
            0.28580084 = queryWeight, product of:
              7.890225 = idf(docFreq=44, maxDocs=44218)
              0.03622214 = queryNorm
            0.26152647 = fieldWeight in 1693, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.890225 = idf(docFreq=44, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1693)
        0.057421 = sum of:
          0.042698197 = weight(_text_:zugriff in 1693) [ClassicSimilarity], result of:
            0.042698197 = score(doc=1693,freq=2.0), product of:
              0.2160124 = queryWeight, product of:
                5.963546 = idf(docFreq=308, maxDocs=44218)
                0.03622214 = queryNorm
              0.19766548 = fieldWeight in 1693, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.963546 = idf(docFreq=308, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1693)
          0.0147228 = weight(_text_:22 in 1693) [ClassicSimilarity], result of:
            0.0147228 = score(doc=1693,freq=2.0), product of:
              0.12684377 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03622214 = queryNorm
              0.116070345 = fieldWeight in 1693, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1693)
      0.42857143 = coord(3/7)
    
    Abstract
    This year's public session of the Decimal Classifications Working Group (AG Dezimalklassifikationen) took place on 23 July 2002 as part of the 26th annual conference of the Gesellschaft für Klassifikation in Mannheim. The programme offered a broad spectrum of classificatory subject indexing. In her talk "Current applications and developments of the DDC", Esther Scheven (DDB Frankfurt) reported on the projects "RENARDUS" and "DDC Deutsch". The EU project RENARDUS (http://www.renardus.org) is a search system for Internet subject gateways with a common search interface, in which the DDC is used as a cross-browsing instrument. Each RENARDUS partner created a concordance table from its local classification to the DDC. The DDC was preferred because it is available online, has been translated into several European languages, has a worldwide user base, and is better suited to cross-browsing than the UDC. Using geographic subject headings, the project investigated the extent to which DDC notations improve the interoperability of indexing with subject headings from gateways in different languages. The DDC is used as a search element in the geographic index. On the project "DDC Deutsch" (http://www.ddb.de/professionell/ ddc info.htm), Ms Scheven gave a brief overview of the most important activities and developments. These include the translation of the 22nd edition of the DDC, the revision of Table 2 for Germany and Austria, and the revision of the subject areas history of Germany, history of Austria, and party systems of Germany. After this work is completed, from 2005 the Deutsche Nationalbibliographie (series A, B and H) will use the DDC as the classification system for scholarly or internationally important publications. It is also planned to link subject headings of the Schlagwortnormdatei (SWD) with DDC notations.
    Dr Holger Flachmann (ULB Münster) presented an interesting project worth imitating in his talk "Digital reproduction of systematic card catalogues: practice, benefits and limits, using the example of the UDC catalogue of the ULB Münster". At the ULB Münster, the systematic catalogue maintained according to UDC until 1990 (1.4 million catalogue cards with 80,000 class numbers) was converted into an image catalogue. The electronically recorded class numbers (UDC notations) are searchable and linked to the corresponding titles, which are stored as digital images of the catalogue cards. Keyword searching is likewise possible.
    The search system offers an overview of the subject areas by UDC, a register of subjects (e.g. the register "A-Z Wirtschaftswissenschaften"), an alphabetical list of subjects, and the systematic arrangement of the subject areas with the number of title records displayed. The digitised catalogue, produced by the company Mikro Univers GmbH Berlin, went live on 1 August 2002 (http://altkataloge.uni-muenster.de/de/index sys.htm). The project realised by the ULB Münster could encourage other university libraries to make conventional systematic card catalogues, which mainly record older scholarly literature and in this form see little use, available in digitised form to a user community extending beyond the respective university library. In his talk "Application of the UDC in NEBIS in Switzerland: an outlook", Dr Jiri Pika (ETH Zurich) reported on the subject indexing of documents by UDC, practised for about 20 years, in the new library system NEBIS (network of libraries and information centres in Switzerland, http://www.nebis.ch/index.html), which replaced the library system ETHICS. As in ETHICS, queries are based on descriptors in three languages (German, English, French) and a number of further terms (synonyms, acronyms), providing a user-friendly entry point to searching that is adapted to the conditions of multilingual Switzerland. Behind the descriptors stands a single UDC number, which in the background brings together the titles linked to it. The subject register, maintained by subject specialists, contains about 63,513 basic terms (as of 27 March 2002), each with a UDC number; the number of additional terms is five to eight times that.
    The advantages of the system are that, using descriptors, searches can access both printed and electronic media (about 2 million documents) without requiring any knowledge of the classification system. With the migration from ETHICS to NEBIS in 1999, a new user interface was introduced and response times were substantially shortened. The user community comprises about 60 libraries of universities, universities of applied sciences and research institutes from all regions of Switzerland, e.g. ETH Zurich, Ecole Polytechnique Federale de Lausanne, and further Swiss union-catalogue libraries, as well as the search engine GERHARD (German Harvest Automated Retrieval Directory, http://www.gerhard.de/gerold/owa/gerhard.create_ Index html?form-language=99). The talk announced in the programme by Dr Bernd Lorenz (Munich) on "Concordance RVK (Regensburger Verbundklassifikation) - DDC (Dewey Decimal Classification): necessity and preliminary considerations" was given on his behalf by Dr Hans-Joachim Hermes (Chemnitz). Ever since the first considerations of using the DDC as the classification system in the Deutsche Nationalbibliographie, ideas for a concordance between the two classification systems RVK and DDC have also been voiced. The planned concordance will increase the use of both classification systems. The presentation pointed out problems at the linguistic-terminological level that become visible in the names of the notations. After the technical talks, the regular election of the chair of the AG Dezimalklassifikationen took place. Dr Bernd Lorenz (Bayerische Beamtenfachhochschule München) was elected the new chair.
  17. Fensel, D.: Ontologies : a silver bullet for knowledge management and electronic commerce (2001) 0.05
    Abstract
    Ontologies have been developed and investigated for quite a while now in artificial intelligence and natural language processing to facilitate knowledge sharing and reuse. More recently, the notion of ontologies has attracted attention from fields such as intelligent information integration, cooperative information systems, information retrieval, electronic commerce, and knowledge management. The author systematically introduces the notion of ontologies to the non-expert reader and demonstrates in detail how to apply this conceptual framework for improved intranet retrieval of corporate information and knowledge and for enhanced Internet-based electronic commerce. In the second part of the book, the author presents a more technical view on emerging Web standards, like XML, RDF, XSL-T, or XQL, allowing for structural and semantic modeling and description of data and information.
    RSWK
    World Wide Web / Datenbanksystem / Abfrage / Inferenz <Künstliche Intelligenz>
    Subject
    World Wide Web / Datenbanksystem / Abfrage / Inferenz <Künstliche Intelligenz>
  18. Jahns, Y.; Trummer, M.: Sacherschließung - Informationsdienstleistung nach Maß : Kann Heterogenität beherrscht werden? (2004) 0.04
    
    Content
    This DDC tool provides access to local title data classified with the DDC. For some DDC classes that have already been translated, a browser can be used; targeted verbal searching for DDC elements is also possible. The question of aspects, e.g. geographical ones, is to be addressed by storing the notation elements separately in the title records. Finally, integrated searches across DDC and SWD or other indexing systems are conceivable in the future, in order to find literature on a topic. The retrieval interface presented by Lars Svensson offers a central solution: separate search structures need not be developed for every local OPAC in order to access DDC data. How holdings with different indexing can be brought together under one interface, with the DDC used as a meta-level, can already be seen today in the subject gateway Renardus. The Renardus broker enables cross-browsing and cross-searching across distributed Internet resources in Europe. For navigation via the DDC, crosswalks first had to be created between the local classification classes and the DDC. The tool CarmenX, developed at the Universitätsbibliothek Regensburg, was enhanced for this purpose by the Niedersächsische Staats- und Universitätsbibliothek Göttingen and provides access to the various classification systems. Dr. Friedrich Geißelmann, Universitätsbibliothek Regensburg, reported on these developments. He led the CARMEN subproject "Grosskonkordanzen zwischen Thesauri und Klassifikationen", in which the tool CarmenX was created. This CARMEN work package comprised both fundamental methodological studies of cross-concordances and prototype implementations in the fields of mathematics, physics, and the social sciences.
The goal was to let users searching distributed databases with different classifications and thesauri start from a familiar system and switch to others without needing detailed knowledge of those systems. For cross-concordances between general and subject classifications, for example, the RVK, the Mathematical Subject Classification (MSC), and the Physics and Astronomy Classification Scheme (PACS) were selected.
    Katja Heyke, Universitäts- und Stadtbibliothek Köln, and Manfred Faden, library of the HWWA-Institut für Wirtschaftsforschung Hamburg, presented similar developments for economics. Here a cross-concordance between the Standard Thesaurus Wirtschaft (STW) and the economics section of the SWD is being built. This database is intended to provide access to the holdings indexed with STW and SWD. It will be passed on to the virtual subject library EconBiz and to the Gemeinsamer Bibliotheksverbund. The economics cross-concordance also opens up an opportunity for cooperative subject indexing, since it makes possible the mutual reuse of subject data between the partners Die Deutsche Bibliothek, Universitäts- und Stadtbibliothek Köln, HWWA, and the library of the Institut für Weltwirtschaft Kiel. The example of economics shows the benefit of such concordance projects for indexers and users alike. The exchange about indexing rules and the systematic analysis of the authority data lead to the elimination of subject-specific weaknesses and inconsistencies in the systems. The thesauri are improved overall and even converge. The series of talks closed with a project that examines the heterogeneity of the data from the perspective of multilingualism. Martin Kunz, Deutsche Bibliothek Frankfurt am Main, reported on the project MACS (Multilingual Access to Subject Headings). MACS offers multilingual access to library catalogues. To this end, links between the subject heading authority files LCSH, RAMEAU, and SWD were established. Equivalent preferred terms of the authority files are identified intellectually and stored as links. The project initially confined itself to the fields of sport and theatre and will, in a next stage, turn to the most frequently used subject headings.
MACS assumes that users start a subject search in the language of their choice (German, English, French) and enables them to extend that search to the affiliated databases abroad. Martin Kunz argued for an integration approach based on mutual respect for the terminology of the cooperating partners. For such undertakings he advocated the term "thesaurus federation", which underlines the autonomy of the individual thesauri.
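    The linking approach described for MACS can be illustrated with a minimal, hypothetical sketch. The data structure, the function name, and the sample headings below are all invented for illustration; MACS itself is a database of intellectually verified links between LCSH, RAMEAU, and SWD, not this code:

```python
# Hypothetical sketch of a multilingual subject-heading crosswalk in the
# spirit of MACS: equivalent preferred terms from LCSH (English), RAMEAU
# (French), and SWD (German) are stored as one link cluster, so a heading
# entered in any of the three vocabularies can be expanded to its
# counterparts. The headings below are illustrative, not actual MACS records.

LINK_CLUSTERS = [
    {"LCSH": "Theater", "RAMEAU": "Théâtre", "SWD": "Theater"},
    {"LCSH": "Sports", "RAMEAU": "Sports", "SWD": "Sport"},
]

def expand_heading(term: str) -> dict:
    """Return the full cluster of equivalent headings for a search term,
    regardless of which source vocabulary it comes from."""
    for cluster in LINK_CLUSTERS:
        if term in cluster.values():
            return cluster
    return {}

# A user searching with the SWD heading "Sport" can also retrieve records
# indexed with the English and French equivalents.
print(expand_heading("Sport"))
```

In this sketch the link cluster, not any single vocabulary, is the unit of storage, which mirrors the "federation" idea: each thesaurus keeps its own preferred terms, and only the equivalence links are shared.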
    How can searchers be given uniform access to content, regardless of the system and method by which it was indexed? The series of talks examined this question through a variety of approaches. The established orientation systems are no longer sufficient for user access to distributed and interdisciplinary holdings. Uniform, simple access to information in the online world requires the integration of the existing classifications and thesauri. Such transfer components can bring together the various layers of subject indexing, i.e. different indexing qualities and levels. They help to compensate for breaks in consistency and to present our subject data optimally.
  19. Knowledge organization and the global information society : Proceedings of the 8th International ISKO Conference 13-16 July 2004, London, UK (2004) 0.03
    
    Content
    Inhalt: Session 1 A: Theoretical Foundations of Knowledge Organization 1 Hanne Albrechtsen, Hans H K Andersen, Bryan Cleal and Annelise Mark Pejtersen: Categorical complexity in knowledge integration: empirical evaluation of a cross-cultural film research collaboratory; Clare Beghtol: Naive classification systems and the global information society; Terence R Smith and Marcia L Zeng: Concept maps supported by knowledge organization structures; B: Linguistic and Cultural Approaches to Knowledge Organization 1 Rebecca Green and Lydia Fraser: Patterns in verbal polysemy; Maria J López-Huertas, Mario Barite and Isabel de Torres: Terminological representation of specialized areas in conceptual structures: the case of gender studies; Fidelia Ibekwe-SanJuan and Eric SanJuan: Mining for knowledge chunks in a terminology network; Session 2 A: Applications of Artificial Intelligence and Knowledge Representation 1 Jin-Cheon Na, Haiyang Sui, Christopher Khoo, Syin Chan and Yunyun Zhou: Effectiveness of simple linguistic processing in automatic sentiment classification of product reviews; Daniel J O'Keefe: Cultural literacy in a global information society-specific language: an exploratory ontological analysis utilizing comparative taxonomy; Lynne C Howarth: Modelling a natural language gateway to metadata-enabled resources; B: Theoretical Foundations of Knowledge Organization 2: Facets & Their Significance Ceri Binding and Douglas Tudhope: Integrating faceted structure into the search process; Vanda Broughton and Heather Lane: The Bliss Bibliographic Classification in action: moving from a special to a universal faceted classification via a digital platform; Kathryn La Barre: Adventures in faceted classification: a brave new world or a world of confusion?
Session 3 A: Theoretical Foundations of Knowledge Organization 3 Elin K Jacob: The structure of context: implications of structure for the creation of context in information systems; Uta Priss: A semiotic-conceptual framework for knowledge representation; Giovanni M Sacco: Accessing multimedia infobases through dynamic taxonomies; Joseph T Tennis: URIs and intertextuality: incumbent philosophical commitments in the development of the semantic web; B: Social & Sociological Concepts in Knowledge Organization Grant Campbell: A queer eye for the faceted guy: how a universal classification principle can be applied to a distinct subculture; Jonathan Furner and Anthony W Dunbar: The treatment of topics relating to people of mixed race in bibliographic classification schemes: a critical race-theoretic approach; H Peter Ohly: The organization of Internet links in a social science clearing house; Chern Li Liew: Cross-cultural design and usability of a digital library supporting access to Maori cultural heritage resources: an examination of knowledge organization issues; Session 4 A: Knowledge Organization of Universal and Special Systems 1: Dewey Decimal Classification Sudatta Chowdhury and G G Chowdhury: Using DDC to create a visual knowledge map as an aid to online information retrieval; Joan S Mitchell: DDC 22: Dewey in the world, the world in Dewey; Diane Vizine-Goetz and Julianne Beall: Using literary warrant to define a version of the DDC for automated classification services; B: Applications in Knowledge Representation 2 Gerhard J A Riesthuis and Maja Zumer: FRBR and FRANAR: subject access; Victoria Frâncu: An interpretation of the FRBR model; Moshe Y Sachs and Richard P Smiraglia: From encyclopedism to domain-based ontology for knowledge management: the evolution of the Sachs Classification (SC); Session 5 A: Knowledge Organization of Universal and Special Systems 2 Ágnes Hajdu Barát: Knowledge organization of the Universal Decimal Classification: new solutions, user-friendly methods from Hungary; Ia C McIlwaine: A question of place; Aida Slavic and Maria Inês Cordeiro: Core requirements for automation of analytico-synthetic classifications;
    B: Applications in Knowledge Representation 3 Barbara H Kwasnik and You-Lee Chun: Translation of classifications: issues and solutions as exemplified in the Korean Decimal Classification; Hur-Li Lee and Jennifer Clyde: Users' perspectives of the "Collection" and the online catalogue; Jens-Erik Mai: The role of documents, domains and decisions in indexing; Session 6 A: Knowledge Organization of Universal and Special Systems 3 Stella G Dextre Clarke, Alan Gilchrist and Leonard Will: Revision and extension of thesaurus standards; Michèle Hudon: Conceptual compatibility in controlled language tools used to index and access the content of moving image collections; Antonio García Jiménez and Félix del Valle Gastaminza: From thesauri to ontologies: a case study in a digital visual context; Ali Asghar Shiri and Crawford Revie: End-user interaction with thesauri: an evaluation of cognitive overlap in search term selection; B: Special Applications Carol A Bean: Representation of medical knowledge for automated semantic interpretation of clinical reports; Chew-Hung Lee, Christopher Khoo and Jin-Cheon Na: Automatic identification of treatment relations for medical ontology learning: an exploratory study; A Neelameghan and M C Vasudevan: Integrating image files, case records of patients and Web resources: case study of a knowledge base on tumours of the central nervous system; Nancy J Williamson: Complementary and alternative medicine: its place in the reorganized medical sciences in the Universal Decimal Classification; Session 7 A: Applications in Knowledge Representation 4 Claudio Gnoli: Naturalism vs pragmatism in knowledge organization; Wouter Schallier: On the razor's edge: between local and overall needs in knowledge organization; Danielle H Miller: User perception and the online catalogue: public library OPAC users "think aloud"; B: Knowledge Organization in Corporate Information Systems Anita S Coleman: Knowledge structures and the vocabulary of engineering novices; Evelyne Mounier and Céline Paganelli: The representation of knowledge contained in technical documents: the example of FAQs (frequently asked questions); Martin S van der Walt: A classification scheme for the organization of electronic documents in small, medium and micro enterprises (SMMEs); Session 8 A: Knowledge Organization of Non-print Information: Sound, Image, Multimedia Laura M Bartoto, Cathy S Lowe and Sharon C Glotzer: Information management of microstructures: non-print, multidisciplinary information in a materials science digital library; Pauline Rafferty and Rob Hidderley: A survey of image retrieval tools; Richard P Smiraglia: Knowledge sharing and content genealogy: extending the "works" model as a metaphor for non-documentary artefacts with case studies of Etruscan artefacts; B: Linguistic and Cultural Approaches to Knowledge Organization 2 Graciela Rosemblat, Tony Tse and Darren Gemoets: Adapting a monolingual consumer health system for Spanish cross-language information retrieval; Matjaz Zalokar: Preparation of a general controlled vocabulary in Slovene and English for the COBISS.SI library information system, Slovenia; Marianne Dabbadie, Widad Mustafa El Hadi and François Fraysse: Coaching applications: a new concept for usage testing of information systems. Testing usage on a corporate information system with K-Now; Session 9 Theories of Knowledge and Knowledge Organization Keiichi Kawamura: Ranganathan and after: Coates' practice and theory; Shiyan Ou, Christopher Khoo, Dion H Goh and Hui-Ying Heng: Automatic discourse parsing of sociology dissertation abstracts as sentence categorization; Iolo Jones, Daniel Cunliffe and Douglas Tudhope: Natural language processing and knowledge organization systems as an aid to retrieval
    Footnote
    Rez. in: Mitt. VÖB 58(2005) H.1, S.78-81 (O. Oberhauser): "The International Society for Knowledge Organization (ISKO), founded in 1989, is one of the few associations whose focus lies entirely on scholarly and practical questions of subject indexing and subject access to information. The German-language chapter of ISKO is based in Bonn; the society is, however, not sufficiently well known in Austria and so far counts only a few Austrian members. Besides the journal Knowledge Organization (until 1993: International Classification), now published for over thirty years, ISKO issues several book series, formerly with the Frankfurt Indeks-Verlag and today, like the journal, with Ergon in Würzburg. Among these, the proceedings of the international ISKO conferences, held every two years (in varying locations) since 1990, occupy a prominent place. The proceedings of the eighth conference, held in London in July of last year, are now available, edited in a uniform layout about which there is little to criticize, apart from the rather small typeface, occasionally unsuccessful margin justification in the title headings, unattractive (because missing) spacing in subchapter headings, and the usual avoidable typographical errors (e.g. "trieval" instead of "retrieval" in the table of contents, p. 9). The volume, stately despite the small font, gathers no fewer than 55 papers which, evidently following the organization of the conference, are grouped into 17 sections. The latter, however, are apparent only from the table of contents and lack any commentary that might have characterized them in more detail. The origins of the papers' authors, including some big and well-known names, reflect the international character of the society.
The German-speaking world is, however, represented by only a single contribution (H. Peter Ohly of the IZ Sozialwissenschaften, Bonn, on the indexing of a database of web resources); librarian authors from the "D-A-CH" region are sought in vain. Most of the papers are relatively short and concise; the average length is about four to six pages.
    The conference's overall theme arose from the "UN World Summit on an Information Society" held before and after the ISKO conference. In the book's title, the "global knowledge society" is in fact rather misleading, since none of the papers printed in it deals with it centrally. Of the two papers that carry the term in their titles, one is concerned with constructing a taxonomy for "cultural literacy" (O'Keefe), the other with so-called "naive classification systems" (Beghtol), i.e. systems developed, in contrast to "professional" ones, by people with no specific interest in classificatory questions. Papers with a cross-cultural flavour address questions such as: cross-cultural work, e.g. in the EU film archive project Collate (Albrechtsen et al.) or a project on Maori culture (Liew); multilingualism and translation, e.g. of the Korean Decimal Classification (Kwasnik & Chun), of a Slovenian subject vocabulary based on the Sears List of Subject Headings (Zalokar), and of a Spanish-English subject heading list for health topics (Rosemblat et al.); universal classification systems such as the Dewey Decimal Classification (Joan Mitchell on DDC 22, plus two further papers) and the Universal Decimal Classification (Ia McIlwaine on geographic materials, Nancy Williamson on alternative and complementary medicine in the UDC). Among the 55 papers, the reviewer found the following thematic clusters particularly interesting: OPAC-oriented papers, e.g. on the requirements for automating analytico-synthetic classification systems (Slavic & Cordeiro), as well as papers on user research and user behaviour (Lee & Clyde; Miller); indexing and retrieval of visual and multimedia resources, especially with a focus on thesauri (Hudon; García Jiménez & del Valle Gastaminza; Rafferty & Hidderley); thesaurus standards (Dextre Clarke et al.) and thesauri and end users (Shiri & Revie); automatic classification (Vizine-Goetz & Beall with reference to the DDC; Na et al. on methodological approaches to classifying product reviews by positive or negative sentiment); papers on systems less well known in the German-speaking world, such as faceted classification including the Bliss Classification and the implementation of Ranganathan's ideas by E. J. Coates (four papers), the Sachs Classification (Sachs & Smiraglia), and M. S. van der Walt's scheme for classifying electronic documents in small and medium-sized enterprises. The remaining papers, too, are mostly well written and attest to the scholarly standard of the ISKO conferences. The volume can therefore be expressly recommended to library and information science educational institutions as well as to libraries collecting literature on classification. Moreover, the next (ninth) international ISKO conference, to be held in Vienna in 2006, may be anticipated with interest.
  20. Smeaton, A.F.; Rijsbergen, C.J. van: ¬The retrieval effects of query expansion on a feedback document retrieval system (1983) 0.03
    
    Date
    30. 3.2001 13:32:22
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
