Search (56 results, page 1 of 3)

  • Filter: theme_ss:"Semantisches Umfeld in Indexierung u. Retrieval" ("semantic environment in indexing and retrieval")
  1. Rekabsaz, N. et al.: Toward optimized multimodal concept indexing (2016) 0.05
    0.050224297 = product of:
      0.10044859 = sum of:
        0.10044859 = product of:
          0.15067288 = sum of:
            0.09078693 = weight(_text_:n in 2751) [ClassicSimilarity], result of:
              0.09078693 = score(doc=2751,freq=2.0), product of:
                0.19057861 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.044200785 = queryNorm
                0.47637522 = fieldWeight in 2751, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2751)
            0.059885964 = weight(_text_:22 in 2751) [ClassicSimilarity], result of:
              0.059885964 = score(doc=2751,freq=2.0), product of:
                0.15478362 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044200785 = queryNorm
                0.38690117 = fieldWeight in 2751, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2751)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    1. 2.2016 18:25:22
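The score breakdown shown for result 1 is Lucene's ClassicSimilarity "explain" output: per matching term, tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)); the term score is queryWeight (idf x queryNorm) times fieldWeight (tf x idf x fieldNorm), and coord factors scale for partially matched clauses. A minimal sketch reproducing the final score from the constants in the tree (queryNorm is taken as given, since it depends on the full query):

```python
import math

# queryNorm as shown in the explain tree above; it cannot be derived
# from this result alone.
QUERY_NORM = 0.044200785

def idf(doc_freq, max_docs):
    # Lucene ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, field_norm, query_norm=QUERY_NORM):
    i = idf(doc_freq, max_docs)
    query_weight = i * query_norm                    # idf * queryNorm
    field_weight = math.sqrt(freq) * i * field_norm  # tf * idf * fieldNorm
    return query_weight * field_weight

# Terms "n" and "22": 2 of 3 query clauses matched (coord 2/3),
# in 1 of 2 subqueries (coord 1/2), as in the tree above.
score = (term_score(2.0, 1611, 44218, 0.078125)
         + term_score(2.0, 3622, 44218, 0.078125)) * (2 / 3) * (1 / 2)
print(f"{score:.9f}")  # ≈ 0.050224297, the score shown above
```

The same four inputs per term (freq, docFreq, fieldNorm, and the coord factors) account for every score in this result list.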
  2. Tseng, Y.-H.: Solving vocabulary problems with interactive query expansion (1998) 0.04
    
    Abstract
    One of the major causes of search failures in information retrieval systems is vocabulary mismatch. Presents a solution to the vocabulary problem through two strategies, known as term suggestion (TS) and term relevance feedback (TRF). In TS, collection-specific terms are extracted from the text collection. These terms and their frequencies constitute the keyword database for suggesting terms in response to users' queries. One effect of this term suggestion is that it functions as a dynamic directory if the query is a general term with broad meaning. In TRF, terms extracted from the top-ranked documents retrieved by the previous query are shown to users for relevance feedback. In the experiment, interactive TS provides very high precision rates while achieving recall rates similar to n-gram matching. Local TRF improves both precision and recall in a full-text news database, and degrades slightly in recall in bibliographic databases owing to the very limited source of information for feedback. In terms of van Rijsbergen's combined measure of recall and precision, both TS and TRF perform better than n-gram matching, which implies that the greater improvement in precision compensates for the slight degradation in recall for TS and TRF.
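The term-suggestion (TS) strategy summarized above amounts to a keyword database of collection terms with their frequencies, matched against the user's query string. A minimal sketch on a toy collection (the documents, names, and ranking rule are illustrative, not Tseng's implementation):

```python
import re
from collections import Counter

def build_keyword_db(docs):
    """Keyword database: collection terms with their collection frequencies."""
    counts = Counter()
    for doc in docs:
        counts.update(re.findall(r"[a-z]+", doc.lower()))
    return counts

def suggest_terms(query, keyword_db, k=5):
    """Suggest up to k frequent collection terms containing the query string."""
    hits = [(t, f) for t, f in keyword_db.items() if query in t and t != query]
    return [t for t, _ in sorted(hits, key=lambda x: -x[1])[:k]]

docs = [
    "query expansion improves retrieval",
    "interactive query expansion and term suggestion",
    "controlled vocabulary and terminology mismatch",
]
db = build_keyword_db(docs)
print(suggest_terms("term", db))  # ['terminology']
```

TRF would instead feed the terms of the top-ranked retrieved documents back into this suggestion step.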
  3. Cao, N.; Sun, J.; Lin, Y.-R.; Gotz, D.; Liu, S.; Qu, H.: FacetAtlas : Multifaceted visualization for rich text corpora (2010) 0.03
    
  4. Klas, C.-P.; Fuhr, N.; Schaefer, A.: Evaluating strategic support for information access in the DAFFODIL system (2004) 0.03
    
    Date
    16.11.2008 16:22:48
  5. Robertson, A.M.; Willett, P.: Applications of n-grams in textual information systems (1998) 0.03
    
    Abstract
    Provides an introduction to the use of n-grams in textual information systems, where an n-gram is a string of n, usually adjacent, characters extracted from a section of continuous text. Applications that can be implemented efficiently and effectively using sets of n-grams include spelling-error detection and correction, query expansion, information retrieval with serial, inverted, and signature files, dictionary look-up, text compression, and language identification.
    Object
    n-grams
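The character-n-gram technique surveyed above is easy to sketch: extract overlapping strings of n adjacent characters, then compare strings by n-gram set overlap, for example with the Dice coefficient, one common basis for spelling-error detection and correction (a minimal illustration, not Robertson and Willett's code):

```python
def ngrams(text, n=2):
    """Set of character n-grams (n adjacent characters) from a text string."""
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def dice(a, b, n=2):
    """Dice coefficient over n-gram sets: 2|A∩B| / (|A|+|B|).
    High values flag likely misspellings of known dictionary terms."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    return 2 * len(ga & gb) / (len(ga) + len(gb))

print(sorted(ngrams("query")))  # ['er', 'qu', 'ry', 'ue']
print(dice("retrieval", "retreival"))  # high overlap despite the transposition
```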
  6. Brandão, W.C.; Santos, R.L.T.; Ziviani, N.; Moura, E.S. de; Silva, A.S. da: Learning to expand queries using entities (2014) 0.03
    
    Date
    22. 8.2014 17:07:50
  7. Qiu, Y.; Frei, H.P.: Concept based query expansion (1993) 0.02
    
  8. Lee, Y.-Y.; Ke, H.; Yen, T.-Y.; Huang, H.-H.; Chen, H.-H.: Combining and learning word embedding with WordNet for semantic relatedness and similarity measurement (2020) 0.02
    
  9. Stojanovic, N.: On the query refinement in the ontology-based searching for information (2005) 0.02
    
  10. Boyack, K.W.; Wylie, B.N.; Davidson, G.S.: Information Visualization, Human-Computer Interaction, and Cognitive Psychology : Domain Visualizations (2002) 0.01
    
    Date
    22. 2.2003 17:25:39
    22. 2.2003 18:17:40
  11. Smeaton, A.F.; Rijsbergen, C.J. van: The retrieval effects of query expansion on a feedback document retrieval system (1983) 0.01
    
    Date
    30. 3.2001 13:32:22
  12. Jiang, Y.; Zhang, X.; Tang, Y.; Nie, R.: Feature-based approaches to semantic similarity assessment of concepts using Wikipedia (2015) 0.01
    
  13. Qu, R.; Fang, Y.; Bai, W.; Jiang, Y.: Computing semantic similarity based on novel models of semantic representation using Wikipedia (2018) 0.01
    
  14. Bettencourt, N.; Silva, N.; Barroso, J.: Semantically enhancing recommender systems (2016) 0.01
    
  15. Chen, H.; Zhang, Y.; Houston, A.L.: Semantic indexing and searching using a Hopfield net (1998) 0.01
    
  16. Nie, J.-Y.: Query expansion and query translation as logical inference (2003) 0.01
    
  17. Nakashima, M.; Sato, K.; Qu, Y.; Ito, T.: Browsing-based conceptual information retrieval incorporating dictionary term relations, keyword associations, and a user's interest (2003) 0.01
    
  18. Wang, Y.-H.; Jhuo, P.-S.: A semantic faceted search with rule-based inference (2009) 0.01
    
  19. Morato, J.; Llorens, J.; Genova, G.; Moreiro, J.A.: Experiments in discourse analysis impact on information classification and retrieval algorithms (2003) 0.01
    
    Abstract
    Researchers in indexing and retrieval systems have been advocating the inclusion of more contextual information to improve results. The proliferation of full-text databases and advances in computer storage capacity have made it possible to carry out text analysis by means of linguistic and extra-linguistic knowledge. Since the mid-1980s, research has tended to pay more attention to context, giving discourse analysis a more central role. The research presented in this paper aims to check whether discourse variables have an impact on modern information retrieval and classification algorithms. In order to evaluate this hypothesis, a functional framework for information analysis in an automated environment has been proposed, where n-gram filtering and the k-means and Chen's classification algorithms have been tested against sub-collections of documents based on the following discourse variables: "Genre", "Register", "Domain terminology", and "Document structure". The results obtained with the algorithms for the different sub-collections were compared to the MeSH information structure. They demonstrate that n-gram filtering does not appear to have a clear dependence on discourse variables; the k-means classification algorithm does, but only on domain terminology and document structure; and Chen's algorithm has a clear dependence on all of the discourse variables. This information could be used to design better classification algorithms in which discourse variables are taken into account. Other minor conclusions drawn from these results are also presented.
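Of the algorithms tested above, k-means is the simplest to sketch: documents represented as term-frequency vectors are assigned to the nearest centroid, and centroids are recomputed as cluster means until stable. A minimal stdlib sketch on toy vectors (the data and names are illustrative, not the paper's experimental setup):

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: assign each vector to its nearest centroid,
    then recompute each centroid as the mean of its cluster."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[j].append(p)
        centroids = [
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Toy term-frequency vectors: two documents dominated by one term,
# two dominated by another.
docs = [(5.0, 0.0), (4.0, 1.0), (0.0, 5.0), (1.0, 4.0)]
centroids, clusters = kmeans(docs, k=2)
print(sorted(len(c) for c in clusters))  # two clusters of two documents each
```

In the study's setting, the "terms" would be drawn from the discourse variables (genre markers, register, domain terminology), which is where the reported dependence appears.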
  20. Bayer, O.; Höhfeld, S.; Josbächer, F.; Kimm, N.; Kradepohl, I.; Kwiatkowski, M.; Puschmann, C.; Sabbagh, M.; Werner, N.; Vollmer, U.: Evaluation of an ontology-based knowledge-management-system : a case study of Convera RetrievalWare 8.0 (2005) 0.01
    

Languages

  • e 49
  • d 6
  • chi 1

Types

  • a 50
  • el 7
  • m 2
  • x 1