Search (52 results, page 1 of 3)

  • theme_ss:"Automatisches Klassifizieren"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.07
    0.06535886 = sum of:
      0.05324643 = product of:
        0.21298572 = sum of:
          0.21298572 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.21298572 = score(doc=562,freq=2.0), product of:
              0.378966 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.04469987 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.25 = coord(1/4)
      0.012112431 = product of:
        0.036337294 = sum of:
          0.036337294 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.036337294 = score(doc=562,freq=2.0), product of:
              0.15653133 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04469987 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
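  The breakdown under result 1 is standard Lucene ClassicSimilarity (TF-IDF) explain output. As a minimal sketch, assuming Lucene's classic formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and coord = matching clauses / total clauses), the reported sum can be reproduced in Python:

    import math

    def idf(doc_freq: int, max_docs: int) -> float:
        # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
        tf = math.sqrt(freq)                # tf(freq=2.0) = 1.4142135
        w = idf(doc_freq, max_docs)         # idf(docFreq=24) = 8.478011
        query_weight = w * query_norm       # 0.378966
        field_weight = tf * w * field_norm  # 0.56201804
        return query_weight * field_weight  # 0.21298572

    QUERY_NORM = 0.04469987  # taken from the explain output above

    # term "3a" in doc 562, weighted by coord(1/4) of its boolean clause
    s1 = term_score(2.0, 24, 44218, QUERY_NORM, 0.046875) * (1 / 4)
    # term "22" in doc 562, weighted by coord(1/3)
    s2 = term_score(2.0, 3622, 44218, QUERY_NORM, 0.046875) * (1 / 3)

    print(s1 + s2)  # ~0.06535886, matching the sum reported for result 1

  Every other breakdown in this list follows the same pattern; only freq, docFreq, fieldNorm, and the coord fractions vary.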
  2. Choi, B.; Peng, X.: Dynamic and hierarchical classification of Web pages (2004) 0.03
    0.030011028 = product of:
      0.060022056 = sum of:
        0.060022056 = product of:
          0.090033084 = sum of:
            0.052837145 = weight(_text_:x in 2555) [ClassicSimilarity], result of:
              0.052837145 = score(doc=2555,freq=2.0), product of:
                0.18875335 = queryWeight, product of:
                  4.2226825 = idf(docFreq=1761, maxDocs=44218)
                  0.04469987 = queryNorm
                0.27992693 = fieldWeight in 2555, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.2226825 = idf(docFreq=1761, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2555)
            0.03719594 = weight(_text_:b in 2555) [ClassicSimilarity], result of:
              0.03719594 = score(doc=2555,freq=2.0), product of:
                0.15836994 = queryWeight, product of:
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.04469987 = queryNorm
                0.23486741 = fieldWeight in 2555, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2555)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
  3. Reiner, U.: Automatische DDC-Klassifizierung bibliografischer Titeldatensätze der Deutschen Nationalbibliografie (2009) 0.02
    0.024606481 = product of:
      0.049212962 = sum of:
        0.049212962 = product of:
          0.07381944 = sum of:
            0.049594585 = weight(_text_:b in 3284) [ClassicSimilarity], result of:
              0.049594585 = score(doc=3284,freq=8.0), product of:
                0.15836994 = queryWeight, product of:
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.04469987 = queryNorm
                0.31315655 = fieldWeight in 3284, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3284)
            0.024224862 = weight(_text_:22 in 3284) [ClassicSimilarity], result of:
              0.024224862 = score(doc=3284,freq=2.0), product of:
                0.15653133 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04469987 = queryNorm
                0.15476047 = fieldWeight in 3284, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3284)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    Classifying objects (e.g., fauna, flora, texts) is a process based on human intelligence. Computer science, in particular the field of artificial intelligence (AI), investigates, among other things, to what extent processes that require human intelligence can be automated. It has turned out that solving everyday problems poses a greater challenge than solving specialized problems such as building a chess computer; "Rybka", for instance, has been the reigning computer chess world champion since June 2007. To what extent everyday problems can be solved with AI methods remains, in the general case, an open question. In solving everyday problems, the processing of natural language, e.g., understanding it, plays an essential role. Realizing "common sense" as a machine (in the Cyc knowledge base, in the form of facts and rules) has been Lenat's goal since 1984; regarding the AI flagship project "Cyc" there are Cyc optimists and Cyc pessimists. Understanding natural language (e.g., work titles, abstracts, prefaces, tables of contents) is also necessary when intellectually classifying bibliographic title records or online publications, in order to classify these text objects correctly. Since 2007, the Deutsche Nationalbibliothek has classified nearly all publications intellectually with the Dewey Decimal Classification (DDC).
    Date
    22. 1.2010 14:41:24
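  Result 3 isolates the two inputs that actually vary between documents: tf grows with the square root of the term frequency (sqrt(8) = 2.828427 here versus sqrt(2) = 1.4142135 in result 1), and fieldNorm is ClassicSimilarity's length normalization, 1/sqrt(number of terms in the field), stored lossily as an 8-bit float. A quick check, assuming a field boost of 1.0:

    import math

    print(math.sqrt(8.0))     # 2.828427..., the tf for freq=8.0 above

    # Inverting fieldNorm = 1/sqrt(terms) gives the approximate field length;
    # the 8-bit norm encoding makes these figures rough, not exact.
    print(1 / 0.03125 ** 2)   # 1024.0 terms for fieldNorm=0.03125
    print(1 / 0.046875 ** 2)  # ~455.1 terms for fieldNorm=0.046875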
  4. Liu, R.-L.: ¬A passage extractor for classification of disease aspect information (2013) 0.02
    0.020425899 = product of:
      0.040851798 = sum of:
        0.040851798 = product of:
          0.061277695 = sum of:
            0.030996617 = weight(_text_:b in 1107) [ClassicSimilarity], result of:
              0.030996617 = score(doc=1107,freq=2.0), product of:
                0.15836994 = queryWeight, product of:
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.04469987 = queryNorm
                0.19572285 = fieldWeight in 1107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1107)
            0.030281078 = weight(_text_:22 in 1107) [ClassicSimilarity], result of:
              0.030281078 = score(doc=1107,freq=2.0), product of:
                0.15653133 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04469987 = queryNorm
                0.19345059 = fieldWeight in 1107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1107)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    Retrieval of disease information is often based on several key aspects such as etiology, diagnosis, treatment, prevention, and symptoms of diseases. Automatic identification of disease aspect information is thus essential. In this article, I model the aspect identification problem as a text classification (TC) problem in which a disease aspect corresponds to a category. The disease aspect classification problem poses two challenges to classifiers: (a) a medical text often contains information about multiple aspects of a disease and hence produces noise for the classifiers and (b) text classifiers often cannot extract the textual parts (i.e., passages) about the categories of interest. I thus develop a technique, PETC (Passage Extractor for Text Classification), that extracts passages (from medical texts) for the underlying text classifiers to classify. Case studies on thousands of Chinese and English medical texts show that PETC enhances a support vector machine (SVM) classifier in classifying disease aspect information. PETC also performs better than three state-of-the-art classifier enhancement techniques, including two passage extraction techniques for text classifiers and a technique that employs term proximity information to enhance text classifiers. The contribution is of significance to evidence-based medicine, health education, and healthcare decision support. PETC can be used in those application domains in which a text to be classified may have several parts about different categories.
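  The abstract gives only the architecture, not PETC's extraction algorithm. As an illustrative sketch of the passage-based idea (a generic stand-in, not the paper's method), using scikit-learn's TfidfVectorizer and LinearSVC with hypothetical training data:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC

    def passages(text: str, size: int = 8, step: int = 4):
        # Sliding word windows serve as crude "passages".
        words = text.split()
        for i in range(0, max(len(words) - size, 0) + 1, step):
            yield " ".join(words[i:i + size])

    # Hypothetical labeled passages, one per disease aspect.
    train_texts = ["fever cough and sore throat are typical symptoms",
                   "treatment consists of rest fluids and antivirals",
                   "diagnosis is confirmed by a rapid antigen test"]
    train_labels = ["symptoms", "treatment", "diagnosis"]

    vec = TfidfVectorizer()
    clf = LinearSVC().fit(vec.fit_transform(train_texts), train_labels)

    def classify_by_best_passage(document: str) -> str:
        # Classify each passage separately and keep the most confident
        # decision, so material about other aspects contributes less noise.
        best_label, best_margin = None, float("-inf")
        for p in passages(document):
            scores = clf.decision_function(vec.transform([p]))[0]
            if scores.max() > best_margin:
                best_margin = scores.max()
                best_label = clf.classes_[scores.argmax()]
        return best_label

    print(classify_by_best_passage(
        "patients often report cough and fever before any treatment starts"))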
    Date
    28.10.2013 19:22:57
  5. Chan, L.M.; Lin, X.; Zeng, M.L.: Structural and multilingual approaches to subject access on the Web (2000) 0.02
    0.017612383 = product of:
      0.035224766 = sum of:
        0.035224766 = product of:
          0.10567429 = sum of:
            0.10567429 = weight(_text_:x in 507) [ClassicSimilarity], result of:
              0.10567429 = score(doc=507,freq=2.0), product of:
                0.18875335 = queryWeight, product of:
                  4.2226825 = idf(docFreq=1761, maxDocs=44218)
                  0.04469987 = queryNorm
                0.55985385 = fieldWeight in 507, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.2226825 = idf(docFreq=1761, maxDocs=44218)
                  0.09375 = fieldNorm(doc=507)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  6. Zhang, X.: Rough set theory based automatic text categorization (2005) 0.02
    0.016605115 = product of:
      0.03321023 = sum of:
        0.03321023 = product of:
          0.09963068 = sum of:
            0.09963068 = weight(_text_:x in 2822) [ClassicSimilarity], result of:
              0.09963068 = score(doc=2822,freq=4.0), product of:
                0.18875335 = queryWeight, product of:
                  4.2226825 = idf(docFreq=1761, maxDocs=44218)
                  0.04469987 = queryNorm
                0.5278353 = fieldWeight in 2822, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.2226825 = idf(docFreq=1761, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2822)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Isbn
    3-8206-0149-X
  7. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.01
    0.012112431 = product of:
      0.024224862 = sum of:
        0.024224862 = product of:
          0.07267459 = sum of:
            0.07267459 = weight(_text_:22 in 1046) [ClassicSimilarity], result of:
              0.07267459 = score(doc=1046,freq=2.0), product of:
                0.15653133 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04469987 = queryNorm
                0.46428138 = fieldWeight in 1046, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1046)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    5. 5.2003 14:17:22
  8. Chan, L.M.; Lin, X.; Zeng, M.: Structural and multilingual approaches to subject access on the Web (1999) 0.01
    0.011741589 = product of:
      0.023483178 = sum of:
        0.023483178 = product of:
          0.07044953 = sum of:
            0.07044953 = weight(_text_:x in 162) [ClassicSimilarity], result of:
              0.07044953 = score(doc=162,freq=2.0), product of:
                0.18875335 = queryWeight, product of:
                  4.2226825 = idf(docFreq=1761, maxDocs=44218)
                  0.04469987 = queryNorm
                0.3732359 = fieldWeight in 162, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.2226825 = idf(docFreq=1761, maxDocs=44218)
                  0.0625 = fieldNorm(doc=162)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  9. Jersek, T.: Automatische DDC-Klassifizierung mit Lingo : Vorgehensweise und Ergebnisse (2012) 0.01
    0.011741589 = product of:
      0.023483178 = sum of:
        0.023483178 = product of:
          0.07044953 = sum of:
            0.07044953 = weight(_text_:x in 122) [ClassicSimilarity], result of:
              0.07044953 = score(doc=122,freq=2.0), product of:
                0.18875335 = queryWeight, product of:
                  4.2226825 = idf(docFreq=1761, maxDocs=44218)
                  0.04469987 = queryNorm
                0.3732359 = fieldWeight in 122, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.2226825 = idf(docFreq=1761, maxDocs=44218)
                  0.0625 = fieldNorm(doc=122)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Type
    x
  10. Chung, Y.M.; Lee, J.Y.: ¬A corpus-based approach to comparative evaluation of statistical term association measures (2001) 0.01
    0.010378197 = product of:
      0.020756394 = sum of:
        0.020756394 = product of:
          0.062269177 = sum of:
            0.062269177 = weight(_text_:x in 5769) [ClassicSimilarity], result of:
              0.062269177 = score(doc=5769,freq=4.0), product of:
                0.18875335 = queryWeight, product of:
                  4.2226825 = idf(docFreq=1761, maxDocs=44218)
                  0.04469987 = queryNorm
                0.32989708 = fieldWeight in 5769, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.2226825 = idf(docFreq=1761, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5769)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    Statistical association measures have been widely applied in information retrieval research, usually employing a clustering of documents or terms on the basis of their relationships. Applications of the association measures for term clustering include automatic thesaurus construction and query expansion. This research evaluates the similarity of six association measures by comparing the relationship and behavior they demonstrate in various analyses of a test corpus. Analysis techniques include comparisons of highly ranked term pairs and term clusters, analyses of the correlation among the association measures using Pearson's correlation coefficient and MDS mapping, and an analysis of the impact of term frequency on the association values by means of z-score. The major findings of the study are as follows: First, the most similar association measures are mutual information and Yule's coefficient of colligation Y, whereas cosine and Jaccard coefficients, as well as the X² statistic and likelihood ratio, demonstrate quite similar behavior for terms with high frequency. Second, among all the measures, the X² statistic is the least affected by the frequency of terms. Third, although cosine and Jaccard coefficients tend to emphasize high-frequency terms, mutual information and Yule's Y seem to overestimate rare terms.
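  All six measures named above can be computed from a 2x2 co-occurrence table. A minimal sketch of the standard formulas (reconstructed from the measure names, not taken from the study's code):

    import math

    def association_measures(a, b, c, d):
        # 2x2 table over n documents: a = both terms co-occur,
        # b = only term 1, c = only term 2, d = neither.
        n = a + b + c + d
        chi2 = n * (a * d - b * c) ** 2 / (
            (a + b) * (c + d) * (a + c) * (b + d))
        # Log-likelihood ratio: G2 = 2 * sum O * ln(O / E), with E = row*col/n
        g2 = 2 * sum(obs * math.log(obs * n / (row * col))
                     for obs, row, col in [(a, a + b, a + c), (b, a + b, b + d),
                                           (c, c + d, a + c), (d, c + d, b + d)]
                     if obs > 0)
        return {
            "mutual_information": math.log(a * n / ((a + b) * (a + c))),
            "yules_y": (math.sqrt(a * d) - math.sqrt(b * c))
                       / (math.sqrt(a * d) + math.sqrt(b * c)),
            "cosine": a / math.sqrt((a + b) * (a + c)),
            "jaccard": a / (a + b + c),
            "chi_square": chi2,
            "log_likelihood": g2,
        }

    print(association_measures(a=30, b=20, c=10, d=940))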
  11. Wätjen, H.-J.; Diekmann, B.; Möller, G.; Carstensen, K.-U.: Bericht zum DFG-Projekt: GERHARD : German Harvest Automated Retrieval and Directory (1998) 0.01
    0.010332206 = product of:
      0.020664413 = sum of:
        0.020664413 = product of:
          0.061993234 = sum of:
            0.061993234 = weight(_text_:b in 3065) [ClassicSimilarity], result of:
              0.061993234 = score(doc=3065,freq=2.0), product of:
                0.15836994 = queryWeight, product of:
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.04469987 = queryNorm
                0.3914457 = fieldWeight in 3065, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3065)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  12. Shen, D.; Chen, Z.; Yang, Q.; Zeng, H.J.; Zhang, B.; Lu, Y.; Ma, W.Y.: Web page classification through summarization (2004) 0.01
    0.010332206 = product of:
      0.020664413 = sum of:
        0.020664413 = product of:
          0.061993234 = sum of:
            0.061993234 = weight(_text_:b in 4132) [ClassicSimilarity], result of:
              0.061993234 = score(doc=4132,freq=2.0), product of:
                0.15836994 = queryWeight, product of:
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.04469987 = queryNorm
                0.3914457 = fieldWeight in 4132, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4132)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  13. Walther, R.: Möglichkeiten und Grenzen automatischer Klassifikationen von Web-Dokumenten (2001) 0.01
    0.01027389 = product of:
      0.02054778 = sum of:
        0.02054778 = product of:
          0.061643336 = sum of:
            0.061643336 = weight(_text_:x in 1562) [ClassicSimilarity], result of:
              0.061643336 = score(doc=1562,freq=2.0), product of:
                0.18875335 = queryWeight, product of:
                  4.2226825 = idf(docFreq=1761, maxDocs=44218)
                  0.04469987 = queryNorm
                0.32658142 = fieldWeight in 1562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.2226825 = idf(docFreq=1761, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1562)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Type
    x
  14. Yang, Y.; Liu, X.: ¬A re-examination of text categorization methods (1999) 0.01
    0.01027389 = product of:
      0.02054778 = sum of:
        0.02054778 = product of:
          0.061643336 = sum of:
            0.061643336 = weight(_text_:x in 3386) [ClassicSimilarity], result of:
              0.061643336 = score(doc=3386,freq=2.0), product of:
                0.18875335 = queryWeight, product of:
                  4.2226825 = idf(docFreq=1761, maxDocs=44218)
                  0.04469987 = queryNorm
                0.32658142 = fieldWeight in 3386, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.2226825 = idf(docFreq=1761, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3386)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  15. Wille, J.: Automatisches Klassifizieren bibliographischer Beschreibungsdaten : Vorgehensweise und Ergebnisse (2006) 0.01
    0.01027389 = product of:
      0.02054778 = sum of:
        0.02054778 = product of:
          0.061643336 = sum of:
            0.061643336 = weight(_text_:x in 6090) [ClassicSimilarity], result of:
              0.061643336 = score(doc=6090,freq=2.0), product of:
                0.18875335 = queryWeight, product of:
                  4.2226825 = idf(docFreq=1761, maxDocs=44218)
                  0.04469987 = queryNorm
                0.32658142 = fieldWeight in 6090, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.2226825 = idf(docFreq=1761, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6090)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Type
    x
  16. Hu, G.; Zhou, S.; Guan, J.; Hu, X.: Towards effective document clustering : a constrained K-means based approach (2008) 0.01
    0.01027389 = product of:
      0.02054778 = sum of:
        0.02054778 = product of:
          0.061643336 = sum of:
            0.061643336 = weight(_text_:x in 2113) [ClassicSimilarity], result of:
              0.061643336 = score(doc=2113,freq=2.0), product of:
                0.18875335 = queryWeight, product of:
                  4.2226825 = idf(docFreq=1761, maxDocs=44218)
                  0.04469987 = queryNorm
                0.32658142 = fieldWeight in 2113, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.2226825 = idf(docFreq=1761, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2113)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  17. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.01
    0.010093693 = product of:
      0.020187385 = sum of:
        0.020187385 = product of:
          0.060562156 = sum of:
            0.060562156 = weight(_text_:22 in 611) [ClassicSimilarity], result of:
              0.060562156 = score(doc=611,freq=2.0), product of:
                0.15653133 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04469987 = queryNorm
                0.38690117 = fieldWeight in 611, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=611)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    22. 8.2009 12:54:24
  18. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.01
    0.010093693 = product of:
      0.020187385 = sum of:
        0.020187385 = product of:
          0.060562156 = sum of:
            0.060562156 = weight(_text_:22 in 2748) [ClassicSimilarity], result of:
              0.060562156 = score(doc=2748,freq=2.0), product of:
                0.15653133 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04469987 = queryNorm
                0.38690117 = fieldWeight in 2748, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2748)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    1. 2.2016 18:25:22
  19. Pfeffer, M.: Automatische Vergabe von RVK-Notationen anhand von bibliografischen Daten mittels fallbasiertem Schließen (2007) 0.01
    0.008806191 = product of:
      0.017612383 = sum of:
        0.017612383 = product of:
          0.052837145 = sum of:
            0.052837145 = weight(_text_:x in 558) [ClassicSimilarity], result of:
              0.052837145 = score(doc=558,freq=2.0), product of:
                0.18875335 = queryWeight, product of:
                  4.2226825 = idf(docFreq=1761, maxDocs=44218)
                  0.04469987 = queryNorm
                0.27992693 = fieldWeight in 558, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.2226825 = idf(docFreq=1761, maxDocs=44218)
                  0.046875 = fieldNorm(doc=558)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Type
    x
  20. Hagedorn, K.; Chapman, S.; Newman, D.: Enhancing search and browse using automated clustering of subject metadata (2007) 0.01
    0.008806191 = product of:
      0.017612383 = sum of:
        0.017612383 = product of:
          0.052837145 = sum of:
            0.052837145 = weight(_text_:x in 1168) [ClassicSimilarity], result of:
              0.052837145 = score(doc=1168,freq=2.0), product of:
                0.18875335 = queryWeight, product of:
                  4.2226825 = idf(docFreq=1761, maxDocs=44218)
                  0.04469987 = queryNorm
                0.27992693 = fieldWeight in 1168, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.2226825 = idf(docFreq=1761, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1168)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    D-Lib magazine. 13(2007) nos.7/8, x S

Languages

  • e 36
  • d 16

Types

  • a 34
  • el 9
  • x 9
  • m 2
  • r 2