Search (81 results, page 1 of 5)

  • theme_ss:"Automatisches Klassifizieren"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.10
    0.104134515 = sum of:
      0.08291535 = product of:
        0.24874605 = sum of:
          0.24874605 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.24874605 = score(doc=562,freq=2.0), product of:
              0.44259444 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.052204985 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.021219164 = product of:
        0.04243833 = sum of:
          0.04243833 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.04243833 = score(doc=562,freq=2.0), product of:
              0.18281296 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052204985 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
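    Note
    The indented breakdowns above and below are Lucene explain() output for the ClassicSimilarity (tf-idf) ranking model; the term "3a" apparently stems from the percent-encoded URL in this record's Content field. As a sanity check, the first clause of this record's score can be recomputed from the standard ClassicSimilarity formulas, tf(freq) = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)). A minimal Python sketch (variable names are mine; the values are copied from the explanation above):

      import math

      # Quantities copied from the explanation for weight(_text_:3a in doc 562).
      freq = 2.0                  # termFreq within the field
      doc_freq = 24               # docFreq: documents containing the term
      max_docs = 44218            # maxDocs: documents in the index
      query_norm = 0.052204985
      field_norm = 0.046875

      tf = math.sqrt(freq)                               # 1.4142135
      idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # 8.478011
      query_weight = idf * query_norm                    # 0.44259444
      field_weight = tf * idf * field_norm               # 0.56201804
      score = query_weight * field_weight                # 0.24874605

      # The clause is then scaled by coord(1/3) because only one of three
      # query terms matched: 0.24874605 / 3 = 0.08291535, as shown above.
      print(score, score / 3)

    Every other breakdown in this list follows the same pattern; only freq, docFreq, fieldNorm, and the coord factors change from record to record.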
  2. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.06
    0.06175369 = product of:
      0.12350738 = sum of:
        0.12350738 = sum of:
          0.052776836 = weight(_text_:retrieval in 611) [ClassicSimilarity], result of:
            0.052776836 = score(doc=611,freq=2.0), product of:
              0.15791564 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.052204985 = queryNorm
              0.33420905 = fieldWeight in 611, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.078125 = fieldNorm(doc=611)
          0.070730545 = weight(_text_:22 in 611) [ClassicSimilarity], result of:
            0.070730545 = score(doc=611,freq=2.0), product of:
              0.18281296 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052204985 = queryNorm
              0.38690117 = fieldWeight in 611, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=611)
      0.5 = coord(1/2)
    
    Date
    22. 8.2009 12:54:24
    Theme
    Klassifikationssysteme im Online-Retrieval
  3. Mengle, S.; Goharian, N.: Passage detection using text classification (2009) 0.05
    0.047185786 = product of:
      0.09437157 = sum of:
        0.09437157 = sum of:
          0.059006296 = weight(_text_:retrieval in 2765) [ClassicSimilarity], result of:
            0.059006296 = score(doc=2765,freq=10.0), product of:
              0.15791564 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.052204985 = queryNorm
              0.37365708 = fieldWeight in 2765, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2765)
          0.035365272 = weight(_text_:22 in 2765) [ClassicSimilarity], result of:
            0.035365272 = score(doc=2765,freq=2.0), product of:
              0.18281296 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052204985 = queryNorm
              0.19345059 = fieldWeight in 2765, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2765)
      0.5 = coord(1/2)
    
    Abstract
    Passages can be hidden within a text to circumvent their disallowed transfer. Such release of compartmentalized information is of concern to all corporate and governmental organizations. Passage retrieval is well studied; we posit, however, that passage detection is not. Passage retrieval is the determination of the degree of relevance of blocks of text, namely passages, comprising a document. Rather than determining the relevance of a document in its entirety, passage retrieval determines the relevance of the individual passages. As such, modified traditional information-retrieval techniques compare terms found in user queries with the individual passages to determine a similarity score for passages of interest. In passage detection, passages are classified into predetermined categories. More often than not, passage detection techniques are deployed to detect hidden paragraphs in documents; that is, to hide information, hidden text is injected into passages of a document. Rather than matching query terms against passages to determine their relevance, the passages are classified using text-mining techniques. Documents with hidden passages are defined as infected. Thus, simply stated, passage retrieval is the search for passages relevant to a user query, while passage detection is the classification of passages; that is, in passage detection, passages are labeled with one or more categories from a set of predetermined categories. We present a keyword-based dynamic passage approach (KDP) and demonstrate that KDP statistically significantly (99% confidence) outperforms the other document-splitting approaches by 12% to 18% on the passage-detection and passage category-prediction tasks. Furthermore, we evaluate the effects of feature selection, passage length, ambiguous passages, and training-data category distribution on passage-detection accuracy.
    Date
    22. 3.2009 19:14:43
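    Note
    The abstract distinguishes passage retrieval (scoring passages against a query) from passage detection (classifying passages into categories). A minimal sketch of the detection side, assuming a naive fixed-window document splitter rather than the authors' keyword-based dynamic passages (KDP), with scikit-learn as a stand-in classifier:

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import LinearSVC

      def split_into_passages(text, size=5, stride=5):
          # Naive fixed-window splitting; the paper's KDP is keyword-based.
          words = text.split()
          return [" ".join(words[i:i + size])
                  for i in range(0, max(len(words) - size + 1, 1), stride)]

      # Toy training passages labeled with predetermined categories.
      train_texts = ["stock market shares trading", "soccer match goal league",
                     "court verdict law appeal", "virus vaccine clinical trial"]
      train_labels = ["finance", "sports", "law", "medicine"]

      clf = make_pipeline(TfidfVectorizer(), LinearSVC())
      clf.fit(train_texts, train_labels)

      # Detection: label every passage; a passage predicted to belong to a
      # disallowed category would flag the whole document as infected.
      document = "the soccer league match report hidden shares trading tip here"
      for passage in split_into_passages(document):
          print(passage, "->", clf.predict([passage])[0])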
  4. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.04
    0.043227583 = product of:
      0.08645517 = sum of:
        0.08645517 = sum of:
          0.036943786 = weight(_text_:retrieval in 1673) [ClassicSimilarity], result of:
            0.036943786 = score(doc=1673,freq=2.0), product of:
              0.15791564 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.052204985 = queryNorm
              0.23394634 = fieldWeight in 1673, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1673)
          0.04951138 = weight(_text_:22 in 1673) [ClassicSimilarity], result of:
            0.04951138 = score(doc=1673,freq=2.0), product of:
              0.18281296 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052204985 = queryNorm
              0.2708308 = fieldWeight in 1673, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1673)
      0.5 = coord(1/2)
    
    Date
    1. 8.1996 22:08:06
    Theme
    Klassifikationssysteme im Online-Retrieval
  5. Liu, R.-L.: ¬A passage extractor for classification of disease aspect information (2013) 0.03
    0.030876845 = product of:
      0.06175369 = sum of:
        0.06175369 = sum of:
          0.026388418 = weight(_text_:retrieval in 1107) [ClassicSimilarity], result of:
            0.026388418 = score(doc=1107,freq=2.0), product of:
              0.15791564 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.052204985 = queryNorm
              0.16710453 = fieldWeight in 1107, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1107)
          0.035365272 = weight(_text_:22 in 1107) [ClassicSimilarity], result of:
            0.035365272 = score(doc=1107,freq=2.0), product of:
              0.18281296 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052204985 = queryNorm
              0.19345059 = fieldWeight in 1107, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1107)
      0.5 = coord(1/2)
    
    Abstract
    Retrieval of disease information is often based on several key aspects such as etiology, diagnosis, treatment, prevention, and symptoms of diseases. Automatic identification of disease aspect information is thus essential. In this article, I model the aspect identification problem as a text classification (TC) problem in which a disease aspect corresponds to a category. The disease aspect classification problem poses two challenges to classifiers: (a) a medical text often contains information about multiple aspects of a disease and hence produces noise for the classifiers and (b) text classifiers often cannot extract the textual parts (i.e., passages) about the categories of interest. I thus develop a technique, PETC (Passage Extractor for Text Classification), that extracts passages (from medical texts) for the underlying text classifiers to classify. Case studies on thousands of Chinese and English medical texts show that PETC enhances a support vector machine (SVM) classifier in classifying disease aspect information. PETC also performs better than three state-of-the-art classifier enhancement techniques, including two passage extraction techniques for text classifiers and a technique that employs term proximity information to enhance text classifiers. The contribution is of significance to evidence-based medicine, health education, and healthcare decision support. PETC can be used in those application domains in which a text to be classified may have several parts about different categories.
    Date
    28.10.2013 19:22:57
  6. Schiminovich, S.: Automatic classification and retrieval of documents by means of a bibliographic pattern discovery algorithm (1971) 0.03
    0.0261232 = product of:
      0.0522464 = sum of:
        0.0522464 = product of:
          0.1044928 = sum of:
            0.1044928 = weight(_text_:retrieval in 4846) [ClassicSimilarity], result of:
              0.1044928 = score(doc=4846,freq=4.0), product of:
                0.15791564 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.052204985 = queryNorm
                0.6617001 = fieldWeight in 4846, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4846)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Information storage and retrieval. 6(1971), S.417-435
  7. Reiner, U.: Automatische DDC-Klassifizierung bibliografischer Titeldatensätze der Deutschen Nationalbibliografie (2009) 0.02
    0.024701476 = product of:
      0.049402952 = sum of:
        0.049402952 = sum of:
          0.021110734 = weight(_text_:retrieval in 3284) [ClassicSimilarity], result of:
            0.021110734 = score(doc=3284,freq=2.0), product of:
              0.15791564 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.052204985 = queryNorm
              0.13368362 = fieldWeight in 3284, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.03125 = fieldNorm(doc=3284)
          0.028292218 = weight(_text_:22 in 3284) [ClassicSimilarity], result of:
            0.028292218 = score(doc=3284,freq=2.0), product of:
              0.18281296 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052204985 = queryNorm
              0.15476047 = fieldWeight in 3284, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=3284)
      0.5 = coord(1/2)
    
    Abstract
    At least since the advent of the World Wide Web, the number of publications to be classified has been growing faster than they can be subject-indexed intellectually. Methods are therefore sought to automate the classification of text objects, or at least to support intellectual classification. Methods for automatic document classification (information retrieval, IR) have existed since 1968, and methods for automatic text classification (ATC: Automated Text Categorization) since 1992. As more and more digital objects have become available on the World Wide Web, work on automatic text classification has increased markedly since about 1998. Since 1996 this has also included work on the automatic DDC and RVK classification of bibliographic title records and full-text documents. To our knowledge, these developments have so far been experimental systems rather than systems in continuous production use. The VZG project Colibri/DDC has also been concerned with automatic DDC classification, among other things, since 2006. The related investigations and developments serve to answer the research question: "Is it possible to automatically achieve a substantively sound DDC title classification of all GVK-PLUS title records?"
    Date
    22. 1.2010 14:41:24
  8. Panyr, J.: Automatische Klassifikation und Information Retrieval : Anwendung und Entwicklung komplexer Verfahren in Information-Retrieval-Systemen und ihre Evaluierung (1986) 0.02
    0.022391316 = product of:
      0.04478263 = sum of:
        0.04478263 = product of:
          0.08956526 = sum of:
            0.08956526 = weight(_text_:retrieval in 32) [ClassicSimilarity], result of:
              0.08956526 = score(doc=32,freq=4.0), product of:
                0.15791564 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.052204985 = queryNorm
                0.5671716 = fieldWeight in 32, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.09375 = fieldNorm(doc=32)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  9. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.02
    0.021219164 = product of:
      0.04243833 = sum of:
        0.04243833 = product of:
          0.08487666 = sum of:
            0.08487666 = weight(_text_:22 in 1046) [ClassicSimilarity], result of:
              0.08487666 = score(doc=1046,freq=2.0), product of:
                0.18281296 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052204985 = queryNorm
                0.46428138 = fieldWeight in 1046, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1046)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    5. 5.2003 14:17:22
  10. Rijsbergen, C.J. van: Automatic classification in information retrieval (1978) 0.02
    0.021110734 = product of:
      0.042221468 = sum of:
        0.042221468 = product of:
          0.084442936 = sum of:
            0.084442936 = weight(_text_:retrieval in 2412) [ClassicSimilarity], result of:
              0.084442936 = score(doc=2412,freq=2.0), product of:
                0.15791564 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.052204985 = queryNorm
                0.5347345 = fieldWeight in 2412, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.125 = fieldNorm(doc=2412)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  11. Wätjen, H.-J.; Diekmann, B.; Möller, G.; Carstensen, K.-U.: Bericht zum DFG-Projekt: GERHARD : German Harvest Automated Retrieval and Directory (1998) 0.02
    0.018659428 = product of:
      0.037318856 = sum of:
        0.037318856 = product of:
          0.07463771 = sum of:
            0.07463771 = weight(_text_:retrieval in 3065) [ClassicSimilarity], result of:
              0.07463771 = score(doc=3065,freq=4.0), product of:
                0.15791564 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.052204985 = queryNorm
                0.47264296 = fieldWeight in 3065, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3065)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Theme
    Klassifikationssysteme im Online-Retrieval
  12. Wu, M.; Fuller, M.; Wilkinson, R.: Using clustering and classification approaches in interactive retrieval (2001) 0.02
    0.018471893 = product of:
      0.036943786 = sum of:
        0.036943786 = product of:
          0.07388757 = sum of:
            0.07388757 = weight(_text_:retrieval in 2666) [ClassicSimilarity], result of:
              0.07388757 = score(doc=2666,freq=2.0), product of:
                0.15791564 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.052204985 = queryNorm
                0.46789268 = fieldWeight in 2666, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2666)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  13. Panyr, J.: Vektorraum-Modell und Clusteranalyse in Information-Retrieval-Systemen (1987) 0.02
    0.018282432 = product of:
      0.036564864 = sum of:
        0.036564864 = product of:
          0.07312973 = sum of:
            0.07312973 = weight(_text_:retrieval in 2322) [ClassicSimilarity], result of:
              0.07312973 = score(doc=2322,freq=6.0), product of:
                0.15791564 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.052204985 = queryNorm
                0.46309367 = fieldWeight in 2322, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2322)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Starting from theoretical approaches to indexing, the classical vector space model for automatic indexing (together with the term discrimination model) is explained. Clustering in information retrieval systems is regarded as a natural logical consequence of this model and is treated in all its variants (i.e., as document, term, or combined document-and-term classification). Search strategies in pre-classified document collections (cluster search) are then described in detail. Finally, the sensible application of cluster analysis in information retrieval systems is briefly discussed.
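    Note
    The cluster search described above (match the query against cluster representatives first, then rank only the documents of the best cluster) can be sketched briefly; tf-idf and k-means stand in for whichever vector space weighting and classification an actual system would use:

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      docs = ["automatic indexing of documents", "vector space model weighting",
              "soccer league results", "football match report",
              "court ruling on appeal", "legal verdict analysis"]

      vec = TfidfVectorizer()
      X = vec.fit_transform(docs)
      km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

      def cluster_search(query, top_k=2):
          q = vec.transform([query])
          # Step 1: route the query to the most similar cluster centroid.
          best = int(cosine_similarity(q, km.cluster_centers_).argmax())
          # Step 2: rank only the documents inside that cluster.
          idx = np.where(km.labels_ == best)[0]
          sims = cosine_similarity(q, X[idx]).ravel()
          order = sims.argsort()[::-1][:top_k]
          return [(docs[idx[i]], float(sims[i])) for i in order]

      print(cluster_search("vector model for indexing"))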
  14. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.02
    0.017682636 = product of:
      0.035365272 = sum of:
        0.035365272 = product of:
          0.070730545 = sum of:
            0.070730545 = weight(_text_:22 in 2748) [ClassicSimilarity], result of:
              0.070730545 = score(doc=2748,freq=2.0), product of:
                0.18281296 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052204985 = queryNorm
                0.38690117 = fieldWeight in 2748, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2748)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 2.2016 18:25:22
  15. GERHARD : eine Spezialsuchmaschine für die Wissenschaft (1998) 0.02
    0.01583305 = product of:
      0.0316661 = sum of:
        0.0316661 = product of:
          0.0633322 = sum of:
            0.0633322 = weight(_text_:retrieval in 381) [ClassicSimilarity], result of:
              0.0633322 = score(doc=381,freq=2.0), product of:
                0.15791564 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.052204985 = queryNorm
                0.40105087 = fieldWeight in 381, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.09375 = fieldNorm(doc=381)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Theme
    Klassifikationssysteme im Online-Retrieval
  16. Yu, W.; Gong, Y.: Document clustering by concept factorization (2004) 0.02
    0.01583305 = product of:
      0.0316661 = sum of:
        0.0316661 = product of:
          0.0633322 = sum of:
            0.0633322 = weight(_text_:retrieval in 4084) [ClassicSimilarity], result of:
              0.0633322 = score(doc=4084,freq=2.0), product of:
                0.15791564 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.052204985 = queryNorm
                0.40105087 = fieldWeight in 4084, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4084)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    SIGIR'04: Proceedings of the 27th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval. Ed.: K. Järvelin et al.
  17. Ko, Y.: ¬A new term-weighting scheme for text classification using the odds of positive and negative class probabilities (2015) 0.02
    0.01583305 = product of:
      0.0316661 = sum of:
        0.0316661 = product of:
          0.0633322 = sum of:
            0.0633322 = weight(_text_:retrieval in 2339) [ClassicSimilarity], result of:
              0.0633322 = score(doc=2339,freq=8.0), product of:
                0.15791564 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.052204985 = queryNorm
                0.40105087 = fieldWeight in 2339, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2339)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Text classification (TC) is a core technique for text mining and information retrieval. It has been applied in many different research and industrial areas. Term-weighting schemes assign an appropriate weight to each term to obtain high TC performance. Although term weighting is one of the important modules for TC, and TC has peculiarities that differ from information retrieval, many term-weighting schemes used in information retrieval, such as term frequency-inverse document frequency (tf-idf), have been used in TC in the same manner. The peculiarity of TC that differs most from information retrieval is the existence of class information. This article proposes a new term-weighting scheme that exploits class information through the positive and negative class distributions. As a result, the proposed scheme, log tf-TRR, consistently performs better than other schemes that use class information, as well as traditional schemes such as tf-idf.
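    Note
    The abstract does not give the exact log tf-TRR formula. The sketch below shows a generic class-odds term weight of the kind described (a log tf component multiplied by the log-odds of a term's smoothed positive and negative class probabilities); the specific smoothing and the exact combination are assumptions, not the paper's definition:

      import math
      from collections import Counter

      pos_docs = [["cheap", "pills", "offer"], ["offer", "win", "prize"]]
      neg_docs = [["meeting", "agenda", "notes"], ["project", "notes", "offer"]]

      def doc_freq(docs):
          df = Counter()
          for d in docs:
              df.update(set(d))   # count each term once per document
          return df

      df_pos, df_neg = doc_freq(pos_docs), doc_freq(neg_docs)

      def class_odds_weight(term, tf):
          # Assumed form: log(1 + tf) * log(P(t|pos) / P(t|neg)), with
          # add-0.5 smoothing; not necessarily the paper's log tf-TRR.
          p_pos = (df_pos[term] + 0.5) / (len(pos_docs) + 1.0)
          p_neg = (df_neg[term] + 0.5) / (len(neg_docs) + 1.0)
          return math.log(1.0 + tf) * math.log(p_pos / p_neg)

      for term in ["offer", "win", "notes"]:
          print(term, round(class_odds_weight(term, tf=1), 3))

    Terms frequent only in the positive class ("win") get positive weights, terms frequent only in the negative class ("notes") get negative weights, and terms balanced across classes ("offer") stay near zero.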
  18. Ingwersen, P.; Wormell, I.: Ranganathan in the perspective of advanced information retrieval (1992) 0.01
    0.014927543 = product of:
      0.029855086 = sum of:
        0.029855086 = product of:
          0.05971017 = sum of:
            0.05971017 = weight(_text_:retrieval in 7695) [ClassicSimilarity], result of:
              0.05971017 = score(doc=7695,freq=4.0), product of:
                0.15791564 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.052204985 = queryNorm
                0.37811437 = fieldWeight in 7695, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7695)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Examines Ranganathan's approach to knowledge organisation and its relevance to intellectual accessibility in libraries. Discusses the current and future developments of his methodology and theories in knowledge-based systems. Topics covered include: semi-automatic classification and structure of thesauri; user-intermediary interactions in information retrieval (IR); semantic value-theory and uncertainty principles in IR; and case grammar.
  19. Guerrero-Bote, V.P.; Moya Anegón, F. de; Herrero Solana, V.: Document organization using Kohonen's algorithm (2002) 0.01
    0.014927543 = product of:
      0.029855086 = sum of:
        0.029855086 = product of:
          0.05971017 = sum of:
            0.05971017 = weight(_text_:retrieval in 2564) [ClassicSimilarity], result of:
              0.05971017 = score(doc=2564,freq=4.0), product of:
                0.15791564 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.052204985 = queryNorm
                0.37811437 = fieldWeight in 2564, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2564)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The classification of documents from a bibliographic database is a task that is linked to processes of information retrieval based on partial matching. A method is described for vectorizing reference documents from LISA that permits their topological organization using Kohonen's algorithm. As an example, a map of 202 documents from LISA is generated, and the possibilities of this type of neural network for the development of information retrieval systems based on graphical browsing are analyzed.
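    Note
    Kohonen's algorithm (a self-organizing map, SOM) places similar input vectors on nearby nodes of a fixed grid. A minimal sketch of the training loop, with random vectors standing in for the vectorized LISA references:

      import numpy as np

      rng = np.random.default_rng(0)
      docs = rng.random((202, 20))      # stand-ins for 202 document vectors
      grid_w, grid_h, dim = 10, 10, 20
      weights = rng.random((grid_w * grid_h, dim))
      coords = np.array([(x, y) for x in range(grid_w) for y in range(grid_h)])

      epochs = 50
      for epoch in range(epochs):
          lr = 0.5 * (1 - epoch / epochs)            # decaying learning rate
          radius = 1.0 + 5.0 * (1 - epoch / epochs)  # shrinking neighborhood
          for v in docs:
              bmu = np.argmin(((weights - v) ** 2).sum(axis=1))  # best unit
              d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)     # grid distance
              h = np.exp(-d2 / (2 * radius ** 2))    # neighborhood function
              weights += lr * h[:, None] * (v - weights)

      # Each document is mapped to its best matching node; nearby nodes end
      # up holding topically similar documents.
      nodes = [int(np.argmin(((weights - v) ** 2).sum(axis=1))) for v in docs]
      print(nodes[:10])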
  20. Koch, T.; Ardö, A.; Brümmer, A.: ¬The building and maintenance of robot based internet search services : A review of current indexing and data collection methods. Prepared to meet the requirements of Work Package 3 of EU Telematics for Research, project DESIRE. Version D3.11v0.3 (Draft version 3) (1996) 0.01
    0.013963438 = product of:
      0.027926875 = sum of:
        0.027926875 = product of:
          0.05585375 = sum of:
            0.05585375 = weight(_text_:retrieval in 1669) [ClassicSimilarity], result of:
              0.05585375 = score(doc=1669,freq=14.0), product of:
                0.15791564 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.052204985 = queryNorm
                0.3536936 = fieldWeight in 1669, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1669)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    After a short outline of the problems, possibilities, and difficulties of systematic information retrieval on the Internet, and a description of development efforts in this area, a specification of the terminology used in this report is required. Although the process of retrieval is generally seen as an iterative process of browsing and information retrieval, and several important services on the net have taken this fact into consideration, the emphasis of this report lies on the general retrieval tools for the whole of the Internet. In order to evaluate the differences, possibilities, and restrictions of the different services, it is necessary to begin by organizing the existing varieties into a typological/taxonomical survey. The possibilities and weaknesses of the most important services are briefly compared and described in the categories of robot-based WWW catalogues of various types, list- or form-based catalogues, and simultaneous or collected search services, respectively. For various reasons, however, it will not be possible to rank them in order of "best" services. Still more important are the weaknesses and problems common to all attempts at indexing the Internet. The problems of input quality, technical performance, and the general problem of indexing virtual hypertext are shown to be at least as difficult as the different aspects of harvesting, indexing, and information retrieval. Some of the attempts made in the area of further development of retrieval services are mentioned in relation to descriptions of document contents and standardization efforts. Internet harvesting and indexing technology and retrieval software are thoroughly reviewed. Details about all services and software are listed in analytical forms in Annexes 1-3.

Languages

  • e 61
  • d 19
  • chi 1

Types

  • a 67
  • el 15
  • m 2
  • r 2
  • x 2
  • d 1