Search (76 results, page 1 of 4)

  • theme_ss:"Automatisches Klassifizieren"
  1. Rijsbergen, C.J. van: Automatic classification in information retrieval (1978) 0.06
    0.060381293 = product of:
      0.24152517 = sum of:
        0.24152517 = weight(_text_:van in 2412) [ClassicSimilarity], result of:
          0.24152517 = score(doc=2412,freq=2.0), product of:
            0.24500148 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.043933928 = queryNorm
            0.98581105 = fieldWeight in 2412, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.125 = fieldNorm(doc=2412)
      0.25 = coord(1/4)
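    The tree above is Lucene's ClassicSimilarity explain output, and every number in it follows from the factors shown. As a minimal sketch in Python (all values copied from the tree; the variable names are ours, and fieldNorm is taken as given because Lucene stores it as a lossily encoded byte):

        # Recomputing the explain tree of result 1 (Lucene ClassicSimilarity).
        import math

        tf = math.sqrt(2.0)                      # tf(freq=2.0) = sqrt(freq) = 1.4142135
        idf = 1.0 + math.log(44218 / (454 + 1))  # idf(docFreq=454, maxDocs=44218) = 5.5765896
        query_norm = 0.043933928                 # fixed per query
        field_norm = 0.125                       # fieldNorm(doc=2412)

        query_weight = idf * query_norm          # 0.24500148
        field_weight = tf * idf * field_norm     # 0.98581105
        weight = query_weight * field_weight     # 0.24152517
        score = weight * (1 / 4)                 # coord(1/4): 1 of 4 query clauses matched
        print(score)                             # ~0.060381293

    Because queryNorm depends only on the query, 0.043933928 recurs in every tree on this page; only tf, idf, fieldNorm and the coord factors differ between results.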
    
  2. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.04
    0.03509568 = product of:
      0.07019136 = sum of:
        0.05233404 = product of:
          0.20933616 = sum of:
            0.20933616 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.20933616 = score(doc=562,freq=2.0), product of:
                0.37247232 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.043933928 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.25 = coord(1/4)
        0.01785732 = product of:
          0.03571464 = sum of:
            0.03571464 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.03571464 = score(doc=562,freq=2.0), product of:
                0.15384912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043933928 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
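    Result 2 shows how several matching clauses are combined: each clause is first scaled by its own coord factor, the scaled values are summed, and the sum is scaled by the outer coord(2/4) because two of the four query clauses matched. A quick check of the numbers above (all values copied from the tree):

        # Combining the two clauses of result 2.
        clause_3a = 0.20933616 * 0.25         # weight(_text_:3a in 562) * coord(1/4)
        clause_22 = 0.03571464 * 0.5          # weight(_text_:22 in 562) * coord(1/2)
        score = (clause_3a + clause_22) * 0.5 # outer coord(2/4)
        print(score)                          # ~0.03509568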
    
    Content
     Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  3. Pfister, J.: Clustering von Patent-Dokumenten am Beispiel der Datenbanken des Fachinformationszentrums Karlsruhe (2006) 0.03
    0.034967713 = product of:
      0.069935426 = sum of:
        0.04620441 = weight(_text_:c in 5976) [ClassicSimilarity], result of:
          0.04620441 = score(doc=5976,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.3048872 = fieldWeight in 5976, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.0625 = fieldNorm(doc=5976)
        0.023731016 = product of:
          0.04746203 = sum of:
            0.04746203 = weight(_text_:der in 5976) [ClassicSimilarity], result of:
              0.04746203 = score(doc=5976,freq=12.0), product of:
                0.098138146 = queryWeight, product of:
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.043933928 = queryNorm
                0.4836247 = fieldWeight in 5976, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5976)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
     This article, situated in the application area of patent searching and patent information, examines the automatic grouping of patent documents - so-called clustering - as a tool for organizing the result set of a database query. Its focus is the evaluation of three clustering methods by means of user assessments.
    Source
    Effektive Information Retrieval Verfahren in Theorie und Praxis: ausgewählte und erweiterte Beiträge des Vierten Hildesheimer Evaluierungs- und Retrievalworkshop (HIER 2005), Hildesheim, 20.7.2005. Hrsg.: T. Mandl u. C. Womser-Hacker
  4. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.03
    0.0306312 = product of:
      0.0612624 = sum of:
        0.04042886 = weight(_text_:c in 1673) [ClassicSimilarity], result of:
          0.04042886 = score(doc=1673,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.2667763 = fieldWeight in 1673, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1673)
        0.02083354 = product of:
          0.04166708 = sum of:
            0.04166708 = weight(_text_:22 in 1673) [ClassicSimilarity], result of:
              0.04166708 = score(doc=1673,freq=2.0), product of:
                0.15384912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043933928 = queryNorm
                0.2708308 = fieldWeight in 1673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1673)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    1. 8.1996 22:08:06
  5. Yoon, Y.; Lee, C.; Lee, G.G.: ¬An effective procedure for constructing a hierarchical text classification system (2006) 0.03
    0.0306312 = product of:
      0.0612624 = sum of:
        0.04042886 = weight(_text_:c in 5273) [ClassicSimilarity], result of:
          0.04042886 = score(doc=5273,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.2667763 = fieldWeight in 5273, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5273)
        0.02083354 = product of:
          0.04166708 = sum of:
            0.04166708 = weight(_text_:22 in 5273) [ClassicSimilarity], result of:
              0.04166708 = score(doc=5273,freq=2.0), product of:
                0.15384912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043933928 = queryNorm
                0.2708308 = fieldWeight in 5273, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5273)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    22. 7.2006 16:24:52
  6. Krüger, C.: Evaluation des WWW-Suchdienstes GERHARD unter besonderer Beachtung automatischer Indexierung (1999) 0.02
    0.021208676 = product of:
      0.04241735 = sum of:
        0.028877756 = weight(_text_:c in 1777) [ClassicSimilarity], result of:
          0.028877756 = score(doc=1777,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.1905545 = fieldWeight in 1777, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1777)
        0.013539596 = product of:
          0.027079193 = sum of:
            0.027079193 = weight(_text_:der in 1777) [ClassicSimilarity], result of:
              0.027079193 = score(doc=1777,freq=10.0), product of:
                0.098138146 = queryWeight, product of:
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.043933928 = queryNorm
                0.27592933 = fieldWeight in 1777, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1777)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
     This thesis contains a description and an evaluation of the WWW search service GERHARD (German Harvest Automated Retrieval and Directory). GERHARD is a search and navigation system for the German World Wide Web that collects exclusively scholarly documents and classifies them automatically, on the basis of computational-linguistic and statistical methods, using a library classification system. The DFG project GERHARD was an attempt to develop, with a World Wide Web service built on an automatic classification procedure, an alternative to conventional methods of indexing the Internet. GERHARD is the only directory of Internet resources in the German-speaking world whose creation and updating are fully automatic (i.e. machine-driven); it restricts itself to documents on scholarly WWW servers. The basic idea was to replace cost-intensive intellectual indexing and classification of web pages by computational-linguistic and statistical methods, and in this way to map the recorded Internet resources automatically onto the vocabulary of a library classification system. The WWW address (URL) of GERHARD is http://www.gerhard.de. This diploma thesis describes the service, with particular emphasis on the underlying indexing and classification system, and then uses a small retrieval test to check GERHARD's effectiveness.
    Footnote
     Diploma thesis in the subject of subject indexing (Inhaltliche Erschließung), Information Management degree programme, FH Stuttgart - Hochschule für Bibliotheks- und Informationswesen
  7. Bock, H.-H.: Datenanalyse zur Strukturierung und Ordnung von Information (1989) 0.02
    0.018893898 = product of:
      0.07557559 = sum of:
        0.07557559 = sum of:
          0.033908512 = weight(_text_:der in 141) [ClassicSimilarity], result of:
            0.033908512 = score(doc=141,freq=8.0), product of:
              0.098138146 = queryWeight, product of:
                2.2337668 = idf(docFreq=12875, maxDocs=44218)
                0.043933928 = queryNorm
              0.34551817 = fieldWeight in 141, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                2.2337668 = idf(docFreq=12875, maxDocs=44218)
                0.0546875 = fieldNorm(doc=141)
          0.04166708 = weight(_text_:22 in 141) [ClassicSimilarity], result of:
            0.04166708 = score(doc=141,freq=2.0), product of:
              0.15384912 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043933928 = queryNorm
              0.2708308 = fieldWeight in 141, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=141)
      0.25 = coord(1/4)
    
    Abstract
     The task of data analysis is to order data, to present them clearly, to discover hidden and natural structures, to distill the properties that are essential in this respect, and to set up suitable models for describing the data. The article gives an insight into the methods and principles of data analysis. Typical examples show which data can be analysed, which structures considered, which presentation and ordering methods used, which objectives pursued, and which evaluation criteria applied. The appropriate use of the different methods is also discussed, with attention drawn to the risk and nature of misinterpretations.
    Pages
     pp.1-22
    Source
    Klassifikation und Ordnung. Tagungsband 12. Jahrestagung der Gesellschaft für Klassifikation, Darmstadt 17.-19.3.1988. Hrsg.: R. Wille
  8. Pfeffer, M.: Automatische Vergabe von RVK-Notationen mittels fallbasiertem Schließen (2009) 0.02
    0.01782779 = product of:
      0.07131116 = sum of:
        0.07131116 = sum of:
          0.035596523 = weight(_text_:der in 3051) [ClassicSimilarity], result of:
            0.035596523 = score(doc=3051,freq=12.0), product of:
              0.098138146 = queryWeight, product of:
                2.2337668 = idf(docFreq=12875, maxDocs=44218)
                0.043933928 = queryNorm
              0.36271852 = fieldWeight in 3051, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                2.2337668 = idf(docFreq=12875, maxDocs=44218)
                0.046875 = fieldNorm(doc=3051)
          0.03571464 = weight(_text_:22 in 3051) [ClassicSimilarity], result of:
            0.03571464 = score(doc=3051,freq=2.0), product of:
              0.15384912 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043933928 = queryNorm
              0.23214069 = fieldWeight in 3051, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3051)
      0.25 = coord(1/4)
    
    Abstract
     The classification of bibliographic units is indispensable for systematic access to a library's holdings and for their shelf arrangement. Until now this task has been carried out manually by subject experts, either individually according to a self-developed scheme or cooperatively according to a shared one. This work presents a procedure for automating the classification process, using case-based reasoning, a technique developed in artificial-intelligence research. For every work for which bibliographic data are available, the procedure delivers one or more possible classifications. Experiments compare the results of the automatic classification with classifications made by subject experts; they demonstrate the high quality of the automatic classification and show that the procedure can relieve subject experts of a significant share of the classification work. Even the nearly complete reclassification of a library catalogue is possible, with certain reservations.
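     The abstract names the general case-based pattern - retrieve the most similar already-classified records and reuse their notations - without implementation details. The following sketch only illustrates that pattern; the data structures and the naive title-token similarity are our assumptions, not the thesis's method:

        # Illustrative retrieve-and-reuse classifier (hypothetical, not Pfeffer's).
        from collections import Counter

        def tokens(title):
            return set(title.lower().split())

        def suggest_notations(title, catalogued, k=5):
            """catalogued: list of (title, rvk_notation) pairs; returns ranked notations."""
            q = tokens(title)
            jaccard = lambda t: len(q & tokens(t)) / (len(q | tokens(t)) or 1)
            nearest = sorted(catalogued, key=lambda rec: jaccard(rec[0]), reverse=True)[:k]
            return Counter(notation for _, notation in nearest).most_common()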
    Date
    22. 8.2009 19:51:28
    Source
    Wissen bewegen - Bibliotheken in der Informationsgesellschaft / 97. Deutscher Bibliothekartag in Mannheim, 2008. Hrsg. von Ulrich Hohoff und Per Knudsen. Bearb. von Stefan Siebert
  9. Reiner, U.: Automatische DDC-Klassifizierung bibliografischer Titeldatensätze der Deutschen Nationalbibliografie (2009) 0.02
    0.015014872 = product of:
      0.060059488 = sum of:
        0.060059488 = sum of:
          0.036249723 = weight(_text_:der in 3284) [ClassicSimilarity], result of:
            0.036249723 = score(doc=3284,freq=28.0), product of:
              0.098138146 = queryWeight, product of:
                2.2337668 = idf(docFreq=12875, maxDocs=44218)
                0.043933928 = queryNorm
              0.36937445 = fieldWeight in 3284, product of:
                5.2915025 = tf(freq=28.0), with freq of:
                  28.0 = termFreq=28.0
                2.2337668 = idf(docFreq=12875, maxDocs=44218)
                0.03125 = fieldNorm(doc=3284)
          0.023809763 = weight(_text_:22 in 3284) [ClassicSimilarity], result of:
            0.023809763 = score(doc=3284,freq=2.0), product of:
              0.15384912 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043933928 = queryNorm
              0.15476047 = fieldWeight in 3284, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=3284)
      0.25 = coord(1/4)
    
    Abstract
     Classifying objects (e.g. fauna, flora, texts) is a process based on human intelligence. Computer science - in particular the field of artificial intelligence (AI) - investigates, among other things, to what extent processes that require human intelligence can be automated. It has turned out that solving everyday problems is a greater challenge than solving specialist problems such as building a chess computer; "Rybka", for instance, has been the reigning computer-chess world champion since June 2007. To what extent everyday problems can be solved with AI methods is, for the general case, still an open question. Processing natural language, e.g. understanding it, plays an essential role in solving everyday problems. Realizing "common sense" as a machine (in the Cyc knowledge base, in the form of facts and rules) has been Lenat's goal since 1984; regarding the AI flagship project Cyc there are Cyc optimists and Cyc pessimists. Understanding natural language (e.g. work titles, abstracts, prefaces, tables of contents) is also necessary when classifying bibliographic title records or online publications intellectually, in order to classify these text objects correctly. Since 2007 the German National Library has been classifying nearly all publications intellectually with the Dewey Decimal Classification (DDC).
     At least since the existence of the World Wide Web, the number of publications to be classified has been growing faster than it can be subject-indexed intellectually. Methods are therefore being sought to automate the classification of text objects, or at least to support intellectual classification. Methods for automatic document classification have existed since 1968 (in information retrieval, IR for short) and for automatic text classification (ATC: Automated Text Categorization) since 1992. As ever more digital objects have become available on the World Wide Web, work on automatic text classification has increased markedly since about 1998; this includes, since 1996, work on the automatic DDC and RVK classification of bibliographic title records and full-text documents. To our knowledge these developments have so far been experimental systems, not systems in regular operation. The VZG project Colibri/DDC has also been engaged, among other things, in automatic DDC classification since 2006. The related studies and developments serve to answer the research question: "Is it possible to achieve an automatic DDC title classification of all GVK-PLUS title records that is consistent in content?"
    Date
    22. 1.2010 14:41:24
  10. Miyamoto, S.: Information clustering based on fuzzy multisets (2003) 0.01
    0.014293761 = product of:
      0.057175044 = sum of:
        0.057175044 = weight(_text_:c in 1071) [ClassicSimilarity], result of:
          0.057175044 = score(doc=1071,freq=4.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.3772787 = fieldWeight in 1071, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1071)
      0.25 = coord(1/4)
    
    Abstract
     A fuzzy multiset model for information clustering is proposed, with application to information retrieval on the World Wide Web. Noting that a search engine retrieves multiple occurrences of the same subjects with possibly different degrees of relevance, we observe that fuzzy multisets provide an appropriate model of information retrieval on the WWW. Information clustering, meaning both term clustering and document clustering, is considered. Three methods are proposed: hard c-means, fuzzy c-means, and an agglomerative method using cluster centers. Two distances between fuzzy multisets and algorithms for calculating cluster centers are defined. Theoretical properties of the clustering algorithms are studied, and illustrative examples show how the algorithms work.
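     The paper defines its algorithms over fuzzy multisets; purely for orientation, here is a minimal sketch of standard fuzzy c-means on plain vectors (all names are ours, and this is not the multiset variant the abstract describes):

        # Standard fuzzy c-means (vector version), for orientation only.
        import numpy as np

        def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
            """X: (n, d) data; c: cluster count; m > 1: fuzzifier."""
            rng = np.random.default_rng(seed)
            U = rng.dirichlet(np.ones(c), size=len(X))       # memberships, rows sum to 1
            for _ in range(iters):
                W = U ** m
                centers = (W.T @ X) / W.sum(axis=0)[:, None] # membership-weighted means
                d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
                inv = d ** (-2.0 / (m - 1.0))
                U = inv / inv.sum(axis=1, keepdims=True)     # standard membership update
            return centers, U

        # centers, U = fuzzy_c_means(np.random.rand(100, 5), c=3)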
  11. Fagni, T.; Sebastiani, F.: Selecting negative examples for hierarchical text classification: An experimental comparison (2010) 0.01
    0.012504436 = product of:
      0.050017744 = sum of:
        0.050017744 = weight(_text_:c in 4101) [ClassicSimilarity], result of:
          0.050017744 = score(doc=4101,freq=6.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.3300501 = fieldWeight in 4101, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4101)
      0.25 = coord(1/4)
    
    Abstract
    Hierarchical text classification (HTC) approaches have recently attracted a lot of interest on the part of researchers in human language technology and machine learning, since they have been shown to bring about equal, if not better, classification accuracy with respect to their "flat" counterparts while allowing exponential time savings at both learning and classification time. A typical component of HTC methods is a "local" policy for selecting negative examples: Given a category c, its negative training examples are by default identified with the training examples that are negative for c and positive for the categories which are siblings of c in the hierarchy. However, this policy has always been taken for granted and never been subjected to careful scrutiny since first proposed 15 years ago. This article proposes a thorough experimental comparison between this policy and three other policies for the selection of negative examples in HTC contexts, one of which (BEST LOCAL (k)) is being proposed for the first time in this article. We compare these policies on the hierarchical versions of three supervised learning algorithms (boosting, support vector machines, and naïve Bayes) by performing experiments on two standard TC datasets, REUTERS-21578 and RCV1-V2.
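     The default "local" policy quoted in the abstract translates directly into code. A minimal sketch under assumed data structures (a parent map over the category tree and per-category positive-example sets; the names are ours):

        # Default negative selection for HTC: positives of c's siblings, minus c's own.
        def siblings(c, parent):
            return {s for s, p in parent.items() if p == parent[c] and s != c}

        def negative_examples(c, parent, positives):
            neg = set()
            for s in siblings(c, parent):
                neg |= positives[s]
            return neg - positives[c]

        # negative_examples("cats", {"cats": "pets", "dogs": "pets"},
        #                   {"cats": {1, 2}, "dogs": {2, 3}})  ->  {3}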
  12. Godby, C. J.; Stuler, J.: ¬The Library of Congress Classification as a knowledge base for automatic subject categorization (2001) 0.01
    0.011551103 = product of:
      0.04620441 = sum of:
        0.04620441 = weight(_text_:c in 1567) [ClassicSimilarity], result of:
          0.04620441 = score(doc=1567,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.3048872 = fieldWeight in 1567, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.0625 = fieldNorm(doc=1567)
      0.25 = coord(1/4)
    
  13. Na, J.-C.; Sui, H.; Khoo, C.; Chan, S.; Zhou, Y.: Effectiveness of simple linguistic processing in automatic sentiment classification of product reviews (2004) 0.01
    0.01020983 = product of:
      0.04083932 = sum of:
        0.04083932 = weight(_text_:c in 2624) [ClassicSimilarity], result of:
          0.04083932 = score(doc=2624,freq=4.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.2694848 = fieldWeight in 2624, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2624)
      0.25 = coord(1/4)
    
  14. Qu, B.; Cong, G.; Li, C.; Sun, A.; Chen, H.: ¬An evaluation of classification models for question topic categorization (2012) 0.01
    0.01020983 = product of:
      0.04083932 = sum of:
        0.04083932 = weight(_text_:c in 237) [ClassicSimilarity], result of:
          0.04083932 = score(doc=237,freq=4.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.2694848 = fieldWeight in 237, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.0390625 = fieldNorm(doc=237)
      0.25 = coord(1/4)
    
    Abstract
     We study the problem of question topic classification using a very large real-world Community Question Answering (CQA) dataset from Yahoo! Answers. The dataset comprises 3.9 million questions and these questions are organized into more than 1,000 categories in a hierarchy. To the best of our knowledge, this is the first systematic evaluation of the performance of different classification methods on question topic classification as well as short texts. Specifically, we empirically evaluate the following in classifying questions into CQA categories: (a) the usefulness of n-gram features and bag-of-word features; (b) the performance of three standard classification algorithms (naive Bayes, maximum entropy, and support vector machines); (c) the performance of the state-of-the-art hierarchical classification algorithms; (d) the effect of training data size on performance; and (e) the effectiveness of the different components of CQA data, including subject, content, asker, and the best answer. The experimental results show what aspects are important for question topic classification in terms of both effectiveness and efficiency. We believe that the experimental findings from this study will be useful in real-world classification problems.
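     Of the feature sets in (a), bag-of-words and word n-grams are standard; a minimal sketch of both for a short question (function names are ours):

        # Bag-of-words vs. word-bigram features for a short text (illustrative).
        from collections import Counter

        def bag_of_words(text):
            return Counter(text.lower().split())

        def word_ngrams(text, n=2):
            w = text.lower().split()
            return Counter(tuple(w[i:i + n]) for i in range(len(w) - n + 1))

        # bag_of_words("how do i sort a list")  -> Counter({'how': 1, 'do': 1, ...})
        # word_ngrams("how do i sort a list")   -> Counter({('how', 'do'): 1, ...})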
  15. Bianchini, C.; Bargioni, S.: Automated classification using linked open data : a case study on faceted classification and Wikidata (2021) 0.01
    0.010107215 = product of:
      0.04042886 = sum of:
        0.04042886 = weight(_text_:c in 724) [ClassicSimilarity], result of:
          0.04042886 = score(doc=724,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.2667763 = fieldWeight in 724, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.0546875 = fieldNorm(doc=724)
      0.25 = coord(1/4)
    
  16. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.01
    0.00892866 = product of:
      0.03571464 = sum of:
        0.03571464 = product of:
          0.07142928 = sum of:
            0.07142928 = weight(_text_:22 in 1046) [ClassicSimilarity], result of:
              0.07142928 = score(doc=1046,freq=2.0), product of:
                0.15384912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043933928 = queryNorm
                0.46428138 = fieldWeight in 1046, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1046)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    5. 5.2003 14:17:22
  17. Wu, K.J.; Chen, M.-C.; Sun, Y.: Automatic topics discovery from hyperlinked documents (2004) 0.01
    0.008663327 = product of:
      0.03465331 = sum of:
        0.03465331 = weight(_text_:c in 2563) [ClassicSimilarity], result of:
          0.03465331 = score(doc=2563,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.22866541 = fieldWeight in 2563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.046875 = fieldNorm(doc=2563)
      0.25 = coord(1/4)
    
  18. Hung, C.-M.; Chien, L.-F.: Web-based text classification in the absence of manually labeled training documents (2007) 0.01
    0.008663327 = product of:
      0.03465331 = sum of:
        0.03465331 = weight(_text_:c in 87) [ClassicSimilarity], result of:
          0.03465331 = score(doc=87,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.22866541 = fieldWeight in 87, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.046875 = fieldNorm(doc=87)
      0.25 = coord(1/4)
    
  19. Montesi, M.; Navarrete, T.: Classifying web genres in context : A case study documenting the web genres used by a software engineer (2008) 0.01
    0.008663327 = product of:
      0.03465331 = sum of:
        0.03465331 = weight(_text_:c in 2100) [ClassicSimilarity], result of:
          0.03465331 = score(doc=2100,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.22866541 = fieldWeight in 2100, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.046875 = fieldNorm(doc=2100)
      0.25 = coord(1/4)
    
    Abstract
     This case study analyzes the Internet-based resources that a software engineer uses in his daily work. Methodologically, we studied the web browser history of the participant, classifying all the web pages he had seen over a period of 12 days into web genres. We interviewed him before and after the analysis of the web browser history. In the first interview, he spoke about his general information behavior; in the second, he commented on each web genre, explaining why and how he used them. As a result, three approaches allow us to describe the set of 23 web genres obtained: (a) the purposes they serve for the participant; (b) the role they play in the various work and search phases; and (c) the way they are used in combination with each other. Further observations concern how the participant assesses the quality of web-based resources, and his information behavior as a software engineer.
  20. Sojka, P.; Lee, M.; Rehurek, R.; Hatlapatka, R.; Kucbel, M.; Bouche, T.; Goutorbe, C.; Anghelache, R.; Wojciechowski, K.: Toolset for entity and semantic associations : Final Release (2013) 0.01
    0.008663327 = product of:
      0.03465331 = sum of:
        0.03465331 = weight(_text_:c in 1057) [ClassicSimilarity], result of:
          0.03465331 = score(doc=1057,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.22866541 = fieldWeight in 1057, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.046875 = fieldNorm(doc=1057)
      0.25 = coord(1/4)
    

Languages

  • d 38
  • e 37
  • a 1

Types

  • a 51
  • el 14
  • x 9
  • m 2
  • r 2
  • s 1