Search (46 results, page 1 of 3)

  • × theme_ss:"Automatisches Klassifizieren"
  • × year_i:[2010 TO 2020}
  1. Jersek, T.: Automatische DDC-Klassifizierung mit Lingo : Vorgehensweise und Ergebnisse (2012) 0.02
    0.015433615 = product of:
      0.08745715 = sum of:
        0.021680813 = weight(_text_:und in 122) [ClassicSimilarity], result of:
          0.021680813 = score(doc=122,freq=8.0), product of:
            0.055336144 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.024967048 = queryNorm
            0.39180204 = fieldWeight in 122, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=122)
        0.021925448 = product of:
          0.043850895 = sum of:
            0.043850895 = weight(_text_:bibliothekswesen in 122) [ClassicSimilarity], result of:
              0.043850895 = score(doc=122,freq=2.0), product of:
                0.11129492 = queryWeight, product of:
                  4.457672 = idf(docFreq=1392, maxDocs=44218)
                  0.024967048 = queryNorm
                0.39400625 = fieldWeight in 122, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.457672 = idf(docFreq=1392, maxDocs=44218)
                  0.0625 = fieldNorm(doc=122)
          0.5 = coord(1/2)
        0.043850895 = weight(_text_:bibliothekswesen in 122) [ClassicSimilarity], result of:
          0.043850895 = score(doc=122,freq=2.0), product of:
            0.11129492 = queryWeight, product of:
              4.457672 = idf(docFreq=1392, maxDocs=44218)
              0.024967048 = queryNorm
            0.39400625 = fieldWeight in 122, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.457672 = idf(docFreq=1392, maxDocs=44218)
              0.0625 = fieldNorm(doc=122)
      0.1764706 = coord(3/17)
    
    Abstract
    The thesis deals with the implementation and execution of an automatic DDC classification using the indexing system Lingo. This is achieved by incorporating relations from the DFG project CrissCross, on the basis of which Lingo automatically classifies bibliographic title records. The approach used here is compared with the usual methodology of automatic classification systems. The classification procedure is then tested against a test collection of bibliographic title records from the Deutsche Nationalbibliothek (DNB). A discussion of the results and an evaluation of the classification system follow.
    Content
    Diploma thesis, degree programme in Library Science (Bibliothekswesen), Faculty of Information and Communication Sciences, Fachhochschule Köln.
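
The indented breakdown shown under this record (and under the records that follow) is a Lucene ClassicSimilarity explain tree: for each matching term, the field weight tf × idf × fieldNorm is multiplied by the query weight idf × queryNorm, the per-term scores are summed, and the sum is scaled by a coordination factor. As a minimal sketch of that arithmetic, the snippet below recomputes the contribution of the term "und" in document 122 from the factors displayed above; the helper function is illustrative and is not part of the Lucene API.

```python
import math

def classic_similarity_term_score(freq, idf, query_norm, field_norm):
    """Recompute one term's contribution as shown in a Lucene
    ClassicSimilarity explain tree (illustrative helper, not Lucene API)."""
    tf = math.sqrt(freq)                  # 2.828427 for freq=8.0
    query_weight = idf * query_norm       # 0.055336144
    field_weight = tf * idf * field_norm  # 0.39180204
    return query_weight * field_weight    # 0.021680813

# Factors copied from the explain tree of record 1 (term "und", doc 122).
score_und = classic_similarity_term_score(
    freq=8.0, idf=2.216367, query_norm=0.024967048, field_norm=0.0625)

# The per-term scores are summed and multiplied by the coordination
# factor coord(3/17) = 0.1764706 to give the displayed document score.
total = (score_und + 0.021925448 + 0.043850895) * (3 / 17)
print(round(score_und, 9), round(total, 9))  # ≈0.0216808 and ≈0.0154336
```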
  2. Kasprzik, A.: Automatisierte und semiautomatisierte Klassifizierung : eine Analyse aktueller Projekte (2014) 0.01
    0.009141268 = product of:
      0.05180052 = sum of:
        0.022995977 = weight(_text_:und in 2470) [ClassicSimilarity], result of:
          0.022995977 = score(doc=2470,freq=16.0), product of:
            0.055336144 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.024967048 = queryNorm
            0.41556883 = fieldWeight in 2470, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=2470)
        0.005304256 = weight(_text_:in in 2470) [ClassicSimilarity], result of:
          0.005304256 = score(doc=2470,freq=6.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.1561842 = fieldWeight in 2470, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=2470)
        0.023500286 = weight(_text_:bibliotheken in 2470) [ClassicSimilarity], result of:
          0.023500286 = score(doc=2470,freq=2.0), product of:
            0.09407886 = queryWeight, product of:
              3.768121 = idf(docFreq=2775, maxDocs=44218)
              0.024967048 = queryNorm
            0.24979347 = fieldWeight in 2470, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.768121 = idf(docFreq=2775, maxDocs=44218)
              0.046875 = fieldNorm(doc=2470)
      0.1764706 = coord(3/17)
    
    Abstract
    The rapid growth in the volume of digitally available documents, combined with the shortage of time and staff at research libraries, makes a strong case for the use of semi- or fully automatic procedures for verbal and classificatory subject indexing. After a brief general introduction to the common methodology, this article looks at a number of projects on automated classification from the period 2007-2012 and from the German-speaking area. Most of the projects presented use machine learning methods from artificial intelligence, usually work with adapted versions of a commercial software product, and as a rule refer to the Dewey Decimal Classification (DDC). Metadata records, abstracts, tables of contents and full texts in a variety of data formats serve as the data basis. The concluding analysis arranges the projects according to a number of different criteria and summarizes the current situation and the biggest challenges for automated classification procedures.
  3. Sommer, M.: Automatische Generierung von DDC-Notationen für Hochschulveröffentlichungen (2012) 0.00
    0.003065693 = product of:
      0.02605839 = sum of:
        0.022995977 = weight(_text_:und in 587) [ClassicSimilarity], result of:
          0.022995977 = score(doc=587,freq=16.0), product of:
            0.055336144 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.024967048 = queryNorm
            0.41556883 = fieldWeight in 587, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=587)
        0.0030624135 = weight(_text_:in in 587) [ClassicSimilarity], result of:
          0.0030624135 = score(doc=587,freq=2.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.09017298 = fieldWeight in 587, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=587)
      0.11764706 = coord(2/17)
    
    Abstract
    The topic of this bachelor's thesis is the automatic generation of Dewey Decimal Classification notations for metadata. The metadata are in Dublin Core format and come from the server for scholarly publications of Hochschule Hannover. The thesis opens with a general introduction to the methods and main fields of application of automatic classification. The Dewey Decimal Classification and the process of metadata acquisition are then described. The theoretical part closes with the description of two projects: the first project likewise attempted to enrich metadata with Dewey Decimal Classification notations, while the result of the second project is a concordance between the Schlagwortnormdatei (SWD) and the Dewey Decimal Classification. In the practical part of this thesis, that concordance was used to assign Dewey Decimal Classification notations automatically.
    Content
    See: http://opus.bsz-bw.de/fhhv/volltexte/2012/397/pdf/Bachelorarbeit_final_Korrektur01.pdf. Bachelor's thesis, Hochschule Hannover, Faculty III - Media, Information and Design, Department of Information and Communication, degree programme in Information Management
    Imprint
    Hannover : Hochschule Hannover, Faculty III - Media, Information and Design, Department of Information and Communication
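
The workflow this record describes, assigning DDC notations to Dublin Core metadata via a concordance between the Schlagwortnormdatei and the DDC, can be sketched in a few lines. The concordance excerpt and the sample record below are invented placeholders; the thesis works with the real SWD-DDC concordance, which is far larger.

```python
from collections import Counter

# Hypothetical excerpt of an SWD/GND-to-DDC concordance (illustration only).
concordance = {
    "Automatische Klassifikation": ["025.431"],
    "Information Retrieval": ["025.04"],
    "Maschinelles Lernen": ["006.31"],
}

def suggest_ddc(dc_subjects, concordance, top_n=1):
    """Collect candidate DDC notations for the subject terms of a
    Dublin Core record and return the most frequent ones."""
    candidates = Counter()
    for subject in dc_subjects:
        for notation in concordance.get(subject, []):
            candidates[notation] += 1
    return [notation for notation, _ in candidates.most_common(top_n)]

record = {"dc:title": "Beispieltitel",
          "dc:subject": ["Automatische Klassifikation", "Information Retrieval"]}
print(suggest_ddc(record["dc:subject"], concordance))  # e.g. ['025.431']
```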
  4. Groß, T.; Faden, M.: Automatische Indexierung elektronischer Dokumente an der Deutschen Zentralbibliothek für Wirtschaftswissenschaften : Bericht über die Jahrestagung der Internationalen Buchwissenschaftlichen Gesellschaft (2010) 0.00
    0.0030587008 = product of:
      0.025998957 = sum of:
        0.019542823 = weight(_text_:und in 4051) [ClassicSimilarity], result of:
          0.019542823 = score(doc=4051,freq=26.0), product of:
            0.055336144 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.024967048 = queryNorm
            0.3531656 = fieldWeight in 4051, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=4051)
        0.0064561353 = weight(_text_:in in 4051) [ClassicSimilarity], result of:
          0.0064561353 = score(doc=4051,freq=20.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.19010136 = fieldWeight in 4051, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=4051)
      0.11764706 = coord(2/17)
    
    Abstract
    The increasing availability of digital information in recent years, together with the prospect of a further rise in the so-called data deluge, culminates in a fundamental and steadily worsening problem of information structuring. The constant growth of digital information resources on the World Wide Web does secure access to a wide range of information at any time and from any place; what remains open is structured access, in particular to scholarly resources. Given the rising number of electronic contents and against the background of stagnating or shrinking staff resources in subject indexing, no library or library network can manage, now or in the future, to record all digital data, structure them and relate them to one another. In the information society of the 21st century, however, it is becoming ever more important to structure the scholarly information that has vanished in this flood in a timely, appropriate and complete manner, and thus to make it usable again as a basis for knowledge generation. Standardized subject indexing of digital information resources is therefore a decisive and success-critical aspect for the Deutsche Zentralbibliothek für Wirtschaftswissenschaften (ZBW), as an important information infrastructure institution in this field, in its competition with other information service providers. Because traditional intellectual subject indexing does not scale arbitrarily - as the number of online documents grows, the need for subject specialists grows proportionally if a certain quality standard is to be maintained - other subject indexing procedures will be needed in the future. Automated subject indexing methods are regarded as the only way to keep library subject indexing viable in the digital age. In addition, machine-based approaches can help to level out the heterogeneities (indexing inconsistencies) between individual subject indexers and thus contribute to a more homogeneous indexing of the library's holdings.
    With the implementation and evaluation of the automatic indexing procedure "Decisiv Categorization" by the company Recommind, begun in early 2010, the information structuring problem outlined here is to be solved in two steps. In the short to medium term, intellectual indexing is to be supported by a semi-automatic procedure. In the medium to long term, the machine procedure, building on appropriate training, is to be enabled both to index documents held in-house fully automatically and to assign subject headings or classes to digital information resources from outside the ZBW, so that they can be made findable in a shared search space. Following this introduction, the first attempts at machine-based subject indexing at the ZBW (2001-2004) and their results and problems are presented. The framework conditions (project mandate and goal) for resuming the undertaking in 2009 are then set out, followed by a description of how the Recommind technology works and how it is used for the subject indexing of online documents with a thesaurus. The main focus of this paper is then on the options for evaluating automatic indexing approaches as well as the current results and central findings of their use in the context of the ZBW. The conclusion describes the consequences drawn from the results obtained and gives an outlook on how the work will proceed.
  5. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.00
    0.0025902912 = product of:
      0.022017475 = sum of:
        0.005104023 = weight(_text_:in in 2748) [ClassicSimilarity], result of:
          0.005104023 = score(doc=2748,freq=2.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.15028831 = fieldWeight in 2748, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=2748)
        0.016913453 = product of:
          0.033826906 = sum of:
            0.033826906 = weight(_text_:22 in 2748) [ClassicSimilarity], result of:
              0.033826906 = score(doc=2748,freq=2.0), product of:
                0.08743035 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.024967048 = queryNorm
                0.38690117 = fieldWeight in 2748, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2748)
          0.5 = coord(1/2)
      0.11764706 = coord(2/17)
    
    Date
    1. 2.2016 18:25:22
    Series
    Lecture notes in computer science ; 9398
  6. Egbert, J.; Biber, D.; Davies, M.: Developing a bottom-up, user-based method of web register classification (2015) 0.00
    0.0018179208 = product of:
      0.015452327 = sum of:
        0.005304256 = weight(_text_:in in 2158) [ClassicSimilarity], result of:
          0.005304256 = score(doc=2158,freq=6.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.1561842 = fieldWeight in 2158, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=2158)
        0.010148071 = product of:
          0.020296142 = sum of:
            0.020296142 = weight(_text_:22 in 2158) [ClassicSimilarity], result of:
              0.020296142 = score(doc=2158,freq=2.0), product of:
                0.08743035 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.024967048 = queryNorm
                0.23214069 = fieldWeight in 2158, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2158)
          0.5 = coord(1/2)
      0.11764706 = coord(2/17)
    
    Abstract
    This paper introduces a project to develop a reliable, cost-effective method for classifying Internet texts into register categories, and apply that approach to the analysis of a large corpus of web documents. To date, the project has proceeded in 2 key phases. First, we developed a bottom-up method for web register classification, asking end users of the web to utilize a decision-tree survey to code relevant situational characteristics of web documents, resulting in a bottom-up identification of register and subregister categories. We present details regarding the development and testing of this method through a series of 10 pilot studies. Then, in the second phase of our project we applied this procedure to a corpus of 53,000 web documents. An analysis of the results demonstrates the effectiveness of these methods for web register classification and provides a preliminary description of the types and distribution of registers on the web.
    Date
    4. 8.2015 19:22:04
  7. Liu, R.-L.: ¬A passage extractor for classification of disease aspect information (2013) 0.00
    0.0016662586 = product of:
      0.014163198 = sum of:
        0.005706471 = weight(_text_:in in 1107) [ClassicSimilarity], result of:
          0.005706471 = score(doc=1107,freq=10.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.16802745 = fieldWeight in 1107, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1107)
        0.008456727 = product of:
          0.016913453 = sum of:
            0.016913453 = weight(_text_:22 in 1107) [ClassicSimilarity], result of:
              0.016913453 = score(doc=1107,freq=2.0), product of:
                0.08743035 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.024967048 = queryNorm
                0.19345059 = fieldWeight in 1107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1107)
          0.5 = coord(1/2)
      0.11764706 = coord(2/17)
    
    Abstract
    Retrieval of disease information is often based on several key aspects such as etiology, diagnosis, treatment, prevention, and symptoms of diseases. Automatic identification of disease aspect information is thus essential. In this article, I model the aspect identification problem as a text classification (TC) problem in which a disease aspect corresponds to a category. The disease aspect classification problem poses two challenges to classifiers: (a) a medical text often contains information about multiple aspects of a disease and hence produces noise for the classifiers and (b) text classifiers often cannot extract the textual parts (i.e., passages) about the categories of interest. I thus develop a technique, PETC (Passage Extractor for Text Classification), that extracts passages (from medical texts) for the underlying text classifiers to classify. Case studies on thousands of Chinese and English medical texts show that PETC enhances a support vector machine (SVM) classifier in classifying disease aspect information. PETC also performs better than three state-of-the-art classifier enhancement techniques, including two passage extraction techniques for text classifiers and a technique that employs term proximity information to enhance text classifiers. The contribution is of significance to evidence-based medicine, health education, and healthcare decision support. PETC can be used in those application domains in which a text to be classified may have several parts about different categories.
    Date
    28.10.2013 19:22:57
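
The abstract describes PETC only at the level of extracting passages for an underlying text classifier, so the following is merely a generic sketch of that idea under the assumption that scikit-learn is available: split a text into overlapping word windows, score each window with a tf-idf/linear-SVM pipeline trained on aspect-labelled snippets, and keep the best-scoring window. The training snippets are invented and the window logic is not the published PETC algorithm.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny invented training set: text snippets labelled with a disease aspect.
train_texts = ["fever and cough are common symptoms",
               "symptoms include headache and fatigue",
               "treatment consists of antiviral drugs",
               "patients are treated with antibiotics"]
train_labels = ["symptoms", "symptoms", "treatment", "treatment"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(train_texts, train_labels)

def best_passage(text, aspect, window=12):
    """Split a text into overlapping word windows ("passages") and return
    the window that scores highest for the requested aspect."""
    words = text.split()
    step = window // 2
    starts = range(0, max(len(words) - step, 1), step)
    passages = [" ".join(words[i:i + window]) for i in starts]
    scores = clf.decision_function(passages)
    if scores.ndim == 1:                 # binary case: score is for classes_[1]
        scores = np.column_stack([-scores, scores])
    col = scores[:, list(clf.classes_).index(aspect)]
    return passages[int(col.argmax())]

doc = ("the disease is usually treated with antiviral drugs while typical "
       "symptoms are fever cough and fatigue in most patients")
print(best_passage(doc, "treatment"))
```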
  8. Zhu, W.Z.; Allen, R.B.: Document clustering using the LSI subspace signature model (2013) 0.00
    0.0015541747 = product of:
      0.013210485 = sum of:
        0.0030624135 = weight(_text_:in in 690) [ClassicSimilarity], result of:
          0.0030624135 = score(doc=690,freq=2.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.09017298 = fieldWeight in 690, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=690)
        0.010148071 = product of:
          0.020296142 = sum of:
            0.020296142 = weight(_text_:22 in 690) [ClassicSimilarity], result of:
              0.020296142 = score(doc=690,freq=2.0), product of:
                0.08743035 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.024967048 = queryNorm
                0.23214069 = fieldWeight in 690, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=690)
          0.5 = coord(1/2)
      0.11764706 = coord(2/17)
    
    Abstract
    We describe the latent semantic indexing subspace signature model (LSISSM) for semantic content representation of unstructured text. Grounded on singular value decomposition, the model represents terms and documents by the distribution signatures of their statistical contribution across the top-ranking latent concept dimensions. LSISSM matches term signatures with document signatures according to their mapping coherence between latent semantic indexing (LSI) term subspace and LSI document subspace. LSISSM does feature reduction and finds a low-rank approximation of scalable and sparse term-document matrices. Experiments demonstrate that this approach significantly improves the performance of major clustering algorithms such as standard K-means and self-organizing maps compared with the vector space model and the traditional LSI model. The unique contribution ranking mechanism in LSISSM also improves the initialization of standard K-means compared with random seeding procedure, which sometimes causes low efficiency and effectiveness of clustering. A two-stage initialization strategy based on LSISSM significantly reduces the running time of standard K-means procedures.
    Date
    23. 3.2013 13:22:36
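
LSISSM itself is not reproduced here; as a minimal sketch of the latent semantic indexing step it is grounded on, the snippet below factorizes a toy term-document matrix with a truncated SVD (NumPy) and projects the documents into the top-k latent dimensions, the space in which clustering algorithms such as k-means would then operate. The matrix and k are invented.

```python
import numpy as np

# Toy term-document count matrix (rows: terms, columns: documents), invented.
A = np.array([[2, 1, 0, 0],
              [1, 2, 0, 1],
              [0, 0, 3, 1],
              [0, 1, 2, 2]], dtype=float)

k = 2                                        # latent dimensions to keep
U, s, Vt = np.linalg.svd(A, full_matrices=False)
doc_latent = (np.diag(s[:k]) @ Vt[:k]).T     # documents in LSI space, (n_docs, k)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Documents 0 and 1 share vocabulary and end up close in latent space;
# documents 0 and 2 share no terms and stay dissimilar.
print(cosine(doc_latent[0], doc_latent[1]), cosine(doc_latent[0], doc_latent[2]))
```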
  9. Barbu, E.: What kind of knowledge is in Wikipedia? : unsupervised extraction of properties for similar concepts (2014) 0.00
    5.4042594E-4 = product of:
      0.009187241 = sum of:
        0.009187241 = weight(_text_:in in 1547) [ClassicSimilarity], result of:
          0.009187241 = score(doc=1547,freq=18.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.27051896 = fieldWeight in 1547, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=1547)
      0.05882353 = coord(1/17)
    
    Abstract
    This article presents a novel method for extracting knowledge from Wikipedia and a classification schema for annotating the extracted knowledge. Unlike the majority of approaches in the literature, we use the raw Wikipedia text for knowledge acquisition. The main assumption made is that the concepts classified under the same node in a taxonomy are described in a comparable way in Wikipedia. The annotation of the extracted knowledge is done at two levels: ontological and logical. The extracted properties are evaluated in the traditional way, that is, by computing the precision of the extraction procedure and in a clustering task. The second method of evaluation is seldom used in the natural language processing community, but it is regularly employed in cognitive psychology.
  10. Fang, H.: Classifying research articles in multidisciplinary sciences journals into subject categories (2015) 0.00
    4.9788615E-4 = product of:
      0.008464064 = sum of:
        0.008464064 = weight(_text_:in in 2194) [ClassicSimilarity], result of:
          0.008464064 = score(doc=2194,freq=22.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.24922498 = fieldWeight in 2194, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2194)
      0.05882353 = coord(1/17)
    
    Abstract
    In the Thomson Reuters Web of Science database, the subject categories of a journal are applied to all articles in the journal. However, many articles in multidisciplinary Sciences journals may only be represented by a small number of subject categories. To provide more accurate information on the research areas of articles in such journals, we can classify articles in these journals into subject categories as defined by Web of Science based on their references. For an article in a multidisciplinary sciences journal, the method counts the subject categories in all of the article's references indexed by Web of Science, and uses the most numerous subject categories of the references to determine the most appropriate classification of the article. We used articles in an issue of Proceedings of the National Academy of Sciences (PNAS) to validate the correctness of the method by comparing the obtained results with the categories of the articles as defined by PNAS and their content. This study shows that the method provides more precise search results for the subject category of interest in bibliometric investigations through recognition of articles in multidisciplinary sciences journals whose work relates to a particular subject category.
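
The counting step described in this abstract, gathering the Web of Science subject categories of an article's references and picking the most numerous ones, reduces to a few lines; the reference data below are invented placeholders.

```python
from collections import Counter

# Invented example: each reference of an article, with the WoS subject
# categories assigned to the journal it appeared in.
references = [
    {"title": "ref 1", "categories": ["Biochemistry & Molecular Biology"]},
    {"title": "ref 2", "categories": ["Biochemistry & Molecular Biology",
                                      "Cell Biology"]},
    {"title": "ref 3", "categories": ["Cell Biology"]},
    {"title": "ref 4", "categories": ["Biochemistry & Molecular Biology"]},
]

def classify_by_references(references, top_n=1):
    """Count subject categories across all references and return the
    most numerous ones as the article's classification."""
    counts = Counter(cat for ref in references for cat in ref["categories"])
    return [cat for cat, _ in counts.most_common(top_n)]

print(classify_by_references(references))  # ['Biochemistry & Molecular Biology']
```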
  11. Ru, C.; Tang, J.; Li, S.; Xie, S.; Wang, T.: Using semantic similarity to reduce wrong labels in distant supervision for relation extraction (2018) 0.00
    4.9788615E-4 = product of:
      0.008464064 = sum of:
        0.008464064 = weight(_text_:in in 5055) [ClassicSimilarity], result of:
          0.008464064 = score(doc=5055,freq=22.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.24922498 = fieldWeight in 5055, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5055)
      0.05882353 = coord(1/17)
    
    Abstract
    Distant supervision (DS) has the advantage of automatically generating large amounts of labelled training data and has been widely used for relation extraction. However, there are usually many wrong labels in the automatically labelled data in distant supervision (Riedel, Yao, & McCallum, 2010). This paper presents a novel method to reduce the wrong labels. The proposed method uses the semantic Jaccard with word embedding to measure the semantic similarity between the relation phrase in the knowledge base and the dependency phrases between two entities in a sentence to filter the wrong labels. In the process of reducing wrong labels, the semantic Jaccard algorithm selects a core dependency phrase to represent the candidate relation in a sentence, which can capture features for relation classification and avoid the negative impact from irrelevant term sequences that previous neural network models of relation extraction often suffer. In the process of relation classification, the core dependency phrases are also used as the input of a convolutional neural network (CNN) for relation classification. The experimental results show that compared with the methods using original DS data, the methods using filtered DS data performed much better in relation extraction. It indicates that the semantic similarity based method is effective in reducing wrong labels. The relation extraction performance of the CNN model using the core dependency phrases as input is the best of all, which indicates that using the core dependency phrases as input of CNN is enough to capture the features for relation classification and could avoid negative impact from irrelevant terms.
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
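
The abstract does not give the exact formula for its "semantic Jaccard", so the snippet below is only one plausible reading, stated as an assumption: a soft Jaccard in which two tokens count as matching when the cosine similarity of their word embeddings exceeds a threshold, used to decide whether a distant-supervision label is kept. The toy embedding vectors are invented.

```python
import math

# Invented toy word embeddings (a real system would load pretrained vectors).
emb = {
    "born":    [0.9, 0.1, 0.0],
    "birth":   [0.85, 0.2, 0.05],
    "place":   [0.1, 0.9, 0.1],
    "located": [0.05, 0.85, 0.2],
    "eats":    [0.0, 0.1, 0.95],
}

def cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def soft_jaccard(phrase_a, phrase_b, threshold=0.9):
    """Jaccard-style overlap in which tokens match if their embeddings
    are close enough, instead of requiring exact string equality."""
    a, b = phrase_a.split(), phrase_b.split()
    matched = sum(1 for t in a
                  if any(cos(emb[t], emb[u]) >= threshold for u in b))
    return matched / (len(a) + len(b) - matched)

# High similarity keeps the distant-supervision label; low similarity filters it.
print(soft_jaccard("birth place", "born located"))  # -> 1.0
print(soft_jaccard("birth place", "eats"))          # -> 0.0
```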
  12. Golub, K.; Hansson, J.; Soergel, D.; Tudhope, D.: Managing classification in libraries : a methodological outline for evaluating automatic subject indexing and classification in Swedish library catalogues (2015) 0.00
    4.245987E-4 = product of:
      0.007218178 = sum of:
        0.007218178 = weight(_text_:in in 2300) [ClassicSimilarity], result of:
          0.007218178 = score(doc=2300,freq=16.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.21253976 = fieldWeight in 2300, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2300)
      0.05882353 = coord(1/17)
    
    Abstract
    Subject terms play a crucial role in resource discovery but require substantial effort to produce. Automatic subject classification and indexing address problems of scale and sustainability and can be used to enrich existing bibliographic records, establish more connections across and between resources and enhance consistency of bibliographic data. The paper aims to put forward a complex methodological framework to evaluate automatic classification tools of Swedish textual documents based on the Dewey Decimal Classification (DDC) recently introduced to Swedish libraries. Three major complementary approaches are suggested: a quality-built gold standard, retrieval effects, domain analysis. The gold standard is built based on input from at least two catalogue librarians, end-users expert in the subject, end users inexperienced in the subject and automated tools. Retrieval effects are studied through a combination of assigned and free tasks, including factual and comprehensive types. The study also takes into consideration the different role and character of subject terms in various knowledge domains, such as scientific disciplines. As a theoretical framework, domain analysis is used and applied in relation to the implementation of DDC in Swedish libraries and chosen domains of knowledge within the DDC itself.
  13. Smiraglia, R.P.; Cai, X.: Tracking the evolution of clustering, machine learning, automatic indexing and automatic classification in knowledge organization (2017) 0.00
    4.245987E-4 = product of:
      0.007218178 = sum of:
        0.007218178 = weight(_text_:in in 3627) [ClassicSimilarity], result of:
          0.007218178 = score(doc=3627,freq=16.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.21253976 = fieldWeight in 3627, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3627)
      0.05882353 = coord(1/17)
    
    Abstract
    A very important extension of the traditional domain of knowledge organization (KO) arises from attempts to incorporate techniques devised in the computer science domain for automatic concept extraction and for grouping, categorizing, clustering and otherwise organizing knowledge using mechanical means. Four specific terms have emerged to identify the most prevalent techniques: machine learning, clustering, automatic indexing, and automatic classification. Our study presents three domain analytical case analyses in search of answers. The first case relies on citations located using the ISKO-supported "Knowledge Organization Bibliography." The second case relies on works in both Web of Science and SCOPUS. Case three applies co-word analysis and citation analysis to the contents of the papers in the present special issue. We observe scholars involved in "clustering" and "automatic classification" who share common thematic emphases. But we have found no coherence, no common activity and no social semantics. We have not found a research front, or a common teleology within the KO domain. We also have found a lively group of authors who have succeeded in submitting papers to this special issue, and their work quite interestingly aligns with the case studies we report. There is an emphasis on KO for information retrieval; there is much work on clustering (which involves conceptual points within texts) and automatic classification (which involves semantic groupings at the meta-document level).
    Content
    Paper in a special issue "New Trends for Knowledge Organization", guest editor: Renato Rocha Souza.
  14. Cortez, E.; Herrera, M.R.; Silva, A.S. da; Moura, E.S. de; Neubert, M.: Lightweight methods for large-scale product categorization (2011) 0.00
    4.0280976E-4 = product of:
      0.0068477658 = sum of:
        0.0068477658 = weight(_text_:in in 4758) [ClassicSimilarity], result of:
          0.0068477658 = score(doc=4758,freq=10.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.20163295 = fieldWeight in 4758, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=4758)
      0.05882353 = coord(1/17)
    
    Abstract
    In this article, we present a study about classification methods for large-scale categorization of product offers on e-shopping web sites. We present a study about the performance of previously proposed approaches and deployed a probabilistic approach to model the classification problem. We also studied an alternative way of modeling information about the description of product offers and investigated the usage of price and store of product offers as features adopted in the classification process. Our experiments used two collections of over a million product offers previously categorized by human editors and taxonomies of hundreds of categories from a real e-shopping web site. In these experiments, our method achieved an improvement of up to 9% in the quality of the categorization in comparison with the best baseline we have found.
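
The article's own probabilistic model is not spelled out in the abstract; the snippet below is only a generic sketch of probabilistic product categorization from offer text, assuming scikit-learn, with a price bucket and the store appended as extra tokens in the spirit of the features the abstract mentions. The offers and categories are invented.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented training offers: description plus price bucket and store as tokens.
offers = ["nikon dslr camera 24mp price_high store_photoshop",
          "canon compact camera zoom price_mid store_photoshop",
          "running shoes men size 43 price_low store_sportworld",
          "trail shoes waterproof price_mid store_sportworld"]
categories = ["cameras", "cameras", "shoes", "shoes"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(offers, categories)

new_offer = "mirrorless camera body 20mp price_high store_photoshop"
print(model.predict([new_offer])[0])     # -> 'cameras'
print(model.predict_proba([new_offer]))  # class probabilities, order of classes_
```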
  15. Desale, S.K.; Kumbhar, R.: Research on automatic classification of documents in library environment : a literature review (2013) 0.00
    4.0280976E-4 = product of:
      0.0068477658 = sum of:
        0.0068477658 = weight(_text_:in in 1071) [ClassicSimilarity], result of:
          0.0068477658 = score(doc=1071,freq=10.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.20163295 = fieldWeight in 1071, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=1071)
      0.05882353 = coord(1/17)
    
    Abstract
    This paper aims to provide an overview of automatic classification research, which focuses on issues related to the automatic classification of documents in a library environment. The review covers literature published in mainstream library and information science studies. The review was done on literature published in both academic and professional LIS journals and other documents. This review reveals that basically three types of research are being done on automatic classification: 1) hierarchical classification using different library classification schemes, 2) text categorization and document categorization using different type of classifiers with or without using training documents, and 3) automatic bibliographic classification. Predominantly this research is directed towards solving problems of organization of digital documents in an online environment. However, very little research is devoted towards solving the problems of arrangement of physical documents.
  16. Ko, Y.: ¬A new term-weighting scheme for text classification using the odds of positive and negative class probabilities (2015) 0.00
    4.0280976E-4 = product of:
      0.0068477658 = sum of:
        0.0068477658 = weight(_text_:in in 2339) [ClassicSimilarity], result of:
          0.0068477658 = score(doc=2339,freq=10.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.20163295 = fieldWeight in 2339, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=2339)
      0.05882353 = coord(1/17)
    
    Abstract
    Text classification (TC) is a core technique for text mining and information retrieval. It has been applied to many applications in many different research and industrial areas. Term-weighting schemes assign an appropriate weight to each term to obtain a high TC performance. Although term weighting is one of the important modules for TC and TC has different peculiarities from those in information retrieval, many term-weighting schemes used in information retrieval, such as term frequency-inverse document frequency (tf-idf), have been used in TC in the same manner. The peculiarity of TC that differs most from information retrieval is the existence of class information. This article proposes a new term-weighting scheme that uses class information using positive and negative class distributions. As a result, the proposed scheme, log tf-TRR, consistently performs better than do other schemes using class information as well as traditional schemes such as tf-idf.
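
The exact log tf-TRR formula is not given in the abstract, so the sketch below only contrasts the two ingredients the paper discusses: a plain tf-idf weight and, as an illustrative class-distribution-based alternative (not the published scheme), a log-odds weight derived from a term's relative frequency in the positive versus the negative class. The toy corpus is invented.

```python
import math
from collections import Counter

docs = [("cheap pills buy now", "spam"),
        ("buy cheap watches now", "spam"),
        ("meeting agenda for monday", "ham"),
        ("please review the agenda", "ham")]

def tf_idf(term, doc_tokens, all_docs):
    """Classic term frequency-inverse document frequency weight."""
    tf = doc_tokens.count(term)
    df = sum(1 for tokens, _ in all_docs if term in tokens)
    idf = math.log(len(all_docs) / df) if df else 0.0
    return tf * idf

def class_odds(term, all_docs, positive="spam", eps=1.0):
    """Illustrative class-distribution weight: log odds of the term's
    relative frequency in the positive vs. the negative class."""
    pos = Counter(t for tokens, c in all_docs if c == positive for t in tokens)
    neg = Counter(t for tokens, c in all_docs if c != positive for t in tokens)
    p = (pos[term] + eps) / (sum(pos.values()) + eps)
    n = (neg[term] + eps) / (sum(neg.values()) + eps)
    return math.log(p / n)

tokenized = [(d.split(), c) for d, c in docs]
print(tf_idf("cheap", tokenized[0][0], tokenized))  # > 0: present and selective
print(class_odds("cheap", tokenized))               # > 0: typical of "spam"
print(class_odds("agenda", tokenized))              # < 0: typical of "ham"
```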
  17. Billal, B.; Fonseca, A.; Sadat, F.; Lounis, H.: Semi-supervised learning and social media text analysis towards multi-labeling categorization (2017) 0.00
    3.7977265E-4 = product of:
      0.0064561353 = sum of:
        0.0064561353 = weight(_text_:in in 4095) [ClassicSimilarity], result of:
          0.0064561353 = score(doc=4095,freq=20.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.19010136 = fieldWeight in 4095, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=4095)
      0.05882353 = coord(1/17)
    
    Abstract
    In traditional text classification, classes are mutually exclusive, i.e. it is not possible to have one text or text fragment classified into more than one class. On the other hand, in multi-label classification an individual text may belong to several classes simultaneously. This type of classification is required by a large number of current applications such as big data classification, images and video annotation. Supervised learning is the most used type of machine learning in the classification task. It requires large quantities of labeled data and the intervention of a human tagger in the creation of the training sets. When the data sets become very large or heavily noisy, this operation can be tedious, prone to error and time consuming. In this case, semi-supervised learning, which requires only few labels, is a better choice. In this paper, we study and evaluate several methods to address the problem of multi-label classification using semi-supervised learning and data from social networks. First, we propose a linguistic pre-processing involving tokenisation, recognition of named entities and hashtag segmentation in order to decrease the noise in this type of massive and unstructured real data and then we perform a word sense disambiguation using WordNet. Second, several experiments related to multi-label classification and semi-supervised learning are carried out on these data sets and compared to each other. These evaluations compare the results of the approaches considered. This paper proposes a method for combining semi-supervised methods with a graph method for the extraction of subjects in social networks using a multi-label classification approach. Experiments show that the proposed model increases the precision of the classification by 4 percentage points when compared to a baseline.
  18. Altinel, B.; Ganiz, M.C.: Semantic text classification : a survey of past and recent advances (2018) 0.00
    3.7977265E-4 = product of:
      0.0064561353 = sum of:
        0.0064561353 = weight(_text_:in in 5051) [ClassicSimilarity], result of:
          0.0064561353 = score(doc=5051,freq=20.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.19010136 = fieldWeight in 5051, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=5051)
      0.05882353 = coord(1/17)
    
    Abstract
    Automatic text classification is the task of organizing documents into pre-determined classes, generally using machine learning algorithms. Generally speaking, it is one of the most important methods to organize and make use of the gigantic amounts of information that exist in unstructured textual format. Text classification is a widely studied research area of language processing and text mining. In traditional text classification, a document is represented as a bag of words, where the words (in other words, terms) are cut from their finer context, i.e. their location in a sentence or in a document. Only the broader context of the document is used with some type of term frequency information in the vector space. Consequently, the semantics of words that can be inferred from the finer context of their location in a sentence and their relations with neighboring words are usually ignored. However, the meaning of words and the semantic connections between words, documents and even classes are obviously important, since methods that capture semantics generally reach better classification performances. Several surveys have been published to analyze diverse approaches for the traditional text classification methods. Most of these surveys cover the application of different semantic term relatedness methods in text classification up to a certain degree. However, they do not specifically target semantic text classification algorithms and their advantages over traditional text classification. In order to fill this gap, we undertake a comprehensive discussion of semantic text classification vs. traditional text classification. This survey explores the past and recent advancements in semantic text classification and attempts to organize existing approaches under five fundamental categories: domain knowledge-based approaches, corpus-based approaches, deep learning based approaches, word/character sequence enhanced approaches and linguistic enriched approaches. Furthermore, this survey highlights the advantages of semantic text classification algorithms over the traditional text classification algorithms.
  19. Kishida, K.: High-speed rough clustering for very large document collections (2010) 0.00
    3.6771328E-4 = product of:
      0.0062511256 = sum of:
        0.0062511256 = weight(_text_:in in 3463) [ClassicSimilarity], result of:
          0.0062511256 = score(doc=3463,freq=12.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.18406484 = fieldWeight in 3463, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3463)
      0.05882353 = coord(1/17)
    
    Abstract
    Document clustering is an important tool, but it is not yet widely used in practice probably because of its high computational complexity. This article explores techniques of high-speed rough clustering of documents, assuming that it is sometimes necessary to obtain a clustering result in a shorter time, although the result is just an approximate outline of document clusters. A promising approach for such clustering is to reduce the number of documents to be checked for generating cluster vectors in the leader-follower clustering algorithm. Based on this idea, the present article proposes a modified Crouch algorithm and incomplete single-pass leader-follower algorithm. Also, a two-stage grouping technique, in which the first stage attempts to decrease the number of documents to be processed in the second stage by applying a quick merging technique, is developed. An experiment using a part of the Reuters corpus RCV1 showed empirically that both the modified Crouch and the incomplete single-pass leader-follower algorithms achieve clustering results more efficiently than the original methods, and also improved the effectiveness of clustering results. On the other hand, the two-stage grouping technique did not reduce the processing time in this experiment.
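
The modified Crouch and incomplete single-pass variants evaluated in the paper are not reproduced here; the snippet below shows only the plain single-pass leader-follower scheme they build on: each document joins the most similar existing cluster if the similarity reaches a threshold, and otherwise founds a new cluster whose leader vector it initializes. The document vectors and the threshold are invented.

```python
import math

def cosine(a, b):
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def leader_follower(doc_vectors, threshold=0.5):
    """Single-pass clustering: assign each document to the most similar
    leader if similarity >= threshold, otherwise start a new cluster."""
    leaders, clusters = [], []
    for i, vec in enumerate(doc_vectors):
        sims = [cosine(vec, leader) for leader in leaders]
        if sims and max(sims) >= threshold:
            best = sims.index(max(sims))
            clusters[best].append(i)
            # Follower step: fold the document into the cluster's leader vector.
            for t, w in vec.items():
                leaders[best][t] = leaders[best].get(t, 0.0) + w
        else:
            leaders.append(dict(vec))
            clusters.append([i])
    return clusters

docs = [{"library": 2, "catalog": 1},   # invented term-frequency vectors
        {"library": 1, "catalog": 2},
        {"genome": 2, "protein": 1},
        {"protein": 2, "genome": 1}]
print(leader_follower(docs, threshold=0.5))  # -> [[0, 1], [2, 3]]
```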
  20. HaCohen-Kerner, Y.; Beck, H.; Yehudai, E.; Rosenstein, M.; Mughaz, D.: Cuisine : classification using stylistic feature sets and/or name-based feature sets (2010) 0.00
    3.6771328E-4 = product of:
      0.0062511256 = sum of:
        0.0062511256 = weight(_text_:in in 3706) [ClassicSimilarity], result of:
          0.0062511256 = score(doc=3706,freq=12.0), product of:
            0.033961542 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.024967048 = queryNorm
            0.18406484 = fieldWeight in 3706, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3706)
      0.05882353 = coord(1/17)
    
    Abstract
    Document classification presents challenges due to the large number of features, their dependencies, and the large number of training documents. In this research, we investigated the use of six stylistic feature sets (including 42 features) and/or six name-based feature sets (including 234 features) for various combinations of the following classification tasks: ethnic groups of the authors and/or periods of time when the documents were written and/or places where the documents were written. The investigated corpus contains Jewish Law articles written in Hebrew-Aramaic, which present interesting problems for classification. Our system CUISINE (Classification UsIng Stylistic feature sets and/or NamE-based feature sets) achieves accuracy results between 90.71 to 98.99% for the seven classification experiments (ethnicity, time, place, ethnicity&time, ethnicity&place, time&place, ethnicity&time&place). For the first six tasks, the stylistic feature sets in general and the quantitative feature set in particular are enough for excellent classification results. In contrast, the name-based feature sets are rather poor for these tasks. However, for the most complex task (ethnicity&time&place), a hill-climbing model using all feature sets succeeds in significantly improving the classification results. Most of the stylistic features (34 of 42) are language-independent and domain-independent. These features might be useful to the community at large, at least for rather simple tasks.

Languages

  • e 42
  • d 4

Types

  • a 42
  • el 2
  • x 2
  • s 1