This database contains over 40,000 documents on topics from the areas of descriptive cataloguing – subject indexing – information retrieval.
© 2015 W. Gödert, TH Köln, Institut für Informationswissenschaft / Powered by litecat, BIS Oldenburg (as of 28 April 2022)
1 Levin, M. ; Krawczyk, S. ; Bethard, S. ; Jurafsky, D.: Citation-based bootstrapping for large-scale author disambiguation.
In: Journal of the American Society for Information Science and Technology. 63(2012) no.5, pp.1030-1047.
Abstract: We present a new, two-stage, self-supervised algorithm for author disambiguation in large bibliographic databases. In the first "bootstrap" stage, a collection of high-precision features is used to bootstrap a training set with positive and negative examples of coreferring authors. A supervised feature-based classifier is then trained on the bootstrap clusters and used to cluster the authors in a larger unlabeled dataset. Our self-supervised approach shares the advantages of unsupervised approaches (no need for expensive hand labels) as well as supervised approaches (a rich set of features that can be discriminatively trained). The algorithm disambiguates 54,000,000 author instances in Thomson Reuters' Web of Knowledge with B³ F1 of .807. We analyze parameters and features, particularly those from citation networks, which have not been deeply investigated in author disambiguation. The most important citation feature is self-citation, which can be approximated without expensive extraction of the full network. For the supervised stage, the minor improvement due to other citation features (increasing F1 from .748 to .767) suggests they may not be worth the trouble of extracting from databases that don't already have them. A lean feature set without expensive abstract and title features performs 130 times faster with about equal F1.
Subject area: Informetrics ; Computational linguistics
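The two-stage pipeline summarized in the abstract above (high-precision rules bootstrap a pseudo-labelled training set; a classifier trained on it then clusters the unlabeled mentions) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the mention fields, the e-mail and co-author rules, and the threshold "classifier" are all assumptions standing in for the paper's richer feature set.

```python
from itertools import combinations

# Toy author mentions sharing the ambiguous name "J. Smith".
# (Fields and values are invented for illustration.)
mentions = [
    {"id": 0, "email": "jsmith@mit.edu", "coauthors": {"A. Lee", "B. Kim"}},
    {"id": 1, "email": "jsmith@mit.edu", "coauthors": {"A. Lee"}},
    {"id": 2, "email": "",               "coauthors": {"C. Wong"}},
    {"id": 3, "email": "",               "coauthors": {"C. Wong", "D. Patel"}},
]

def features(a, b):
    """Pairwise features; the real system adds citation features
    such as a self-citation proxy."""
    return {
        "same_email": int(bool(a["email"]) and a["email"] == b["email"]),
        "shared_coauthors": len(a["coauthors"] & b["coauthors"]),
    }

# Stage 1 ("bootstrap"): high-precision rules yield a pseudo-labelled
# training set -- no hand labels needed.
def bootstrap_label(f):
    if f["same_email"]:             # very high precision -> positive
        return 1
    if f["shared_coauthors"] == 0:  # no overlap at all   -> negative
        return 0
    return None                     # unclear: leave unlabelled

train = []
for a, b in combinations(mentions, 2):
    f = features(a, b)
    y = bootstrap_label(f)
    if y is not None:
        train.append((f, y))

# Stage 2: fit a (deliberately trivial) classifier on the bootstrap set --
# here just a learned threshold on shared co-authors, standing in for the
# paper's discriminatively trained feature-based model.
pos = [f["shared_coauthors"] for f, y in train if y == 1]
threshold = min(pos) if pos else 1

def same_author(a, b):
    f = features(a, b)
    return f["same_email"] == 1 or f["shared_coauthors"] >= threshold

# Cluster all mentions by the pairwise decisions.
clusters = []
for m in mentions:
    for c in clusters:
        if any(same_author(m, o) for o in c):
            c.append(m)
            break
    else:
        clusters.append([m])

print([[m["id"] for m in c] for c in clusters])  # -> [[0, 1], [2, 3]]
```

Note how the pair (2, 3) is left unlabelled in stage 1 (co-author overlap but no e-mail match) and is only merged by the stage-2 model learned from the bootstrap examples, which is the point of the two-stage design.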
2 Jurafsky, D. ; Martin, J.H.: Speech and language processing : an introduction to natural language processing, computational linguistics and speech recognition. 2nd ed.
Upper Saddle River, NJ : Prentice Hall, 2009. 1024 pp.
(Prentice Hall series in artificial intelligence)
Abstract: For undergraduate or advanced undergraduate courses in Classical Natural Language Processing, Statistical Natural Language Processing, Speech Recognition, Computational Linguistics, and Human Language Processing. An explosion of Web-based language techniques, the merging of distinct fields, the availability of phone-based dialogue systems, and much more make this an exciting time in speech and language processing. The first of its kind to thoroughly cover language technology at all levels and with all modern technologies, this text takes an empirical approach to the subject, based on applying statistical and other machine-learning algorithms to large corpora. The authors cover areas that traditionally are taught in different courses, to describe a unified vision of speech and language processing. Emphasis is on practical applications and scientific evaluation. An accompanying Website contains teaching materials for instructors, with pointers to language processing resources on the Web. The Second Edition offers a significant amount of new and extended material.
LCSH: Computational linguistics / Automatic speech recognition
RSWK: Computerlinguistik / Automatische Spracherkennung / Lehrbuch
BK: 54.75 Sprachverarbeitung Informatik ; 17.46 Mathematische Linguistik ; 18.00 Einzelne Sprachen und Literaturen allgemein
GHBS: TVV (DU) ; TZF (DU) ; BFP (DU) ; DKF (E) ; BFN (E)
RVK: ES 900 ; ST 306