Search (294 results, page 1 of 15)

  • theme_ss:"Automatisches Indexieren"
  1. Riloff, E.: An empirical study of automated dictionary construction for information extraction in three domains (1996) 0.08
    0.08340733 = product of:
      0.25022197 = sum of:
        0.016184142 = weight(_text_:information in 6752) [ClassicSimilarity], result of:
          0.016184142 = score(doc=6752,freq=6.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.2687516 = fieldWeight in 6752, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=6752)
        0.18536991 = weight(_text_:extraction in 6752) [ClassicSimilarity], result of:
          0.18536991 = score(doc=6752,freq=6.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.9095484 = fieldWeight in 6752, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.0625 = fieldNorm(doc=6752)
        0.030077105 = weight(_text_:system in 6752) [ClassicSimilarity], result of:
          0.030077105 = score(doc=6752,freq=2.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.27838376 = fieldWeight in 6752, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=6752)
        0.01859081 = product of:
          0.03718162 = sum of:
            0.03718162 = weight(_text_:22 in 6752) [ClassicSimilarity], result of:
              0.03718162 = score(doc=6752,freq=2.0), product of:
                0.120126344 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03430388 = queryNorm
                0.30952093 = fieldWeight in 6752, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6752)
          0.5 = coord(1/2)
      0.33333334 = coord(4/12)
    
    Abstract
    AutoSlog is a system that addresses the knowledge-engineering bottleneck for information extraction. AutoSlog automatically creates domain-specific dictionaries for information extraction, given an appropriate training corpus. Describes experiments with AutoSlog in the terrorism, joint ventures and microelectronics domains. Compares the performance of AutoSlog across the 3 domains, discusses the lessons learned, and presents results from 2 experiments which demonstrate that novice users can generate effective dictionaries using AutoSlog.
    Date
    6. 3.1997 16:22:15
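    The indented score breakdowns in this listing look like Lucene "explain" trees produced with the ClassicSimilarity (TF-IDF) ranking. Assuming that reading is correct, the leaf values can be reproduced from the standard formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)); the following Python sketch recomputes one summand and the final score of result 1 from the numbers shown above.

    import math

    # ClassicSimilarity building blocks (TF-IDF), as they appear in the explain tree:
    def idf(doc_freq: int, max_docs: int) -> float:
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def tf(freq: float) -> float:
        return math.sqrt(freq)

    def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
        # score = queryWeight * fieldWeight
        #       = (idf * queryNorm) * (tf * idf * fieldNorm)
        i = idf(doc_freq, max_docs)
        return (i * query_norm) * (tf(freq) * i * field_norm)

    # Values copied from the "_text_:extraction" leaf of result 1 (doc 6752):
    s = term_score(freq=6.0, doc_freq=315, max_docs=44218,
                   query_norm=0.03430388, field_norm=0.0625)
    print(round(s, 7))                      # ~0.1853699, the 0.18536991 summand above
    # The document score is coord(matching clauses / total clauses) * sum of summands:
    print(round(4 / 12 * 0.25022197, 7))    # ~0.0834073, the 0.08 shown for result 1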
  2. Vlachidis, A.; Tudhope, D.: A knowledge-based approach to information extraction for semantic interoperability in the archaeology domain (2016) 0.05
    0.045011945 = product of:
      0.18004778 = sum of:
        0.011679897 = weight(_text_:information in 2895) [ClassicSimilarity], result of:
          0.011679897 = score(doc=2895,freq=8.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.19395474 = fieldWeight in 2895, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2895)
        0.14956969 = weight(_text_:extraction in 2895) [ClassicSimilarity], result of:
          0.14956969 = score(doc=2895,freq=10.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.7338887 = fieldWeight in 2895, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2895)
        0.018798191 = weight(_text_:system in 2895) [ClassicSimilarity], result of:
          0.018798191 = score(doc=2895,freq=2.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.17398985 = fieldWeight in 2895, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2895)
      0.25 = coord(3/12)
    
    Abstract
    The article presents a method for automatic semantic indexing of archaeological grey-literature reports using empirical (rule-based) Information Extraction techniques in combination with domain-specific knowledge organization systems. The semantic annotation system (OPTIMA) performs the tasks of Named Entity Recognition, Relation Extraction, Negation Detection, and Word-Sense Disambiguation using hand-crafted rules and terminological resources for associating contextual abstractions with classes of the standard ontology CIDOC Conceptual Reference Model (CRM) for cultural heritage and its archaeological extension, CRM-EH. Relation Extraction (RE) performance benefits from a syntactic-based definition of RE patterns derived from domain-oriented corpus analysis. The evaluation also shows clear benefit in the use of assistive natural language processing (NLP) modules relating to Word-Sense Disambiguation, Negation Detection, and Noun Phrase Validation, together with controlled thesaurus expansion. The semantic indexing results demonstrate the capacity of rule-based Information Extraction techniques to deliver interoperable semantic abstractions (semantic annotations) with respect to the CIDOC CRM and archaeological thesauri. Major contributions include recognition of relevant entities using shallow parsing NLP techniques driven by a complementary use of ontological and terminological domain resources and empirical derivation of context-driven RE rules for the recognition of semantic relationships from phrases of unstructured text.
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.5, S.1138-1152
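    The OPTIMA pipeline summarised above combines hand-crafted rules, terminological resources and negation detection. As a purely illustrative sketch (the gazetteer, CRM class labels and negation heuristic below are invented for the example and are not OPTIMA's actual rules), dictionary lookup plus a simple negation-scope check might look like this:

    import re

    # Toy gazetteer and negation cues; the real system uses hand-crafted rules
    # against CIDOC CRM / CRM-EH classes and archaeological thesauri.
    GAZETTEER = {
        "ditch": "E25 Man-Made Feature",
        "pit": "E25 Man-Made Feature",
        "roman pottery": "E22 Man-Made Object",
        "hearth": "E25 Man-Made Feature",
    }
    NEGATION_CUES = re.compile(r"\b(no|not|without|absence of)\b", re.IGNORECASE)

    def annotate(sentence: str):
        """Return (span, class, negated) triples found by dictionary lookup."""
        found = []
        lowered = sentence.lower()
        for term, crm_class in GAZETTEER.items():
            for m in re.finditer(re.escape(term), lowered):
                # crude negation scope: a cue among the 4 tokens preceding the match
                left_context = lowered[:m.start()].split()[-4:]
                negated = bool(NEGATION_CUES.search(" ".join(left_context)))
                found.append((sentence[m.start():m.end()], crm_class, negated))
        return found

    print(annotate("The trench revealed a hearth but no Roman pottery."))
    # [('Roman pottery', 'E22 Man-Made Object', True),
    #  ('hearth', 'E25 Man-Made Feature', False)]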
  3. Renouf, A.: Making sense of text : automated approaches to meaning extraction (1993) 0.04
    0.035069317 = product of:
      0.21041588 = sum of:
        0.023125017 = weight(_text_:information in 7111) [ClassicSimilarity], result of:
          0.023125017 = score(doc=7111,freq=4.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.3840108 = fieldWeight in 7111, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=7111)
        0.18729086 = weight(_text_:extraction in 7111) [ClassicSimilarity], result of:
          0.18729086 = score(doc=7111,freq=2.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.9189739 = fieldWeight in 7111, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.109375 = fieldNorm(doc=7111)
      0.16666667 = coord(2/12)
    
    Imprint
    Oxford : Learned Information
    Source
    Online information 93: 17th International Online Meeting Proceedings, London, 7.-9.12.1993. Ed. by D.I. Raitt et al
  4. Li, W.; Wong, K.-F.; Yuan, C.: Toward automatic Chinese temporal information extraction (2001) 0.03
    0.03387143 = product of:
      0.13548572 = sum of:
        0.014304894 = weight(_text_:information in 6029) [ClassicSimilarity], result of:
          0.014304894 = score(doc=6029,freq=12.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.23754507 = fieldWeight in 6029, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6029)
        0.09459618 = weight(_text_:extraction in 6029) [ClassicSimilarity], result of:
          0.09459618 = score(doc=6029,freq=4.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.46415195 = fieldWeight in 6029, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6029)
        0.026584659 = weight(_text_:system in 6029) [ClassicSimilarity], result of:
          0.026584659 = score(doc=6029,freq=4.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.24605882 = fieldWeight in 6029, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6029)
      0.25 = coord(3/12)
    
    Abstract
    Over the past few years, temporal information processing and temporal database management have increasingly become hot topics. Nevertheless, only a few researchers have investigated these areas in the Chinese language. This lays down the objective of our research: to exploit Chinese language processing techniques for temporal information extraction and concept reasoning. In this article, we first study the mechanism for expressing time in Chinese. On the basis of the study, we then design a general frame structure for maintaining the extracted temporal concepts and propose a system for extracting time-dependent information from Hong Kong financial news. In the system, temporal knowledge is represented by different types of temporal concepts (TTC) and different temporal relations, including absolute and relative relations, which are used to correlate between action times and reference times. In analyzing a sentence, the algorithm first determines the situation related to the verb. This in turn will identify the type of temporal concept associated with the verb. After that, the relevant temporal information is extracted and the temporal relations are derived. These relations link relevant concept frames together in chronological order, which in turn provide the knowledge to fulfill users' queries, e.g., for question-answering (i.e., Q&A) applications
    Source
    Journal of the American Society for Information Science and technology. 52(2001) no.9, S.748-762
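    The abstract above distinguishes absolute temporal expressions from relative ones that must be resolved against a reference time. A toy, English-language sketch of that distinction (the actual system works on Chinese text and covers far richer temporal relations) could look as follows:

    import re
    from datetime import date, timedelta

    ABSOLUTE = re.compile(r"(\d{4})-(\d{2})-(\d{2})")
    RELATIVE = {"yesterday": -1, "today": 0, "tomorrow": 1}

    def extract_times(text: str, reference: date):
        """Return (expression, resolved_date) pairs found in the text."""
        results = []
        for m in ABSOLUTE.finditer(text):
            y, mo, d = map(int, m.groups())
            results.append((m.group(0), date(y, mo, d)))
        for word, offset in RELATIVE.items():
            if re.search(rf"\b{word}\b", text, re.IGNORECASE):
                results.append((word, reference + timedelta(days=offset)))
        return results

    ref = date(2001, 9, 1)   # e.g. the article's publication date as reference time
    print(extract_times("The merger announced on 2001-08-30 closes tomorrow.", ref))
    # [('2001-08-30', datetime.date(2001, 8, 30)), ('tomorrow', datetime.date(2001, 9, 2))]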
  5. Roberts, D.; Souter, C.: ¬The automation of controlled vocabulary subject indexing of medical journal articles (2000) 0.03
    0.033097778 = product of:
      0.13239111 = sum of:
        0.0070079383 = weight(_text_:information in 711) [ClassicSimilarity], result of:
          0.0070079383 = score(doc=711,freq=2.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.116372846 = fieldWeight in 711, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=711)
        0.08026751 = weight(_text_:extraction in 711) [ClassicSimilarity], result of:
          0.08026751 = score(doc=711,freq=2.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.39384598 = fieldWeight in 711, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.046875 = fieldNorm(doc=711)
        0.04511566 = weight(_text_:system in 711) [ClassicSimilarity], result of:
          0.04511566 = score(doc=711,freq=8.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.41757566 = fieldWeight in 711, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=711)
      0.25 = coord(3/12)
    
    Abstract
    This article discusses the possibility of the automation of sophisticated subject indexing of medical journal articles. Approaches to subject descriptor assignment in information retrieval research are usually either based upon the manual descriptors in the database or generation of search parameters from the text of the article. The principles of the Medline indexing system are described, followed by a summary of a pilot project, based upon the Amed database. The results suggest that a more extended study, based upon Medline, should encompass various components: Extraction of 'concept strings' from titles and abstracts of records, based upon linguistic features characteristic of medical literature. Use of the Unified Medical Language System (UMLS) for identification of controlled vocabulary descriptors. Coordination of descriptors, utilising features of the Medline indexing system. The emphasis should be on system manipulation of data, based upon input, available resources and specifically designed rules.
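    One component proposed above is identifying controlled-vocabulary descriptors, via UMLS, for concept strings extracted from titles and abstracts. A minimal dictionary-lookup sketch of that mapping step, with a hypothetical three-entry vocabulary standing in for UMLS, might be:

    # Hypothetical controlled-vocabulary table; UMLS itself is far larger and also
    # provides synonym and concept-identifier mappings.
    VOCABULARY = {
        "low back pain": "MeSH: Low Back Pain",
        "acupuncture": "MeSH: Acupuncture Therapy",
        "randomised controlled trial": "MeSH: Randomized Controlled Trial",
    }

    def match_descriptors(concept_strings):
        """Map extracted concept strings to controlled-vocabulary descriptors."""
        hits = []
        for phrase in concept_strings:
            descriptor = VOCABULARY.get(phrase.lower().strip())
            if descriptor:
                hits.append((phrase, descriptor))
        return hits

    candidates = ["Acupuncture", "low back pain", "gait analysis"]
    print(match_descriptors(candidates))
    # [('Acupuncture', 'MeSH: Acupuncture Therapy'), ('low back pain', 'MeSH: Low Back Pain')]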
  6. Nohr, H.: Grundlagen der automatischen Indexierung : ein Lehrbuch (2003) 0.03
    0.027518107 = product of:
      0.11007243 = sum of:
        0.008092071 = weight(_text_:information in 1767) [ClassicSimilarity], result of:
          0.008092071 = score(doc=1767,freq=6.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.1343758 = fieldWeight in 1767, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=1767)
        0.092684954 = weight(_text_:extraction in 1767) [ClassicSimilarity], result of:
          0.092684954 = score(doc=1767,freq=6.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.4547742 = fieldWeight in 1767, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03125 = fieldNorm(doc=1767)
        0.009295405 = product of:
          0.01859081 = sum of:
            0.01859081 = weight(_text_:22 in 1767) [ClassicSimilarity], result of:
              0.01859081 = score(doc=1767,freq=2.0), product of:
                0.120126344 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03430388 = queryNorm
                0.15476047 = fieldWeight in 1767, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1767)
          0.5 = coord(1/2)
      0.25 = coord(3/12)
    
    Date
    22. 6.2009 12:46:51
    Footnote
    Review in: nfd 54(2003) H.5, S.314 (W. Ratzek): "To extract decision-relevant data from the constantly growing flood of more or less relevant documents, companies, public administrations and specialist information institutions must develop, deploy and maintain effective and efficient filtering systems. This textbook by Holger Nohr offers the first fundamental introduction to the topic of automatic indexing. After all, "how you gather, manage, and use information will determine whether you win or lose" (Bill Gates), as the opening puts it. The first chapter, "Einleitung" (introduction), focuses on the fundamentals: it describes the connections between document management systems, information retrieval and indexing for planning, decision-making and innovation processes in both profit and non-profit organizations. At the end of the introductory chapter, Nohr takes up the debate between intellectual and automatic indexing and thereby leads into the second chapter, "automatisches Indexieren" (automatic indexing). Here the author gives an overview of, among other things, problems of automatic language processing and indexing, and the various methods of automatic indexing, e.g. simple keyword extraction / full-text inversion, statistical methods, and pattern-matching methods. The methods of automatic indexing are then treated in depth, with many examples, in the third and most extensive chapter. The fourth chapter, "Keyphrase Extraction", plays a bridging role: "Approaches that extract key phrases from documents (keyphrase extraction) represent an intermediate stage on the way from automatic indexing to the automatic generation of textual summaries (automatic text summarization). The boundaries between automatic indexing methods and those of text summarization are fluid." (p. 91). Nohr describes how this works using the example of NCR's Extractor / Copernic Summarizer.
    In the fifth chapter, "Information Extraction", Nohr addresses a problem that deserves even stronger emphasis in the field: "In addition to automatic subject analysis, the steadily growing number of electronic documents also makes it desirable to extract the relevant information from these documents automatically, for example so that it can be transferred into operational information systems for further processing or analysis." (p. 103). Indexing and retrieval methods, as mutually dependent procedures, are treated in the sixth chapter, which focuses on relevance ranking and relevance feedback as well as the application of information-linguistic methods in searching. The evaluation of automatic indexing forms the thematic conclusion; it deals above all with the quality of an indexing run and with the common retrieval measures used in retrieval tests and their application. It should also be noted that each chapter opens with a statement of learning objectives and that review questions for the individual chapters are provided at the back of the book. The numerous practical examples, a list of abbreviations and a subject index increase the book's usefulness. Reading it deepened this reviewer's understanding of how the library and information science toolkit, business informatics (especially data warehousing) and artificial intelligence fit together. "Grundlagen der automatischen Indexierung" should be required reading in library science degree programmes as well. Holger Nohr's textbook is also suitable for the LIS professional who wants to refresh his or her more or less well-founded knowledge of automatic indexing quickly, accessibly and informatively."
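    Among the indexing methods the review lists is simple keyword extraction / full-text inversion. A minimal sketch of full-text inversion (stopword list and example documents invented for the illustration) is:

    from collections import defaultdict

    STOPWORDS = {"der", "die", "das", "und", "the", "a", "of"}

    def invert(docs: dict[str, str]) -> dict[str, set[str]]:
        """Map every lower-cased, stopword-filtered token to the documents containing it."""
        index = defaultdict(set)
        for doc_id, text in docs.items():
            for token in text.lower().split():
                token = token.strip(".,;:!?")
                if token and token not in STOPWORDS:
                    index[token].add(doc_id)
        return index

    docs = {"d1": "Grundlagen der automatischen Indexierung",
            "d2": "Indexierung und Retrieval"}
    print(sorted(invert(docs)["indexierung"]))   # ['d1', 'd2']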
  7. Daudaravicius, V.: ¬A framework for keyphrase extraction from scientific journals (2016) 0.03
    0.026781835 = product of:
      0.16069101 = sum of:
        0.02825637 = weight(_text_:web in 2930) [ClassicSimilarity], result of:
          0.02825637 = score(doc=2930,freq=2.0), product of:
            0.111951075 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03430388 = queryNorm
            0.25239927 = fieldWeight in 2930, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2930)
        0.13243464 = weight(_text_:extraction in 2930) [ClassicSimilarity], result of:
          0.13243464 = score(doc=2930,freq=4.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.6498127 = fieldWeight in 2930, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2930)
      0.16666667 = coord(2/12)
    
    Abstract
    We present a framework for keyphrase extraction from scientific journals in diverse research fields. While journal articles are often provided with manually assigned keywords, it is not clear how to automatically extract keywords and measure their significance for a set of journal articles. We compare extracted keyphrases from journals in the fields of astrophysics, mathematics, physics, and computer science. We show that the presented statistics-based framework is able to demonstrate differences among journals, and that the extracted keyphrases can be used to represent journal or conference research topics, dynamics, and specificity.
    Content
    Vortrag, "Semantics, Analytics, Visualisation: Enhancing Scholarly Data Workshop co-located with the 25th International World Wide Web Conference April 11, 2016 - Montreal, Canada", Montreal 2016.
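    The framework above is statistics-based and aims to show how characteristic extracted keyphrases are of individual journals. One simple statistic in that spirit (not necessarily the paper's own measure) compares a phrase's relative frequency in one journal with its relative frequency across all journals:

    import math
    from collections import Counter

    def specificity(phrase: str, journal_texts: dict[str, list[str]]) -> dict[str, float]:
        """Log-ratio of local vs. global relative frequency; > 0 means over-represented."""
        counts = {j: Counter(toks) for j, toks in journal_texts.items()}
        totals = {j: sum(c.values()) for j, c in counts.items()}
        global_freq = sum(c[phrase] for c in counts.values()) / sum(totals.values())
        scores = {}
        for j in journal_texts:
            local_freq = counts[j][phrase] / totals[j]
            scores[j] = math.log((local_freq + 1e-9) / (global_freq + 1e-9))
        return scores

    # Tiny invented corpora standing in for tokenised journal articles:
    corpus = {"astro": "dark matter halo dark matter galaxy".split(),
              "cs":    "neural network dark training data".split()}
    print(specificity("dark", corpus))   # positive for "astro", negative for "cs"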
  8. Witschel, H.F.: Terminology extraction and automatic indexing : comparison and qualitative evaluation of methods (2005) 0.03
    0.025901606 = product of:
      0.15540963 = sum of:
        0.0058399485 = weight(_text_:information in 1842) [ClassicSimilarity], result of:
          0.0058399485 = score(doc=1842,freq=2.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.09697737 = fieldWeight in 1842, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1842)
        0.14956969 = weight(_text_:extraction in 1842) [ClassicSimilarity], result of:
          0.14956969 = score(doc=1842,freq=10.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.7338887 = fieldWeight in 1842, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1842)
      0.16666667 = coord(2/12)
    
    Abstract
    Many terminology engineering processes involve the task of automatic terminology extraction: before the terminology of a given domain can be modelled, organised or standardised, important concepts (or terms) of this domain have to be identified and fed into terminological databases. These serve in further steps as a starting point for compiling dictionaries, thesauri or maybe even terminological ontologies for the domain. For the extraction of the initial concepts, extraction methods are needed that operate on specialised language texts. On the other hand, many machine learning or information retrieval applications require automatic indexing techniques. In Machine Learning applications concerned with the automatic clustering or classification of texts, often feature vectors are needed that describe the contents of a given text briefly but meaningfully. These feature vectors typically consist of a fairly small set of index terms together with weights indicating their importance. Short but meaningful descriptions of document contents as provided by good index terms are also useful to humans: some knowledge management applications (e.g. topic maps) use them as a set of basic concepts (topics). The author believes that the tasks of terminology extraction and automatic indexing have much in common and can thus benefit from the same set of basic algorithms. It is the goal of this paper to outline some methods that may be used in both contexts, but also to find the discriminating factors between the two tasks that call for the variation of parameters or application of different techniques. The discussion of these methods will be based on statistical, syntactical and especially morphological properties of (index) terms. The paper is concluded by the presentation of some qualitative and quantitative results comparing statistical and morphological methods.
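    A classic example of the statistical methods discussed above is the "weirdness" ratio, which compares a word's relative frequency in a domain corpus with its relative frequency in a general reference corpus; words strongly over-represented in the domain become term candidates. A small sketch with invented corpora:

    from collections import Counter

    def weirdness(domain_tokens, reference_tokens):
        """Rank domain words by (domain relative frequency) / (reference relative frequency)."""
        dom, ref = Counter(domain_tokens), Counter(reference_tokens)
        n_dom, n_ref = len(domain_tokens), len(reference_tokens)
        scores = {}
        for word, f in dom.items():
            ref_rel = (ref[word] + 1) / (n_ref + 1)      # add-one smoothing
            scores[word] = (f / n_dom) / ref_rel
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    domain = "thesaurus indexing thesaurus descriptor retrieval".split()
    general = "the cat sat on the mat and the dog barked".split()
    print(weirdness(domain, general)[:3])   # domain-specific words rank highest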
  9. Lorenz, S.: Konzeption und prototypische Realisierung einer begriffsbasierten Texterschließung (2006) 0.03
    0.02530464 = product of:
      0.10121856 = sum of:
        0.0070079383 = weight(_text_:information in 1746) [ClassicSimilarity], result of:
          0.0070079383 = score(doc=1746,freq=2.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.116372846 = fieldWeight in 1746, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1746)
        0.08026751 = weight(_text_:extraction in 1746) [ClassicSimilarity], result of:
          0.08026751 = score(doc=1746,freq=2.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.39384598 = fieldWeight in 1746, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.046875 = fieldNorm(doc=1746)
        0.013943106 = product of:
          0.027886212 = sum of:
            0.027886212 = weight(_text_:22 in 1746) [ClassicSimilarity], result of:
              0.027886212 = score(doc=1746,freq=2.0), product of:
                0.120126344 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03430388 = queryNorm
                0.23214069 = fieldWeight in 1746, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1746)
          0.5 = coord(1/2)
      0.25 = coord(3/12)
    
    Abstract
    Im Rahmen dieser Arbeit wird eine Vorgehensweise entwickelt, die die Fixierung auf das Wort und die damit verbundenen Schwächen überwindet. Sie gestattet die Extraktion von Informationen anhand der repräsentierten Begriffe und bildet damit die Basis einer inhaltlichen Texterschließung. Die anschließende prototypische Realisierung dient dazu, die Konzeption zu überprüfen sowie ihre Möglichkeiten und Grenzen abzuschätzen und zu bewerten. Arbeiten zum Information Extraction widmen sich fast ausschließlich dem Englischen, wobei insbesondere im Bereich der Named Entities sehr gute Ergebnisse erzielt werden. Deutlich schlechter sehen die Resultate für weniger regelmäßige Sprachen wie beispielsweise das Deutsche aus. Aus diesem Grund sowie praktischen Erwägungen wie insbesondere der Vertrautheit des Autors damit, soll diese Sprache primär Gegenstand der Untersuchungen sein. Die Lösung von einer engen Termorientierung bei gleichzeitiger Betonung der repräsentierten Begriffe legt nahe, dass nicht nur die verwendeten Worte sekundär werden sondern auch die verwendete Sprache. Um den Rahmen dieser Arbeit nicht zu sprengen wird bei der Untersuchung dieses Punktes das Augenmerk vor allem auf die mit unterschiedlichen Sprachen verbundenen Schwierigkeiten und Besonderheiten gelegt.
    Date
    22. 3.2015 9:17:30
  10. Experimentelles und praktisches Information Retrieval : Festschrift für Gerhard Lustig (1992) 0.02
    0.024121879 = product of:
      0.096487515 = sum of:
        0.017165873 = weight(_text_:information in 4) [ClassicSimilarity], result of:
          0.017165873 = score(doc=4,freq=12.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.2850541 = fieldWeight in 4, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4)
        0.056763813 = weight(_text_:suche in 4) [ClassicSimilarity], result of:
          0.056763813 = score(doc=4,freq=2.0), product of:
            0.17138755 = queryWeight, product of:
              4.996156 = idf(docFreq=812, maxDocs=44218)
              0.03430388 = queryNorm
            0.3312015 = fieldWeight in 4, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.996156 = idf(docFreq=812, maxDocs=44218)
              0.046875 = fieldNorm(doc=4)
        0.02255783 = weight(_text_:system in 4) [ClassicSimilarity], result of:
          0.02255783 = score(doc=4,freq=2.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.20878783 = fieldWeight in 4, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=4)
      0.25 = coord(3/12)
    
    Content
    Enthält die Beiträge: SALTON, G.: Effective text understanding in information retrieval; KRAUSE, J.: Intelligentes Information retrieval; FUHR, N.: Konzepte zur Gestaltung zukünftiger Information-Retrieval-Systeme; HÜTHER, H.: Überlegungen zu einem mathematischen Modell für die Type-Token-, die Grundform-Token und die Grundform-Type-Relation; KNORZ, G.: Automatische Generierung inferentieller Links in und zwischen Hyperdokumenten; KONRAD, E.: Zur Effektivitätsbewertung von Information-Retrieval-Systemen; HENRICHS, N.: Retrievalunterstützung durch automatisch generierte Wortfelder; LÜCK, W., W. RITTBERGER u. M. SCHWANTNER: Der Einsatz des Automatischen Indexierungs- und Retrieval-System (AIR) im Fachinformationszentrum Karlsruhe; REIMER, U.: Verfahren der Automatischen Indexierung. Benötigtes Vorwissen und Ansätze zu seiner automatischen Akquisition: Ein Überblick; ENDRES-NIGGEMEYER, B.: Dokumentrepräsentation: Ein individuelles prozedurales Modell des Abstracting, des Indexierens und Klassifizierens; SEELBACH, D.: Zur Entwicklung von zwei- und mehrsprachigen lexikalischen Datenbanken und Terminologiedatenbanken; ZIMMERMANN, H.: Der Einfluß der Sprachbarrieren in Europa und Möglichkeiten zu ihrer Minderung; LENDERS, W.: Wörter zwischen Welt und Wissen; PANYR, J.: Frames, Thesauri und automatische Klassifikation (Clusteranalyse): HAHN, U.: Forschungsstrategien und Erkenntnisinteressen in der anwendungsorientierten automatischen Sprachverarbeitung. Überlegungen zu einer ingenieurorientierten Computerlinguistik; KUHLEN, R.: Hypertext und Information Retrieval - mehr als Browsing und Suche.
  11. Short, M.: Text mining and subject analysis for fiction; or, using machine learning and information extraction to assign subject headings to dime novels (2019) 0.02
    0.023435093 = product of:
      0.14061056 = sum of:
        0.008175928 = weight(_text_:information in 5481) [ClassicSimilarity], result of:
          0.008175928 = score(doc=5481,freq=2.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.13576832 = fieldWeight in 5481, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5481)
        0.13243464 = weight(_text_:extraction in 5481) [ClassicSimilarity], result of:
          0.13243464 = score(doc=5481,freq=4.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.6498127 = fieldWeight in 5481, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5481)
      0.16666667 = coord(2/12)
    
    Abstract
    This article describes multiple experiments in text mining at Northern Illinois University that were undertaken to improve the efficiency and accuracy of cataloging. It focuses narrowly on subject analysis of dime novels, a format of inexpensive fiction that was popular in the United States between 1860 and 1915. NIU holds more than 55,000 dime novels in its collections, which it is in the process of comprehensively digitizing. Classification, keyword extraction, named-entity recognition, clustering, and topic modeling are discussed as means of assigning subject headings to improve their discoverability by researchers and to increase the productivity of digitization workflows.
  12. Zhang, Y.; Zhang, C.; Li, J.: Joint modeling of characters, words, and conversation contexts for microblog keyphrase extraction (2020) 0.02
    0.023269854 = product of:
      0.13961913 = sum of:
        0.0058399485 = weight(_text_:information in 5816) [ClassicSimilarity], result of:
          0.0058399485 = score(doc=5816,freq=2.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.09697737 = fieldWeight in 5816, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5816)
        0.13377918 = weight(_text_:extraction in 5816) [ClassicSimilarity], result of:
          0.13377918 = score(doc=5816,freq=8.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.6564099 = fieldWeight in 5816, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5816)
      0.16666667 = coord(2/12)
    
    Abstract
    Millions of messages are produced on microblog platforms every day, leading to the pressing need for automatic identification of key points from the massive texts. To absorb salient content from the vast bulk of microblog posts, this article focuses on the task of microblog keyphrase extraction. In previous work, most efforts treat messages as independent documents and might suffer from the data sparsity problem exhibited in short and informal microblog posts. On the contrary, we propose to enrich contexts via exploiting conversations initialized by target posts and formed by their replies, which are generally centered around relevant topics to the target posts and therefore helpful for keyphrase identification. Concretely, we present a neural keyphrase extraction framework, which has 2 modules: a conversation context encoder and a keyphrase tagger. The conversation context encoder captures indicative representation from their conversation contexts and feeds the representation into the keyphrase tagger, and the keyphrase tagger extracts salient words from target posts. The 2 modules were trained jointly to optimize the conversation context encoding and keyphrase extraction processes. In the conversation context encoder, we leverage hierarchical structures to capture the word-level indicative representation and message-level indicative representation hierarchically. In both of the modules, we apply character-level representations, which enables the model to explore morphological features and deal with the out-of-vocabulary problem caused by the informal language style of microblog messages. Extensive comparison results on real-life data sets indicate that our model outperforms state-of-the-art models from previous studies.
    Source
    Journal of the Association for Information Science and Technology. 71(2020) no.5, S.553-567
  13. Smiraglia, R.P.; Cai, X.: Tracking the evolution of clustering, machine learning, automatic indexing and automatic classification in knowledge organization (2017) 0.02
    0.023228165 = product of:
      0.09291266 = sum of:
        0.02018312 = weight(_text_:web in 3627) [ClassicSimilarity], result of:
          0.02018312 = score(doc=3627,freq=2.0), product of:
            0.111951075 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03430388 = queryNorm
            0.18028519 = fieldWeight in 3627, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3627)
        0.0058399485 = weight(_text_:information in 3627) [ClassicSimilarity], result of:
          0.0058399485 = score(doc=3627,freq=2.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.09697737 = fieldWeight in 3627, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3627)
        0.06688959 = weight(_text_:extraction in 3627) [ClassicSimilarity], result of:
          0.06688959 = score(doc=3627,freq=2.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.32820496 = fieldWeight in 3627, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3627)
      0.25 = coord(3/12)
    
    Abstract
    A very important extension of the traditional domain of knowledge organization (KO) arises from attempts to incorporate techniques devised in the computer science domain for automatic concept extraction and for grouping, categorizing, clustering and otherwise organizing knowledge using mechanical means. Four specific terms have emerged to identify the most prevalent techniques: machine learning, clustering, automatic indexing, and automatic classification. Our study presents three domain analytical case analyses in search of answers. The first case relies on citations located using the ISKO-supported "Knowledge Organization Bibliography." The second case relies on works in both Web of Science and SCOPUS. Case three applies co-word analysis and citation analysis to the contents of the papers in the present special issue. We observe scholars involved in "clustering" and "automatic classification" who share common thematic emphases. But we have found no coherence, no common activity and no social semantics. We have not found a research front, or a common teleology within the KO domain. We also have found a lively group of authors who have succeeded in submitting papers to this special issue, and their work quite interestingly aligns with the case studies we report. There is an emphasis on KO for information retrieval; there is much work on clustering (which involves conceptual points within texts) and automatic classification (which involves semantic groupings at the meta-document level).
  14. Weiner, U.: Vor uns die Dokumentenflut oder Automatische Indexierung als notwendige und sinnvolle Ergänzung zur intellektuellen Sacherschließung (2012) 0.02
    0.022881933 = product of:
      0.09152773 = sum of:
        0.0058399485 = weight(_text_:information in 598) [ClassicSimilarity], result of:
          0.0058399485 = score(doc=598,freq=2.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.09697737 = fieldWeight in 598, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=598)
        0.06688959 = weight(_text_:extraction in 598) [ClassicSimilarity], result of:
          0.06688959 = score(doc=598,freq=2.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.32820496 = fieldWeight in 598, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.0390625 = fieldNorm(doc=598)
        0.018798191 = weight(_text_:system in 598) [ClassicSimilarity], result of:
          0.018798191 = score(doc=598,freq=2.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.17398985 = fieldWeight in 598, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=598)
      0.25 = coord(3/12)
    
    Abstract
    Against the background of library users' changing expectations of search facilities - away from the classical online catalogue and towards a "one-stop shop" with functions such as thematic browsing, relevance ranking and the like - on the one hand, and the need to process mass data (keyword: document flood) on the other, systems for automatic indexing are once again moving into the centre of attention. Since work on this topic in the Austrian library sector has so far been very selective and limited to a few concrete projects, a general theoretical overview of the different methodological approaches to automatic indexing is given first. In a next step, the IDX-based indexing software MILOS (with the sub-projects MILOS I, MILOS II and KASCADE) and the modular system intelligentCAPTURE (with the integrated indexing software AUTINDEX) are presented; until a few years ago these were the only automatic indexing systems in practical use in the German-speaking world. With the growing need to pursue new approaches to subject analysis, numerous software developments have been tested for their usability in the library sector over the past 5-6 years. As a representative of these systems for automatic subject analysis that are still under development, the project PETRUS, which was carried out at the DNB in 2009-2011 and includes the components PICA Match&Merge and the Extraction Platform of the company Averbis, is presented.
    Footnote
    Wien, Univ., Lehrgang Library and Information Studies, Master-Thesis, 2012
  15. Biebricher, N.; Fuhr, N.; Lustig, G.; Schwantner, M.; Knorz, G.: The automatic indexing system AIR/PHYS : from research to application (1988) 0.02
    0.01933819 = product of:
      0.07735276 = sum of:
        0.016517868 = weight(_text_:information in 1952) [ClassicSimilarity], result of:
          0.016517868 = score(doc=1952,freq=4.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.27429342 = fieldWeight in 1952, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=1952)
        0.037596382 = weight(_text_:system in 1952) [ClassicSimilarity], result of:
          0.037596382 = score(doc=1952,freq=2.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.3479797 = fieldWeight in 1952, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.078125 = fieldNorm(doc=1952)
        0.023238512 = product of:
          0.046477024 = sum of:
            0.046477024 = weight(_text_:22 in 1952) [ClassicSimilarity], result of:
              0.046477024 = score(doc=1952,freq=2.0), product of:
                0.120126344 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03430388 = queryNorm
                0.38690117 = fieldWeight in 1952, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1952)
          0.5 = coord(1/2)
      0.25 = coord(3/12)
    
    Date
    16. 8.1998 12:51:22
    Footnote
    Wiederabgedruckt in: Readings in information retrieval. Ed.: K. Sparck Jones u. P. Willett. San Francisco: Morgan Kaufmann 1997. S.513-517.
    Source
    Proceedings of the 11th annual conference on research and development in information retrieval. Ed.: Y. Chiaramella
  16. Wolfekuhler, M.R.; Punch, W.F.: Finding salient features for personal Web pages categories (1997) 0.02
    0.018346088 = product of:
      0.07338435 = sum of:
        0.048941467 = weight(_text_:web in 2673) [ClassicSimilarity], result of:
          0.048941467 = score(doc=2673,freq=6.0), product of:
            0.111951075 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03430388 = queryNorm
            0.43716836 = fieldWeight in 2673, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2673)
        0.008175928 = weight(_text_:information in 2673) [ClassicSimilarity], result of:
          0.008175928 = score(doc=2673,freq=2.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.13576832 = fieldWeight in 2673, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2673)
        0.016266957 = product of:
          0.032533914 = sum of:
            0.032533914 = weight(_text_:22 in 2673) [ClassicSimilarity], result of:
              0.032533914 = score(doc=2673,freq=2.0), product of:
                0.120126344 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03430388 = queryNorm
                0.2708308 = fieldWeight in 2673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2673)
          0.5 = coord(1/2)
      0.25 = coord(3/12)
    
    Abstract
    Examines techniques that discover features in sets of pre-categorized documents, such that similar documents can be found on the WWW. Examines techniques which will classify training examples with high accuracy, then explains why this is not necessarily useful. Describes a method for extracting word clusters from the raw document features. Results show that the clustering technique is successful in discovering word groups in personal Web pages which can be used to find similar information on the WWW.
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue of papers from the 6th International World Wide Web conference, held 7-11 Apr 1997, Santa Clara, California
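    The method described above extracts word clusters from the raw features of pre-categorized pages. A toy version of that idea (not the paper's actual algorithm) groups words that co-occur in at least a minimum number of documents:

    from collections import defaultdict
    from itertools import combinations

    def word_clusters(docs: list[set[str]], min_cooccur: int = 2):
        """Merge words joined by frequent document-level co-occurrence into clusters."""
        pair_counts = defaultdict(int)
        for doc in docs:
            for a, b in combinations(sorted(doc), 2):
                pair_counts[(a, b)] += 1
        parent = {}                      # union-find over words
        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for (a, b), n in pair_counts.items():
            if n >= min_cooccur:
                parent[find(a)] = find(b)
        clusters = defaultdict(set)
        for w in {w for doc in docs for w in doc}:
            clusters[find(w)].add(w)
        return [c for c in clusters.values() if len(c) > 1]

    pages = [{"hiking", "trail", "boots"}, {"hiking", "trail", "map"},
             {"python", "code"}, {"python", "code", "editor"}]
    print(word_clusters(pages))   # e.g. [{'hiking', 'trail'}, {'python', 'code'}]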
  17. Lepsky, K.; Vorhauer, J.: Lingo - ein open source System für die Automatische Indexierung deutschsprachiger Dokumente (2006) 0.02
    0.017617544 = product of:
      0.07047018 = sum of:
        0.009343918 = weight(_text_:information in 3581) [ClassicSimilarity], result of:
          0.009343918 = score(doc=3581,freq=2.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.1551638 = fieldWeight in 3581, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=3581)
        0.04253545 = weight(_text_:system in 3581) [ClassicSimilarity], result of:
          0.04253545 = score(doc=3581,freq=4.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.3936941 = fieldWeight in 3581, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=3581)
        0.01859081 = product of:
          0.03718162 = sum of:
            0.03718162 = weight(_text_:22 in 3581) [ClassicSimilarity], result of:
              0.03718162 = score(doc=3581,freq=2.0), product of:
                0.120126344 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03430388 = queryNorm
                0.30952093 = fieldWeight in 3581, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3581)
          0.5 = coord(1/2)
      0.25 = coord(3/12)
    
    Abstract
    Lingo is a freely available (open source) system for the automatic indexing of German. High configurability and flexibility of the system for different usage scenarios were the main goals of lingo's development. The article demonstrates the benefit of linguistically based automatic indexing for information retrieval. The linguistic functionality that lingo provides for improving retrieval is presented and illustrated with examples: base-form recognition, compound recognition and compound decomposition, word relationing, lexical and algorithmic multi-word group recognition, and OCR error correction. Lingo's open system architecture is described, and possible usage scenarios and limits of application are identified.
    Date
    24. 3.2006 12:22:02
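    One of the linguistic components listed above is compound decomposition (Kompositumzerlegung). A greedy, dictionary-based sketch of compound splitting with a made-up mini-lexicon (this is not lingo's actual algorithm) is:

    LEXICON = {"bibliothek", "katalog", "daten", "bank", "datenbank", "system"}

    def split_compound(word: str, lexicon=LEXICON):
        """Return one segmentation into lexicon words, preferring the longest head, or None."""
        word = word.lower()
        if not word:
            return []
        for cut in range(len(word), 0, -1):
            head, rest = word[:cut], word[cut:]
            # allow the German linking element "s" (Fugen-s) at the end of a part
            if head in lexicon or (head.endswith("s") and head[:-1] in lexicon):
                tail = split_compound(rest, lexicon)
                if tail is not None:
                    return [head] + tail
        return None

    print(split_compound("Bibliothekskatalog"))   # ['bibliotheks', 'katalog']
    print(split_compound("Datenbanksystem"))      # ['datenbank', 'system']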
  18. Yang, T.-H.; Hsieh, Y.-L.; Liu, S.-H.; Chang, Y.-C.; Hsu, W.-L.: A flexible template generation and matching method with applications for publication reference metadata extraction (2021) 0.02
    0.01714252 = product of:
      0.10285511 = sum of:
        0.008258934 = weight(_text_:information in 63) [ClassicSimilarity], result of:
          0.008258934 = score(doc=63,freq=4.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.13714671 = fieldWeight in 63, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=63)
        0.09459618 = weight(_text_:extraction in 63) [ClassicSimilarity], result of:
          0.09459618 = score(doc=63,freq=4.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.46415195 = fieldWeight in 63, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.0390625 = fieldNorm(doc=63)
      0.16666667 = coord(2/12)
    
    Abstract
    Conventional rule-based approaches use exact template matching to capture linguistic information and necessarily need to enumerate all variations. We propose a novel flexible template generation and matching scheme called the principle-based approach (PBA) based on sequence alignment, and employ it for reference metadata extraction (RME) to demonstrate its effectiveness. The main contributions of this research are threefold. First, we propose an automatic template generation that can capture prominent patterns using the dominating set algorithm. Second, we devise an alignment-based template-matching technique that uses a logistic regression model, which makes it more general and flexible than pure rule-based approaches. Last, we apply PBA to RME on extensive cross-domain corpora and demonstrate its robustness and generality. Experiments reveal that the same set of templates produced by the PBA framework not only deliver consistent performance on various unseen domains, but also surpass hand-crafted knowledge (templates). We use four independent journal style test sets and one conference style test set in the experiments. When compared to renowned machine learning methods, such as conditional random fields (CRF), as well as recent deep learning methods (i.e., bi-directional long short-term memory with a CRF layer, Bi-LSTM-CRF), PBA has the best performance for all datasets.
    Source
    Journal of the Association for Information Science and Technology. 72(2021) no.1, S.32-45
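    The PBA approach above matches references against templates by sequence alignment rather than exact pattern matching. A very loose illustration (token classes and template invented here; the paper's framework learns its templates and scores matches with logistic regression) reduces both sides to coarse token classes and aligns them with a standard sequence matcher:

    from difflib import SequenceMatcher
    import re

    def token_classes(text: str):
        """Reduce a reference string to coarse token classes (YEAR, CAP, WORD, punctuation)."""
        classes = []
        for tok in re.findall(r"\w+|[(),.:]", text):
            if re.fullmatch(r"(19|20)\d\d", tok):
                classes.append("YEAR")
            elif tok in "(),.:":
                classes.append(tok)
            elif tok[0].isupper():
                classes.append("CAP")
            else:
                classes.append("WORD")
        return classes

    TEMPLATE = token_classes("Smith, J. (1999). A sample title.")

    def match_score(reference: str) -> float:
        return SequenceMatcher(None, TEMPLATE, token_classes(reference)).ratio()

    print(round(match_score("Doe, A. (2005). Another short paper."), 2))   # 1.0 here
    print(round(match_score("Proceedings volume 3, pages 10-20"), 2))      # much lower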
  19. Koch, T.: Experiments with automatic classification of WAIS databases and indexing of WWW : some results from the Nordic WAIS/WWW project (1994) 0.02
    0.016534086 = product of:
      0.066136345 = sum of:
        0.02825637 = weight(_text_:web in 7209) [ClassicSimilarity], result of:
          0.02825637 = score(doc=7209,freq=2.0), product of:
            0.111951075 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03430388 = queryNorm
            0.25239927 = fieldWeight in 7209, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7209)
        0.0115625085 = weight(_text_:information in 7209) [ClassicSimilarity], result of:
          0.0115625085 = score(doc=7209,freq=4.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.1920054 = fieldWeight in 7209, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7209)
        0.026317468 = weight(_text_:system in 7209) [ClassicSimilarity], result of:
          0.026317468 = score(doc=7209,freq=2.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.2435858 = fieldWeight in 7209, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7209)
      0.25 = coord(3/12)
    
    Abstract
    The Nordic WAIS/WWW project sponsored by NORDINFO is a joint project between Lund University Library and the National Technological Library of Denmark. It aims to improve the existing networked information discovery and retrieval tools Wide Area Information System (WAIS) and World Wide Web (WWW), and to move towards unifying WWW and WAIS. Details current results focusing on the WAIS side of the project. Describes research into automatic indexing and classification of WAIS sources, development of an orientation tool for WAIS, and development of a WAIS index of WWW resources
  20. Bordoni, L.; Pazienza, M.T.: Documents automatic indexing in an environmental domain (1997) 0.02
    0.016261995 = product of:
      0.06504798 = sum of:
        0.0115625085 = weight(_text_:information in 530) [ClassicSimilarity], result of:
          0.0115625085 = score(doc=530,freq=4.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.1920054 = fieldWeight in 530, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=530)
        0.03721852 = weight(_text_:system in 530) [ClassicSimilarity], result of:
          0.03721852 = score(doc=530,freq=4.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.34448233 = fieldWeight in 530, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=530)
        0.016266957 = product of:
          0.032533914 = sum of:
            0.032533914 = weight(_text_:22 in 530) [ClassicSimilarity], result of:
              0.032533914 = score(doc=530,freq=2.0), product of:
                0.120126344 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03430388 = queryNorm
                0.2708308 = fieldWeight in 530, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=530)
          0.5 = coord(1/2)
      0.25 = coord(3/12)
    
    Abstract
    Describes an application of natural language processing (NLP) techniques, in HIRMA (Hypertextual Information Retrieval Managed by ARIOSTO), to the problem of document indexing: the system incorporates NLP techniques to determine the subject of document texts and to associate them with relevant semantic indexes. Briefly describes the overall system, details of its implementation on a corpus of scientific abstracts on environmental topics, and experimental evidence of the system's behaviour. Analyzes in detail an experiment designed to evaluate the system's retrieval ability in terms of recall and precision.
    Source
    International forum on information and documentation. 22(1997) no.1, S.17-28
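    The evaluation above reports recall and precision. In their simplest set-based form (the document identifiers below are invented for the illustration):

    def precision_recall(retrieved: set[str], relevant: set[str]):
        """precision = relevant retrieved / retrieved; recall = relevant retrieved / all relevant."""
        hits = retrieved & relevant
        precision = len(hits) / len(retrieved) if retrieved else 0.0
        recall = len(hits) / len(relevant) if relevant else 0.0
        return precision, recall

    retrieved = {"d1", "d2", "d3", "d4"}
    relevant = {"d2", "d4", "d7"}
    print(precision_recall(retrieved, relevant))   # (0.5, 0.6666666666666666)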

Types

  • a 250
  • el 19
  • x 17
  • m 14
  • s 8
  • d 1
  • p 1
  • More… Less…