Search (146 results, page 2 of 8)

  • theme_ss:"Automatisches Indexieren"
  • type_ss:"a"
  1. Kim, P.K.: An automatic indexing of compound words based on mutual information for Korean text retrieval (1995) 0.02
    0.021112198 = product of:
      0.052780494 = sum of:
        0.03727935 = weight(_text_:system in 620) [ClassicSimilarity], result of:
          0.03727935 = score(doc=620,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.27838376 = fieldWeight in 620, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=620)
        0.015501143 = product of:
          0.04650343 = sum of:
            0.04650343 = weight(_text_:29 in 620) [ClassicSimilarity], result of:
              0.04650343 = score(doc=620,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.31092256 = fieldWeight in 620, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=620)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
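    For readers decoding these explain trees: the numbers follow Lucene's ClassicSimilarity formulas and can be reproduced directly. A minimal sketch in Python, plugging in the constants from the breakdown above (queryNorm, docFreq, fieldNorm):

      import math

      def idf(doc_freq, max_docs):
          # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      query_norm = 0.04251826                  # from the explain output above
      idf_system = idf(5152, 44218)            # ~3.1495528
      query_weight = idf_system * query_norm   # ~0.13391352
      tf = math.sqrt(2.0)                      # tf = sqrt(freq) ~1.4142135
      field_weight = tf * idf_system * 0.0625  # tf * idf * fieldNorm ~0.27838376
      print(query_weight * field_weight)       # ~0.03727935, as in the tree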
    
    Abstract
    Presents an automatic indexing technique for compound words suitable for an agglutinative language, specifically Korean. Discusses construction conditions for compound words and rules for decomposing them to enhance the exhaustivity of indexing, demonstrating that this system, based on mutual information, enhances both the exhaustivity of indexing and the specificity of terms. Suggests that the construction conditions and decomposition rules presented may be used in multilingual information retrieval systems to translate the indexing terms of one language into those of the language required.
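    The mutual-information criterion at the heart of this approach can be illustrated with a toy computation; the corpus counts and the decision threshold below are invented for the example, not taken from the paper.

      import math

      def pmi(pair_count, x_count, y_count, n):
          # Pointwise mutual information: log2( P(x,y) / (P(x) * P(y)) )
          return math.log2((pair_count / n) / ((x_count / n) * (y_count / n)))

      # One plausible use: if the components x and y of a compound co-occur far
      # more often than chance, keep the compound as one index term; otherwise
      # decompose it into its components.
      n = 100_000                              # corpus size (invented)
      score = pmi(pair_count=120, x_count=400, y_count=500, n=n)
      print(score, "keep compound" if score > 3.0 else "decompose")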
    Source
    Library and information science. 1995, no.34, S.29-38
  2. Tsujii, J.-I.: Automatic acquisition of semantic collocation from corpora (1995) 0.02
    0.02105642 = product of:
      0.05264105 = sum of:
        0.03727935 = weight(_text_:system in 4709) [ClassicSimilarity], result of:
          0.03727935 = score(doc=4709,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.27838376 = fieldWeight in 4709, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=4709)
        0.015361699 = product of:
          0.046085097 = sum of:
            0.046085097 = weight(_text_:22 in 4709) [ClassicSimilarity], result of:
              0.046085097 = score(doc=4709,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.30952093 = fieldWeight in 4709, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4709)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    Proposes automatic linguistic knowledge acquisition from sublanguage corpora. The system combines existing linguistic knowledge and human intervention with corpus-based techniques. The algorithm involves a gradual approximation that converges linguistic knowledge towards desirable results. The first experiment revealed the characteristics of this algorithm, and the others proved its effectiveness on a real corpus.
    Date
    31. 7.1996 9:22:19
  3. Riloff, E.: An empirical study of automated dictionary construction for information extraction in three domains (1996) 0.02
    0.02105642 = product of:
      0.05264105 = sum of:
        0.03727935 = weight(_text_:system in 6752) [ClassicSimilarity], result of:
          0.03727935 = score(doc=6752,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.27838376 = fieldWeight in 6752, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=6752)
        0.015361699 = product of:
          0.046085097 = sum of:
            0.046085097 = weight(_text_:22 in 6752) [ClassicSimilarity], result of:
              0.046085097 = score(doc=6752,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.30952093 = fieldWeight in 6752, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6752)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    AutoSlog is a system that addresses the knowledge engineering bottleneck for information extraction. AutoSlog automatically creates domain-specific dictionaries for information extraction, given an appropriate training corpus. Describes experiments with AutoSlog in the terrorism, joint ventures and microelectronics domains. Compares the performance of AutoSlog across the 3 domains, discusses the lessons learned, and presents results from 2 experiments which demonstrate that novice users can generate effective dictionaries using AutoSlog.
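    The abstract does not spell out the mechanism, but the flavour of corpus-driven dictionary construction can be sketched: a trigger template fires on the syntactic context of a targeted noun phrase and proposes a reusable extraction pattern. The templates and clause representation below are simplified stand-ins, not Riloff's actual heuristics.

      # Much-simplified illustration of AutoSlog-style pattern proposal.
      TEMPLATES = {
          ("subject", "passive"): "<target> was {verb}",
          ("object", "active"): "{verb} <target>",
      }

      def propose_pattern(clause):
          # Look up a template for the targeted NP's role and the clause voice.
          tpl = TEMPLATES.get((clause["target_role"], clause["voice"]))
          return tpl.format(verb=clause["verb"]) if tpl else None

      clause = {"target_role": "subject", "voice": "passive", "verb": "bombed"}
      print(propose_pattern(clause))  # -> "<target> was bombed"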
    Date
    6. 3.1997 16:22:15
  4. Oberhauser, O.; Labner, J.: OPAC-Erweiterung durch automatische Indexierung : Empirische Untersuchung mit Daten aus dem Österreichischen Verbundkatalog (2002) 0.02
    0.01864397 = product of:
      0.09321985 = sum of:
        0.09321985 = weight(_text_:index in 883) [ClassicSimilarity], result of:
          0.09321985 = score(doc=883,freq=6.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.50173557 = fieldWeight in 883, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.046875 = fieldNorm(doc=883)
      0.2 = coord(1/5)
    
    Abstract
    Following the indexing projects MILOS I and MILOS II carried out in the 1990s, which examined the suitability of an automatic indexing procedure for library catalogues, an empirical study was conducted on a representative sample of title records from the Austrian union catalogue (Österreichischer Verbundkatalog). The aim was to test and assess whether this procedure could be deployed in the union's online catalogues. In line with the real-world situation of OPAC use, only the effect on the Basic Index ("all fields") enriched with automatically generated terms was examined. To this end, 100 queries were run first against the original Basic Index and then against the enriched Basic Index in an OPAC under Aleph 500. The tests yielded a gain in relevant hits with only slight losses in precision, a reduction in zero-hit results, and insights into the effect of existing verbal subject indexing.
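    The shape of such an evaluation is easy to sketch: run the same queries against both versions of the Basic Index and compare zero-hit counts and precision. The queries, result lists and relevance judgements below are invented placeholders, not the study's data.

      def evaluate(queries, results, relevant):
          zero = sum(1 for q in queries if not results[q])
          prec = [len(set(results[q]) & relevant[q]) / len(results[q])
                  for q in queries if results[q]]
          return zero, (sum(prec) / len(prec) if prec else 0.0)

      queries = ["q1", "q2"]
      relevant = {"q1": {"d1", "d2"}, "q2": {"d9"}}
      baseline = {"q1": ["d1"], "q2": []}                  # original index
      enriched = {"q1": ["d1", "d2", "d7"], "q2": ["d9"]}  # enriched index
      print(evaluate(queries, baseline, relevant))  # (1, 1.0)
      print(evaluate(queries, enriched, relevant))  # (0, ~0.83): more relevant
                                                    # hits, fewer zero results,
                                                    # slightly lower precision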
  5. Lassalle, E.: Text retrieval : from a monolingual system to a multilingual system (1993) 0.02
    0.018452337 = product of:
      0.09226168 = sum of:
        0.09226168 = weight(_text_:system in 7403) [ClassicSimilarity], result of:
          0.09226168 = score(doc=7403,freq=16.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.68896466 = fieldWeight in 7403, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7403)
      0.2 = coord(1/5)
    
    Abstract
    Describes the TELMI monolingual text retrieval system and its future extension, a multilingual system. TELMI is designed for medium-sized databases containing short texts. The characteristics of the system are fine-grained natural language processing (NLP); an open domain and a large-scale knowledge base; automated indexing based on conceptual representation of texts; and reusability of the NLP tools. Discusses the French MINITEL service, the MGS information service and the TELMI research system, covering the full-text system; the NLP architecture; the lexical, syntactic and semantic levels; and an example of the use of a generic system.
  6. Li, W.; Wong, K.-F.; Yuan, C.: Toward automatic Chinese temporal information extraction (2001) 0.02
    0.017055526 = product of:
      0.042638816 = sum of:
        0.032950602 = weight(_text_:system in 6029) [ClassicSimilarity], result of:
          0.032950602 = score(doc=6029,freq=4.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.24605882 = fieldWeight in 6029, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6029)
        0.009688215 = product of:
          0.029064644 = sum of:
            0.029064644 = weight(_text_:29 in 6029) [ClassicSimilarity], result of:
              0.029064644 = score(doc=6029,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.19432661 = fieldWeight in 6029, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6029)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    Over the past few years, temporal information processing and temporal database management have increasingly become hot topics. Nevertheless, only a few researchers have investigated these areas in the Chinese language. This lays down the objective of our research: to exploit Chinese language processing techniques for temporal information extraction and concept reasoning. In this article, we first study the mechanism for expressing time in Chinese. On the basis of the study, we then design a general frame structure for maintaining the extracted temporal concepts and propose a system for extracting time-dependent information from Hong Kong financial news. In the system, temporal knowledge is represented by different types of temporal concepts (TTC) and different temporal relations, including absolute and relative relations, which are used to correlate between action times and reference times. In analyzing a sentence, the algorithm first determines the situation related to the verb. This in turn will identify the type of temporal concept associated with the verb. After that, the relevant temporal information is extracted and the temporal relations are derived. These relations link relevant concept frames together in chronological order, which in turn provide the knowledge to fulfill users' queries, e.g., for question-answering (i.e., Q&A) applications
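    A sketch of the kind of frame structure described, with absolute and relative temporal concepts resolved against a reference time; the field names and the resolution rule are invented for illustration.

      from dataclasses import dataclass
      from datetime import date, timedelta
      from typing import Optional

      @dataclass
      class TemporalConcept:
          relation: str                  # "absolute" or "relative"
          value: Optional[date] = None   # used when absolute
          offset_days: int = 0           # used when relative

          def resolve(self, reference: date) -> date:
              if self.relation == "absolute":
                  return self.value
              return reference + timedelta(days=self.offset_days)

      dateline = date(2001, 9, 29)                 # the news item's date
      two_days_ago = TemporalConcept("relative", offset_days=-2)
      print(two_days_ago.resolve(dateline))        # 2001-09-27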
    Date
    29. 9.2001 14:02:50
  7. Zhang, Y.; Zhang, C.; Li, J.: Joint modeling of characters, words, and conversation contexts for microblog keyphrase extraction (2020) 0.02
    0.016139356 = product of:
      0.080696784 = sum of:
        0.080696784 = weight(_text_:context in 5816) [ClassicSimilarity], result of:
          0.080696784 = score(doc=5816,freq=8.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.45792344 = fieldWeight in 5816, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5816)
      0.2 = coord(1/5)
    
    Abstract
    Millions of messages are produced on microblog platforms every day, leading to the pressing need for automatic identification of key points from the massive texts. To absorb salient content from the vast bulk of microblog posts, this article focuses on the task of microblog keyphrase extraction. In previous work, most efforts treat messages as independent documents and might suffer from the data sparsity problem exhibited in short and informal microblog posts. On the contrary, we propose to enrich contexts via exploiting conversations initiated by target posts and formed by their replies, which are generally centered around topics relevant to the target posts and therefore helpful for keyphrase identification. Concretely, we present a neural keyphrase extraction framework, which has 2 modules: a conversation context encoder and a keyphrase tagger. The conversation context encoder captures indicative representations from the conversation contexts and feeds them into the keyphrase tagger, and the keyphrase tagger extracts salient words from target posts. The 2 modules were trained jointly to optimize the conversation context encoding and keyphrase extraction processes. In the conversation context encoder, we leverage hierarchical structures to capture the word-level indicative representation and message-level indicative representation hierarchically. In both of the modules, we apply character-level representations, which enables the model to explore morphological features and deal with the out-of-vocabulary problem caused by the informal language style of microblog messages. Extensive comparison results on real-life data sets indicate that our model outperforms state-of-the-art models from previous studies.
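    A skeletal PyTorch rendering of the two-module design (context encoder feeding a tagger); the dimensions, the flat rather than hierarchical context encoding, and the omission of character-level representations are all simplifications of the model described.

      import torch
      import torch.nn as nn

      class KeyphraseExtractor(nn.Module):
          def __init__(self, vocab, emb=64, hid=128, tags=3):  # e.g. BIO tags
              super().__init__()
              self.embed = nn.Embedding(vocab, emb)
              self.context_enc = nn.GRU(emb, hid, batch_first=True)   # replies
              self.tagger = nn.GRU(emb + hid, hid, batch_first=True)  # post
              self.out = nn.Linear(hid, tags)

          def forward(self, post, context):
              _, ctx = self.context_enc(self.embed(context))  # (1, B, hid)
              ctx = ctx.transpose(0, 1).expand(-1, post.size(1), -1)
              h, _ = self.tagger(torch.cat([self.embed(post), ctx], dim=-1))
              return self.out(h)              # per-token keyphrase tag scores

      model = KeyphraseExtractor(vocab=5000)
      scores = model(torch.randint(0, 5000, (2, 12)),  # 2 posts, 12 tokens
                     torch.randint(0, 5000, (2, 40)))  # flattened reply context
      print(scores.shape)                              # torch.Size([2, 12, 3])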
  8. Busch, D.: Domänenspezifische hybride automatische Indexierung von bibliographischen Metadaten (2019) 0.02
    0.015792316 = product of:
      0.039480787 = sum of:
        0.027959513 = weight(_text_:system in 5628) [ClassicSimilarity], result of:
          0.027959513 = score(doc=5628,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.20878783 = fieldWeight in 5628, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=5628)
        0.011521274 = product of:
          0.03456382 = sum of:
            0.03456382 = weight(_text_:22 in 5628) [ClassicSimilarity], result of:
              0.03456382 = score(doc=5628,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.23214069 = fieldWeight in 5628, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5628)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    At the Fraunhofer-Informationszentrum Raum und Bau (IRB), specialist literature in the field of planning and building is indexed bibliographically. The resulting documents (metadata records) are used, among other things, in producing the IRB's bibliographic databases. Fig. 1 shows a document describing a journal article. The documents are indexed with descriptors from a nomenclature (the Schlagwortliste IRB). A descriptor is "a designation that can be used on its own, is unambiguously suited to characterising content, and is admitted in the documentation system concerned". At present, indexing is carried out intellectually by human experts. Intellectual indexing is time-consuming and expensive. One solution to this problem is automatic indexing, in which descriptors are assigned by a computer program. Such computer programs are also referred to below as classifiers. This article is about a system for the automatic indexing of German-language documents in the field of construction with descriptors from the Schlagwortliste IRB.
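    A minimal sketch of such a classifier: multi-label assignment of controlled-vocabulary descriptors via tf-idf features and one-vs-rest logistic regression in scikit-learn. The records and descriptors are invented placeholders; the IRB system's actual hybrid method is not reproduced here.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.multiclass import OneVsRestClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import MultiLabelBinarizer

      docs = ["Schallschutz im Holzbau", "Brandschutz von Fassaden",
              "Holzbau und Brandschutz im Vergleich"]
      labels = [["Schallschutz", "Holzbau"], ["Brandschutz", "Fassade"],
                ["Holzbau", "Brandschutz"]]

      mlb = MultiLabelBinarizer()
      y = mlb.fit_transform(labels)              # one binary column per descriptor
      clf = make_pipeline(TfidfVectorizer(),
                          OneVsRestClassifier(LogisticRegression()))
      clf.fit(docs, y)
      print(mlb.inverse_transform(clf.predict(["Fassaden im Holzbau"])))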
    Source
    B.I.T.online. 22(2019) H.6, S.465-469
  9. Witschel, H.F.: Terminology extraction and automatic indexing : comparison and qualitative evaluation of methods (2005) 0.02
    0.015536643 = product of:
      0.07768321 = sum of:
        0.07768321 = weight(_text_:index in 1842) [ClassicSimilarity], result of:
          0.07768321 = score(doc=1842,freq=6.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.418113 = fieldWeight in 1842, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1842)
      0.2 = coord(1/5)
    
    Abstract
    Many terminology engineering processes involve the task of automatic terminology extraction: before the terminology of a given domain can be modelled, organised or standardised, important concepts (or terms) of this domain have to be identified and fed into terminological databases. These serve in further steps as a starting point for compiling dictionaries, thesauri or maybe even terminological ontologies for the domain. For the extraction of the initial concepts, extraction methods are needed that operate on specialised language texts. On the other hand, many machine learning or information retrieval applications require automatic indexing techniques. In Machine Learning applications concerned with the automatic clustering or classification of texts, often feature vectors are needed that describe the contents of a given text briefly but meaningfully. These feature vectors typically consist of a fairly small set of index terms together with weights indicating their importance. Short but meaningful descriptions of document contents as provided by good index terms are also useful to humans: some knowledge management applications (e.g. topic maps) use them as a set of basic concepts (topics). The author believes that the tasks of terminology extraction and automatic indexing have much in common and can thus benefit from the same set of basic algorithms. It is the goal of this paper to outline some methods that may be used in both contexts, but also to find the discriminating factors between the two tasks that call for the variation of parameters or application of different techniques. The discussion of these methods will be based on statistical, syntactical and especially morphological properties of (index) terms. The paper is concluded by the presentation of some qualitative and quantitative results comparing statistical and morphological methods.
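    One common statistical termhood measure of the family discussed here is "weirdness": the ratio of a word's relative frequency in a domain corpus to its relative frequency in a general reference corpus. The counts below are invented; the paper compares such statistical measures with morphological ones.

      def weirdness(word, domain_counts, ref_counts, domain_n, ref_n):
          d = domain_counts.get(word, 0) / domain_n
          r = max(ref_counts.get(word, 0), 1) / ref_n  # floor avoids div by zero
          return d / r

      domain = {"ontology": 50, "the": 4000}       # domain corpus counts
      reference = {"ontology": 2, "the": 60000}    # reference corpus counts
      for w in domain:
          print(w, round(weirdness(w, domain, reference, 100_000, 1_000_000), 1))
      # "ontology" scores ~250, "the" ~0.7: the former is the better index term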
  10. Ladewig, C.; Henkes, M.: Verfahren zur automatischen inhaltlichen Erschließung von elektronischen Texten : ASPECTIX (2001) 0.02
    0.015222736 = product of:
      0.07611368 = sum of:
        0.07611368 = weight(_text_:index in 5794) [ClassicSimilarity], result of:
          0.07611368 = score(doc=5794,freq=4.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.40966535 = fieldWeight in 5794, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.046875 = fieldNorm(doc=5794)
      0.2 = coord(1/5)
    
    Abstract
    AspectiX, the procedure for automatic syntactic subject indexing of electronic texts, is based on an index whose elements are linked to a universal aspect classification, allowing syntactic retrieval to be performed. With these classification elements, which relate to the content of the respective search target, the information in electronic texts is queried using known search algorithms, and the results are evaluated according to the aspect links. These aspects make it possible to classify unknown text documents automatically by content, independently of subject field and language, and, when searching a text corpus, not to be limited to character strings, as web search engines are. In the process, the index can be extended both intellectually and automatically, and delivers retrieval results of nearly 100 percent precision with, at the same time, nearly 100 percent recall. The AspectiX procedure is thus superior to all other search tools by up to 40 percent in precision or recall, as demonstrated by numerous searches in three databases of different sizes and dissimilar subject matter.
  11. Pirkola, A.: Morphological typology of languages for IR (2001) 0.02
    0.015222736 = product of:
      0.07611368 = sum of:
        0.07611368 = weight(_text_:index in 4476) [ClassicSimilarity], result of:
          0.07611368 = score(doc=4476,freq=4.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.40966535 = fieldWeight in 4476, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.046875 = fieldNorm(doc=4476)
      0.2 = coord(1/5)
    
    Abstract
    This paper presents a morphological classification of languages from the IR perspective. Linguistic typology research has shown that the morphological complexity of every language in the world can be described by two variables, index of synthesis and index of fusion. These variables provide a theoretical basis for IR research handling morphological issues. A common theoretical framework is needed in particular because of the increasing significance of cross-language retrieval research and CLIR systems processing different languages. The paper elaborates the linguistic morphological typology for the purposes of IR research. It studies how the indexes of synthesis and fusion could be used as practical tools in mono- and cross-lingual IR research. The need for semantic and syntactic typologies is discussed. The paper also reviews studies made in different languages on the effects of morphology and stemming in IR.
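    The index of synthesis mentioned above is simply the average number of morphemes per word; a sketch over morpheme-segmented tokens (the segmentation is invented for the example).

      def index_of_synthesis(segmented_tokens):
          # Ratio of total morphemes to total words.
          morphemes = sum(len(t) for t in segmented_tokens)
          return morphemes / len(segmented_tokens)

      # "dogs chased the cats" -> dog-s chase-d the cat-s
      tokens = [["dog", "s"], ["chase", "d"], ["the"], ["cat", "s"]]
      print(index_of_synthesis(tokens))  # 1.75: mildly synthetic, as English is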
  12. Bloomfield, M.: Indexing : neglected and poorly understood (2001) 0.02
    0.015222736 = product of:
      0.07611368 = sum of:
        0.07611368 = weight(_text_:index in 5439) [ClassicSimilarity], result of:
          0.07611368 = score(doc=5439,freq=4.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.40966535 = fieldWeight in 5439, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.046875 = fieldNorm(doc=5439)
      0.2 = coord(1/5)
    
    Abstract
    The growth of the Internet has highlighted the use of machine indexing. The difficulties in using the Internet as a searching device can be frustrating. The use of the term "Python" is given as an example. Machine indexing is noted as "rotten" and human indexing as "capricious." The problem seems to be a lack of a theoretical foundation for the art of indexing. What librarians have learned over the last hundred years has yet to yield a consistent approach to what really works best in preparing index terms and in the ability of our customers to search the various indexes. An attempt is made to consider the elements of indexing, their pros and cons. The argument is made that machine indexing is far too prolific in its production of index terms. Neither librarians nor computer programmers have made much progress to improve Internet indexing. Human indexing has had the same problems for over fifty years.
  13. Asula, M.; Makke, J.; Freienthal, L.; Kuulmets, H.-A.; Sirel, R.: Kratt: developing an automatic subject indexing tool for the National Library of Estonia : how to transfer metadata information among work cluster members (2021) 0.02
    0.015222736 = product of:
      0.07611368 = sum of:
        0.07611368 = weight(_text_:index in 723) [ClassicSimilarity], result of:
          0.07611368 = score(doc=723,freq=4.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.40966535 = fieldWeight in 723, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.046875 = fieldNorm(doc=723)
      0.2 = coord(1/5)
    
    Abstract
    Manual subject indexing in libraries is a time-consuming and costly process, and the quality of the assigned subjects is affected by the cataloger's knowledge of the specific topics contained in the book. To address these issues, we exploited the opportunities arising from artificial intelligence to develop Kratt: a prototype of an automatic subject indexing tool. Kratt is able to subject index a book, regardless of its length and genre, with a set of keywords present in the Estonian Subject Thesaurus. It takes Kratt approximately one minute to subject index a book, outperforming human indexers by a factor of 10-15. Although the resulting keywords were not considered satisfactory by the catalogers, the ratings of a small sample of regular library users showed more promise. We also argue that the results can be enhanced by including a bigger corpus for training the model and applying more careful preprocessing techniques.
  14. Dattola, R.T.: FIRST: Flexible information retrieval system for text (1979) 0.01
    0.01491174 = product of:
      0.0745587 = sum of:
        0.0745587 = weight(_text_:system in 5172) [ClassicSimilarity], result of:
          0.0745587 = score(doc=5172,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.5567675 = fieldWeight in 5172, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.125 = fieldNorm(doc=5172)
      0.2 = coord(1/5)
    
  15. Malone, L.C.; Driscoll, J.R.; Pepe, J.W.: Modeling the performance of an automated keywording system (1991) 0.01
    0.01491174 = product of:
      0.0745587 = sum of:
        0.0745587 = weight(_text_:system in 6682) [ClassicSimilarity], result of:
          0.0745587 = score(doc=6682,freq=8.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.5567675 = fieldWeight in 6682, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=6682)
      0.2 = coord(1/5)
    
    Abstract
    Presents a model for predicting the performance of a computerised keyword assigning and indexing system. The system behaves as an expert system designed to mimic the behaviour of human keyword indexers on documents representing lessons learned from military exercises and operations; statistical procedures were investigated in order to protect against incorrect keywording by the system.
  16. Malone, L.C.; Wildman-Pepe, J.; Driscoll, J.R.: Evaluation of an automated keywording system (1990) 0.01
    0.014794785 = product of:
      0.073973924 = sum of:
        0.073973924 = weight(_text_:system in 4999) [ClassicSimilarity], result of:
          0.073973924 = score(doc=4999,freq=14.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.5524007 = fieldWeight in 4999, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=4999)
      0.2 = coord(1/5)
    
    Abstract
    An automated keywording system has been designed to artificially behave as a human "expert" indexer. The system was designed to keyword 100- to 800-word documents representing lessons learned from military exercises and operations. A set of 74 documents can be keyworded on an IBM PS/2 model 80 in about five minutes. This paper presents a variety of ways of statistically documenting improvements in the development of an automated keywording system over time. It is not only beneficial to have some measure of system performance at a given time; it is also useful, as attempts are made to improve a system, to assess whether statistically significant improvements have actually been made. Furthermore, it is useful to identify the source of any existing problems so that they can be rectified. The specifics of the automated system that was evaluated are described, and the performance measures used are discussed.
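    One way to document such improvements statistically, sketched with SciPy: compare per-document scores from two versions of the system with a paired t-test. The scores are invented, and the paper's actual procedures may differ.

      from scipy import stats

      old_f1 = [0.61, 0.55, 0.70, 0.58, 0.66, 0.52, 0.63, 0.59]  # version 1
      new_f1 = [0.68, 0.60, 0.71, 0.66, 0.70, 0.58, 0.69, 0.62]  # version 2
      t, p = stats.ttest_rel(new_f1, old_f1)
      print(f"t={t:.2f}, p={p:.4f}")  # a small p suggests the improvement
                                      # is unlikely to be due to chance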
  17. Faraj, N.: Analyse d'une methode d'indexation automatique basée sur une analyse syntaxique de texte (1996) 0.01
    0.014352133 = product of:
      0.07176066 = sum of:
        0.07176066 = weight(_text_:index in 685) [ClassicSimilarity], result of:
          0.07176066 = score(doc=685,freq=2.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.3862362 = fieldWeight in 685, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0625 = fieldNorm(doc=685)
      0.2 = coord(1/5)
    
    Abstract
    Evaluates an automatic indexing method based on syntactic text analysis combined with statistical analysis. Tests many combinations of term categories and weighting methods. The experiment, conducted on a software engineering corpus, shows systematic improvement from using syntactic term phrases compared to using only individual words as index terms.
  18. Garfield, E.: The relationship between mechanical indexing, structural linguistics and information retrieval (1992) 0.01
    0.014352133 = product of:
      0.07176066 = sum of:
        0.07176066 = weight(_text_:index in 3632) [ClassicSimilarity], result of:
          0.07176066 = score(doc=3632,freq=2.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.3862362 = fieldWeight in 3632, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0625 = fieldNorm(doc=3632)
      0.2 = coord(1/5)
    
    Abstract
    It is possible to locate over 60% of indexing terms used in the Current List of Medical Literature by analysing the titles of the articles. Citation indexes contain 'noise' and lack many pertinent citations. Mechanical indexing or analysis of text must begin with some linguistic technique. Discusses Harris' methods of structural linguistics, discourse analysis and transformational analysis. Provides 3 examples with references, abstracts and index entries.
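    Garfield's 60% observation suggests a simple check: what fraction of a record's assigned index terms already occur in its title? The record below is invented for illustration.

      def title_coverage(title, index_terms):
          words = set(title.lower().split())
          hits = sum(1 for t in index_terms if t.lower() in words)
          return hits / len(index_terms)

      title = "Penicillin therapy of bacterial endocarditis"
      terms = ["penicillin", "endocarditis", "heart valves"]
      print(title_coverage(title, terms))  # ~0.67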
  19. Harman, D.: Automatic indexing (1994) 0.01
    0.014352133 = product of:
      0.07176066 = sum of:
        0.07176066 = weight(_text_:index in 7729) [ClassicSimilarity], result of:
          0.07176066 = score(doc=7729,freq=2.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.3862362 = fieldWeight in 7729, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0625 = fieldNorm(doc=7729)
      0.2 = coord(1/5)
    
    Content
    Contains the sections: What constitutes a record; What constitutes a word and what 'words' to index; Use of stop lists; Use of suffixing or stemming; Advanced automatic indexing techniques (term weighting, query expansion, the use of multiple-word phrases for indexing).
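    The steps Harman enumerates can be strung together as a toy pipeline: tokenise, drop stop words, crudely stem, and weight by term frequency. The stop list and suffix rules below are tiny stand-ins for real ones.

      from collections import Counter

      STOP = {"the", "of", "a", "and", "to", "in", "is", "for"}
      SUFFIXES = ("ing", "es", "s")

      def stem(w):
          for s in SUFFIXES:            # naive suffix stripping, not Porter
              if w.endswith(s) and len(w) > len(s) + 2:
                  return w[: -len(s)]
          return w

      def index_terms(text):
          tokens = [w.strip(".,;").lower() for w in text.split()]
          return Counter(stem(w) for w in tokens if w and w not in STOP)

      print(index_terms("Stemming of words and the weighting of index terms"))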
  20. Martins, A.L.; Souza, R.R.; Ribeiro de Mello, H.: The use of noun phrases in information retrieval : proposing a mechanism for automatic classification (2014) 0.01
    0.013616532 = product of:
      0.03404133 = sum of:
        0.02636048 = weight(_text_:system in 1441) [ClassicSimilarity], result of:
          0.02636048 = score(doc=1441,freq=4.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.19684705 = fieldWeight in 1441, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03125 = fieldNorm(doc=1441)
        0.0076808496 = product of:
          0.023042548 = sum of:
            0.023042548 = weight(_text_:22 in 1441) [ClassicSimilarity], result of:
              0.023042548 = score(doc=1441,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.15476047 = fieldWeight in 1441, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1441)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    This paper presents research on syntactic structures known as noun phrases (NPs), applied to increase the effectiveness and efficiency of mechanisms for document classification. Our hypothesis is that NPs can be used instead of single words as semantic aggregators, reducing the number of words used by the classification system without losing semantic coverage and thereby increasing its efficiency. The experiment divided the document classification process into three phases: a) NP preprocessing; b) system training; and c) classification experiments. In the first step, a corpus of digitised texts was submitted to a natural language processing platform on which part-of-speech tagging was done, and then Perl scripts belonging to the PALAVRAS package were used to extract the noun phrases. The preprocessing also involved a) removing low-meaning NP pre-modifiers, such as quantifiers; b) identifying synonyms and substituting the corresponding common hypernyms; and c) stemming the relevant words contained in the NPs, for similarity checking against other NPs. The first tests with the resulting documents demonstrated the effectiveness of this approach: we compared the structural similarity of the documents before and after the preprocessing steps of phase one, and the texts remained consistent with the originals and kept their readability. The second phase involves submitting the modified documents to an SVM algorithm to identify clusters and classify the documents, with the classification rules established using a machine learning approach. Finally, tests will be conducted to check the effectiveness of the whole process.
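    A sketch of the three-phase idea: extract noun phrases from POS-tagged text, use them rather than single words as features, and train an SVM. The chunking rule, the tags and the training data are invented placeholders for the PALAVRAS-based preprocessing described above.

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import LinearSVC

      def noun_phrases(tagged):
          # Greedy rule: a maximal run of adjectives/nouns is one NP feature.
          nps, run = [], []
          for word, tag in tagged + [("", "END")]:
              if tag in ("ADJ", "NOUN"):
                  run.append(word)
              else:
                  if run:
                      nps.append("_".join(run))
                  run = []
          return " ".join(nps)

      docs = [[("semantic", "ADJ"), ("aggregation", "NOUN"), ("works", "VERB")],
              [("single", "ADJ"), ("words", "NOUN"), ("lose", "VERB"),
               ("coverage", "NOUN")]]
      X = [noun_phrases(d) for d in docs]   # e.g. "semantic_aggregation"
      clf = make_pipeline(CountVectorizer(), LinearSVC())
      clf.fit(X, ["class_a", "class_b"])
      print(clf.predict([noun_phrases([("semantic", "ADJ"),
                                       ("aggregation", "NOUN")])]))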
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
