Search (70 results, page 1 of 4)

  • theme_ss:"Theorie verbaler Dokumentationssprachen"
  1. Hudon, M.: A preliminary investigation of the usefulness of semantic relations and of standardized definitions for the purpose of specifying meaning in a thesaurus (1998) 0.03
    0.025979813 = product of:
      0.077939436 = sum of:
        0.012707461 = weight(_text_:information in 55) [ClassicSimilarity], result of:
          0.012707461 = score(doc=55,freq=4.0), product of:
            0.0772133 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.043984205 = queryNorm
            0.16457605 = fieldWeight in 55, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=55)
        0.06523198 = weight(_text_:networks in 55) [ClassicSimilarity], result of:
          0.06523198 = score(doc=55,freq=2.0), product of:
            0.20804176 = queryWeight, product of:
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.043984205 = queryNorm
            0.31355235 = fieldWeight in 55, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.046875 = fieldNorm(doc=55)
      0.33333334 = coord(2/6)
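The indented tree above is Lucene's "explain" output for ClassicSimilarity (TF-IDF) scoring. As a minimal sketch, assuming Lucene's documented ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), fieldWeight = tf * idf * fieldNorm, queryWeight = idf * queryNorm), the numbers for result 1 can be reproduced:

```python
import math

def idf(doc_freq: int, max_docs: int) -> float:
    """ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))."""
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq: float, doc_freq: int, max_docs: int,
               field_norm: float, query_norm: float) -> float:
    """Per-term score = queryWeight * fieldWeight, as in the explain tree."""
    query_weight = idf(doc_freq, max_docs) * query_norm              # idf * queryNorm
    field_weight = math.sqrt(freq) * idf(doc_freq, max_docs) * field_norm  # tf * idf * fieldNorm
    return query_weight * field_weight

# Values taken from the explanation of result 1 (doc 55) above.
QUERY_NORM = 0.043984205
s_info = term_score(4.0, 20772, 44218, 0.046875, QUERY_NORM)  # "information", ~0.0127075
s_netw = term_score(2.0, 1060, 44218, 0.046875, QUERY_NORM)   # "networks", ~0.0652320
# coord(2/6): only 2 of the 6 query terms matched this document.
total = (s_info + s_netw) * (2 / 6)                           # ~0.0259798
```

The same arithmetic applies to every other result in this list; only freq, docFreq, and fieldNorm change per entry.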
    
    Abstract
     The terminological consistency of indexers working with a thesaurus as an indexing aid remains low. This suggests that indexers cannot easily or very clearly perceive the meaning of each descriptor available as an index term. This paper presents the background and some of the findings of a small-scale experiment designed to study the effect on interindexer terminological consistency of modifying the nature of the semantic information given with descriptors in a thesaurus. The study also provided some insights into the respective usefulness of standardized definitions and of traditional networks of hierarchical and associative relationships as means of providing essential meaning information in a thesaurus used as an indexing aid.
  2. Dextre Clarke, S.G.; Gilchrist, A.; Will, L.: Revision and extension of thesaurus standards (2004) 0.02
    Abstract
    The current standards for monolingual and multilingual thesauri are long overdue for an update. This applies to the international standards ISO 2788 and ISO 5964, as well as the corresponding national standards in several countries and the American standard ANSI/NISO Z39.19. Work is now under way in the UK and in the USA to revise and extend the standards, with particular emphasis on interoperability needs in our world of vast electronic networks. Work in the UK is starting with the British Standards, in the hope of leading on to one international standard to serve all. Some of the issues still under discussion include the treatment of facet analysis, coverage of additional types of controlled vocabulary such as classification schemes, taxonomies and ontologies, and mapping from one vocabulary to another. 1. Are thesaurus standards still needed? Since the 1960s, even before the renowned Cranfield experiments (Cleverdon et al., 1966; Cleverdon, 1967) arguments have raged over the usefulness or otherwise of controlled vocabularies. The case has never been proved definitively one way or the other. At the same time, a recognition has become widespread that no one search method can answer all retrieval requirements. In today's environment of very large networks of resources, the skilled information professional uses a range of techniques. Among these, controlled vocabularies are valued alongside others. The first international standard for monolingual thesauri was issued in 1974. In those days, the main application was for postcoordinate indexing and retrieval from document collections or bibliographic databases. For many information professionals the only practicable alternative to a thesaurus was a classification scheme. And so the thesaurus developed a strong following. After computer systems with full text search capability became widely available, however, the arguments against controlled vocabularies gained more followers. 
The cost of building and maintaining a thesaurus or a classification scheme was a strong disincentive. Today's databases are typically immense compared with those of three decades ago. Full-text searching is taken for granted, not just in discrete databases but across all the resources in an intranet or even the Internet. But intranets have brought particular frustration as users discover that, despite all the computer power, they cannot find items which they know to be present on the network. So the trend against controlled vocabularies is now being reversed, as many information professionals are turning to them for help. Standards to guide them are still in demand.
    Source
    Knowledge organization and the global information society: Proceedings of the 8th International ISKO Conference 13-16 July 2004, London, UK. Ed.: I.C. McIlwaine
  3. Maniez, J.: Fusion de banques de données documentaires et compatibilité des langages d'indexation (1997) 0.01
    Abstract
     Discusses the apparently unattainable goal of compatibility of information languages. While controlled languages can improve retrieval performance within a single system, they make cooperation across different systems more difficult. The Internet and downloading accentuate this adverse outcome, and the acceleration of data exchange aggravates the problem of compatibility. Defines this familiar concept and demonstrates that coherence is just as necessary as it was for indexing languages, the proliferation of which has created confusion in grouped data banks. Describes two types of potential solution, similar to those applied to the automatic translation of natural languages: harmonizing the information languages themselves, which is both difficult and expensive; or the more flexible solution of automatically harmonizing indexing formulae on the basis of pre-established concordance tables. However, structural incompatibilities between post-coordinated languages and classifications may lead any harmonization tools up a blind alley, while the paths towards a universal concordance model are rare and narrow.
    Date
    1. 8.1996 22:01:00
    Footnote
     Translation of the title: Integration of information data banks and compatibility of indexing languages
  4. Dextre Clarke, S.G.: Thesaural relationships (2001) 0.01
    Date
    22. 9.2007 15:45:57
    Series
    Information science and knowledge management; vol.2
  5. Boteram, F.: Semantische Relationen in Dokumentationssprachen : vom Thesaurus zum semantischen Netz (2010) 0.01
    Abstract
     Modern information retrieval procedures demand expressive documentation languages with detailed relation structures. The selective transfer of individual modelling strategies from the field of semantic technologies to the design and relational structuring of existing documentation languages is discussed. In the form of a taxonomy, a hierarchically structured inventory of relations is defined which contains both sufficiently general and numerous specific relation types, permitting a detailed and thus expressive relational structuring of the vocabulary. This brings a gain in clarity and functionality. In contrast to other approaches and proposals for the creation of relation inventories, the proposal presented here develops the inventory of relations out of the set of concepts of an existing subject domain.
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Hrsg.: J. Sieglerschmidt u. H.P.Ohly
  6. Mooers, C.N.: The indexing language of an information retrieval system (1985) 0.01
    Abstract
     Calvin Mooers' work toward the resolution of the problem of ambiguity in indexing went unrecognized for years. At the time he introduced the "descriptor" - a term with a very distinct meaning - indexers were, for the most part, taking index terms directly from the document, without either rationalizing them with context or normalizing them with some kind of classification. It is ironic that Mooers' term came to be attached to the popular but unsophisticated indexing methods which he was trying to root out. Simply expressed, what Mooers did was to take the dictionary definitions of terms and redefine them so clearly that they could not be used in any context except that provided by the new definition. He did, at great pains, construct such meanings for over four hundred words; disambiguation and specificity were sought after and found for these words. He proposed that all indexers adopt this method so that when the index supplied a term, it also supplied the exact meaning for that term as used in the indexed document. The same term used differently in another document would be defined differently and possibly renamed to avoid ambiguity. The disambiguation was achieved by using unabridged dictionaries and other sources of defining terminology. In practice, this tends to produce circularity in definition, that is, word A refers to word B which refers to word C which refers to word A. It was necessary, therefore, to break this chain by creating a new, definitive meaning for each word. Eventually, means such as those used by Austin (q.v.) for PRECIS achieved the same purpose, but by much more complex means than just creating a unique definition of each term. Mooers, however, was probably the first to realize how confusing undefined terminology could be. Early automatic indexers dealt with distinct disciplines and, as long as they did not stray beyond disciplinary boundaries, a quick and dirty keyword approach was satisfactory.
The trouble came when attempts were made to make a combined index for two or more distinct disciplines. A number of processes have since been developed, mostly involving tagging of some kind or use of strings. Mooers' solution has rarely been considered seriously and probably would be extremely difficult to apply now because of so much interdisciplinarity. But for a specific, well-defined field, it is still well worth considering. Mooers received training in mathematics and physics from the University of Minnesota and the Massachusetts Institute of Technology. He was the founder of Zator Company, which developed and marketed a coded card information retrieval system, and of Rockford Research, Inc., which engages in research in information science. He is the inventor of the TRAC computer language.
    Footnote
    Original in: Information retrieval today: papers presented at an Institute conducted by the Library School and the Center for Continuation Study, University of Minnesota, Sept. 19-22, 1962. Ed. by Wesley Simonton. Minneapolis, Minn.: The Center, 1963. S.21-36.
  7. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.01
    Date
    8.10.2000 11:52:22
  8. Mikacic, M.: Statistical system for subject designation (SSSD) for libraries in Croatia (1996) 0.01
    Date
    31. 7.2006 14:22:21
    Source
    Cataloging and classification quarterly. 22(1996) no.1, S.77-93
  9. Schmitz-Esser, W.: Language of general communication and concept compatibility (1996) 0.00
    Pages
    S.11-22
  10. Farradane, J.: Concept organization for information retrieval (1967) 0.00
    Source
    Information storage and retrieval. 3(1967) S.297-314
  11. Fugmann, R.: The analytico-synthetic foundation for large indexing & information retrieval systems : dedicated to Prof. Dr. Werner Schultheis, the vigorous initiator of modern chem. documentation in Germany on the occasion of his 85th birthday (1983) 0.00
    LCSH
    Information retrieval
    RSWK
    Information und Dokumentation / Systemgrundlage (BVB)
    Subject
    Information und Dokumentation / Systemgrundlage (BVB)
    Information retrieval
  12. Maniez, J.: Actualité des langages documentaires : fondements théoriques de la recherche d'information (2002) 0.00
    Footnote
     Translation of the title: Actuality of documentary languages: theoretical foundations of information retrieval
  13. Fox, E.A.: Lexical relations : enhancing effectiveness of information retrieval systems (1980) 0.00
  14. Dietze, J.: Informationsrecherchesprache und deren Lexik : Bemerkungen zur Terminologiediskussion (1980) 0.00
    Abstract
     Information research consists of the comparison of two sources of information: that of formal description and content analysis, and that based on the needs of the user. Information research filters identical elements from the sources by means of document and research cross-sections. Establishing such cross-sections for scientific documents and research questions is made possible by classification. Through the definition of the terms 'class' and 'classification' it becomes clear that the terms 'hierarchic classification' and 'classification' cannot be used synonymously. The basic types of information research languages are both hierarchic and non-hierarchic, arising from the structure of their lexicology and the paradigmatic relations of the lexicological units. The names for the lexicological units ('descriptor' and 'subject headings') are synonymous, but it is necessary to differentiate between the terms 'descriptor language' and 'information research thesaurus'. The principles of pre-coordination and post-coordination as applied to word formation are unsuitable for the typification of information research languages.
  15. Zhou, G.D.; Zhang, M.: Extracting relation information from text documents by exploring various types of knowledge (2007) 0.00
    Abstract
     Extracting semantic relationships between entities from text documents is challenging in information extraction and important for deep information processing and management. This paper investigates the incorporation of diverse lexical, syntactic and semantic knowledge in feature-based relation extraction using support vector machines. Our study illustrates that base phrase chunking information is very effective for relation extraction and contributes most of the performance improvement from the syntactic aspect, while currently common features from full parsing give limited further enhancement. This suggests that most of the useful information in full parse trees for relation extraction is shallow and can be captured by chunking, and that a cheap and robust solution in relation extraction can be achieved without losing much performance. We also demonstrate how semantic information, such as WordNet, can be used in feature-based relation extraction to further improve the performance. Evaluation on the ACE benchmark corpora shows that effective incorporation of diverse features enables our system to outperform the previously best-reported systems. It also shows that our feature-based system significantly outperforms tree kernel-based systems. This suggests that current tree kernels fail to effectively explore structured syntactic information in relation extraction.
    Source
    Information processing and management. 43(2007) no.4, S.969-982
  16. Kobrin, R.Y.: On the principles of terminological work in the creation of thesauri for information retrieval systems (1979) 0.00
  17. Svenonius, E.: Design of controlled vocabularies (1990) 0.00
    Source
    Encyclopedia of library and information science. Vol.45, [=Suppl.10]
  18. Kuhlen, R.: Linguistische Grundlagen (1980) 0.00
    Source
    Grundlagen der praktischen Information und Dokumentation: eine Einführung. 2. Aufl
  19. Salton, G.: Experiments in automatic thesaurus construction for information retrieval (1972) 0.00
  20. Miller, U.; Teitelbaum, R.: Pre-coordination and post-coordination : past and future (2002) 0.00
    Abstract
     This article deals with the meaningful processing of information in relation to two systems of information processing: pre-coordination and post-coordination. The different approaches are discussed, with emphasis on the need for a controlled vocabulary in information retrieval. Assigned indexing, which employs a controlled vocabulary, is described in detail. Types of indexing language can be divided into two broad groups - those using pre-coordinated terms and those depending on post-coordination. They represent two different basic approaches in processing and information retrieval. The historical development of these two approaches is described, as well as the two tools that apply to these approaches: thesauri and subject headings.

Languages

  • e 57
  • d 8
  • f 3
  • ja 1
  • nl 1

Types

  • a 58
  • m 7
  • s 4
  • el 2
  • r 2
  • d 1
  • x 1

Classifications