Search (284 results, page 2 of 15)

  • language_ss:"e"
  • theme_ss:"Computerlinguistik"
  1. Pepper, S.; Arnaud, P.J.L.: Absolutely PHAB : toward a general model of associative relations (2020) 0.02
    0.016370926 = product of:
      0.065483704 = sum of:
        0.039495744 = sum of:
          0.017573725 = weight(_text_:system in 103) [ClassicSimilarity], result of:
            0.017573725 = score(doc=103,freq=2.0), product of:
              0.10100432 = queryWeight, product of:
                3.1495528 = idf(docFreq=5152, maxDocs=44218)
                0.032069415 = queryNorm
              0.17398985 = fieldWeight in 103, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1495528 = idf(docFreq=5152, maxDocs=44218)
                0.0390625 = fieldNorm(doc=103)
          0.021922018 = weight(_text_:29 in 103) [ClassicSimilarity], result of:
            0.021922018 = score(doc=103,freq=2.0), product of:
              0.11281017 = queryWeight, product of:
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.032069415 = queryNorm
              0.19432661 = fieldWeight in 103, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.0390625 = fieldNorm(doc=103)
        0.02598796 = product of:
          0.05197592 = sum of:
            0.05197592 = weight(_text_:etc in 103) [ClassicSimilarity], result of:
              0.05197592 = score(doc=103,freq=2.0), product of:
                0.17370372 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.032069415 = queryNorm
                0.2992217 = fieldWeight in 103, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=103)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
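    The score breakdown above is Lucene's ClassicSimilarity (TF-IDF) explanation. As a minimal sketch (Python, not the engine's own code), the contribution of the term "system" to this hit can be recomputed from the factors listed:

      import math

      # Factors copied from the explanation of the term "system" in doc 103 above.
      freq       = 2.0          # termFreq
      idf        = 3.1495528    # idf(docFreq=5152, maxDocs=44218)
      query_norm = 0.032069415  # queryNorm
      field_norm = 0.0390625    # fieldNorm(doc=103)

      tf           = math.sqrt(freq)              # 1.4142135
      query_weight = idf * query_norm             # 0.10100432 = queryWeight
      field_weight = tf * idf * field_norm        # 0.17398985 = fieldWeight
      term_score   = query_weight * field_weight  # ~0.017573725

      print(term_score)
      # The per-term scores are summed and scaled by the coordination factor,
      # here coord(2/8) = 0.25, giving the displayed document score of 0.016370926.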
    
    Abstract
    There have been many attempts at classifying the semantic modification relations (R) of N + N compounds but this work has not led to the acceptance of a definitive scheme, so that devising a reusable classification is a worthwhile aim. The scope of this undertaking is extended to other binominal lexemes, i.e. units that contain two thing-morphemes without explicitly stating R, like prepositional units, N + relational adjective units, etc. The 25-relation taxonomy of Bourque (2014) was tested against over 15,000 binominal lexemes from 106 languages and extended to a 29-relation scheme ("Bourque2") through the introduction of two new reversible relations. Bourque2 is then mapped onto Hatcher's (1960) four-relation scheme (extended by the addition of a fifth relation, similarity, as "Hatcher2"). This results in a two-tier system usable at different degrees of granularity. On account of its semantic proximity to compounding, metonymy is then taken into account, following Janda's (2011) suggestion that it plays a role in word formation; Peirsman and Geeraerts' (2006) inventory of 23 metonymic patterns is mapped onto Bourque2, confirming the identity of metonymic and binominal modification relations. Finally, Blank's (2003) and Koch's (2001) work on lexical semantics justifies the addition to the scheme of a third, superordinate level which comprises the three Aristotelian principles of similarity, contiguity and contrast.
  2. Byrne, C.C.; McCracken, S.A.: ¬An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.02
    0.016243655 = product of:
      0.06497462 = sum of:
        0.038904842 = weight(_text_:retrieval in 4483) [ClassicSimilarity], result of:
          0.038904842 = score(doc=4483,freq=2.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.40105087 = fieldWeight in 4483, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=4483)
        0.026069777 = product of:
          0.052139554 = sum of:
            0.052139554 = weight(_text_:22 in 4483) [ClassicSimilarity], result of:
              0.052139554 = score(doc=4483,freq=2.0), product of:
                0.112301625 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032069415 = queryNorm
                0.46428138 = fieldWeight in 4483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4483)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Date
    15. 3.2000 10:22:37
  3. Rau, L.F.; Jacobs, P.S.; Zernik, U.: Information extraction and text summarization using linguistic knowledge acquisition (1989) 0.02
    0.01620146 = product of:
      0.06480584 = sum of:
        0.044923443 = weight(_text_:retrieval in 6683) [ClassicSimilarity], result of:
          0.044923443 = score(doc=6683,freq=6.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.46309367 = fieldWeight in 6683, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=6683)
        0.019882401 = product of:
          0.039764803 = sum of:
            0.039764803 = weight(_text_:system in 6683) [ClassicSimilarity], result of:
              0.039764803 = score(doc=6683,freq=4.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.3936941 = fieldWeight in 6683, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6683)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Storing and accessing texts in a conceptual format has a number of advantages over traditional document retrieval methods. A conceptual format facilitates natural language access to text information. It can support imprecise and inexact queries, conceptual information summarisation, and, ultimately, document translation. Describes 2 methods which have been implemented in a prototype intelligent information retrieval system called SCISOR (System for Conceptual Information Summarisation, Organization and Retrieval). Describes the text processing, language acquisition, and summarisation components of SCISOR
  4. Hsinchun, C.: Knowledge-based document retrieval framework and design (1992) 0.02
    0.01620146 = product of:
      0.06480584 = sum of:
        0.044923443 = weight(_text_:retrieval in 6686) [ClassicSimilarity], result of:
          0.044923443 = score(doc=6686,freq=6.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.46309367 = fieldWeight in 6686, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=6686)
        0.019882401 = product of:
          0.039764803 = sum of:
            0.039764803 = weight(_text_:system in 6686) [ClassicSimilarity], result of:
              0.039764803 = score(doc=6686,freq=4.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.3936941 = fieldWeight in 6686, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6686)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Presents research on the design of knowledge-based document retrieval systems in which a semantic network was adopted to represent subject knowledge and classification scheme knowledge, while experts' search strategies and user modelling capability were modelled as procedural knowledge. These functionalities were incorporated into a prototype knowledge-based retrieval system, Metacat. The design of the system was based on the blackboard architecture, enabling it to create a user profile, identify task requirements, suggest heuristics-based search strategies, perform semantic-based search assistance, and assist online query refinement
  5. Bian, G.-W.; Chen, H.-H.: Cross-language information access to multilingual collections on the Internet (2000) 0.02
    0.016136829 = product of:
      0.043031543 = sum of:
        0.019452421 = weight(_text_:retrieval in 4436) [ClassicSimilarity], result of:
          0.019452421 = score(doc=4436,freq=2.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.20052543 = fieldWeight in 4436, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=4436)
        0.010544236 = product of:
          0.021088472 = sum of:
            0.021088472 = weight(_text_:system in 4436) [ClassicSimilarity], result of:
              0.021088472 = score(doc=4436,freq=2.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.20878783 = fieldWeight in 4436, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4436)
          0.5 = coord(1/2)
        0.013034889 = product of:
          0.026069777 = sum of:
            0.026069777 = weight(_text_:22 in 4436) [ClassicSimilarity], result of:
              0.026069777 = score(doc=4436,freq=2.0), product of:
                0.112301625 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032069415 = queryNorm
                0.23214069 = fieldWeight in 4436, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4436)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    The language barrier is the major problem that people face in searching for, retrieving, and understanding multilingual collections on the Internet. This paper deals with query translation and document translation in a Chinese-English information retrieval system called MTIR. Bilingual dictionary and monolingual corpus-based approaches are adopted to select suitable translated query terms. A machine transliteration algorithm is introduced to resolve proper name searching. We consider several design issues for document translation, including which material is translated, what roles the HTML tags play in translation, what the tradeoff is between speed performance and translation performance, and in what form the translated result is presented. About 100,000 Web pages translated in the last 4 months of 1997 are used for a quantitative study of online and real-time Web page translation
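    The dictionary-plus-corpus translation step described above can be illustrated with a small sketch; the dictionary entries, co-occurrence counts and greedy scoring below are hypothetical stand-ins, not MTIR's actual resources or algorithm:

      from itertools import product

      DICT = {                      # source term -> candidate translations (made-up entries)
          "银行": ["bank", "shore"],
          "利率": ["interest rate"],
      }

      # Co-occurrence counts harvested from a monolingual target-language corpus (made up).
      COOC = {("bank", "interest rate"): 57, ("shore", "interest rate"): 2}

      def cooc(a, b):
          return COOC.get((a, b), 0) + COOC.get((b, a), 0)

      def translate(query_terms):
          """Pick the candidate combination whose members co-occur most with each other."""
          candidates = [DICT.get(t, [t]) for t in query_terms]
          best, best_score = None, -1
          for combo in product(*candidates):   # small queries: brute force is fine
              score = sum(cooc(a, b) for i, a in enumerate(combo) for b in combo[i + 1:])
              if score > best_score:
                  best, best_score = combo, score
          return list(best)

      print(translate(["银行", "利率"]))   # -> ['bank', 'interest rate']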
    Date
    16. 2.2000 14:22:39
  6. Rosemblat, G.; Tse, T.; Gemoets, D.: Adapting a monolingual consumer health system for Spanish cross-language information retrieval (2004) 0.02
    0.015605161 = product of:
      0.062420644 = sum of:
        0.022924898 = weight(_text_:retrieval in 2673) [ClassicSimilarity], result of:
          0.022924898 = score(doc=2673,freq=4.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.23632148 = fieldWeight in 2673, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2673)
        0.039495744 = sum of:
          0.017573725 = weight(_text_:system in 2673) [ClassicSimilarity], result of:
            0.017573725 = score(doc=2673,freq=2.0), product of:
              0.10100432 = queryWeight, product of:
                3.1495528 = idf(docFreq=5152, maxDocs=44218)
                0.032069415 = queryNorm
              0.17398985 = fieldWeight in 2673, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1495528 = idf(docFreq=5152, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2673)
          0.021922018 = weight(_text_:29 in 2673) [ClassicSimilarity], result of:
            0.021922018 = score(doc=2673,freq=2.0), product of:
              0.11281017 = queryWeight, product of:
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.032069415 = queryNorm
              0.19432661 = fieldWeight in 2673, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2673)
      0.25 = coord(2/8)
    
    Abstract
    This preliminary study applies a bilingual term list (BTL) approach to cross-language information retrieval (CLIR) in the consumer health domain and compares it to a machine translation (MT) approach. We compiled a Spanish-English BTL of 34,980 medical and general terms. We collected a training set of 466 general health queries from MedlinePlus en español and 488 domain-specific queries from ClinicalTrials.gov translated into Spanish. We submitted the training set queries in English against a test bed of 7,170 ClinicalTrials.gov English documents, and compared MT and BTL against this English monolingual standard. The BTL approach was less effective (F = 0.420) than the MT approach (F = 0.578). A failure analysis of the results led to substitution of BTL dictionary sources and the addition of rudimentary normalisation of plural forms. These changes improved the CLIR effectiveness of the same training set queries (F = 0.474), and yielded comparable results for a test set of 954 new queries (F = 0.484). These results will shape our efforts to support Spanish speakers' needs for consumer health information currently available only in English.
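    The F values quoted above are, presumably, the balanced F-measure, i.e. the harmonic mean of precision and recall; a brief reminder sketch (the precision/recall figures below are made up, not taken from the study):

      def f_measure(precision: float, recall: float, beta: float = 1.0) -> float:
          """Balanced F-measure for beta=1; beta weights recall relative to precision."""
          if precision == 0.0 and recall == 0.0:
              return 0.0
          b2 = beta * beta
          return (1 + b2) * precision * recall / (b2 * precision + recall)

      # Illustrative numbers only.
      print(round(f_measure(0.52, 0.45), 3))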
    Date
    29. 8.2004 19:12:06
  7. Hess, M.: ¬An incrementally extensible document retrieval system based on linguistic and logical principles (1992) 0.02
    0.015440023 = product of:
      0.06176009 = sum of:
        0.043496937 = weight(_text_:retrieval in 2413) [ClassicSimilarity], result of:
          0.043496937 = score(doc=2413,freq=10.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.44838852 = fieldWeight in 2413, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=2413)
        0.018263152 = product of:
          0.036526304 = sum of:
            0.036526304 = weight(_text_:system in 2413) [ClassicSimilarity], result of:
              0.036526304 = score(doc=2413,freq=6.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.36163113 = fieldWeight in 2413, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2413)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Most natural language based document retrieval systems use the syntax structures of constituent phrases of documents as index terms. Many of these systems also attempt to reduce the syntactic variability of natural language by some normalisation procedure applied to these syntax structures. However, the retrieval performance of such systems remains fairly disappointing. Some systems therefore use a meaning representation language to index and retrieve documents. In this paper, a system is presented that uses Horn Clause Logic as meaning representation language, employs advanced techniques from Natural Language Processing to achieve incremental extensibility, and uses methods from Logic Programming to achieve robustness in the face of insufficient data.
    Source
    SIGIR '92: Proceedings of the 15th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval
  8. Chen, K.-H.: Evaluating Chinese text retrieval with multilingual queries (2002) 0.02
    0.015183598 = product of:
      0.06073439 = sum of:
        0.04538898 = weight(_text_:retrieval in 1851) [ClassicSimilarity], result of:
          0.04538898 = score(doc=1851,freq=8.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.46789268 = fieldWeight in 1851, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1851)
        0.015345411 = product of:
          0.030690823 = sum of:
            0.030690823 = weight(_text_:29 in 1851) [ClassicSimilarity], result of:
              0.030690823 = score(doc=1851,freq=2.0), product of:
                0.11281017 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.032069415 = queryNorm
                0.27205724 = fieldWeight in 1851, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1851)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    This paper reports the design of a Chinese test collection with multilingual queries and the application of this test collection to evaluate information retrieval systems. The effective indexing units, IR models, translation techniques, and query expansion for Chinese text retrieval are identified. The collaboration of East Asian countries on the construction of test collections for cross-language multilingual text retrieval is also discussed in this paper. In addition, a tool is designed to help assessors judge relevance and gather relevance judgment events. The log file created by this tool will be used to analyze the behaviors of assessors in the future.
    Source
    Knowledge organization. 29(2002) nos.3/4, S.156-170
  9. Conceptual structures : logical, linguistic, and computational issues. 8th International Conference on Conceptual Structures, ICCS 2000, Darmstadt, Germany, August 14-18, 2000 (2000) 0.01
    0.014712609 = product of:
      0.039233625 = sum of:
        0.0097262105 = weight(_text_:retrieval in 691) [ClassicSimilarity], result of:
          0.0097262105 = score(doc=691,freq=2.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.10026272 = fieldWeight in 691, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0234375 = fieldNorm(doc=691)
        0.0074559003 = product of:
          0.014911801 = sum of:
            0.014911801 = weight(_text_:system in 691) [ClassicSimilarity], result of:
              0.014911801 = score(doc=691,freq=4.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.14763528 = fieldWeight in 691, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=691)
          0.5 = coord(1/2)
        0.022051515 = product of:
          0.04410303 = sum of:
            0.04410303 = weight(_text_:etc in 691) [ClassicSimilarity], result of:
              0.04410303 = score(doc=691,freq=4.0), product of:
                0.17370372 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.032069415 = queryNorm
                0.25389802 = fieldWeight in 691, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=691)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    Computer scientists create models of a perceived reality. Through AI techniques, these models aim at providing the basic support for emulating cognitive behavior such as reasoning and learning, which is one of the main goals of the AI research effort. Such computer models are formed through the interaction of various acquisition and inference mechanisms: perception, concept learning, conceptual clustering, hypothesis testing, probabilistic inference, etc., and are represented using different paradigms tightly linked to the processes that use them. Among these paradigms let us cite: biological models (neural nets, genetic programming), logic-based models (first-order logic, modal logic, rule-based systems), virtual reality models (object systems, agent systems), probabilistic models (Bayesian nets, fuzzy logic), linguistic models (conceptual dependency graphs, language-based representations), etc. One of the strengths of the Conceptual Graph (CG) theory is its versatility in terms of the representation paradigms under which it falls. It can be viewed, and therefore used, under different representation paradigms, which makes it a popular choice for a wealth of applications. Its full coupling with different cognitive processes has led to the opening of the field toward related research communities such as the Description Logic, Formal Concept Analysis, and Computational Linguistics communities. We now see more and more research results from one community enrich the other, laying the foundations of common philosophical grounds from which a successful synergy can emerge. ICCS 2000 embodies this spirit of research collaboration. It presents a set of papers that, we believe, will benefit the whole community through their exposure. For instance, the technical program proposes tracks on Conceptual Ontologies, Language, Formal Concept Analysis, Computational Aspects of Conceptual Structures, and Formal Semantics, with some papers on pragmatism and human-related aspects of computing. Never before was the program of ICCS formed by so heterogeneously rooted theories of knowledge representation and use. We hope that this swirl of ideas will benefit you as much as it already has benefited us while putting together this program
    Content
    Concepts and Language: The Role of Conceptual Structure in Human Evolution (Keith Devlin) - Concepts in Linguistics - Concepts in Natural Language (Gisela Harras) - Patterns, Schemata, and Types: Author Support through Formalized Experience (Felix H. Gatzemeier) - Conventions and Notations for Knowledge Representation and Retrieval (Philippe Martin) - Conceptual Ontology: Ontology, Metadata, and Semiotics (John F. Sowa) - Pragmatically Yours (Mary Keeler) - Conceptual Modeling for Distributed Ontology Environments (Deborah L. McGuinness) - Discovery of Class Relations in Exception Structured Knowledge Bases (Hendra Suryanto, Paul Compton) - Conceptual Graphs: Perspectives: CGs Applications: Where Are We 7 Years after the First ICCS? (Michel Chein, David Genest) - The Engineering of a CG-Based System: Fundamental Issues (Guy W. Mineau) - Conceptual Graphs, Metamodeling, and Notation of Concepts (Olivier Gerbé, Guy W. Mineau, Rudolf K. Keller) - Knowledge Representation and Reasonings Based on Graph Homomorphism (Marie-Laure Mugnier) - User Modeling Using Conceptual Graphs for Intelligent Agents (James F. Baldwin, Trevor P. Martin, Aimilia Tzanavari) - Towards a Unified Querying System of Both Structured and Semi-structured Imprecise Data Using Fuzzy View (Patrice Buche, Ollivier Haemmerlé) - Formal Semantics of Conceptual Structures: The Extensional Semantics of the Conceptual Graph Formalism (Guy W. Mineau) - Semantics of Attribute Relations in Conceptual Graphs (Pavel Kocura) - Nested Concept Graphs and Triadic Power Context Families (Susanne Prediger) - Negations in Simple Concept Graphs (Frithjof Dau) - Extending the CG Model by Simulations (Jean-François Baget) - Contextual Logic and Formal Concept Analysis: Building and Structuring Description Logic Knowledge Bases Using Least Common Subsumers and Concept Analysis (Franz Baader, Ralf Molitor) - On the Contextual Logic of Ordinal Data (Silke Pollandt, Rudolf Wille) - Boolean Concept Logic (Rudolf Wille) - Lattices of Triadic Concept Graphs (Bernd Groh, Rudolf Wille) - Formalizing Hypotheses with Concepts (Bernhard Ganter, Sergei O. Kuznetsov) - Generalized Formal Concept Analysis (Laurent Chaudron, Nicolas Maille) - A Logical Generalization of Formal Concept Analysis (Sébastien Ferré, Olivier Ridoux) - On the Treatment of Incomplete Knowledge in Formal Concept Analysis (Peter Burmeister, Richard Holzer) - Conceptual Structures in Practice: Logic-Based Networks: Concept Graphs and Conceptual Structures (Peter W. Eklund) - Conceptual Knowledge Discovery and Data Analysis (Joachim Hereth, Gerd Stumme, Rudolf Wille, Uta Wille) - CEM - A Conceptual Email Manager (Richard Cole, Gerd Stumme) - A Contextual-Logic Extension of TOSCANA (Peter Eklund, Bernd Groh, Gerd Stumme, Rudolf Wille) - A Conceptual Graph Model for W3C Resource Description Framework (Olivier Corby, Rose Dieng, Cédric Hébert) - Computational Aspects of Conceptual Structures: Computing with Conceptual Structures (Bernhard Ganter) - Symmetry and the Computation of Conceptual Structures (Robert Levinson) - An Introduction to SNePS 3 (Stuart C. Shapiro) - Composition Norm Dynamics Calculation with Conceptual Graphs (Aldo de Moor) - From PROLOG++ to PROLOG+CG: A CG Object-Oriented Logic Programming Language (Adil Kabbaj, Martin Janta-Polczynski) - A Cost-Bounded Algorithm to Control Events Generalization (Gaël de Chalendar, Brigitte Grau, Olivier Ferret)
  10. Metzler, D.P.; Haas, S.W.; Cosic, C.L.; Wheeler, L.H.: Constituent object parsing for information retrieval and similar text processing problems (1989) 0.01
    0.014140559 = product of:
      0.056562237 = sum of:
        0.036679838 = weight(_text_:retrieval in 2858) [ClassicSimilarity], result of:
          0.036679838 = score(doc=2858,freq=4.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.37811437 = fieldWeight in 2858, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=2858)
        0.019882401 = product of:
          0.039764803 = sum of:
            0.039764803 = weight(_text_:system in 2858) [ClassicSimilarity], result of:
              0.039764803 = score(doc=2858,freq=4.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.3936941 = fieldWeight in 2858, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2858)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Describes the architecture and functioning of the Constituent Object Parser. This system has been developed specially for text processing applications such as information retrieval, which can benefit from structural comparisons between elements of text such as a query and a potentially relevant abstract. Describes the general way in which this objective influenced the design of the system.
  11. SIGIR'92 : Proceedings of the 15th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (1992) 0.01
    0.013757914 = product of:
      0.055031657 = sum of:
        0.037634555 = weight(_text_:retrieval in 6671) [ClassicSimilarity], result of:
          0.037634555 = score(doc=6671,freq=22.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.3879561 = fieldWeight in 6671, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02734375 = fieldNorm(doc=6671)
        0.017397102 = product of:
          0.034794204 = sum of:
            0.034794204 = weight(_text_:system in 6671) [ClassicSimilarity], result of:
              0.034794204 = score(doc=6671,freq=16.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.34448233 = fieldWeight in 6671, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=6671)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Content
    HARMAN, D.: Relevance feedback revisited; AALBERSBERG, I.J.: Incremental relevance feedback; TAGUE-SUTCLIFFE, J.: Measuring the informativeness of a retrieval process; LEWIS, D.D.: An evaluation of phrasal and clustered representations on a text categorization task; BLOSSEVILLE, M.J., G. HÉBRAIL, M.G. MONTEIL u. N. PÉNOT: Automatic document classification: natural language processing, statistical analysis, and expert system techniques used together; MASAND, B., G. LINOFF u. D. WALTZ: Classifying news stories using memory based reasoning; KEEN, E.M.: Term position ranking: some new test results; CROUCH, C.J. u. B. YANG: Experiments in automatic statistical thesaurus construction; GREFENSTETTE, G.: Use of syntactic context to produce term association lists for text retrieval; ANICK, P.G. u. R.A. FLYNN: Versioning a full-text information retrieval system; BURKOWSKI, F.J.: Retrieval activities in a database consisting of heterogeneous collections; DEERWESTER, S.C., K. WACLENA u. M. LaMAR: A textual object management system; NIE, J.-Y.: Towards a probabilistic modal logic for semantic-based information retrieval; WANG, A.W., S.K.M. WONG u. Y.Y. YAO: An analysis of vector space models based on computational geometry; BARTELL, B.T., G.W. COTTRELL u. R.K. BELEW: Latent semantic indexing is an optimal special case of multidimensional scaling; GLAVITSCH, U. u. P. SCHÄUBLE: A system for retrieving speech documents; MARGULIS, E.L.: N-Poisson document modelling; HESS, M.: An incrementally extensible document retrieval system based on linguistic and logical principles; COOPER, W.S., F.C. GEY u. D.P. DABNEY: Probabilistic retrieval based on staged logistic regression; FUHR, N.: Integration of probabilistic fact and text retrieval; CROFT, B., L.A. SMITH u. H. TURTLE: A loosely-coupled integration of a text retrieval system and an object-oriented database system; DUMAIS, S.T. u. J. NIELSEN: Automating the assignment of submitted manuscripts to reviewers; GOST, M.A. u. M. MASOTTI: Design of an OPAC database to permit different subject searching accesses; ROBERTSON, A.M. u. P. WILLETT: Searching for historical word forms in a database of 17th century English text using spelling correction methods; FOX, E.A., Q.F. CHEN u. L.S. HEATH: A faster algorithm for constructing minimal perfect hash functions; MOFFAT, A. u. J. ZOBEL: Parameterised compression for sparse bitmaps; GRANDI, F., P. TIBERIO u. P. ZEZULA: Frame-sliced partitioned parallel signature files; ALLEN, B.: Cognitive differences in end user searching of a CD-ROM index; SONNENWALD, D.H.: Developing a theory to guide the process of designing information retrieval systems; CUTTING, D.R., J.O. PEDERSEN, D. KARGER u. J.W. TUKEY: Scatter/Gather: a cluster-based approach to browsing large document collections; CHALMERS, M. u. P. CHITSON: Bead: Explorations in information visualization; WILLIAMSON, C. u. B. SHNEIDERMAN: The dynamic HomeFinder: evaluating dynamic queries in a real-estate information exploring system
  12. Oard, D.W.; He, D.; Wang, J.: User-assisted query translation for interactive cross-language information retrieval (2008) 0.01
    0.013695264 = product of:
      0.054781057 = sum of:
        0.033692583 = weight(_text_:retrieval in 2030) [ClassicSimilarity], result of:
          0.033692583 = score(doc=2030,freq=6.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.34732026 = fieldWeight in 2030, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=2030)
        0.021088472 = product of:
          0.042176943 = sum of:
            0.042176943 = weight(_text_:system in 2030) [ClassicSimilarity], result of:
              0.042176943 = score(doc=2030,freq=8.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.41757566 = fieldWeight in 2030, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2030)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Interactive Cross-Language Information Retrieval (CLIR), a process in which searcher and system collaborate to find documents that satisfy an information need regardless of the language in which those documents are written, calls for designs in which synergies between searcher and system can be leveraged so that the strengths of one can cover weaknesses of the other. This paper describes an approach that employs user-assisted query translation to help searchers better understand the system's operation. Supporting interaction and interface designs are introduced, and results from three user studies are presented. The results indicate that experienced searchers presented with this new system evolve new search strategies that make effective use of the new capabilities, that they achieve retrieval effectiveness comparable to results obtained using fully automatic techniques, and that reported satisfaction with support for cross-language searching increased. The paper concludes with a description of a freely available interactive CLIR system that incorporates lessons learned from this research.
  13. Czejdo, B.D.; Tucci, R.P.: ¬A dataflow graphical language for database applications (1994) 0.01
    0.01358568 = product of:
      0.05434272 = sum of:
        0.032420702 = weight(_text_:retrieval in 559) [ClassicSimilarity], result of:
          0.032420702 = score(doc=559,freq=2.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.33420905 = fieldWeight in 559, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.078125 = fieldNorm(doc=559)
        0.021922018 = product of:
          0.043844037 = sum of:
            0.043844037 = weight(_text_:29 in 559) [ClassicSimilarity], result of:
              0.043844037 = score(doc=559,freq=2.0), product of:
                0.11281017 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.032069415 = queryNorm
                0.38865322 = fieldWeight in 559, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.078125 = fieldNorm(doc=559)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Discusses a graphical language for information retrieval and processing. A lot of recent activity has occurred in the area of improving access to database systems. However, current results are restricted to simple interfacing of database systems. Proposes a graphical language for specifying complex applications
    Date
    20.10.2000 13:29:46
  14. Kreymer, O.: ¬An evaluation of help mechanisms in natural language information retrieval systems (2002) 0.01
    0.013510293 = product of:
      0.054041173 = sum of:
        0.043496937 = weight(_text_:retrieval in 2557) [ClassicSimilarity], result of:
          0.043496937 = score(doc=2557,freq=10.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.44838852 = fieldWeight in 2557, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=2557)
        0.010544236 = product of:
          0.021088472 = sum of:
            0.021088472 = weight(_text_:system in 2557) [ClassicSimilarity], result of:
              0.021088472 = score(doc=2557,freq=2.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.20878783 = fieldWeight in 2557, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2557)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    The field of natural language processing (NLP) demonstrates rapid changes in the design of information retrieval systems and human-computer interaction. While natural language is being looked on as the most effective tool for information retrieval in a contemporary information environment, the systems using it are only beginning to emerge. This study attempts to evaluate the current state of NLP information retrieval systems from the user's point of view: what techniques are used by these systems to guide their users through the search process? The analysis focused on the structure and components of the systems' help mechanisms. Results of the study demonstrated that systems which claimed to be using natural language searching in fact used a wide range of information retrieval techniques from real natural language processing to Boolean searching. As a result, the user assistance mechanisms of these systems also varied. While pseudo-NLP systems would suit a more traditional method of instruction, real NLP systems primarily utilised the methods of explanation and user-system dialogue.
  15. Greengrass, M.: Conflation methods for searching databases of Latin text (1996) 0.01
    0.013350466 = product of:
      0.053401865 = sum of:
        0.032094855 = weight(_text_:retrieval in 6987) [ClassicSimilarity], result of:
          0.032094855 = score(doc=6987,freq=4.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.33085006 = fieldWeight in 6987, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6987)
        0.021307012 = product of:
          0.042614024 = sum of:
            0.042614024 = weight(_text_:system in 6987) [ClassicSimilarity], result of:
              0.042614024 = score(doc=6987,freq=6.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.42190298 = fieldWeight in 6987, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6987)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Describes the results of a project to develop conflation tools for searching databases of Latin text. Reports on the results of a questionnaire sent to 64 users of Latin text retrieval systems. Describes a Latin stemming algorithm that uses a simple longest match with some recoding but differs from most stemmers in its use of 2 separate suffix dictionaries for processing query and database words. Describes a retrieval system in which a user inputs the principal components of their search terms; these components are stemmed and the resulting stems matched against the noun-based and verb-based stem dictionaries. Evaluates the system, describing its limitations, and a more complex system
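    A minimal sketch of the longest-match suffix stripping described above; the suffix lists are hypothetical placeholders for the two stem dictionaries, and the recoding step is omitted:

      # Illustrative suffix lists only, not Greengrass's actual dictionaries.
      NOUN_SUFFIXES = ["ibus", "orum", "arum", "is", "ae", "am", "um", "us", "a", "o", "i"]
      VERB_SUFFIXES = ["erunt", "isset", "atur", "amus", "atis", "ant", "are", "at", "o"]

      def stem(word: str, suffixes: list[str]) -> str:
          """Longest-match suffix stripping: try suffixes from longest to shortest."""
          for suf in sorted(set(suffixes), key=len, reverse=True):
              if word.endswith(suf) and len(word) > len(suf) + 1:
                  return word[:-len(suf)]
          return word

      # Query and database words are stemmed the same way, then the stems are
      # matched against the noun-based and verb-based stem dictionaries.
      print(stem("dominorum", NOUN_SUFFIXES))   # -> domin
      print(stem("amaverunt", VERB_SUFFIXES))   # -> amav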
  16. Ruge, G.; Schwarz, C.: Linguistically based term associations : a new semantic component for a hyperterm system (1990) 0.01
    0.012684705 = product of:
      0.05073882 = sum of:
        0.036679838 = weight(_text_:retrieval in 5544) [ClassicSimilarity], result of:
          0.036679838 = score(doc=5544,freq=4.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.37811437 = fieldWeight in 5544, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=5544)
        0.014058981 = product of:
          0.028117962 = sum of:
            0.028117962 = weight(_text_:system in 5544) [ClassicSimilarity], result of:
              0.028117962 = score(doc=5544,freq=2.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.27838376 = fieldWeight in 5544, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5544)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    REALIST (Retrieval Aids by Linguistics and Statistics) is a tool which supplies the user of free text information retrieval systems with information about the terms in the databases. The resulting tables of terms show term relations according to their meaning in the database and form a kind of 'road map' of the database to give the user orientation help
  17. Gillaspie, L.: ¬The role of linguistic phenomena in retrieval performance (1995) 0.01
    0.012684705 = product of:
      0.05073882 = sum of:
        0.036679838 = weight(_text_:retrieval in 3861) [ClassicSimilarity], result of:
          0.036679838 = score(doc=3861,freq=4.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.37811437 = fieldWeight in 3861, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=3861)
        0.014058981 = product of:
          0.028117962 = sum of:
            0.028117962 = weight(_text_:system in 3861) [ClassicSimilarity], result of:
              0.028117962 = score(doc=3861,freq=2.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.27838376 = fieldWeight in 3861, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3861)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    This progress report presents findings from a failure analysis of 2 commercial full-text computer-assisted legal research (CALR) systems. Linguistic analyses of unretrieved documents and false drops reveal a number of potential causes for performance problems in these databases, ranging from synonymy and homography to discourse-level cohesive relations. Examines and discusses examples of natural language phenomena that affect Boolean retrieval system performance
  18. Hayes, P.J.; Knecht, L.E.; Cellio, M.J.: ¬A news story categorization system (1988) 0.01
    0.012498607 = product of:
      0.049994428 = sum of:
        0.032420702 = weight(_text_:retrieval in 1954) [ClassicSimilarity], result of:
          0.032420702 = score(doc=1954,freq=2.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.33420905 = fieldWeight in 1954, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.078125 = fieldNorm(doc=1954)
        0.017573725 = product of:
          0.03514745 = sum of:
            0.03514745 = weight(_text_:system in 1954) [ClassicSimilarity], result of:
              0.03514745 = score(doc=1954,freq=2.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.3479797 = fieldWeight in 1954, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1954)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones u. P. Willett. San Francisco: Morgan Kaufmann 1997. S.518-526
  19. Ekmekcioglu, F.C.; Lynch, M.F.; Willett, P.: Development and evaluation of conflation techniques for the implementation of a document retrieval system for Turkish text databases (1995) 0.01
    0.012372989 = product of:
      0.049491957 = sum of:
        0.032094855 = weight(_text_:retrieval in 5797) [ClassicSimilarity], result of:
          0.032094855 = score(doc=5797,freq=4.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.33085006 = fieldWeight in 5797, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5797)
        0.017397102 = product of:
          0.034794204 = sum of:
            0.034794204 = weight(_text_:system in 5797) [ClassicSimilarity], result of:
              0.034794204 = score(doc=5797,freq=4.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.34448233 = fieldWeight in 5797, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5797)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Considers the language processing techniques necessary for the implementation of a document retrieval system for Turkish text databases. Introduces the main characteristics of the Turkish language. Discusses the development of a stopword list and the evaluation of a stemming algorithm that takes account of the language's morphological structure. A 2-level description of Turkish morphology developed at Bilkent University, Ankara, is incorporated into a morphological parser, PC-KIMMO, to carry out stemming in Turkish databases. Describes the evaluation of string similarity measures - n-gram matching techniques - for Turkish. Reports experiments on 6 different Turkish text corpora
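    As an illustration of the n-gram matching techniques mentioned above, one common string similarity measure is the Dice coefficient over character bigrams (a sketch; the paper's actual measures and test words are not reproduced here):

      from collections import Counter

      def ngrams(word: str, n: int = 2) -> Counter:
          """Multiset of overlapping character n-grams."""
          return Counter(word[i:i + n] for i in range(len(word) - n + 1))

      def dice(a: str, b: str, n: int = 2) -> float:
          """Dice coefficient: 2 * |shared n-grams| / (|n-grams(a)| + |n-grams(b)|)."""
          ga, gb = ngrams(a, n), ngrams(b, n)
          total = sum(ga.values()) + sum(gb.values())
          return 2 * sum((ga & gb).values()) / total if total else 0.0

      # Hypothetical Turkish word forms sharing a stem.
      print(round(dice("kitaplar", "kitaplarda"), 2))   # -> 0.88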
  20. Airio, E.: Who benefits from CLIR in web retrieval? (2008) 0.01
    0.01236227 = product of:
      0.04944908 = sum of:
        0.038904842 = weight(_text_:retrieval in 2342) [ClassicSimilarity], result of:
          0.038904842 = score(doc=2342,freq=8.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.40105087 = fieldWeight in 2342, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=2342)
        0.010544236 = product of:
          0.021088472 = sum of:
            0.021088472 = weight(_text_:system in 2342) [ClassicSimilarity], result of:
              0.021088472 = score(doc=2342,freq=2.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.20878783 = fieldWeight in 2342, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2342)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Purpose - The aim of the current paper is to test whether query translation is beneficial in web retrieval. Design/methodology/approach - The language pairs were Finnish-Swedish, English-German and Finnish-French. A total of 12-18 participants were recruited for each language pair. Each participant performed four retrieval tasks. The author's aim was to compare the performance of the translated queries with that of the target language queries. Thus, the author asked participants to formulate a source language query and a target language query for each task. The source language queries were translated into the target language utilizing a dictionary-based system. For English-German, machine translation was also utilized. The author used Google as the search engine. Findings - The results differed depending on the language pair. The author concluded that the dictionary coverage had an effect on the results. On average, the results of query translation were better than in traditional laboratory tests. Originality/value - This research shows that query translation on the web is beneficial especially for users with moderate and non-active language skills. This is valuable information for developers of cross-language information retrieval systems.
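    A minimal sketch of dictionary-based query translation of the kind described above; the dictionary entries and the out-of-vocabulary fallback are illustrative assumptions, not the system actually used in the study:

      # Hypothetical Finnish -> Swedish entries; real coverage drives the results reported above.
      BILINGUAL_DICT = {
          "ilmastonmuutos": ["klimatförändring"],
          "vaikutus": ["effekt", "inverkan", "verkan"],
      }

      def translate_query(source_terms: list[str], dictionary: dict[str, list[str]]) -> list[str]:
          """Replace each source term with all its dictionary translations;
          out-of-vocabulary terms are kept as-is (a common fallback)."""
          target_terms: list[str] = []
          for term in source_terms:
              target_terms.extend(dictionary.get(term, [term]))
          return target_terms

      print(translate_query(["ilmastonmuutos", "vaikutus"], BILINGUAL_DICT))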

Types

  • a 249
  • m 18
  • el 15
  • s 11
  • x 5
  • p 2
  • pat 1
  • r 1