Search (256 results, page 2 of 13)

  • language_ss:"e"
  • theme_ss:"Computerlinguistik"
  1. Rau, L.F.; Jacobs, P.S.; Zernik, U.: Information extraction and text summarization using linguistic knowledge acquisition (1989) 0.01
    0.005465982 = product of:
      0.03826187 = sum of:
        0.03826187 = product of:
          0.09565468 = sum of:
            0.05074066 = weight(_text_:retrieval in 6683) [ClassicSimilarity], result of:
              0.05074066 = score(doc=6683,freq=6.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.46309367 = fieldWeight in 6683, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6683)
            0.044914022 = weight(_text_:system in 6683) [ClassicSimilarity], result of:
              0.044914022 = score(doc=6683,freq=4.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.3936941 = fieldWeight in 6683, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6683)
          0.4 = coord(2/5)
      0.14285715 = coord(1/7)
    
    Abstract
    Storing and accessing texts in a conceptual format has a number of advantages over traditional document retrieval methods. A conceptual format facilitates natural language access to text information. It can support imprecise and inexact queries, conceptual information summarisation, and, ultimately, document translation. Describes 2 methods which have been implemented in a prototype intelligent information retrieval system called SCISOR (System for Conceptual Information Summarisation, Organization and Retrieval). Describes the text processing, language acquisition, and summarisation components of SCISOR.
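The expanded score breakdown shown with each hit follows Lucene's ClassicSimilarity (tf-idf) formula: the per-term score is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm. A minimal sketch reproducing the "retrieval" term weight of result 1 from the figures shown (assuming the natural logarithm in the idf, as ClassicSimilarity uses):

```python
import math

def idf(doc_freq, max_docs):
    # Lucene ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, field_norm, query_norm):
    tf = math.sqrt(freq)                        # tf = sqrt(term frequency)
    term_idf = idf(doc_freq, max_docs)
    query_weight = term_idf * query_norm        # idf * queryNorm
    field_weight = tf * term_idf * field_norm   # tf * idf * fieldNorm
    return query_weight * field_weight

# Figures from the "retrieval" clause of result 1 (doc 6683)
score = term_score(freq=6.0, doc_freq=5836, max_docs=44218,
                   field_norm=0.0625, query_norm=0.03622214)
print(score)  # ≈ 0.05074066, the weight(_text_:retrieval ...) value above
```

The coord(2/5) and coord(1/7) factors seen further up the tree then scale the summed term scores by the fraction of query clauses a document matched.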
  2. Hsinchun, C.: Knowledge-based document retrieval framework and design (1992) 0.01
    Abstract
    Presents research on the design of knowledge-based document retrieval systems in which a semantic network was adopted to represent subject knowledge and classification scheme knowledge, while experts' search strategies and user modelling capability were modelled as procedural knowledge. These functionalities were incorporated into a prototype knowledge-based retrieval system, Metacat. Describes a system, based on the blackboard architecture, which was able to create a user profile, identify task requirements, suggest heuristics-based search strategies, perform semantic-based search assistance, and assist online query refinement.
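A toy sketch (not Metacat itself; the network and relation labels are invented for illustration) of how a semantic network of subject terms can drive search-term suggestion, one of the functionalities described above:

```python
# Hypothetical semantic network: each term maps to related terms,
# each tagged with a relation (broader / narrower / related).
network = {
    "document retrieval": [("broader", "information retrieval"),
                           ("related", "indexing")],
    "information retrieval": [("narrower", "document retrieval")],
    "indexing": [("related", "classification")],
}

def expand(term, depth=2):
    """Suggest candidate search terms by traversing the network."""
    seen, frontier = {term}, [term]
    for _ in range(depth):
        nxt = []
        for t in frontier:
            for _rel, other in network.get(t, []):
                if other not in seen:
                    seen.add(other)
                    nxt.append(other)
        frontier = nxt
    return seen - {term}

print(sorted(expand("document retrieval")))
# ['classification', 'indexing', 'information retrieval']
```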
  3. Rahmstorf, G.: Concept structures for large vocabularies (1998) 0.01
    Abstract
    A technology is described which supports the acquisition, visualisation and manipulation of large vocabularies with associated structures. It is used for dictionary production, terminology databases, thesauri, library classification systems, etc. Essential features of the technology are a lexicographic user interface, variable word description, an unlimited list of word readings, a concept language, automatic transformation of formulas into graphic structures, structure manipulation operations and retransformation into formulas. The concept language includes notations for undefined concepts. The structure of defined concepts can be constructed interactively. The technology supports the generation of large vocabularies with structures representing word senses. Concept structures and ordering systems for indexing and retrieval can be constructed separately and connected by associating relations.
    Date
    30.12.2001 19:01:22
  4. Hess, M.: ¬An incrementally extensible document retrieval system based on linguistic and logical principles (1992) 0.01
    Abstract
    Most natural language based document retrieval systems use the syntax structures of constituent phrases of documents as index terms. Many of these systems also attempt to reduce the syntactic variability of natural language by some normalisation procedure applied to these syntax structures. However, the retrieval performance of such systems remains fairly disappointing. Some systems therefore use a meaning representation language to index and retrieve documents. In this paper, a system is presented that uses Horn Clause Logic as meaning representation language, employs advanced techniques from Natural Language Processing to achieve incremental extensibility, and uses methods from Logic Programming to achieve robustness in the face of insufficient data.
    Source
    SIGIR '92: Proceedings of the 15th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval
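A minimal illustration (not Hess's actual system) of what indexing with a Horn-clause meaning representation buys: documents are stored as ground facts, background rules derive implicit facts by forward chaining, and a query matches when it is entailed rather than merely stated. All atoms and rules below are invented for the example:

```python
def forward_chain(facts, rules):
    """Compute the closure of ground facts under Horn rules.
    Each rule is (body, head): if all body atoms hold, add the head."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in known and all(b in known for b in body):
                known.add(head)
                changed = True
    return known

# Hypothetical index: atoms stand for propositions extracted from a document.
doc_facts = {("isa", "scisor", "retrieval_system"),
             ("summarises", "scisor", "news")}
rules = [([("isa", "scisor", "retrieval_system")],
          ("retrieves", "scisor", "documents"))]

closure = forward_chain(doc_facts, rules)
# The query atom is entailed, so the document matches even though
# it was never stated explicitly.
print(("retrieves", "scisor", "documents") in closure)  # True
```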
  5. Fóris, A.: Network theory and terminology (2013) 0.01
    Abstract
    The paper aims to present the relations of network theory and terminology. The model of scale-free networks, which has been recently developed and widely applied since, can be effectively used in terminology research as well. Operation based on the principle of networks is a universal characteristic of complex systems. Networks are governed by general laws. The model of scale-free networks can be viewed as a statistical-probability model, and it can be described with mathematical tools. Its main feature is that "everything is connected to everything else," that is, every node is reachable (in a few steps) starting from any other node; this phenomenon is called "the small world phenomenon." The existence of a linguistic network and the general laws of the operation of networks enable us to place issues of language use in the complex system of relations that reveal the deeper connections between phenomena with the help of networks embedded in each other. The realization of the metaphor that language also has a network structure is the basis of the classification methods of the terminological system, and likewise of the ways of creating terminology databases, which serve the purpose of providing easy and versatile accessibility to specialised knowledge.
    Date
    2. 9.2014 21:22:48
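The scale-free model the abstract invokes is usually grown by preferential attachment: new nodes link to existing nodes with probability proportional to degree, producing hubs and the short paths of the small-world property. A self-contained sketch (parameters chosen arbitrarily for the demonstration):

```python
import random
from collections import deque

def preferential_attachment(n, m, seed=42):
    """Grow a scale-free graph: each new node links to m existing
    nodes chosen with probability proportional to their degree."""
    random.seed(seed)
    # Start from a small clique so early nodes have nonzero degree.
    adj = {i: {j for j in range(m + 1) if j != i} for i in range(m + 1)}
    stubs = [i for i in range(m + 1) for _ in range(m)]  # degree-weighted pool
    for v in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(random.choice(stubs))
        adj[v] = set(targets)
        for t in targets:
            adj[t].add(v)
            stubs.append(t)
        stubs.extend([v] * m)
    return adj

def bfs_distances(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

g = preferential_attachment(300, 2)
d = bfs_distances(g, 0)
print(len(d), max(d.values()))  # all 300 nodes reachable, in a few hops
```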
  6. Metzler, D.P.; Haas, S.W.; Cosic, C.L.; Wheeler, L.H.: Constituent object parsing for information retrieval and similar text processing problems (1989) 0.00
    Abstract
    Describes the architecture and functioning of the Constituent Object Parser. This system has been developed specially for text processing applications such as information retrieval, which can benefit from structural comparisons between elements of text such as a query and a potentially relevant abstract. Describes the general way in which this objective influenced the design of the system.
  7. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatically generated word hierarchies (1996) 0.00
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  8. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.00
    Date
    8.10.2000 11:52:22
  9. Somers, H.: Example-based machine translation : Review article (1999) 0.00
    Date
    31. 7.1996 9:22:19
  10. New tools for human translators (1997) 0.00
    Date
    31. 7.1996 9:22:19
  11. Baayen, R.H.; Lieber, H.: Word frequency distributions and lexical semantics (1997) 0.00
    Date
    28. 2.1999 10:48:22
  12. Oard, D.W.; He, D.; Wang, J.: User-assisted query translation for interactive cross-language information retrieval (2008) 0.00
    Abstract
    Interactive Cross-Language Information Retrieval (CLIR), a process in which searcher and system collaborate to find documents that satisfy an information need regardless of the language in which those documents are written, calls for designs in which synergies between searcher and system can be leveraged so that the strengths of one can cover weaknesses of the other. This paper describes an approach that employs user-assisted query translation to help searchers better understand the system's operation. Supporting interaction and interface designs are introduced, and results from three user studies are presented. The results indicate that experienced searchers presented with this new system evolve new search strategies that make effective use of the new capabilities, that they achieve retrieval effectiveness comparable to results obtained using fully automatic techniques, and that reported satisfaction with support for cross-language searching increased. The paper concludes with a description of a freely available interactive CLIR system that incorporates lessons learned from this research.
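A minimal sketch of the user-assisted query translation idea described above: the system proposes candidate translations per source term, the searcher confirms or prunes them, and the confirmed sets become OR-groups of the target-language query. The bilingual term list and term choices are invented for illustration; the real system's interaction design is far richer:

```python
# Hypothetical bilingual term list: source term -> candidate translations.
candidates = {
    "bank": ["banco", "orilla"],   # institution vs. riverbank
    "note": ["nota", "billete"],
}

def build_query(terms, user_choices):
    """Keep only the translations the user confirmed; fall back to all
    candidates when the user made no choice for a term."""
    query = []
    for t in terms:
        chosen = user_choices.get(t) or candidates.get(t, [t])
        query.append(chosen)   # one disjunct (OR-group) per source term
    return query

q = build_query(["bank", "note"], {"bank": ["banco"]})
print(q)  # [['banco'], ['nota', 'billete']]
```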
  13. Greengrass, M.: Conflation methods for searching databases of Latin text (1996) 0.00
    Abstract
    Describes the results of a project to develop conflation tools for searching databases of Latin text. Reports on the results of a questionnaire sent to 64 users of Latin text retrieval systems. Describes a Latin stemming algorithm that uses a simple longest match with some recoding but differs from most stemmers in its use of 2 separate suffix dictionaries for processing query and database words. Describes a retrieval system in which a user inputs the principal components of their search terms; these components are stemmed and the resulting stems matched against the noun-based and verb-based stem dictionaries. Evaluates the system, describes its limitations, and outlines a more complex system.
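The core mechanism can be sketched as follows. This is a toy longest-match suffix stripper with two separate suffix dictionaries, as the abstract describes; the suffix lists and the minimum-stem-length guard are invented for the example and are not the project's actual dictionaries:

```python
def make_stemmer(suffixes):
    # Longest match: try suffixes from longest to shortest,
    # strip the first one that fits and leaves a plausible stem.
    ordered = sorted(suffixes, key=len, reverse=True)
    def stem(word):
        for suf in ordered:
            if word.endswith(suf) and len(word) > len(suf) + 2:
                return word[: -len(suf)]
        return word
    return stem

# Hypothetical suffix lists; the real system kept separate dictionaries
# for query words and for database words.
query_stem = make_stemmer(["ibus", "orum", "arum", "us", "um", "ae", "is", "a"])
db_stem = make_stemmer(["ibus", "orum", "us", "um", "is"])

print(query_stem("dominus"), db_stem("dominorum"))  # domin domin
```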
  14. SIGIR'92 : Proceedings of the 15th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (1992) 0.00
    Content
    HARMAN, D.: Relevance feedback revisited; AALBERSBERG, I.J.: Incremental relevance feedback; TAGUE-SUTCLIFFE, J.: Measuring the informativeness of a retrieval process; LEWIS, D.D.: An evaluation of phrasal and clustered representations on a text categorization task; BLOSSEVILLE, M.J., G. HÉBRAIL, M.G. MONTEIL u. N. PÉNOT: Automatic document classification: natural language processing, statistical analysis, and expert system techniques used together; MASAND, B., G. LINOFF u. D. WALTZ: Classifying news stories using memory based reasoning; KEEN, E.M.: Term position ranking: some new test results; CROUCH, C.J. u. B. YANG: Experiments in automatic statistical thesaurus construction; GREFENSTETTE, G.: Use of syntactic context to produce term association lists for text retrieval; ANICK, P.G. u. R.A. FLYNN: Versioning of full-text information retrieval system; BURKOWSKI, F.J.: Retrieval activities in a database consisting of heterogeneous collections; DEERWESTER, S.C., K. WACLENA u. M. LaMAR: A textual object management system; NIE, J.-Y.: Towards a probabilistic modal logic for semantic-based information retrieval; WANG, A.W., S.K.M. WONG u. Y.Y. YAO: An analysis of vector space models based on computational geometry; BARTELL, B.T., G.W. COTTRELL u. R.K. BELEW: Latent semantic indexing is an optimal special case of multidimensional scaling; GLAVITSCH, U. u. P. SCHÄUBLE: A system for retrieving speech documents; MARGULIS, E.L.: N-Poisson document modelling; HESS, M.: An incrementally extensible document retrieval system based on linguistic and logical principles; COOPER, W.S., F.C. GEY u. D.P. DABNEY: Probabilistic retrieval based on staged logistic regression; FUHR, N.: Integration of probabilistic fact and text retrieval; CROFT, B., L.A. SMITH u. H. TURTLE: A loosely-coupled integration of a text retrieval system and an object-oriented database system; DUMAIS, S.T. u. J. NIELSEN: Automating the assignment of submitted manuscripts to reviewers; GOST, M.A. u. M. MASOTTI: Design of an OPAC database to permit different subject searching accesses; ROBERTSON, A.M. u. P. WILLETT: Searching for historical word forms in a database of 17th century English text using spelling correction methods; FOX, E.A., Q.F. CHEN u. L.S. HEATH: A faster algorithm for constructing minimal perfect hash functions; MOFFAT, A. u. J. ZOBEL: Parameterised compression for sparse bitmaps; GRANDI, F., P. TIBERIO u. P. ZEZULA: Frame-sliced partitioned parallel signature files; ALLEN, B.: Cognitive differences in end user searching of a CD-ROM index; SONNENWALD, D.H.: Developing a theory to guide the process of designing information retrieval systems; CUTTING, D.R., J.O. PEDERSEN, D. KARGER u. J.W. TUKEY: Scatter/Gather: a cluster-based approach to browsing large document collections; CHALMERS, M. u. P. CHITSON: Bead: explorations in information visualization; WILLIAMSON, C. u. B. SHNEIDERMAN: The dynamic HomeFinder: evaluating dynamic queries in a real-estate information exploring system
  15. Herrera-Viedma, E.; Cordón, O.; Herrera, J.C.; Luqe, M.: ¬An IRS based on multi-granular linguistic information (2003) 0.00
    Abstract
    An information retrieval system (IRS) based on fuzzy multi-granular linguistic information is proposed. The system has an evaluation method to process multi-granular linguistic information, in such a way that the inputs to the IRS are represented in a different linguistic domain than the outputs. The system accepts Boolean queries whose terms are weighted by means of the ordinal linguistic values represented by the linguistic variable "Importance" assessed on a label set S. The system evaluates the weighted queries according to a threshold semantics and obtains the linguistic retrieval status values (RSV) of documents, represented by a linguistic variable "Relevance" expressed in a different label set S'. The advantage of this linguistic IRS with respect to others is that the use of multi-granular linguistic information facilitates and improves the IRS-user interaction.
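A toy sketch of the threshold semantics for ordinal linguistic term weights: a query term weighted with an importance label is satisfied when the document's index weight for that term reaches the requested label. The label set and documents are invented; the authors' model additionally handles multiple granularities and fuzzy aggregation, which this sketch omits:

```python
# Hypothetical ordinal label set S for query-term importance.
S = ["null", "low", "medium", "high", "total"]
rank = {label: i for i, label in enumerate(S)}

def threshold_match(doc_index_weights, query):
    """query: list of (term, importance label). A conjunctive query
    matches when every term's document weight meets its threshold."""
    return all(
        rank.get(doc_index_weights.get(term, "null"), 0) >= rank[label]
        for term, label in query
    )

doc = {"fuzzy": "high", "retrieval": "medium"}
print(threshold_match(doc, [("fuzzy", "medium"), ("retrieval", "medium")]))  # True
print(threshold_match(doc, [("fuzzy", "total")]))                            # False
```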
  16. Hayes, P.J.; Knecht, L.E.; Cellio, M.J.: ¬A news story categorization system (1988) 0.00
    Footnote
    Wiederabgedruckt in: Readings in information retrieval. Ed.: K. Sparck Jones u. P. Willett. San Francisco: Morgan Kaufmann 1997. S.518-526
  17. Ekmekcioglu, F.C.; Lynch, M.F.; Willet, P.: Development and evaluation of conflation techniques for the implementation of a document retrieval system for Turkish text databases (1995) 0.00
    Abstract
    Considers language processing techniques necessary for the implementation of a document retrieval system for Turkish text databases. Introduces the main characteristics of the Turkish language. Discusses the development of a stopword list and the evaluation of a stemming algorithm that takes account of the language's morphological structure. A 2-level description of Turkish morphology developed at Bilkent University, Ankara, is incorporated into a morphological parser, PC-KIMMO, to carry out stemming in Turkish databases. Describes the evaluation of string similarity measures - n-gram matching techniques - for Turkish. Reports experiments on 6 different Turkish text corpora
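The n-gram matching techniques mentioned in the abstract above compare words by their shared character n-grams rather than by exact stems. A minimal sketch of one common such measure, the Dice coefficient over bigrams (illustrative only; the paper's exact formulation is not given here), using the Turkish pair "kitap"/"kitaplar" ("book"/"books") as hypothetical example strings:

```python
def ngrams(word, n=2):
    """Set of character n-grams (bigrams by default) in a word."""
    return {word[i:i + n] for i in range(len(word) - n + 1)}

def dice_similarity(a, b, n=2):
    """Dice coefficient over shared n-grams: 2|A∩B| / (|A| + |B|)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    return 2 * len(ga & gb) / (len(ga) + len(gb))

# Inflected form and stem share the bigrams of 'kitap', so overlap is high
print(dice_similarity("kitaplar", "kitap"))  # 8/11 ≈ 0.727
```

Because conflation here needs no morphological rules, such measures are attractive for morphologically rich languages like Turkish, where suffixation produces many surface forms per stem.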
  18. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.00
    0.0042065145 = product of:
      0.0294456 = sum of:
        0.0294456 = product of:
          0.0588912 = sum of:
            0.0588912 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.0588912 = score(doc=4888,freq=2.0), product of:
                0.12684377 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03622214 = queryNorm
                0.46428138 = fieldWeight in 4888, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4888)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    1. 3.2013 14:56:22
  19. Ruge, G.; Schwarz, C.: Linguistically based term associations : a new semantic component for a hyperterm system (1990) 0.00
    0.0041822046 = product of:
      0.029275432 = sum of:
        0.029275432 = product of:
          0.07318858 = sum of:
            0.04142957 = weight(_text_:retrieval in 5544) [ClassicSimilarity], result of:
              0.04142957 = score(doc=5544,freq=4.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.37811437 = fieldWeight in 5544, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5544)
            0.03175901 = weight(_text_:system in 5544) [ClassicSimilarity], result of:
              0.03175901 = score(doc=5544,freq=2.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.27838376 = fieldWeight in 5544, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5544)
          0.4 = coord(2/5)
      0.14285715 = coord(1/7)
    
    Abstract
    REALIST (Retrieval Aids by Linguistics and Statistics) is a tool which supplies the user of free text information retrieval systems with information about the terms in the databases. The resulting tables of terms show term relations according to their meaning in the database and form a kind of 'road map' of the database to give the user orientation help
  20. Gillaspie, L.: ¬The role of linguistic phenomena in retrieval performance (1995) 0.00
    0.0041822046 = product of:
      0.029275432 = sum of:
        0.029275432 = product of:
          0.07318858 = sum of:
            0.04142957 = weight(_text_:retrieval in 3861) [ClassicSimilarity], result of:
              0.04142957 = score(doc=3861,freq=4.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.37811437 = fieldWeight in 3861, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3861)
            0.03175901 = weight(_text_:system in 3861) [ClassicSimilarity], result of:
              0.03175901 = score(doc=3861,freq=2.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.27838376 = fieldWeight in 3861, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3861)
          0.4 = coord(2/5)
      0.14285715 = coord(1/7)
    
    Abstract
    This progress report presents findings from a failure analysis of 2 commercial full text computer assisted legal research (CALR) systems. Linguistic analyses of unretrieved documents and false drops reveal a number of potential causes for performance problems in these databases, ranging from synonymy and homography to discourse level cohesive relations. Examines and discusses examples of natural language phenomena that affect Boolean retrieval system performance