Search (139 results, page 1 of 7)

  • year_i:[1980 TO 1990}
  1. Smeaton, A.F.; Rijsbergen, C.J. van: ¬The retrieval effects of query expansion on a feedback document retrieval system (1983) 0.14
    0.14113252 = product of:
      0.21169877 = sum of:
        0.16489306 = weight(_text_:query in 2134) [ClassicSimilarity], result of:
          0.16489306 = score(doc=2134,freq=2.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.71889395 = fieldWeight in 2134, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.109375 = fieldNorm(doc=2134)
        0.046805713 = product of:
          0.09361143 = sum of:
            0.09361143 = weight(_text_:22 in 2134) [ClassicSimilarity], result of:
              0.09361143 = score(doc=2134,freq=2.0), product of:
                0.1728227 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049352113 = queryNorm
                0.5416616 = fieldWeight in 2134, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2134)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    30. 3.2001 13:32:22
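The explanation tree above is standard Lucene ClassicSimilarity output, and its arithmetic can be checked with a short sketch. Note that queryNorm depends on the full query and fieldNorm on index-time field length, so both are taken as given from the explanation rather than recomputed:

```python
import math

# Constants copied from the explanation tree for document 2134.
QUERY_NORM = 0.049352113
FIELD_NORM = 0.109375

def idf(doc_freq, max_docs):
    """Lucene ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))."""
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def clause_score(freq, doc_freq, max_docs=44218):
    """One term clause: score = queryWeight * fieldWeight."""
    tf = math.sqrt(freq)                      # tf(freq) = sqrt(freq)
    term_idf = idf(doc_freq, max_docs)
    query_weight = term_idf * QUERY_NORM      # idf * queryNorm
    field_weight = tf * term_idf * FIELD_NORM # tf * idf * fieldNorm
    return query_weight * field_weight

w_query = clause_score(2.0, doc_freq=1151)    # '_text_:query' clause
w_22 = clause_score(2.0, doc_freq=3622)       # '_text_:22' clause
# The inner coord(1/2) halves the '22' branch; the outer coord(2/3)
# scales the sum because 2 of 3 query clauses matched.
score = (w_query + 0.5 * w_22) * (2.0 / 3.0)
```

Run against the numbers shown for document 2134, this reproduces the clause weights 0.16489306 and 0.09361143 and the final score 0.14113252 to within floating-point rounding.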
  2. Schroder, J.J.: Query refining (1983) 0.08
    0.07852051 = product of:
      0.23556152 = sum of:
        0.23556152 = weight(_text_:query in 5131) [ClassicSimilarity], result of:
          0.23556152 = score(doc=5131,freq=2.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            1.0269914 = fieldWeight in 5131, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.15625 = fieldNorm(doc=5131)
      0.33333334 = coord(1/3)
    
  3. Salton, G.; Voorhees, E.; Fox, E.A.: ¬A comparison of two methods for Boolean query relevance feedback (1984) 0.06
    0.06281641 = product of:
      0.18844922 = sum of:
        0.18844922 = weight(_text_:query in 5446) [ClassicSimilarity], result of:
          0.18844922 = score(doc=5446,freq=2.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.8215931 = fieldWeight in 5446, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.125 = fieldNorm(doc=5446)
      0.33333334 = coord(1/3)
    
  4. Chang, N.S.; Fu, K.S.: Picture query languages for pictorial data-base systems (1981) 0.06
    0.06281641 = product of:
      0.18844922 = sum of:
        0.18844922 = weight(_text_:query in 5635) [ClassicSimilarity], result of:
          0.18844922 = score(doc=5635,freq=2.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.8215931 = fieldWeight in 5635, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.125 = fieldNorm(doc=5635)
      0.33333334 = coord(1/3)
    
  5. Salton, G.: ¬A simple blueprint for automatic Boolean query processing (1988) 0.06
    0.06281641 = product of:
      0.18844922 = sum of:
        0.18844922 = weight(_text_:query in 6774) [ClassicSimilarity], result of:
          0.18844922 = score(doc=6774,freq=2.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.8215931 = fieldWeight in 6774, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.125 = fieldNorm(doc=6774)
      0.33333334 = coord(1/3)
    
  6. Robertson, S.E.: On relevance weight estimation and query expansion (1986) 0.06
    0.055522382 = product of:
      0.16656715 = sum of:
        0.16656715 = weight(_text_:query in 3875) [ClassicSimilarity], result of:
          0.16656715 = score(doc=3875,freq=4.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.7261926 = fieldWeight in 3875, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.078125 = fieldNorm(doc=3875)
      0.33333334 = coord(1/3)
    
    Abstract
    A Bayesian argument is used to suggest modifications to the Robertson/Sparck Jones relevance weighting formula, to accommodate the addition to the query of terms taken from the relevant documents identified during the search.
  7. Kuhlen, R.; Hammwöhner, R.; Sonnenberger, G.; Thiel, U.: TWRM-TOPOGRAPHIC : ein wissensbasiertes System zur situationsgerechten Aufbereitung und Präsentation von Textinformation in graphischen Retrievaldialogen (1988) 0.05
    0.05040447 = product of:
      0.075606704 = sum of:
        0.05889038 = weight(_text_:query in 3113) [ClassicSimilarity], result of:
          0.05889038 = score(doc=3113,freq=2.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.25674784 = fieldWeight in 3113, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3113)
        0.016716326 = product of:
          0.03343265 = sum of:
            0.03343265 = weight(_text_:22 in 3113) [ClassicSimilarity], result of:
              0.03343265 = score(doc=3113,freq=2.0), product of:
                0.1728227 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049352113 = queryNorm
                0.19345059 = fieldWeight in 3113, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3113)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Starting from a discussion of the design concepts and capabilities of today's full-text retrieval systems, an overview is given of the functionality of TWRM-TOPOGRAPHIC, the system responsible for the preparation and presentation of text information. TWRM-TOPOGRAPHIC is part of a novel information system based on content-oriented representation of full texts. Its two most important features are graphical retrieval dialogue control and the flexible, situation-appropriate preparation and presentation of text knowledge: the dialogue control allows the user to navigate directly within the knowledge structures displayed graphically on the screen, to select displayed objects in order to formulate a query, and to change the abstraction level of the displayed text information. The preparation and presentation of text knowledge are grounded in cognitive ergonomics, taking into account both the limited receptive capacity of users and the significance of the temporal ordering of information units for the perception and memory performance of recipients. Text knowledge is presented at different levels of abstraction: from a very generic level, through knowledge graphs and automatically generated abstracts, down to the discursive form of the text passage. The system's generation component contributes to situation-appropriate system behaviour by producing user-adapted abstracts, with varying thematic focus and variable level of detail, from semantic text representation structures subject to textual well-formedness constraints. The testing of various layout methods in the TWRM-TOPOGRAPHIC project is supported by a flexible, object-oriented User Interface Management System (UIMS), whose object classes and their interaction possibilities are presented.
The description of the system concludes with a detailed dialogue example illustrating the function of the interface and the effect of the three central operators (Select, Zoom and Browse) in the retrieval dialogue.
    Date
    15. 1.2005 14:10:22
  8. Defude, B.: Knowledge based systems versus thesaurus : an architecture problem about expert systems design (1984) 0.04
    0.040800452 = product of:
      0.12240136 = sum of:
        0.12240136 = weight(_text_:query in 923) [ClassicSimilarity], result of:
          0.12240136 = score(doc=923,freq=6.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.5336404 = fieldWeight in 923, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.046875 = fieldNorm(doc=923)
      0.33333334 = coord(1/3)
    
    Abstract
    The use of expert systems within information retrieval systems (IRS) seems a promising approach, particularly for the query process. Nevertheless, we must examine what knowledge is needed. We think the thesaurus may be the kernel of that knowledge; for this, it must be defined more broadly than in classical IRS. After recalling the principal features of a query expert system, we discuss the relationship between a thesaurus and a query expert system. The problem is to determine whether the thesaurus must be integrated within the knowledge base; in fact, this choice is an architecture problem of the expert system. We analyse, in parallel, the effects of this choice on thesaurus representation, expert system functionality, and expert system architecture.
  9. Shore, M.L.: Variation between personal name headings and title page usage (1984) 0.04
    0.039691657 = product of:
      0.11907496 = sum of:
        0.11907496 = product of:
          0.23814993 = sum of:
            0.23814993 = weight(_text_:page in 2850) [ClassicSimilarity], result of:
              0.23814993 = score(doc=2850,freq=2.0), product of:
                0.27565226 = queryWeight, product of:
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.049352113 = queryNorm
                0.86395055 = fieldWeight in 2850, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2850)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  10. Deschâtelets, G.: ¬The three languages theory in information retrieval (1986) 0.03
    0.03331343 = product of:
      0.09994029 = sum of:
        0.09994029 = weight(_text_:query in 1635) [ClassicSimilarity], result of:
          0.09994029 = score(doc=1635,freq=4.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.43571556 = fieldWeight in 1635, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.046875 = fieldNorm(doc=1635)
      0.33333334 = coord(1/3)
    
    Abstract
    To an overwhelming extent, storage and retrieval systems were designed for information intermediaries who were specialists in formal, controlled documentation languages (e.g. classification systems, indexing languages) and who were then trained to utilize the query language of each retrieval system. However, with the advent of the microcomputer, there now exists, in the information retrieval industry, an obvious will to tackle both the professional and the personal information markets, as evidenced by more sophisticated yet more user-friendly systems and by the design and marketing of all sorts of interface software (front-end, gateway, intermediary). In order to take full advantage of these systems, the user must be able to master three different languages: the natural language of the discipline, the indexing language, and the system's query language. The author defines and characterizes each of these languages and identifies their issues and trends in the IR cycle, specifically in public online search services. Finally, he proposes a theoretical model for the analysis of IR languages and suggests a few research avenues.
  11. Craven, T.C.: Customized extracts based on Boolean queries and sentence dependency structures (1989) 0.03
    0.031408206 = product of:
      0.09422461 = sum of:
        0.09422461 = weight(_text_:query in 789) [ClassicSimilarity], result of:
          0.09422461 = score(doc=789,freq=2.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.41079655 = fieldWeight in 789, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.0625 = fieldNorm(doc=789)
      0.33333334 = coord(1/3)
    
    Abstract
    A method is described for using Boolean queries in automatically deriving customized extracts from a text in which semantic dependencies between sentences have been coded. Each sentence in the structured text is treated as defining a separate extract. This extract consists of the sentence and all other sentences on which the sentence is directly or indirectly dependent for its meaning. Extracts from a text that satisfy a given Boolean query are merged to eliminate duplicate sentences. A prototype implementation of the method has been developed within an experimental text structure management system (TEXNET)
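The dependency-based extract derivation described above can be sketched as a transitive closure over a sentence dependency graph; the data and function names below are hypothetical illustrations, not the TEXNET implementation:

```python
def extract_for(sentence, depends_on):
    """The extract defined by one sentence: the sentence itself plus all
    sentences it directly or indirectly depends on for its meaning."""
    seen = set()
    stack = [sentence]
    while stack:
        s = stack.pop()
        if s not in seen:
            seen.add(s)
            stack.extend(depends_on.get(s, ()))
    return seen

def customized_extract(matching_sentences, depends_on):
    """Merge the extracts of all sentences satisfying the Boolean query,
    eliminating duplicate sentences; sorting restores text order."""
    merged = set()
    for s in matching_sentences:
        merged |= extract_for(s, depends_on)
    return sorted(merged)

# Toy text of five sentences (indexed 0-4): sentence 1 depends on 0,
# 3 on 1, and 4 on 2; the query matches sentences 3 and 4.
deps = {1: [0], 3: [1], 4: [2]}
print(customized_extract([3, 4], deps))   # [0, 1, 2, 3, 4]
```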
  12. Kloeden, E. von: Beraten will gelernt sein (1989) 0.03
    0.031408206 = product of:
      0.09422461 = sum of:
        0.09422461 = weight(_text_:query in 1517) [ClassicSimilarity], result of:
          0.09422461 = score(doc=1517,freq=2.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.41079655 = fieldWeight in 1517, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.0625 = fieldNorm(doc=1517)
      0.33333334 = coord(1/3)
    
    Abstract
    Account of the enquiries addressed to a beginner at the central information desk of Oldenburg University library. The desk is placed where it is easily disturbed, especially by the telephone. For the more complex enquiries the librarian needs to question the user in order to formulate a specific query for searching. Librarians must choose whether to indicate the reference tools to the user or to find the information themselves. Hindrances include queues, users' lack of confidence, and inexactness of requests.
  13. Metzler, D.P.; Haas, S.W.; Cosic, C.L.; Wheeler, L.H.: Constituent object parsing for information retrieval and similar text processing problems (1989) 0.03
    0.031408206 = product of:
      0.09422461 = sum of:
        0.09422461 = weight(_text_:query in 2858) [ClassicSimilarity], result of:
          0.09422461 = score(doc=2858,freq=2.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.41079655 = fieldWeight in 2858, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.0625 = fieldNorm(doc=2858)
      0.33333334 = coord(1/3)
    
    Abstract
    Describes the architecture and functioning of the Constituent Object Parser. This system has been developed specially for text processing applications such as information retrieval, which can benefit from structural comparisons between elements of text such as a query and a potentially relevant abstract. Describes the general way in which this objective influenced the design of the system.
  14. Pao, M.L.: Retrieval differences between term and citation indexing (1989) 0.03
    0.031408206 = product of:
      0.09422461 = sum of:
        0.09422461 = weight(_text_:query in 3566) [ClassicSimilarity], result of:
          0.09422461 = score(doc=3566,freq=2.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.41079655 = fieldWeight in 3566, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.0625 = fieldNorm(doc=3566)
      0.33333334 = coord(1/3)
    
    Abstract
    A retrieval experiment was conducted to compare online searching using terms as opposed to citations. This is the first study in which a single database was used to retrieve two equivalent sets for each query: one using terms found in the bibliographic record to achieve higher recall, and the other using citations. Reports on the use of a second citation searching strategy. Overall, by using both types of search keys, total recall is increased.
  15. Raghavan, V.V.; Jung, G.S.; Bollmann, P.: ¬A critical investigation of recall and precision as measures of retrieval system performance (1989) 0.03
    0.031408206 = product of:
      0.09422461 = sum of:
        0.09422461 = weight(_text_:query in 3606) [ClassicSimilarity], result of:
          0.09422461 = score(doc=3606,freq=2.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.41079655 = fieldWeight in 3606, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.0625 = fieldNorm(doc=3606)
      0.33333334 = coord(1/3)
    
    Abstract
    Recall and precision are often used to evaluate the effectiveness of information retrieval systems. However, when the retrieval results are weakly ordered, in the sense that several documents have an identical retrieval status value with respect to a query, some probabilistic notion of precision has to be introduced. Provides a comparative analysis of methods available for defining precision in a probabilistic sense and promotes a better understanding of the various issues involved in retrieval performance evaluation.
  16. Prasher, R.G.: Evaluation of indexing system (1989) 0.03
    0.031408206 = product of:
      0.09422461 = sum of:
        0.09422461 = weight(_text_:query in 4998) [ClassicSimilarity], result of:
          0.09422461 = score(doc=4998,freq=2.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.41079655 = fieldWeight in 4998, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.0625 = fieldNorm(doc=4998)
      0.33333334 = coord(1/3)
    
    Abstract
    Describes an information system and its various components: index file construction, query formulation, and searching. Discusses an indexing system and brings out the need for its evaluation. Explains the concept of the efficiency of indexing systems and discusses the factors which control this efficiency. Gives criteria for evaluation. Discusses recall and precision ratios, as well as noise ratio, novelty ratio, exhaustivity and specificity, and the impact of each on the efficiency of an indexing system. Also mentions various steps for evaluation.
  17. Wade, S.J.; Willett, P.; Bawden, D.: SIBRIS : the Sandwich Interactive Browsing and Ranking Information System (1989) 0.03
    0.027482178 = product of:
      0.08244653 = sum of:
        0.08244653 = weight(_text_:query in 2828) [ClassicSimilarity], result of:
          0.08244653 = score(doc=2828,freq=2.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.35944697 = fieldWeight in 2828, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2828)
      0.33333334 = coord(1/3)
    
    Abstract
    SIBRIS (Sandwich Interactive Browsing and Ranking Information System) is an interactive text retrieval system which has been developed to support the browsing of library and product files at Pfizer Central Research, Sandwich, UK. Once an initial ranking has been produced, the system will allow the user to select any document displayed on the screen at any point during the browse and to use that as the basis for another search. Facilities have been included to enable the user to keep track of the browse and to facilitate backtracking, thus allowing the user to move away from the original query to wander in and out of different areas of interest.
  18. Maron, M.E.: Probabilistic design principles for conventional and full-text retrieval systems (1988) 0.02
    0.023556154 = product of:
      0.07066846 = sum of:
        0.07066846 = weight(_text_:query in 7409) [ClassicSimilarity], result of:
          0.07066846 = score(doc=7409,freq=2.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.30809742 = fieldWeight in 7409, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.046875 = fieldNorm(doc=7409)
      0.33333334 = coord(1/3)
    
    Abstract
    In order for conventionally designed commercial document retrieval systems to perform perfectly, the following 2 (logical) conditions must be satisfied for every search: there exists a document property (or combinations of properties) that belongs to those (and only those) documents that are relevant; that property (or combination of properties) can be correctly guessed by the searcher. In general, the 1st assumption is false, and the second is impossible to satisfy; hence no conventional IR system can perform at a maximum level of effectiveness. However, different design principles can lead to improved performance. Presents a view of the document retrieval problem that shows that since the relationship between document properties (whether they be humanly assigned index terms or words that occur in the running text) and relevance is at best probabilistic, one should approach the design problem using probabilistic principles. It turns out that a front end system designed to permit searchers to attach probabilistically interpreted weights to their query terms could be adapted for conventional IR systems. Such an enhancement could lead to improved performance
  19. Marchionini, G.: Information-seeking strategies of novices using a full-text electronic encyclopedia (1989) 0.02
    0.023556154 = product of:
      0.07066846 = sum of:
        0.07066846 = weight(_text_:query in 2589) [ClassicSimilarity], result of:
          0.07066846 = score(doc=2589,freq=2.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.30809742 = fieldWeight in 2589, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.046875 = fieldNorm(doc=2589)
      0.33333334 = coord(1/3)
    
    Abstract
    An exploratory study was conducted of elementary school children searching a full-text electronic encyclopedia on CD-ROM. 28 third and fourth graders and 24 sixth graders conducted 2 assigned searches, one open-ended, the other closed, after 2 demonstration sessions. Keystrokes captured by the computer and observer notes were used to examine user information-seeking strategies from a mental model perspective. Older searchers were more successful in finding required information, and took less time, than younger searchers. No differences in total number of moves were found. Analysis of search patterns showed that novices used a heuristic, highly interactive search strategy. Searchers used sentence and phrase queries, indicating unique mental models for this search system. Most searchers accepted system defaults and used the AND connective in formulating queries. Transition matrix analysis showed that younger searchers generally favoured query refining moves and older searchers favoured examining title and text moves. Suggestions for system designers were made and future research questions were identified.
  20. Lochbaum, K.E.; Streeter, A.R.: Comparing and combining the effectiveness of latent semantic indexing and the ordinary vector space model for information retrieval (1989) 0.02
    0.023556154 = product of:
      0.07066846 = sum of:
        0.07066846 = weight(_text_:query in 3458) [ClassicSimilarity], result of:
          0.07066846 = score(doc=3458,freq=2.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.30809742 = fieldWeight in 3458, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.046875 = fieldNorm(doc=3458)
      0.33333334 = coord(1/3)
    
    Abstract
    A retrieval system was built to find individuals with appropriate expertise within a large research establishment on the basis of their authored documents. The expert-locating system uses a new method for automatic indexing and retrieval based on singular value decomposition, a matrix decomposition technique related to factor analysis. Organizational groups, represented by the documents they write, and the terms contained in these documents, are fit simultaneously into a 100-dimensional "semantic" space. User queries are positioned in the semantic space, and the most similar groups are returned to the user. Here we compared the standard vector space model with this new technique and found that combining the two methods improved performance over either alone. We also examined the effects of various experimental variables on the system's retrieval accuracy: in particular, the effects of term weighting functions in constructing the semantic space and the query, of suffix stripping, and of using lexical units larger than a single word.
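The SVD-based indexing described in this abstract can be illustrated with a toy sketch. The matrix, query, and fold-in variant below are hypothetical simplifications for illustration, not the authors' expert-locating system (which uses 100 latent dimensions and organizational groups as "documents"):

```python
import numpy as np

# Toy term-document matrix: rows = terms, columns = documents.
A = np.array([
    [2.0, 0.0, 1.0],   # term 0
    [1.0, 1.0, 0.0],   # term 1
    [0.0, 2.0, 1.0],   # term 2
])

# Truncated SVD: keep k latent dimensions (the paper keeps 100).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]

# Documents represented in the k-dimensional "semantic" space.
doc_vecs = (np.diag(sk) @ Vtk).T          # shape (n_docs, k)

# Fold a query (term-frequency vector) into the same space; projecting
# onto the latent term axes is one simplified fold-in variant.
q = np.array([1.0, 1.0, 0.0])
q_vec = q @ Uk

# Rank documents by cosine similarity to the positioned query.
sims = doc_vecs @ q_vec / (
    np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
ranking = np.argsort(-sims)               # most similar documents first
```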

Languages

  • e 93
  • d 43
  • f 1
  • m 1

Types

  • a 109
  • m 19
  • s 6
  • u 2
  • x 2
  • ? 1
  • b 1
