Search (11 results, page 1 of 1)

  • theme_ss:"Retrievalalgorithmen"
  • type_ss:"m"
  1. Cross-language information retrieval (1998) 0.07
    0.071073726 = product of:
      0.09476496 = sum of:
        0.0035556336 = weight(_text_:e in 6299) [ClassicSimilarity], result of:
          0.0035556336 = score(doc=6299,freq=4.0), product of:
            0.063326925 = queryWeight, product of:
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0440575 = queryNorm
            0.056147262 = fieldWeight in 6299, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.01953125 = fieldNorm(doc=6299)
        0.05990552 = weight(_text_:et in 6299) [ClassicSimilarity], result of:
          0.05990552 = score(doc=6299,freq=10.0), product of:
            0.20671801 = queryWeight, product of:
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.0440575 = queryNorm
            0.28979343 = fieldWeight in 6299, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.01953125 = fieldNorm(doc=6299)
        0.031303804 = product of:
          0.06260761 = sum of:
            0.06260761 = weight(_text_:al in 6299) [ClassicSimilarity], result of:
              0.06260761 = score(doc=6299,freq=12.0), product of:
                0.20191248 = queryWeight, product of:
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.0440575 = queryNorm
                0.31007302 = fieldWeight in 6299, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=6299)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
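    The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explanation: each matching clause contributes queryWeight x fieldWeight, where queryWeight = idf x queryNorm, fieldWeight = sqrt(termFreq) x idf x fieldNorm, and idf = 1 + ln(maxDocs / (docFreq + 1)); the clause sum is then scaled by the coord factor (matching clauses / total clauses). A minimal sketch re-deriving the numbers shown for this record from those inputs (values copied from the tree above, no Lucene required):

      from math import sqrt, log

      # Inputs copied from the explanation tree of result 1, clause "_text_:e".
      max_docs, doc_freq = 44218, 28552
      query_norm = 0.0440575
      term_freq = 4.0
      field_norm = 0.01953125

      idf = 1 + log(max_docs / (doc_freq + 1))            # ~1.43737
      query_weight = idf * query_norm                     # ~0.063326925
      field_weight = sqrt(term_freq) * idf * field_norm   # ~0.056147262
      print(query_weight * field_weight)                  # ~0.0035556336

      # Document score: sum of the matching clause scores, times coord(3/4).
      clauses = [0.0035556336, 0.05990552, 0.031303804]
      print(0.75 * sum(clauses))                          # ~0.071073726

    The same pattern, with different termFreq, idf, fieldNorm and coord values, accounts for every other score explanation in this result list.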
    
    Content
    Contains the contributions: GREFENSTETTE, G.: The Problem of Cross-Language Information Retrieval; DAVIS, M.W.: On the Effective Use of Large Parallel Corpora in Cross-Language Text Retrieval; BALLESTEROS, L. and W.B. CROFT: Statistical Methods for Cross-Language Information Retrieval; Distributed Cross-Lingual Information Retrieval; Automatic Cross-Language Information Retrieval Using Latent Semantic Indexing; EVANS, D.A. et al.: Mapping Vocabularies Using Latent Semantics; PICCHI, E. and C. PETERS: Cross-Language Information Retrieval: A System for Comparable Corpus Querying; YAMABANA, K. et al.: A Language Conversion Front-End for Cross-Language Information Retrieval; GACHOT, D.A. et al.: The Systran NLP Browser: An Application of Machine Translation Technology in Cross-Language Information Retrieval; HULL, D.: A Weighted Boolean Model for Cross-Language Text Retrieval; SHERIDAN, P. et al.: Building a Large Multilingual Test Collection from Comparable News Documents; OARD, D.W. and B.J. DORR: Evaluating Cross-Language Text Filtering Effectiveness
    Footnote
    Christian Fluhr et al. (DIST/SMTI, France) outline the EMIR (European Multilingual Information Retrieval) and ESPRIT projects. They found that using SYSTRAN to machine translate queries and to access material from various multilingual databases produced less relevant results than a method referred to as 'multilingual reformulation' (the mechanics of which are only hinted at). An interesting technique is Latent Semantic Indexing (LSI), described by Michael Littman et al. (Brown University) and, most clearly, by David Evans et al. (Carnegie Mellon University). LSI involves creating matrices of documents and the terms they contain and 'fitting' related documents into a reduced matrix space. This effectively allows queries to be mapped onto a common semantic representation of the documents. Eugenio Picchi and Carol Peters (Pisa) report on a procedure to create links between translation equivalents in an Italian-English parallel corpus. The links are used to construct parallel linguistic contexts in real time for any term or combination of terms that is being searched for in either language. Their interest is primarily lexicographic, but they plan to apply the same procedure to comparable corpora, i.e. to texts which are not translations of each other but which share the same domain. Kiyoshi Yamabana et al. (NEC, Japan) address the issue of how to disambiguate between alternative translations of query terms. Their DMAX (double maximise) method looks at co-occurrence frequencies between both source language words and target language words in order to arrive at the most probable translation. The statistical data for the decision are derived not from the translation texts but independently from monolingual corpora in each language. An interactive user interface allows the user to influence the selection of terms during the matching process. Denis Gachot et al. (SYSTRAN) describe the SYSTRAN NLP browser, a prototype tool which collects parsing information derived from a text or corpus previously translated with SYSTRAN. The user enters queries into the browser in either a structured or free form and receives grammatical and lexical information about the source text and/or its translation.
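    As a rough illustration of the LSI step described above: a term-document matrix is factored with a truncated SVD, and both documents and queries are folded into the reduced space, where similarity is measured across the shared latent dimensions. The sketch below is schematic only (toy matrix, dimensionality and query invented for illustration; it is not the Littman or Evans system):

      import numpy as np

      # Toy term-document matrix: rows = terms, columns = documents.
      # In cross-language LSI the training documents would be parallel/paired texts.
      A = np.array([[2, 0, 1, 0],
                    [1, 1, 0, 0],
                    [0, 2, 0, 1],
                    [0, 0, 1, 2]], dtype=float)

      k = 2                                        # reduced dimensionality
      U, s, Vt = np.linalg.svd(A, full_matrices=False)
      Uk, sk, Vk = U[:, :k], s[:k], Vt[:k, :]      # truncated factors

      docs = Vk.T                                  # documents in the reduced space
      q = np.array([1.0, 0.0, 0.0, 1.0])           # a query as a term vector
      q_hat = np.diag(1.0 / sk) @ Uk.T @ q         # fold the query into that space

      def cosine(a, b):
          return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

      # Rank documents by similarity to the query in LSI space.
      print(sorted(range(A.shape[1]), key=lambda j: -cosine(q_hat, docs[j])))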
    The retrieved output from a query including the phrase 'big rockets' may be, for instance, a sentence containing 'giant rocket' which is semantically ranked above 'military rocket'. David Hull (Xerox Research Centre, Grenoble) describes an implementation of a weighted Boolean model for Spanish-English CLIR. Users construct Boolean-type queries, weighting each term in the query, which is then translated by an on-line dictionary before being applied to the database. Comparisons with the performance of unweighted free-form queries ('vector space' models) proved encouraging. Two contributions consider the evaluation of CLIR systems. In order to bypass the time-consuming and expensive process of assembling a standard collection of documents and of user queries against which the performance of a CLIR system is manually assessed, Páraic Sheridan et al. (ETH Zurich) propose a method based on retrieving 'seed documents'. This involves identifying a unique document in a database (the 'seed document') and, for a number of queries, measuring how quickly it is retrieved. The authors have also assembled a large database of multilingual news documents for testing purposes. By storing the (fairly short) documents in a structured form tagged with descriptor codes (e.g. for topic, country and area), the test suite is easily expanded while remaining consistent for the purposes of testing. Douglas Oard and Bonnie Dorr (University of Maryland) describe an evaluation methodology, designed for testing CLIR systems, which appears to apply LSI techniques to filter and rank incoming documents. The volume provides the reader with an excellent overview of several projects in CLIR. It is well supported with references and is intended as a secondary text for researchers and practitioners. It highlights the need for a good, general tutorial introduction to the field.
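    The 'seed document' idea reduces to a small measurement loop: run each query, record the rank at which the known seed document is returned, and summarise those ranks (for example as a mean reciprocal rank). The sketch below assumes a placeholder search function returning ordered document ids; it illustrates the idea only and is not Sheridan et al.'s system:

      def seed_ranks(queries, seed_id, search):
          """Rank (1-based) at which the seed document is retrieved per query;
          None if it does not appear in the result list at all."""
          ranks = []
          for q in queries:
              results = search(q)                  # ordered list of document ids
              ranks.append(results.index(seed_id) + 1 if seed_id in results else None)
          return ranks

      # Placeholder ranked results, for illustration only.
      fake_results = {"big rockets": ["d7", "d3", "d9"],
                      "giant rocket": ["d3", "d1", "d7"]}
      ranks = seed_ranks(fake_results, "d3", lambda q: fake_results[q])
      print(ranks)                                          # [2, 1]
      print(sum(1.0 / r for r in ranks if r) / len(ranks))  # mean reciprocal rank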
    Language
    e
  2. Dominich, S.: Mathematical foundations of information retrieval (2001) 0.01
    0.009975691 = product of:
      0.019951383 = sum of:
        0.0050284253 = weight(_text_:e in 1753) [ClassicSimilarity], result of:
          0.0050284253 = score(doc=1753,freq=2.0), product of:
            0.063326925 = queryWeight, product of:
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0440575 = queryNorm
            0.07940422 = fieldWeight in 1753, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1753)
        0.014922958 = product of:
          0.029845916 = sum of:
            0.029845916 = weight(_text_:22 in 1753) [ClassicSimilarity], result of:
              0.029845916 = score(doc=1753,freq=2.0), product of:
                0.15428185 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0440575 = queryNorm
                0.19345059 = fieldWeight in 1753, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1753)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    22. 3.2008 12:26:32
    Language
    e
  3. Grossman, D.A.; Frieder, O.: Information retrieval : algorithms and heuristics (1998) 0.00
    0.0030170549 = product of:
      0.0120682195 = sum of:
        0.0120682195 = weight(_text_:e in 2182) [ClassicSimilarity], result of:
          0.0120682195 = score(doc=2182,freq=2.0), product of:
            0.063326925 = queryWeight, product of:
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0440575 = queryNorm
            0.19057012 = fieldWeight in 2182, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.09375 = fieldNorm(doc=2182)
      0.25 = coord(1/4)
    
    Language
    e
  4. Brenner, E.H.: Beyond Boolean : new approaches in information retrieval; the quest for intuitive online search systems past, present & future (1995) 0.00
    0.0017599487 = product of:
      0.0070397947 = sum of:
        0.0070397947 = weight(_text_:e in 2547) [ClassicSimilarity], result of:
          0.0070397947 = score(doc=2547,freq=2.0), product of:
            0.063326925 = queryWeight, product of:
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0440575 = queryNorm
            0.1111659 = fieldWeight in 2547, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2547)
      0.25 = coord(1/4)
    
    Language
    e
  5. Computational information retrieval (2001) 0.00
    0.0015085274 = product of:
      0.0060341097 = sum of:
        0.0060341097 = weight(_text_:e in 4167) [ClassicSimilarity], result of:
          0.0060341097 = score(doc=4167,freq=2.0), product of:
            0.063326925 = queryWeight, product of:
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0440575 = queryNorm
            0.09528506 = fieldWeight in 4167, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.046875 = fieldNorm(doc=4167)
      0.25 = coord(1/4)
    
    Language
    e
  6. Berry, M.W.; Browne, M.: Understanding search engines : mathematical modeling and text retrieval (1999) 0.00
    0.0015085274 = product of:
      0.0060341097 = sum of:
        0.0060341097 = weight(_text_:e in 5777) [ClassicSimilarity], result of:
          0.0060341097 = score(doc=5777,freq=2.0), product of:
            0.063326925 = queryWeight, product of:
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0440575 = queryNorm
            0.09528506 = fieldWeight in 5777, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.046875 = fieldNorm(doc=5777)
      0.25 = coord(1/4)
    
    Language
    e
  7. Lavrenko, V.: ¬A generative theory of relevance (2009) 0.00
    0.0012571063 = product of:
      0.0050284253 = sum of:
        0.0050284253 = weight(_text_:e in 3306) [ClassicSimilarity], result of:
          0.0050284253 = score(doc=3306,freq=2.0), product of:
            0.063326925 = queryWeight, product of:
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0440575 = queryNorm
            0.07940422 = fieldWeight in 3306, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3306)
      0.25 = coord(1/4)
    
    Language
    e
  8. Lalmas, M.: XML retrieval (2009) 0.00
    0.0012571063 = product of:
      0.0050284253 = sum of:
        0.0050284253 = weight(_text_:e in 4998) [ClassicSimilarity], result of:
          0.0050284253 = score(doc=4998,freq=2.0), product of:
            0.063326925 = queryWeight, product of:
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0440575 = queryNorm
            0.07940422 = fieldWeight in 4998, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4998)
      0.25 = coord(1/4)
    
    Language
    e
  9. Langville, A.N.; Meyer, C.D.: Google's PageRank and beyond : the science of search engine rankings (2006) 0.00
    0.0010666901 = product of:
      0.0042667603 = sum of:
        0.0042667603 = weight(_text_:e in 6) [ClassicSimilarity], result of:
          0.0042667603 = score(doc=6,freq=4.0), product of:
            0.063326925 = queryWeight, product of:
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0440575 = queryNorm
            0.06737672 = fieldWeight in 6, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0234375 = fieldNorm(doc=6)
      0.25 = coord(1/4)
    
    Content
    Contents: Chapter 1. Introduction to Web Search Engines: 1.1 A Short History of Information Retrieval - 1.2 An Overview of Traditional Information Retrieval - 1.3 Web Information Retrieval Chapter 2. Crawling, Indexing, and Query Processing: 2.1 Crawling - 2.2 The Content Index - 2.3 Query Processing Chapter 3. Ranking Webpages by Popularity: 3.1 The Scene in 1998 - 3.2 Two Theses - 3.3 Query-Independence Chapter 4. The Mathematics of Google's PageRank: 4.1 The Original Summation Formula for PageRank - 4.2 Matrix Representation of the Summation Equations - 4.3 Problems with the Iterative Process - 4.4 A Little Markov Chain Theory - 4.5 Early Adjustments to the Basic Model - 4.6 Computation of the PageRank Vector - 4.7 Theorem and Proof for Spectrum of the Google Matrix Chapter 5. Parameters in the PageRank Model: 5.1 The alpha Factor - 5.2 The Hyperlink Matrix H - 5.3 The Teleportation Matrix E Chapter 6. The Sensitivity of PageRank: 6.1 Sensitivity with respect to alpha - 6.2 Sensitivity with respect to H - 6.3 Sensitivity with respect to vT - 6.4 Other Analyses of Sensitivity - 6.5 Sensitivity Theorems and Proofs Chapter 7. The PageRank Problem as a Linear System: 7.1 Properties of (I - alphaS) - 7.2 Properties of (I - alphaH) - 7.3 Proof of the PageRank Sparse Linear System Chapter 8. Issues in Large-Scale Implementation of PageRank: 8.1 Storage Issues - 8.2 Convergence Criterion - 8.3 Accuracy - 8.4 Dangling Nodes - 8.5 Back Button Modeling
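    Chapters 4-6 of this outline build up the basic PageRank model: a row-stochastic hyperlink matrix H, a dangling-node fix, teleportation weighted by the alpha factor, and power iteration to compute the PageRank vector. A compact sketch of that standard computation (toy link graph and tolerance chosen for illustration, not taken from the book):

      import numpy as np

      def pagerank(H, alpha=0.85, tol=1e-10):
          """Power iteration on the basic Google matrix:
          alpha * (hyperlink walk + dangling-node fix) + (1 - alpha) * teleportation."""
          n = H.shape[0]
          dangling = (H.sum(axis=1) == 0)          # pages with no outlinks
          x = np.full(n, 1.0 / n)
          while True:
              x_new = alpha * (x @ H + x[dangling].sum() / n) + (1 - alpha) / n
              if np.abs(x_new - x).sum() < tol:
                  return x_new
              x = x_new

      # Toy hyperlink matrix for a 4-page web; page 4 has no outlinks (dangling).
      H = np.array([[0,   1/2, 1/2, 0],
                    [0,   0,   1,   0],
                    [1/3, 1/3, 0,   1/3],
                    [0,   0,   0,   0]])
      print(pagerank(H))                           # PageRank vector, sums to 1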
    Language
    e
  10. Berry, M.W.; Browne, M.: Understanding search engines : mathematical modeling and text retrieval (2005) 0.00
    0.001005685 = product of:
      0.00402274 = sum of:
        0.00402274 = weight(_text_:e in 7) [ClassicSimilarity], result of:
          0.00402274 = score(doc=7,freq=2.0), product of:
            0.063326925 = queryWeight, product of:
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0440575 = queryNorm
            0.063523374 = fieldWeight in 7, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.03125 = fieldNorm(doc=7)
      0.25 = coord(1/4)
    
    Language
    e
  11. Effektive Information Retrieval Verfahren in Theorie und Praxis : ausgewählte und erweiterte Beiträge des Vierten Hildesheimer Evaluierungs- und Retrievalworkshop (HIER 2005), Hildesheim, 20.7.2005 (2006) 0.00
    5.028425E-4 = product of:
      0.00201137 = sum of:
        0.00201137 = weight(_text_:e in 5973) [ClassicSimilarity], result of:
          0.00201137 = score(doc=5973,freq=2.0), product of:
            0.063326925 = queryWeight, product of:
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0440575 = queryNorm
            0.031761687 = fieldWeight in 5973, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.015625 = fieldNorm(doc=5973)
      0.25 = coord(1/4)
    
    Footnote
    "Evaluierung", das Thema des dritten Kapitels, ist in seiner Breite nicht auf das Information Retrieval beschränkt sondern beinhaltet ebenso einzelne Aspekte der Bereiche Mensch-Maschine-Interaktion sowie des E-Learning. Michael Muck und Marco Winter von der Stiftung Wissenschaft und Politik sowie dem Informationszentrum Sozialwissenschaften thematisieren in ihrem Beitrag den Einfluss der Fragestellung (Topic) auf die Bewertung von Relevanz und zeigen Verfahrensweisen für die Topic-Erstellung auf, die beim Cross Language Evaluation Forum (CLEF) Anwendung finden. Im darauf folgenden Aufsatz stellt Thomas Mandl verschiedene Evaluierungsinitiativen im Information Retrieval und aktuelle Entwicklungen dar. Joachim Pfister erläutert in seinem Beitrag das automatisierte Gruppieren, das sogenannte Clustering, von Patent-Dokumenten in den Datenbanken des Fachinformationszentrums Karlsruhe und evaluiert unterschiedliche Clusterverfahren auf Basis von Nutzerbewertungen. Ralph Kölle, Glenn Langemeier und Wolfgang Semar widmen sich dem kollaborativen Lernen unter den speziellen Bedingungen des Programmierens. Dabei werden das System VitaminL zur synchronen Bearbeitung von Programmieraufgaben und das Kennzahlensystem K-3 für die Bewertung kollaborativer Zusammenarbeit in einer Lehrveranstaltung angewendet. Der aktuelle Forschungsschwerpunkt der Hildesheimer Informationswissenschaft zeichnet sich im vierten Kapitel unter dem Thema "Multilinguale Systeme" ab. Hier finden sich die meisten Beiträge des Tagungsbandes wieder. Olga Tartakovski und Margaryta Shramko beschreiben und prüfen das System Langldent, das die Sprache von mono- und multilingualen Texten identifiziert. Die Eigenheiten der japanischen Schriftzeichen stellt Nina Kummer dar und vergleicht experimentell die unterschiedlichen Techniken der Indexierung. Suriya Na Nhongkai und Hans-Joachim Bentz präsentieren und prüfen eine bilinguale Suche auf Basis von Konzeptnetzen, wobei die Konzeptstruktur das verbindende Elemente der beiden Textsammlungen darstellt. Das Entwickeln und Evaluieren eines mehrsprachigen Question-Answering-Systems im Rahmen des Cross Language Evaluation Forum (CLEF), das die alltagssprachliche Formulierung von konkreten Fragestellungen ermöglicht, wird im Beitrag von Robert Strötgen, Thomas Mandl und Rene Schneider thematisiert. Den Schluss bildet der Aufsatz von Niels Jensen, der ein mehrsprachiges Web-Retrieval-System ebenfalls im Zusammenhang mit dem CLEF anhand des multilingualen EuroGOVKorpus evaluiert.

Languages

  • e 10
  • m 1