Search (6 results, page 1 of 1)

  • theme_ss:"Retrievalalgorithmen"
  • type_ss:"m"
  1. Cross-language information retrieval (1998) 0.02
    0.01681123 = sum of:
      0.007491372 = product of:
        0.029965488 = sum of:
          0.029965488 = weight(_text_:authors in 6299) [ClassicSimilarity], result of:
            0.029965488 = score(doc=6299,freq=2.0), product of:
              0.23797122 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052200247 = queryNorm
              0.12592064 = fieldWeight in 6299, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.01953125 = fieldNorm(doc=6299)
        0.25 = coord(1/4)
      0.009319857 = product of:
        0.018639714 = sum of:
          0.018639714 = weight(_text_:p in 6299) [ClassicSimilarity], result of:
            0.018639714 = score(doc=6299,freq=2.0), product of:
              0.18768665 = queryWeight, product of:
                3.5955126 = idf(docFreq=3298, maxDocs=44218)
                0.052200247 = queryNorm
              0.099312946 = fieldWeight in 6299, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5955126 = idf(docFreq=3298, maxDocs=44218)
                0.01953125 = fieldNorm(doc=6299)
        0.5 = coord(1/2)
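    The indented breakdown shown with each hit is Lucene's "explain" output for the relevance score, computed with ClassicSimilarity (tf-idf). As a reading aid, a minimal Python sketch follows that reproduces the first leaf of the tree above; the constants are copied from the explain output, and the idf formula used is the standard ClassicSimilarity definition (this is reader-side arithmetic, not part of the catalogue software):

    ```python
    # Reproduce the weight(_text_:authors in 6299) leaf from the explain tree above
    # using Lucene ClassicSimilarity's tf-idf formulation.
    import math

    freq = 2.0
    doc_freq, max_docs = 1258, 44218

    tf = math.sqrt(freq)                              # 1.4142135 = tf(freq=2.0)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # 4.558814 = idf(docFreq=1258, maxDocs=44218)
    query_norm = 0.052200247                          # queryNorm (depends on the whole query)
    field_norm = 0.01953125                           # fieldNorm(doc=6299), encodes field length

    query_weight = idf * query_norm                   # 0.23797122 = queryWeight
    field_weight = tf * idf * field_norm              # 0.12592064 = fieldWeight
    print(query_weight * field_weight)                # 0.029965488 = score of this leaf
    ```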
    
    Content
    Contains the contributions: GREFENSTETTE, G.: The Problem of Cross-Language Information Retrieval; DAVIS, M.W.: On the Effective Use of Large Parallel Corpora in Cross-Language Text Retrieval; BALLESTEROS, L. and W.B. CROFT: Statistical Methods for Cross-Language Information Retrieval; Distributed Cross-Lingual Information Retrieval; Automatic Cross-Language Information Retrieval Using Latent Semantic Indexing; EVANS, D.A. et al.: Mapping Vocabularies Using Latent Semantics; PICCHI, E. and C. PETERS: Cross-Language Information Retrieval: A System for Comparable Corpus Querying; YAMABANA, K. et al.: A Language Conversion Front-End for Cross-Language Information Retrieval; GACHOT, D.A. et al.: The Systran NLP Browser: An Application of Machine Translation Technology in Cross-Language Information Retrieval; HULL, D.: A Weighted Boolean Model for Cross-Language Text Retrieval; SHERIDAN, P. et al.: Building a Large Multilingual Test Collection from Comparable News Documents; OARD, D.W. and B.J. DORR: Evaluating Cross-Language Text Filtering Effectiveness
    Footnote
    The retrieved output from a query including the phrase 'big rockets' may be, for instance, a sentence containing 'giant rocket' which is semantically ranked above 'military rocket'. David Hull (Xerox Research Centre, Grenoble) describes an implementation of a weighted Boolean model for Spanish-English CLIR. Users construct Boolean-type queries, weighting each term in the query, which is then translated by an on-line dictionary before being applied to the database. Comparisons with the performance of unweighted free-form queries ('vector space' models) proved encouraging. Two contributions consider the evaluation of CLIR systems. In order to bypass the time-consuming and expensive process of assembling a standard collection of documents and of user queries against which the performance of a CLIR system is manually assessed, Páraic Sheridan et al. (ETH Zurich) propose a method based on retrieving 'seed documents'. This involves identifying a unique document in a database (the 'seed document') and, for a number of queries, measuring how quickly it is retrieved. The authors have also assembled a large database of multilingual news documents for testing purposes. By storing the (fairly short) documents in a structured form tagged with descriptor codes (e.g. for topic, country and area), the test suite is easily expanded while remaining consistent for the purposes of testing. Douglas Oard and Bonnie Dorr (University of Maryland) describe an evaluation methodology, designed for testing CLIR systems, which appears to apply LSI techniques in order to filter and rank incoming documents. The volume provides the reader with an excellent overview of several projects in CLIR. It is well supported with references and is intended as a secondary text for researchers and practitioners. It highlights the need for a good, general tutorial introduction to the field.
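    Hull's weighted Boolean model is only summarised in the review. As a rough illustration of the idea it describes (user-weighted query terms, dictionary translation of each term, scoring against target-language documents), here is a hedged Python sketch; the dictionary, weights and scoring rule are illustrative assumptions, not Hull's actual formulation:

    ```python
    # Illustrative sketch of a weighted Boolean cross-language query (not Hull's exact model).
    # A user weights English query terms; a bilingual dictionary expands each term into
    # Spanish translations; a document scores the sum of the weights of query terms for
    # which at least one translation occurs in it.

    # Hypothetical toy dictionary and document.
    DICTIONARY = {
        "rocket": ["cohete"],
        "launch": ["lanzamiento", "lanzar"],
        "giant": ["gigante"],
    }

    def score(document_tokens, weighted_query):
        """weighted_query: list of (english_term, weight) pairs."""
        doc = set(document_tokens)
        total = 0.0
        for term, weight in weighted_query:
            translations = DICTIONARY.get(term, [])
            if any(t in doc for t in translations):   # Boolean OR over translations
                total += weight
        return total

    doc = "el lanzamiento del cohete gigante".split()
    query = [("rocket", 2.0), ("launch", 1.0), ("giant", 0.5)]
    print(score(doc, query))   # 3.5
    ```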
  2. Berry, M.W.; Browne, M.: Understanding search engines : mathematical modeling and text retrieval (1999) 0.01
    0.008989646 = product of:
      0.017979292 = sum of:
        0.017979292 = product of:
          0.07191717 = sum of:
            0.07191717 = weight(_text_:authors in 5777) [ClassicSimilarity], result of:
              0.07191717 = score(doc=5777,freq=2.0), product of:
                0.23797122 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.052200247 = queryNorm
                0.30220953 = fieldWeight in 5777, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5777)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
    This book discusses many of the key design issues for building search engines and emphasizes the important role that applied mathematics can play in improving information retrieval. The authors discuss not only important data structures, algorithms, and software but also user-centered issues such as interfaces, manual indexing, and document preparation. They also present some of the current problems in information retrieval that may not be familiar to applied mathematicians and computer scientists, as well as some of the driving computational methods (SVD, SDD) for automated conceptual indexing.
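    The "driving computational methods (SVD, SDD) for automated conceptual indexing" that the abstract mentions are the core of latent semantic indexing. A minimal NumPy sketch of the SVD step follows; the toy term-document matrix and the chosen rank are illustrative assumptions, not taken from the book:

    ```python
    # Sketch of SVD-based conceptual indexing (latent semantic indexing) on a toy
    # term-document matrix; rows = terms, columns = documents. Illustrative only.
    import numpy as np

    A = np.array([
        [1, 0, 1, 0],   # "retrieval"
        [1, 1, 0, 0],   # "search"
        [0, 1, 0, 1],   # "engine"
        [0, 0, 1, 1],   # "matrix"
    ], dtype=float)

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = 2                                          # keep the two strongest "concepts"
    A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]    # rank-k approximation of A

    # Documents (and projected queries) are then compared in the k-dimensional space.
    doc_vectors = np.diag(s[:k]) @ Vt[:k, :]
    print(np.round(A_k, 2))
    print(np.round(doc_vectors, 2))
    ```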
  3. Dominich, S.: Mathematical foundations of information retrieval (2001) 0.01
    0.008840516 = product of:
      0.017681032 = sum of:
        0.017681032 = product of:
          0.035362065 = sum of:
            0.035362065 = weight(_text_:22 in 1753) [ClassicSimilarity], result of:
              0.035362065 = score(doc=1753,freq=2.0), product of:
                0.18279637 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052200247 = queryNorm
                0.19345059 = fieldWeight in 1753, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1753)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2008 12:26:32
  4. Berry, M.W.; Browne, M.: Understanding search engines : mathematical modeling and text retrieval (2005) 0.01
    0.00847552 = product of:
      0.01695104 = sum of:
        0.01695104 = product of:
          0.06780416 = sum of:
            0.06780416 = weight(_text_:authors in 7) [ClassicSimilarity], result of:
              0.06780416 = score(doc=7,freq=4.0), product of:
                0.23797122 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.052200247 = queryNorm
                0.28492588 = fieldWeight in 7, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.03125 = fieldNorm(doc=7)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
    The second edition of Understanding Search Engines: Mathematical Modeling and Text Retrieval follows the basic premise of the first edition by discussing many of the key design issues for building search engines and emphasizing the important role that applied mathematics can play in improving information retrieval. The authors discuss important data structures, algorithms, and software as well as user-centered issues such as interfaces, manual indexing, and document preparation. Significant changes bring the text up to date on current information retrieval methods: for example, the addition of a new chapter on the link-structure algorithms used in search engines such as Google. The chapter on user interfaces has been rewritten to focus specifically on search engine usability. In addition, the authors have added new recommendations for further reading, expanded the bibliography, and updated and streamlined the index to make it more reader-friendly.
  5. Langville, A.N.; Meyer, C.D.: Google's PageRank and beyond : the science of search engine rankings (2006) 0.00
    0.004494823 = product of:
      0.008989646 = sum of:
        0.008989646 = product of:
          0.035958584 = sum of:
            0.035958584 = weight(_text_:authors in 6) [ClassicSimilarity], result of:
              0.035958584 = score(doc=6,freq=2.0), product of:
                0.23797122 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.052200247 = queryNorm
                0.15110476 = fieldWeight in 6, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=6)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
    Why doesn't your home page appear on the first page of search results, even when you query your own name? How do other Web pages always appear at the top? What creates these powerful rankings, and how? The first book ever about the science of Web page rankings, "Google's PageRank and Beyond" supplies the answers to these and other questions. The book serves two very different audiences: the curious science reader and the technical computational reader. The chapters build in mathematical sophistication, so that the first five are accessible to the general academic reader. While other chapters are much more mathematical in nature, each one contains something for both audiences. For example, the authors include entertaining asides such as how search engines make money and how the Great Firewall of China influences research. The book includes an extensive background chapter designed to help readers learn more about the mathematics of search engines, and it contains several MATLAB codes and links to sample Web data sets. The philosophy throughout is to encourage readers to experiment with the ideas and algorithms in the text. Any business seriously interested in improving its rankings in the major search engines can benefit from the clear examples, sample code, and list of resources provided. The book includes many illustrative examples and entertaining asides, MATLAB code, an accessible and informal style, and a complete and self-contained section for mathematics review.
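    The book ships MATLAB code; the computation it revolves around, the PageRank power iteration, can be sketched in a few lines. The link graph, damping factor, and tolerance below are illustrative assumptions, not the book's examples:

    ```python
    # Sketch of the PageRank power iteration on a tiny illustrative link graph.
    import numpy as np

    links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}   # page -> pages it links to (toy data)
    n, alpha = 4, 0.85                            # alpha = damping factor

    # Column-stochastic hyperlink matrix H: H[j, i] = 1/outdegree(i) if i links to j.
    H = np.zeros((n, n))
    for i, outs in links.items():
        for j in outs:
            H[j, i] = 1.0 / len(outs)

    # Google matrix: follow links with probability alpha, teleport uniformly otherwise.
    G = alpha * H + (1 - alpha) / n * np.ones((n, n))

    r = np.full(n, 1.0 / n)
    for _ in range(100):                          # power iteration
        r_next = G @ r
        if np.abs(r_next - r).sum() < 1e-10:
            break
        r = r_next
    print(np.round(r, 4))                         # PageRank vector (sums to 1)
    ```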
  6. Mandl, T.: Tolerantes Information Retrieval : Neuronale Netze zur Erhöhung der Adaptivität und Flexibilität bei der Informationssuche (2001) 0.00
    0.0037279427 = product of:
      0.0074558854 = sum of:
        0.0074558854 = product of:
          0.014911771 = sum of:
            0.014911771 = weight(_text_:p in 5965) [ClassicSimilarity], result of:
              0.014911771 = score(doc=5965,freq=2.0), product of:
                0.18768665 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.052200247 = queryNorm
                0.079450354 = fieldWeight in 5965, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.015625 = fieldNorm(doc=5965)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    ... since after almost 200 pages the main part of the dissertation now follows - the presentation and evaluation of the COSIMIR model mentioned earlier. The COSIMIR model "computes the similarity between the two applied input vectors" (p. 194). The output of the network is read from a single node, at which a so-called relevance value emerges once the computation of the weights of the internal nodes has finished. These weights depend on the applied input vectors, from which the weights of the first node layer are determined, and on the edge weights prescribed in the network. The weighting of edges is the core of the neural approach: in analogy to the biological original (a dendrite with synapses), the weight of an edge grows with every activation during a training phase. If, in this phase, two input vectors - e.g. a document vector and a query - are applied simultaneously together with the relevance judgement as the value of the output node, the backpropagation process distributes the weights along the paths connecting the participating nodes. Since all nodes are connected with one another, clearly different edge weights already emerge after a few training examples, because the actively involved edges accumulate the changes. A variant of the procedure uses the NN as a "transformation network", in which the two input vectors are filled with a document representation and an associated index entry (provided by an expert). Besides the need for training already pointed out, neural networks exhibit a further intrinsic problem: the more outer nodes are required, the more internal edges (and, when intermediate layers are used, also nodes) have to be managed, and their number does not grow linearly. This algorithmic finding quickly sets limits to naive uses of NN models in practice, which makes it all the more commendable that the author can propose an innovative way of solving the problem with the means of IR. He uses Latent Semantic Indexing, which maps document representations from a high-dimensional vector space into a low-dimensional one, in order to reduce the number of nodes considerably. The result is a very elegant synthesis, which exposes and exploits the formal correspondences between vector space models in IR and NN that were hinted at at the outset.
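    The footnote describes the COSIMIR architecture only verbally. As a rough, hedged sketch of the idea it outlines (two input vectors applied together, one hidden layer, a single output node whose activation is read as the relevance value, trained by backpropagation), here is a minimal NumPy version; layer sizes, learning rate, and training data are illustrative assumptions, not taken from the dissertation:

    ```python
    # Rough sketch of a COSIMIR-style relevance network: document vector and query
    # vector are applied together as input, one hidden layer, a single output node
    # whose activation is read as the relevance value. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    dim, hidden, lr = 5, 8, 0.1

    W1 = rng.normal(scale=0.1, size=(hidden, 2 * dim))   # edges: input -> hidden
    W2 = rng.normal(scale=0.1, size=(1, hidden))         # edges: hidden -> output

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(doc, query):
        x = np.concatenate([doc, query])
        h = sigmoid(W1 @ x)
        return sigmoid(W2 @ h)[0], h, x

    def train_step(doc, query, relevance):
        """One backpropagation step on a (document, query, relevance judgement) triple."""
        global W1, W2
        y, h, x = forward(doc, query)
        delta_out = (y - relevance) * y * (1 - y)            # output-node error signal
        delta_hid = (W2.T.flatten() * delta_out) * h * (1 - h)
        W2 -= lr * delta_out * h.reshape(1, -1)              # accumulate edge-weight changes
        W1 -= lr * np.outer(delta_hid, x)

    # Toy training: identical vectors count as relevant (1.0), unrelated ones as not (0.0).
    for _ in range(2000):
        v = rng.random(dim)
        train_step(v, v, 1.0)
        train_step(rng.random(dim), rng.random(dim), 0.0)

    v = rng.random(dim)
    print(forward(v, v)[0], forward(v, rng.random(dim))[0])  # relevance values at the output node
    ```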