Search (13 results, page 1 of 1)

  • author_ss:"Frieder, O."
  1. Lundquist, C.; Frieder, O.; Holmes, D.O.; Grossman, D.: A parallel relational database management system approach to relevance feedback in information retrieval (1999) 0.02
    0.018879453 = product of:
      0.06607808 = sum of:
        0.036632486 = product of:
          0.09158121 = sum of:
            0.0439427 = weight(_text_:retrieval in 4303) [ClassicSimilarity], result of:
              0.0439427 = score(doc=4303,freq=2.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.40105087 = fieldWeight in 4303, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4303)
            0.047638513 = weight(_text_:system in 4303) [ClassicSimilarity], result of:
              0.047638513 = score(doc=4303,freq=2.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.41757566 = fieldWeight in 4303, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4303)
          0.4 = coord(2/5)
        0.0294456 = product of:
          0.0588912 = sum of:
            0.0588912 = weight(_text_:22 in 4303) [ClassicSimilarity], result of:
              0.0588912 = score(doc=4303,freq=2.0), product of:
                0.12684377 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03622214 = queryNorm
                0.46428138 = fieldWeight in 4303, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4303)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Date
    17. 1.2000 12:22:18
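    The score breakdown above is a Lucene "explain" trace for ClassicSimilarity (TF-IDF). As a sketch of how its numbers fit together, the snippet below recomputes the "retrieval" contribution for doc 4303 from the standard ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), fieldWeight = tf * idf * fieldNorm, queryWeight = idf * queryNorm); only the constants are taken from the trace.
      import math

      # Constants copied from the explain trace for term "retrieval" in doc 4303
      freq, doc_freq, max_docs = 2.0, 5836, 44218
      field_norm, query_norm = 0.09375, 0.03622214

      tf = math.sqrt(freq)                             # 1.4142135
      idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 3.024915
      field_weight = tf * idf * field_norm             # 0.40105087 = fieldWeight
      query_weight = idf * query_norm                  # 0.109568894 = queryWeight
      print(query_weight * field_weight)               # 0.0439427 = term score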
  2. Aqeel, S.U.; Beitzel, S.M.; Jensen, E.C.; Grossman, D.; Frieder, O.: On the development of name search techniques for Arabic (2006) 0.01
    0.00546202 = product of:
      0.01911707 = sum of:
        0.00439427 = product of:
          0.02197135 = sum of:
            0.02197135 = weight(_text_:retrieval in 5289) [ClassicSimilarity], result of:
              0.02197135 = score(doc=5289,freq=2.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.20052543 = fieldWeight in 5289, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5289)
          0.2 = coord(1/5)
        0.0147228 = product of:
          0.0294456 = sum of:
            0.0294456 = weight(_text_:22 in 5289) [ClassicSimilarity], result of:
              0.0294456 = score(doc=5289,freq=2.0), product of:
                0.12684377 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03622214 = queryNorm
                0.23214069 = fieldWeight in 5289, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5289)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    The need for effective identity matching systems has led to extensive research in the area of name search. For the most part, such work has been limited to English and other Latin-based languages. Consequently, algorithms such as Soundex and n-gram matching are of limited utility for languages such as Arabic, which has vastly different morphologic features that rely heavily on phonetic information. The dearth of work in this field is partly caused by the lack of standardized test data. Consequently, we have built a collection of 7,939 Arabic names, along with 50 training queries and 111 test queries. We use this collection to evaluate a variety of algorithms, including a derivative of Soundex tailored to Arabic (ASOUNDEX), measuring effectiveness by using standard information retrieval measures. Our results show an improvement of 70% over existing approaches.
    Date
    22. 7.2006 17:20:20
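    The baselines named in this abstract (Soundex, n-gram matching) are easy to sketch; below is a minimal character-bigram Dice similarity, a common n-gram baseline for name matching. The example names are illustrative assumptions; this is not the paper's ASOUNDEX algorithm.
      def bigrams(name):
          # Character bigrams of a lowercased name: "omar" -> {"om", "ma", "ar"}
          s = name.lower()
          return {s[i:i+2] for i in range(len(s) - 1)}

      def dice(a, b):
          # Dice coefficient over bigram sets: 2|A n B| / (|A| + |B|)
          ba, bb = bigrams(a), bigrams(b)
          return 2 * len(ba & bb) / (len(ba) + len(bb))

      print(dice("Mohammed", "Muhammad"))  # shared bigrams credit spelling variants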
  3. Fox, K.L.; Frieder, O.; Knepper, M.M.; Snowberg, E.J.: SENTINEL: a multiple engine information retrieval and visualization system (1999) 0.01
    0.0051752143 = product of:
      0.0362265 = sum of:
        0.0362265 = product of:
          0.09056625 = sum of:
            0.051266484 = weight(_text_:retrieval in 3547) [ClassicSimilarity], result of:
              0.051266484 = score(doc=3547,freq=8.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.46789268 = fieldWeight in 3547, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3547)
            0.039299767 = weight(_text_:system in 3547) [ClassicSimilarity], result of:
              0.039299767 = score(doc=3547,freq=4.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.34448233 = fieldWeight in 3547, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3547)
          0.4 = coord(2/5)
      0.14285715 = coord(1/7)
    
    Abstract
    We describe a prototype Information Retrieval system, SENTINEL, under development at Harris Corporation's Information Systems Division. SENTINEL is a fusion of multiple information retrieval technologies, integrating n-grams, a vector space model, and a neural network training rule. One of the primary advantages of SENTINEL is its 3-dimensional visualization capability, which is based fully upon the mathematical representation of information within SENTINEL. The 3-dimensional visualization capability provides users with an intuitive understanding, so that relevance/query refinement techniques can be better utilized, resulting in higher retrieval precision.
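    As context for the vector space component mentioned above, here is a minimal cosine-similarity sketch over raw term-frequency vectors; it is a generic illustration, not SENTINEL's implementation, and real engines add idf weighting on top.
      import math
      from collections import Counter

      def cosine(query_terms, doc_terms):
          # Cosine of the angle between term-frequency vectors
          q, d = Counter(query_terms), Counter(doc_terms)
          dot = sum(q[t] * d[t] for t in q)
          norm = (math.sqrt(sum(v * v for v in q.values()))
                  * math.sqrt(sum(v * v for v in d.values())))
          return dot / norm if norm else 0.0

      print(cosine("information retrieval".split(),
                   "multiple engine information retrieval system".split()))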
  4. Aljlayl, M.; Frieder, O.; Grossman, D.: On bidirectional English-Arabic search (2002) 0.00
    0.002650327 = product of:
      0.018552288 = sum of:
        0.018552288 = product of:
          0.04638072 = sum of:
            0.01830946 = weight(_text_:retrieval in 5227) [ClassicSimilarity], result of:
              0.01830946 = score(doc=5227,freq=2.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.16710453 = fieldWeight in 5227, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5227)
            0.028071264 = weight(_text_:system in 5227) [ClassicSimilarity], result of:
              0.028071264 = score(doc=5227,freq=4.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.24605882 = fieldWeight in 5227, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5227)
          0.4 = coord(2/5)
      0.14285715 = coord(1/7)
    
    Abstract
    Aljlayl, Frieder, and Grossman review query machine-translation methodologies and apply them to English-Arabic/Arabic-English Cross-Language Information Retrieval. In the dictionary method, replacement of each term with all possible equivalents in the target language results in considerable ambiguity, while taking the first term in the dictionary list reduces the ambiguity but may fail to capture the meaning. A Two-Phase method takes all possible equivalents and translates them back, retaining only those that generate the original term. It results in an average query length of six terms in TREC7 and 12 in TREC9. Arabic-to-English translations consistently performed below the original English queries, and the Two-Phase method consistently performed at the highest level and significantly better than the Every-Match method. Machine translation using other techniques is economical for queries but not likely so for documents. Using ALKAFI, a commercial translation system from Arabic to English, and the Al-Mutarjim Al-Arabey system for English to Arabic, nearly 60% of monolingual retrieval performance was achieved going from Arabic to English. Smaller numbers of terms in the source query improve performance, and these systems require syntactically well-formed queries for good performance.
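    The Two-Phase filtering step described above is simple to sketch: keep only dictionary candidates whose back-translation regenerates the source term. The toy dictionaries below are placeholder assumptions for illustration.
      def two_phase(term, fwd, back):
          # Retain candidates that translate back to the original term
          return [c for c in fwd.get(term, []) if term in back.get(c, [])]

      fwd = {"bank": ["masraf", "dhiffa"]}   # financial bank vs. river bank
      back = {"masraf": ["bank"], "dhiffa": ["shore", "side"]}
      print(two_phase("bank", fwd, back))    # ['masraf'] - ambiguity pruned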
  5. Urbain, J.; Goharian, N.; Frieder, O.: Probabilistic passage models for semantic search of genomics literature (2008) 0.00
    0.0015693823 = product of:
      0.010985675 = sum of:
        0.010985675 = product of:
          0.054928374 = sum of:
            0.054928374 = weight(_text_:retrieval in 2380) [ClassicSimilarity], result of:
              0.054928374 = score(doc=2380,freq=18.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.50131357 = fieldWeight in 2380, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2380)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    We explore unsupervised learning techniques for extracting semantic information about biomedical concepts and topics, and introduce a passage retrieval model for using these semantics in context to improve genomics literature search. Our contributions include a new passage retrieval model based on an undirected graphical model (Markov Random Fields), and new methods for modeling passage-concepts, document-topics, and passage-terms as potential functions within the model. Each potential function includes distributional evidence to disambiguate topics, concepts, and terms in context. The joint distribution across potential functions in the graph represents the probability of a passage being relevant to a biologist's information need. Relevance ranking within each potential function simplifies normalization across potential functions and eliminates the need for tuning of passage retrieval model parameters. Our dimensional indexing model facilitates efficient aggregation of topic, concept, and term distributions. The proposed passage-retrieval model improves search results in the presence of varying levels of semantic evidence, outperforming models of query terms, concepts, or document topics alone. Our results exceed the state-of-the-art for automatic document retrieval by 14.46% (0.3554 vs. 0.3105) and passage retrieval by 15.57% (0.1128 vs. 0.0976) as assessed by the TREC 2007 Genomics Track, and automatic document retrieval by 18.56% (0.3424 vs. 0.2888) as assessed by the TREC 2005 Genomics Track. Automatic document retrieval results for TREC 2007 and TREC 2005 are statistically significant at the 95% confidence level (p = .0359 and .0253, respectively). Passage retrieval is significant at the 90% confidence level (p = 0.0893).
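    The model combines evidence multiplicatively across potential functions, which in log space is a weighted sum; the sketch below shows that generic MRF-style log-linear ranking shape. The three placeholder potentials and weights are assumptions standing in for the paper's passage-term, passage-concept, and document-topic functions.
      import math

      def passage_score(passage, query, potentials, weights):
          # Joint over potentials factors into a weighted sum of log-potentials
          return sum(w * math.log(phi(passage, query))
                     for phi, w in zip(potentials, weights))

      term_phi    = lambda p, q: 1 + sum(p.count(t) for t in q)  # placeholder
      concept_phi = lambda p, q: 2.0                             # placeholder
      topic_phi   = lambda p, q: 1.5                             # placeholder
      print(passage_score(["semantic", "passage", "retrieval"],
                          ["passage", "retrieval"],
                          [term_phi, concept_phi, topic_phi], [1.0, 0.5, 0.5]))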
  6. Grossman, D.A.; Frieder, O.: Information retrieval : algorithms and heuristics (2004) 0.00
    0.0015089302 = product of:
      0.010562511 = sum of:
        0.010562511 = product of:
          0.052812554 = sum of:
            0.052812554 = weight(_text_:retrieval in 1486) [ClassicSimilarity], result of:
              0.052812554 = score(doc=1486,freq=26.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.48200315 = fieldWeight in 1486, product of:
                  5.0990195 = tf(freq=26.0), with freq of:
                    26.0 = termFreq=26.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1486)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    Interested in how an efficient search engine works? Want to know what algorithms are used to rank resulting documents in response to user requests? The authors answer these and other key information retrieval design and implementation questions. This book is not yet another high-level text. Instead, algorithms are thoroughly described, making this book ideally suited for both computer science students and practitioners who work on search-related applications. As stated in the foreword, this book provides a current, broad, and detailed overview of the field and is the only one that does so. Examples are used throughout to illustrate the algorithms. The authors explain how a query is ranked against a document collection using either a single or a combination of retrieval strategies, and how an assortment of utilities are integrated into the query processing scheme to improve these rankings. Methods for building and compressing text indexes, querying and retrieving documents in multiple languages, and using parallel or distributed processing to expedite the search are likewise described. This edition is a major expansion of the one published in 1998. New edition 2005: Besides updating the entire book with current techniques, it includes new sections on language models, cross-language information retrieval, peer-to-peer processing, XML search, mediators, and duplicate document detection.
    LCSH
    Information storage and retrieval systems
    RSWK
    Algorithmus / Heuristik / Information Retrieval
    Information Retrieval / Theoretische Informatik (HBZ)
    Information Retrieval (BVB)
    Series
    Kluwer international series on information retrieval ; 15
    Subject
    Algorithmus / Heuristik / Information Retrieval
    Information Retrieval / Theoretische Informatik (HBZ)
    Information Retrieval (BVB)
    Information storage and retrieval systems
  7. Grossman, D.A.; Holmes, D.O.; Frieder, O.; Nguyen, M.D.; Kingsbury, C.E.: Improving accuracy and run-time performance for TREC-4 (1996) 0.00
    0.0012555057 = product of:
      0.00878854 = sum of:
        0.00878854 = product of:
          0.0439427 = sum of:
            0.0439427 = weight(_text_:retrieval in 7531) [ClassicSimilarity], result of:
              0.0439427 = score(doc=7531,freq=2.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.40105087 = fieldWeight in 7531, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.09375 = fieldNorm(doc=7531)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Source
    The Fourth Text Retrieval Conference (TREC-4). Ed.: D.K. Harman
  8. Grossman, D.A.; Frieder, O.: Information retrieval : algorithms and heuristics (1998) 0.00
    0.0012555057 = product of:
      0.00878854 = sum of:
        0.00878854 = product of:
          0.0439427 = sum of:
            0.0439427 = weight(_text_:retrieval in 2182) [ClassicSimilarity], result of:
              0.0439427 = score(doc=2182,freq=2.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.40105087 = fieldWeight in 2182, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2182)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
  9. Lundquist, C.; Grossman, D.A.; Reichart, J.; Holmes, D.O.; Chowdhury, A.; Frieder, O.: Using relevance feedback within the relational model for TREC-5 (1997) 0.00
    0.0010462549 = product of:
      0.007323784 = sum of:
        0.007323784 = product of:
          0.03661892 = sum of:
            0.03661892 = weight(_text_:retrieval in 3093) [ClassicSimilarity], result of:
              0.03661892 = score(doc=3093,freq=2.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.33420905 = fieldWeight in 3093, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3093)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Source
    The Fifth Text Retrieval Conference (TREC-5). Ed.: E.M. Voorhees and D.K. Harman
  10. Yee, W.G.; Nguyen, L.T.; Frieder, O.: A view of the data on P2P file-sharing systems (2009) 0.00
    9.0740033E-4 = product of:
      0.006351802 = sum of:
        0.006351802 = product of:
          0.03175901 = sum of:
            0.03175901 = weight(_text_:system in 3118) [ClassicSimilarity], result of:
              0.03175901 = score(doc=3118,freq=2.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.27838376 = fieldWeight in 3118, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3118)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    Peer-to-peer (P2P) file sharing is a leading Internet application. Millions of users use P2P file-sharing systems daily to search for and download files, accounting for a large portion of Internet traffic. Due to their scale, it is important to fully understand how these systems work. We analyze user queries and shared files collected on the Gnutella system, draw some conclusions on the nature of the application, and propose some research problems.
  11. Soo, J.; Frieder, O.: On searching misspelled collections (2015) 0.00
    9.0740033E-4 = product of:
      0.006351802 = sum of:
        0.006351802 = product of:
          0.03175901 = sum of:
            0.03175901 = weight(_text_:system in 1862) [ClassicSimilarity], result of:
              0.03175901 = score(doc=1862,freq=2.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.27838376 = fieldWeight in 1862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1862)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    We describe an unsupervised, language-independent spelling correction search system. We compare the proposed approach with unsupervised and supervised algorithms. The described approach consistently outperforms other unsupervised efforts and nearly matches the performance of a current state-of-the-art supervised approach.
  12. Beitzel, S.M.; Jensen, E.C.; Chowdhury, A.; Frieder, O.; Grossman, D.: Temporal analysis of a very large topically categorized Web query log (2007) 0.00
    5.2312744E-4 = product of:
      0.003661892 = sum of:
        0.003661892 = product of:
          0.01830946 = sum of:
            0.01830946 = weight(_text_:retrieval in 60) [ClassicSimilarity], result of:
              0.01830946 = score(doc=60,freq=2.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.16710453 = fieldWeight in 60, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=60)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    The authors review a log of billions of Web queries that constituted the total query traffic for a 6-month period of a general-purpose commercial Web search service. Previously, query logs were studied from a single, cumulative view. In contrast, this study builds on the authors' previous work, which showed changes in popularity and uniqueness of topically categorized queries across the hours in a day. To further their analysis, they examine query traffic on a daily, weekly, and monthly basis by matching it against lists of queries that have been topically precategorized by human editors. These lists represent 13% of the query traffic. They show that query traffic from particular topical categories differs both from the query stream as a whole and from other categories. Additionally, they show that certain categories of queries trend differently over varying periods. The authors' key contribution is twofold: They outline a method for studying both the static and topical properties of a very large query log over varying periods, and they identify and examine topical trends that may provide valuable insight for improving both retrieval effectiveness and efficiency.
  13. Cathey, R.J.; Jensen, E.C.; Beitzel, S.M.; Frieder, O.; Grossman, D.: Exploiting parallelism to support scalable hierarchical clustering (2007) 0.00
    5.2312744E-4 = product of:
      0.003661892 = sum of:
        0.003661892 = product of:
          0.01830946 = sum of:
            0.01830946 = weight(_text_:retrieval in 448) [ClassicSimilarity], result of:
              0.01830946 = score(doc=448,freq=2.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.16710453 = fieldWeight in 448, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=448)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    A distributed memory parallel version of the group average hierarchical agglomerative clustering algorithm is proposed to enable scaling the document clustering problem to large collections. Using standard message passing operations reduces interprocess communication while maintaining efficient load balancing. In a series of experiments using a subset of a standard Text REtrieval Conference (TREC) test collection, our parallel hierarchical clustering algorithm is shown to be scalable both in the number of processors efficiently used and in collection size. Results show that our algorithm performs close to the expected O(n**2/p) time on p processors rather than the worst-case O(n**3/p) time. Furthermore, the O(n**2/p) memory complexity per node allows larger collections to be clustered as the number of nodes increases. While partitioning algorithms such as k-means are trivially parallelizable, our results confirm those of other studies which showed that hierarchical algorithms produce significantly tighter clusters in the document clustering task. Finally, we show how our parallel hierarchical agglomerative clustering algorithm can be used as the clustering subroutine for a parallel version of the buckshot algorithm to cluster the complete TREC collection at near theoretical runtime expectations.
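    For reference, the sequential group-average merge loop that the paper parallelizes looks roughly like the sketch below; the O(n**2) average-similarity computation is what gets distributed across p processors. This is a generic illustration under that assumption, not the authors' message-passing implementation.
      def group_average_hac(sim, k):
          # Merge the most-similar cluster pair until k clusters remain
          clusters = [[i] for i in range(len(sim))]

          def avg_sim(a, b):
              # Group-average linkage over a pairwise similarity matrix
              return sum(sim[i][j] for i in a for j in b) / (len(a) * len(b))

          while len(clusters) > k:
              _, x, y = max((avg_sim(a, b), x, y)
                            for x, a in enumerate(clusters)
                            for y, b in enumerate(clusters) if x < y)
              clusters[x] += clusters.pop(y)
          return clusters

      sim = [[1.0, 0.9, 0.1], [0.9, 1.0, 0.2], [0.1, 0.2, 1.0]]
      print(group_average_hac(sim, 2))  # [[0, 1], [2]]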