-
Aqeel, S.U.; Beitzel, S.M.; Jensen, E.C.; Grossman, D.; Frieder, O.: On the development of name search techniques for Arabic (2006)
0.05
0.046260543 = product of:
  0.09252109 = sum of:
    0.09252109 = sum of:
      0.05574795 = weight(_text_:n in 5289) [ClassicSimilarity], result of:
        0.05574795 = score(doc=5289,freq=2.0), product of:
          0.19504215 = queryWeight, product of:
            4.3116565 = idf(docFreq=1611, maxDocs=44218)
            0.045236014 = queryNorm
          0.28582513 = fieldWeight in 5289, product of:
            1.4142135 = tf(freq=2.0), with freq of:
              2.0 = termFreq=2.0
            4.3116565 = idf(docFreq=1611, maxDocs=44218)
            0.046875 = fieldNorm(doc=5289)
      0.036773134 = weight(_text_:22 in 5289) [ClassicSimilarity], result of:
        0.036773134 = score(doc=5289,freq=2.0), product of:
          0.15840882 = queryWeight, product of:
            3.5018296 = idf(docFreq=3622, maxDocs=44218)
            0.045236014 = queryNorm
          0.23214069 = fieldWeight in 5289, product of:
            1.4142135 = tf(freq=2.0), with freq of:
              2.0 = termFreq=2.0
            3.5018296 = idf(docFreq=3622, maxDocs=44218)
            0.046875 = fieldNorm(doc=5289)
  0.5 = coord(1/2)
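The ClassicSimilarity breakdown for this entry can be reproduced arithmetically. A minimal sketch, assuming Lucene's classic tf-idf formulas (tf = √freq, idf = 1 + ln(maxDocs/(docFreq+1)), term score = queryWeight × fieldWeight), plugging in the values shown in the explanation:

```python
import math

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """One term's contribution, per Lucene's classic tf-idf scoring."""
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # e.g. 4.3116565 for docFreq=1611
    query_weight = idf * query_norm                    # idf * queryNorm
    field_weight = math.sqrt(freq) * idf * field_norm  # tf * idf * fieldNorm
    return query_weight * field_weight

# Values taken from the explanation above (doc 5289, terms "n" and "22").
w_n  = term_score(2.0, 1611, 44218, 0.045236014, 0.046875)  # ≈ 0.05574795
w_22 = term_score(2.0, 3622, 44218, 0.045236014, 0.046875)  # ≈ 0.036773134

# Summed term weights, scaled by the coord(1/2) factor in the explanation.
total = (w_n + w_22) * 0.5  # ≈ 0.046260543
```

The computed values match the explanation to the precision of its single-precision arithmetic.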
- Abstract
- The need for effective identity matching systems has led to extensive research in the area of name search. For the most part, such work has been limited to English and other Latin-based languages. Consequently, algorithms such as Soundex and n-gram matching are of limited utility for languages such as Arabic, which has vastly different morphologic features that rely heavily on phonetic information. The dearth of work in this field is partly caused by the lack of standardized test data. Consequently, we have built a collection of 7,939 Arabic names, along with 50 training queries and 111 test queries. We use this collection to evaluate a variety of algorithms, including a derivative of Soundex tailored to Arabic (ASOUNDEX), measuring effectiveness by using standard information retrieval measures. Our results show an improvement of 70% over existing approaches.
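As a hedged illustration of the n-gram matching the abstract contrasts with Soundex (this is a generic character-bigram Dice similarity over romanized names, not the paper's ASOUNDEX algorithm, whose details are not given here):

```python
def ngrams(name, n=2):
    # Pad with '#' so initial and final characters form their own n-grams.
    padded = f"#{name.lower()}#"
    return {padded[i:i + n] for i in range(len(padded) - n + 1)}

def dice(a, b, n=2):
    """Dice coefficient over character n-gram sets: 2|A∩B| / (|A|+|B|)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    return 2 * len(ga & gb) / (len(ga) + len(gb))

# Spelling variants of the same name score higher than unrelated names.
same = dice("mohammed", "muhammad")   # shared bigrams: #m, ha, am, mm, d#
diff = dice("mohammed", "smith")
```

Because such orthographic measures ignore the phonetic information Arabic name variants depend on, they serve here only as the baseline the paper improves upon.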
- Date
- 22. 7.2006 17:20:20
-
Cathey, R.J.; Jensen, E.C.; Beitzel, S.M.; Frieder, O.; Grossman, D.: Exploiting parallelism to support scalable hierarchical clustering (2007)
0.02
0.020116309 = product of:
  0.040232617 = sum of:
    0.040232617 = product of:
      0.080465235 = sum of:
        0.080465235 = weight(_text_:n in 448) [ClassicSimilarity], result of:
          0.080465235 = score(doc=448,freq=6.0), product of:
            0.19504215 = queryWeight, product of:
              4.3116565 = idf(docFreq=1611, maxDocs=44218)
              0.045236014 = queryNorm
            0.41255307 = fieldWeight in 448, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.3116565 = idf(docFreq=1611, maxDocs=44218)
              0.0390625 = fieldNorm(doc=448)
      0.5 = coord(1/2)
  0.5 = coord(1/2)
- Abstract
- A distributed memory parallel version of the group average hierarchical agglomerative clustering algorithm is proposed to enable scaling the document clustering problem to large collections. Using standard message passing operations reduces interprocess communication while maintaining efficient load balancing. In a series of experiments using a subset of a standard Text REtrieval Conference (TREC) test collection, our parallel hierarchical clustering algorithm is shown to be scalable in terms of processors efficiently used and the collection size. Results show that our algorithm performs close to the expected O(n**2/p) time on p processors rather than the worst-case O(n**3/p) time. Furthermore, the O(n**2/p) memory complexity per node allows larger collections to be clustered as the number of nodes increases. While partitioning algorithms such as k-means are trivially parallelizable, our results confirm those of other studies which showed that hierarchical algorithms produce significantly tighter clusters in the document clustering task. Finally, we show how our parallel hierarchical agglomerative clustering algorithm can be used as the clustering subroutine for a parallel version of the buckshot algorithm to cluster the complete TREC collection at near theoretical runtime expectations.
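The group-average (UPGMA) linkage the paper parallelizes can be sketched sequentially. This is the naive serial version for illustration only, assuming Euclidean distances; it is not the authors' distributed-memory message-passing implementation and does not achieve their O(n**2/p) bound:

```python
import numpy as np

def group_average_hac(X, k):
    """Naive group-average agglomerative clustering down to k clusters:
    repeatedly merge the pair of clusters with the smallest average
    pairwise point distance."""
    clusters = [[i] for i in range(len(X))]
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # full distance matrix
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = D[np.ix_(clusters[i], clusters[j])].mean()
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)  # merge the closest pair
    return clusters

# Two well-separated pairs of points end up in two clusters.
X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
clusters = group_average_hac(X, 2)
```

In the paper's parallel version, the distance computations and merge decisions are distributed across p processors with message passing, which is what yields the reported near-O(n**2/p) runtime.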
-
Beitzel, S.M.; Jensen, E.C.; Chowdhury, A.; Grossman, D.; Frieder, O.; Goharian, N.: Fusion of effective retrieval strategies in the same information retrieval system (2004)
0.01
0.013936987 = product of:
  0.027873974 = sum of:
    0.027873974 = product of:
      0.05574795 = sum of:
        0.05574795 = weight(_text_:n in 2502) [ClassicSimilarity], result of:
          0.05574795 = score(doc=2502,freq=2.0), product of:
            0.19504215 = queryWeight, product of:
              4.3116565 = idf(docFreq=1611, maxDocs=44218)
              0.045236014 = queryNorm
            0.28582513 = fieldWeight in 2502, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3116565 = idf(docFreq=1611, maxDocs=44218)
              0.046875 = fieldNorm(doc=2502)
      0.5 = coord(1/2)
  0.5 = coord(1/2)