Cathey, R.J.; Jensen, E.C.; Beitzel, S.M.; Frieder, O.; Grossman, D.: Exploiting parallelism to support scalable hierarchical clustering (2007)
- Abstract
- A distributed memory parallel version of the group average hierarchical agglomerative clustering algorithm is proposed to enable scaling the document clustering problem to large collections. Using standard message passing operations reduces interprocess communication while maintaining efficient load balancing. In a series of experiments using a subset of a standard Text REtrieval Conference (TREC) test collection, our parallel hierarchical clustering algorithm is shown to be scalable in terms of processors efficiently used and the collection size. Results show that our algorithm performs close to the expected O(n²/p) time on p processors rather than the worst-case O(n³/p) time. Furthermore, the O(n²/p) memory complexity per node allows larger collections to be clustered as the number of nodes increases. While partitioning algorithms such as k-means are trivially parallelizable, our results confirm those of other studies which showed that hierarchical algorithms produce significantly tighter clusters in the document clustering task. Finally, we show how our parallel hierarchical agglomerative clustering algorithm can be used as the clustering subroutine for a parallel version of the buckshot algorithm to cluster the complete TREC collection at near theoretical runtime expectations.