Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004)
- Content
- Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
- Date
- 8.1.2013 10:22:32
- Source
- Proceedings of the 4th IEEE International Conference on Data Mining (ICDM 2004), 1-4 November 2004, Brighton, UK
Frobese, D.T.: Klassifikationsaufgaben mit der SENTRAX : Konkreter Fall: Automatische Detektion von SPAM (2006)
- Abstract
- The search functions of the SENTRAX method are used for classifying e-mails and, in particular, for detecting SPAM. Its context-similarity search and fault tolerance are to be exploited in order to spot SPAM messages accurately.
- Footnote
- Contribution in the proceedings of the Fifth Hildesheim Evaluation and Retrieval Workshop (HIER 2006), Hildesheim, xx.x.2006.