Search (48 results, page 1 of 3)

  • theme_ss:"Data Mining"
  1. Lusti, M.: Data Warehousing and Data Mining : Eine Einführung in entscheidungsunterstützende Systeme (1999) 0.11
    0.11132976 = product of:
      0.16699463 = sum of:
        0.06022381 = weight(_text_:resources in 4261) [ClassicSimilarity], result of:
          0.06022381 = score(doc=4261,freq=2.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.32264733 = fieldWeight in 4261, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.0625 = fieldNorm(doc=4261)
        0.10677081 = sum of:
          0.05134755 = weight(_text_:management in 4261) [ClassicSimilarity], result of:
            0.05134755 = score(doc=4261,freq=2.0), product of:
              0.17235184 = queryWeight, product of:
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.051133685 = queryNorm
              0.29792285 = fieldWeight in 4261, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.0625 = fieldNorm(doc=4261)
          0.055423267 = weight(_text_:22 in 4261) [ClassicSimilarity], result of:
            0.055423267 = score(doc=4261,freq=2.0), product of:
              0.17906146 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051133685 = queryNorm
              0.30952093 = fieldWeight in 4261, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=4261)
      0.6666667 = coord(2/3)
    
    Date
    17. 7.2002 19:22:06
    Theme
    Information Resources Management
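    The explanation tree above is standard Lucene ClassicSimilarity output: each term weight is queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = tf x idf x fieldNorm, with tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)); the matching term weights are summed and scaled by the coordination factor. The minimal sketch below (plain Python; the function and variable names are ours, the numeric inputs are copied from the tree for doc 4261) reproduces the 0.11 score of entry 1.

    # Minimal sketch of the ClassicSimilarity arithmetic shown above for entry 1
    # (doc 4261). docFreq, maxDocs, queryNorm and fieldNorm are copied from the
    # explanation tree; nothing else is assumed.
    import math

    def idf(doc_freq, max_docs):
        # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def tf(freq):
        # ClassicSimilarity: tf(t in d) = sqrt(freq)
        return math.sqrt(freq)

    def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
        # weight = queryWeight * fieldWeight
        #        = (idf * queryNorm) * (tf * idf * fieldNorm)
        i = idf(doc_freq, max_docs)
        query_weight = i * query_norm
        field_weight = tf(freq) * i * field_norm
        return query_weight * field_weight

    QUERY_NORM = 0.051133685   # taken from the explanation tree
    MAX_DOCS   = 44218
    FIELD_NORM = 0.0625        # fieldNorm(doc=4261)

    resources  = term_score(2.0, 3122, MAX_DOCS, QUERY_NORM, FIELD_NORM)
    management = term_score(2.0, 4130, MAX_DOCS, QUERY_NORM, FIELD_NORM)
    term_22    = term_score(2.0, 3622, MAX_DOCS, QUERY_NORM, FIELD_NORM)

    # Two of the three top-level query clauses matched -> coord(2/3)
    score = (resources + (management + term_22)) * (2.0 / 3.0)
    print(round(score, 8))   # ~0.11132976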
  2. Medien-Informationsmanagement : Archivarische, dokumentarische, betriebswirtschaftliche, rechtliche und Berufsbild-Aspekte ; [Frühjahrstagung der Fachgruppe 7 im Jahr 2000 in Weimar und Folgetagung 2001 in Köln] (2003) 0.06
    0.057382297 = product of:
      0.08607344 = sum of:
        0.031938497 = weight(_text_:resources in 1833) [ClassicSimilarity], result of:
          0.031938497 = score(doc=1833,freq=4.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.17110959 = fieldWeight in 1833, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1833)
        0.054134943 = sum of:
          0.033351216 = weight(_text_:management in 1833) [ClassicSimilarity], result of:
            0.033351216 = score(doc=1833,freq=6.0), product of:
              0.17235184 = queryWeight, product of:
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.051133685 = queryNorm
              0.19350658 = fieldWeight in 1833, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1833)
          0.020783724 = weight(_text_:22 in 1833) [ClassicSimilarity], result of:
            0.020783724 = score(doc=1833,freq=2.0), product of:
              0.17906146 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051133685 = queryNorm
              0.116070345 = fieldWeight in 1833, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1833)
      0.6666667 = coord(2/3)
    
    Abstract
    When, in the 1970s, the title "information manager" was increasingly promoted for people who had until then worked under the name documentalist, this was occasionally smiled at in the established circles of archivists and librarians and read as a sign of an identity crisis, or at least of an unsettled professional self-image. For the profession of media archivists/media documentalists, organized since 1960 in Fachgruppe 7 of the Verein, later Verband deutscher Archivare (VdA), this positioning amid new substantive challenges (the information flood) and new technologies (electronic data processing) had, however, long been part of everyday professional life. "Stop, it won't work without us!" ran the headline of an article in the association's journal "Info 7" dealing with the construction of ever more powerful networks and ever faster data highways. Information, information society: at the time these terms were understood almost exclusively in a technical sense. The informatized, not the informed, society stood in the foreground - which in turn brought critics onto the scene, from Joseph Weizenbaum in the USA to the information ecologists in Bremen. In the national, sometimes merely regional, projects and pilot schemes for data highways - including the early Btx - it had never become really clear which contents, and in what form, were to be driven through these networks and roads, and who was actually supposed to select, portion, position, in short: manage those contents. With the World Wide Web at the latest, these projects became obsolete, at least as far as hardware and software were concerned. What remained is the topic of contents (in new German usage: content). And, ever more pressing in a sense that is no longer merely technical, the topic of information management. MedienInformationsManagement was the title of the Fachgruppe 7 spring conference held in Weimar in 2000, and the follow-up conference in Cologne in 2001, which set a documentary pragmatism against multimedia production, likewise dealt with content as a line of business and with content management systems. The lectures and discussion contributions from these two conferences, collected in this sixth volume of the series Beiträge zur Mediendokumentation, illuminate the title topic from a wide range of perspectives: archival, documentary, commercial, professional, and legal. It becomes clear that the job title media archivist/media documentalist stands fairly precisely for everything that happens today with so-called old as well as new media in an organizational, that is, ordering and mediating, sense. This applies in particular to the Internet and to the intranets born from it. Both need the same ordering hand that was trained on the old media - book, newspaper, sound recording, film, etc. - for they live to a large extent on them. That the Internet is nevertheless a medium sui generis and confronts the old information professions with entirely new challenges - this, too, runs through the contributions from Weimar and Cologne.
    Date
    11. 5.2008 19:49:22
    LCSH
    Mass media / Archival resources / Congresses
    Information technology / Management / Congresses
    Subject
    Mass media / Archival resources / Congresses
    Information technology / Management / Congresses
  3. Data mining : Theoretische Aspekte und Anwendungen (1998) 0.06
    0.05726506 = product of:
      0.08589759 = sum of:
        0.06022381 = weight(_text_:resources in 966) [ClassicSimilarity], result of:
          0.06022381 = score(doc=966,freq=2.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.32264733 = fieldWeight in 966, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.0625 = fieldNorm(doc=966)
        0.025673775 = product of:
          0.05134755 = sum of:
            0.05134755 = weight(_text_:management in 966) [ClassicSimilarity], result of:
              0.05134755 = score(doc=966,freq=2.0), product of:
                0.17235184 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.051133685 = queryNorm
                0.29792285 = fieldWeight in 966, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0625 = fieldNorm(doc=966)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Theme
    Information Resources Management
  4. Analytische Informationssysteme : Data Warehouse, On-Line Analytical Processing, Data Mining (1998) 0.06
    0.05726506 = product of:
      0.08589759 = sum of:
        0.06022381 = weight(_text_:resources in 1380) [ClassicSimilarity], result of:
          0.06022381 = score(doc=1380,freq=2.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.32264733 = fieldWeight in 1380, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.0625 = fieldNorm(doc=1380)
        0.025673775 = product of:
          0.05134755 = sum of:
            0.05134755 = weight(_text_:management in 1380) [ClassicSimilarity], result of:
              0.05134755 = score(doc=1380,freq=2.0), product of:
                0.17235184 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.051133685 = queryNorm
                0.29792285 = fieldWeight in 1380, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1380)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Theme
    Information Resources Management
  5. Relational data mining (2001) 0.06
    0.055421554 = product of:
      0.08313233 = sum of:
        0.063876994 = weight(_text_:resources in 1303) [ClassicSimilarity], result of:
          0.063876994 = score(doc=1303,freq=4.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.34221917 = fieldWeight in 1303, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.046875 = fieldNorm(doc=1303)
        0.01925533 = product of:
          0.03851066 = sum of:
            0.03851066 = weight(_text_:management in 1303) [ClassicSimilarity], result of:
              0.03851066 = score(doc=1303,freq=2.0), product of:
                0.17235184 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.051133685 = queryNorm
                0.22344214 = fieldWeight in 1303, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1303)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    As the first book devoted to relational data mining, this coherently written multi-author monograph provides a thorough introduction and systematic overview of the area. The first part introduces the reader to the basics and principles of classical knowledge discovery in databases and inductive logic programming; subsequent chapters by leading experts assess the techniques in relational data mining in a principled and comprehensive way; finally, three chapters deal with advanced applications in various fields and refer the reader to resources for relational data mining. This book will become a valuable source of reference for R&D professionals active in relational data mining. Students as well as IT professionals and ambitious practitioners interested in learning about relational data mining will appreciate the book as a useful text and gentle introduction to this exciting new field.
    Theme
    Information Resources Management
  6. Analytische Informationssysteme : Data Warehouse, On-Line Analytical Processing, Data Mining (1999) 0.05
    0.050106924 = product of:
      0.075160384 = sum of:
        0.052695833 = weight(_text_:resources in 1381) [ClassicSimilarity], result of:
          0.052695833 = score(doc=1381,freq=2.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.28231642 = fieldWeight in 1381, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1381)
        0.022464553 = product of:
          0.044929106 = sum of:
            0.044929106 = weight(_text_:management in 1381) [ClassicSimilarity], result of:
              0.044929106 = score(doc=1381,freq=2.0), product of:
                0.17235184 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.051133685 = queryNorm
                0.2606825 = fieldWeight in 1381, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1381)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Theme
    Information Resources Management
  7. Liu, Y.; Huang, X.; An, A.: Personalized recommendation with adaptive mixture of markov models (2007) 0.04
    0.035790663 = product of:
      0.053685993 = sum of:
        0.037639882 = weight(_text_:resources in 606) [ClassicSimilarity], result of:
          0.037639882 = score(doc=606,freq=2.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.20165458 = fieldWeight in 606, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.0390625 = fieldNorm(doc=606)
        0.016046109 = product of:
          0.032092217 = sum of:
            0.032092217 = weight(_text_:management in 606) [ClassicSimilarity], result of:
              0.032092217 = score(doc=606,freq=2.0), product of:
                0.17235184 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.051133685 = queryNorm
                0.18620178 = fieldWeight in 606, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=606)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    With more and more information available on the Internet, the task of making personalized recommendations to assist the user's navigation has become increasingly important. Considering there might be millions of users with different backgrounds accessing a Web site every day, it is infeasible to build a separate recommendation system for each user. To address this problem, clustering techniques can first be employed to discover user groups. Then, user navigation patterns for each group can be discovered, to allow the adaptation of a Web site to the interest of each individual group. In this paper, we propose to model user access sequences as stochastic processes, and an approach based on a mixture of Markov models is taken to cluster users and to capture the sequential relationships inherent in user access histories. Several important issues that arise in constructing the Markov models are also addressed. The first issue lies in the complexity of the mixture of Markov models. To improve the efficiency of building/maintaining the mixture of Markov models, we develop a lightweight adaptive algorithm to update the model parameters without recomputing model parameters from scratch. The second issue concerns the proper selection of training data for building the mixture of Markov models. We investigate two different training data selection strategies and perform extensive experiments to compare their effectiveness on a real dataset that is generated by a Web-based knowledge management system, Livelink.
    Footnote
    Contribution to a special topic section "Mining Web resources for enhancing information retrieval"
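    The abstract of entry 7 describes clustering user access sequences with a mixture of Markov models. The sketch below is a deliberately simplified, hypothetical illustration of that general technique (hard EM over first-order Markov chains on toy sequences), not the authors' Livelink implementation or their adaptive update algorithm.

    # Hypothetical sketch: cluster access sequences with a mixture of first-order
    # Markov models, fitted by hard EM. Page IDs, the toy sequences, and the
    # add-alpha smoothing are illustrative assumptions.
    import math
    import random

    def fit_markov(sequences, n_states, alpha=1.0):
        """Estimate start and transition probabilities with add-alpha smoothing."""
        start = [alpha] * n_states
        trans = [[alpha] * n_states for _ in range(n_states)]
        for seq in sequences:
            start[seq[0]] += 1
            for a, b in zip(seq, seq[1:]):
                trans[a][b] += 1
        start_p = [s / sum(start) for s in start]
        trans_p = [[c / sum(row) for c in row] for row in trans]
        return start_p, trans_p

    def log_likelihood(seq, model):
        start_p, trans_p = model
        ll = math.log(start_p[seq[0]])
        for a, b in zip(seq, seq[1:]):
            ll += math.log(trans_p[a][b])
        return ll

    def cluster_sequences(sequences, n_states, k, iters=20, seed=0):
        rng = random.Random(seed)
        assign = [rng.randrange(k) for _ in sequences]
        for _ in range(iters):
            # M-step: refit one Markov model per cluster (fall back to all data
            # if a cluster happens to be empty)
            models = []
            for c in range(k):
                members = [s for s, a in zip(sequences, assign) if a == c]
                models.append(fit_markov(members or sequences, n_states))
            # E-step (hard): reassign each sequence to its most likely model
            assign = [max(range(k), key=lambda c: log_likelihood(s, models[c]))
                      for s in sequences]
        return assign, models

    # Toy example: pages 0..3, two obvious navigation habits.
    seqs = [[0, 1, 2], [0, 1, 2, 1], [3, 2, 0], [3, 2, 0, 3], [0, 1, 1, 2], [3, 3, 2, 0]]
    labels, _ = cluster_sequences(seqs, n_states=4, k=2)
    print(labels)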
  8. Lackes, R.; Tillmanns, C.: Data Mining für die Unternehmenspraxis : Entscheidungshilfen und Fallstudien mit führenden Softwarelösungen (2006) 0.03
    0.026692703 = product of:
      0.08007811 = sum of:
        0.08007811 = sum of:
          0.03851066 = weight(_text_:management in 1383) [ClassicSimilarity], result of:
            0.03851066 = score(doc=1383,freq=2.0), product of:
              0.17235184 = queryWeight, product of:
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.051133685 = queryNorm
              0.22344214 = fieldWeight in 1383, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.046875 = fieldNorm(doc=1383)
          0.04156745 = weight(_text_:22 in 1383) [ClassicSimilarity], result of:
            0.04156745 = score(doc=1383,freq=2.0), product of:
              0.17906146 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051133685 = queryNorm
              0.23214069 = fieldWeight in 1383, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1383)
      0.33333334 = coord(1/3)
    
    Abstract
    The book is aimed at practitioners in companies who deal with the analysis of large data sets. After a short theoretical part, four case studies from the customer relationship management of a mail-order retailer are worked through. Eight leading software solutions were used: Intelligent Miner from IBM, Enterprise Miner from SAS, Clementine from SPSS, Knowledge Studio from Angoss, Delta Miner from Bissantz, Business Miner from Business Object, and Data Engine from MIT. The case studies make the strengths and weaknesses of the individual solutions clear and demonstrate a methodically correct data mining procedure. Both provide valuable decision support for selecting standard data mining software and for practical data analysis.
    Date
    22. 3.2008 14:46:06
  9. Lam, W.; Yang, C.C.; Menczer, F.: Introduction to the special topic section on mining Web resources for enhancing information retrieval (2007) 0.02
    0.024841055 = product of:
      0.074523166 = sum of:
        0.074523166 = weight(_text_:resources in 600) [ClassicSimilarity], result of:
          0.074523166 = score(doc=600,freq=4.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.39925572 = fieldWeight in 600, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.0546875 = fieldNorm(doc=600)
      0.33333334 = coord(1/3)
    
    Footnote
    Introduction to a special topic section "Mining Web resources for enhancing information retrieval"
  10. Doran, D.; Gokhale, S.S.: ¬A classification framework for web robots (2012) 0.02
    0.020074604 = product of:
      0.06022381 = sum of:
        0.06022381 = weight(_text_:resources in 505) [ClassicSimilarity], result of:
          0.06022381 = score(doc=505,freq=2.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.32264733 = fieldWeight in 505, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.0625 = fieldNorm(doc=505)
      0.33333334 = coord(1/3)
    
    Abstract
    The behavior of modern web robots varies widely when they crawl for different purposes. In this article, we present a framework to classify these web robots from two orthogonal perspectives, namely, their functionality and the types of resources they consume. Applying the classification framework to a year-long access log from the UConn SoE web server, we present trends that point to significant differences in their crawling behavior.
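    One of the two classification axes in entry 10 is the type of resources a robot consumes. As a hedged illustration only, the sketch below profiles clients in a Common Log Format access log by the file types they request; the extension buckets, the robots.txt heuristic, and the labels are assumptions for the example, not the paper's feature set.

    # Illustrative sketch: profile web clients by the resource types they request.
    # The log format, extension buckets and thresholds are assumptions.
    import re
    from collections import Counter, defaultdict

    LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST|HEAD) (\S+)')

    BUCKETS = {
        ".html": "web", ".htm": "web", ".php": "web",
        ".jpg": "image", ".png": "image", ".gif": "image",
        ".css": "style", ".js": "script", ".pdf": "document",
    }

    def resource_profile(log_lines):
        """Count requested resource types per client IP."""
        profiles = defaultdict(Counter)
        for line in log_lines:
            m = LOG_RE.match(line)
            if not m:
                continue
            ip, path = m.groups()
            path = path.split("?", 1)[0].lower()
            if path.endswith("/robots.txt"):
                profiles[ip]["robots.txt"] += 1
                continue
            name = path.rsplit("/", 1)[-1]
            ext = "." + path.rsplit(".", 1)[-1] if "." in name else ""
            profiles[ip][BUCKETS.get(ext, "other")] += 1
        return profiles

    def label(counter):
        """Rough label: robots.txt fetches or an HTML-only diet suggest a robot."""
        if counter["robots.txt"] > 0:
            return "likely robot"
        if counter["image"] + counter["style"] + counter["script"] == 0:
            return "possibly robot (no embedded resources fetched)"
        return "likely browser"

    sample = [
        '1.2.3.4 - - [10/Oct/2012:13:55:36 +0000] "GET /robots.txt HTTP/1.0" 200 123',
        '1.2.3.4 - - [10/Oct/2012:13:55:37 +0000] "GET /index.html HTTP/1.0" 200 4321',
        '5.6.7.8 - - [10/Oct/2012:13:56:00 +0000] "GET /index.html HTTP/1.1" 200 4321',
        '5.6.7.8 - - [10/Oct/2012:13:56:01 +0000] "GET /logo.png HTTP/1.1" 200 999',
    ]
    for ip, prof in resource_profile(sample).items():
        print(ip, dict(prof), "->", label(prof))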
  11. Wang, F.L.; Yang, C.C.: Mining Web data for Chinese segmentation (2007) 0.02
    0.017743612 = product of:
      0.053230833 = sum of:
        0.053230833 = weight(_text_:resources in 604) [ClassicSimilarity], result of:
          0.053230833 = score(doc=604,freq=4.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.28518265 = fieldWeight in 604, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.0390625 = fieldNorm(doc=604)
      0.33333334 = coord(1/3)
    
    Abstract
    Modern information retrieval systems use keywords within documents as indexing terms for search of relevant documents. As Chinese is an ideographic character-based language, the words in the texts are not delimited by white spaces. Indexing of Chinese documents is impossible without a proper segmentation algorithm. Many Chinese segmentation algorithms have been proposed in the past. Traditional segmentation algorithms cannot operate without a large dictionary or a large corpus of training data. Nowadays, the Web has become the largest corpus, which is ideal for Chinese segmentation. Although most search engines have problems in segmenting texts into proper words, they maintain huge databases of documents and frequencies of character sequences in the documents. Their databases are important potential resources for segmentation. In this paper, we propose a segmentation algorithm by mining Web data with the help of search engines. On the other hand, the Romanized pinyin of the Chinese language indicates boundaries of words in the text. Our algorithm is the first to utilize the Romanized pinyin for segmentation. It is the first unified segmentation algorithm for the Chinese language from different geographical areas, and it is also domain independent because of the nature of the Web. Experiments have been conducted on the datasets of a recent Chinese segmentation competition. The results show that our algorithm outperforms the traditional algorithms in terms of precision and recall. Moreover, our algorithm can effectively deal with the problems of segmentation ambiguity, new word (unknown word) detection, and stop words.
    Footnote
    Contribution to a special topic section "Mining Web resources for enhancing information retrieval"
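    Entry 11 describes segmentation driven by frequencies of character sequences obtained from the Web. The sketch below illustrates the underlying idea with a small dynamic programme that picks the segmentation whose words are jointly most frequent; the frequency table stands in for counts a search engine would supply, and the unigram scoring is a simplification of the paper's method.

    # Simplified sketch of segmentation using web-derived word frequencies.
    # WEB_FREQ stands in for character-sequence counts mined from a search
    # engine; the numbers and the unigram-product scoring are assumptions.
    import math

    WEB_FREQ = {
        "中国": 50000, "人民": 42000, "中国人": 9000, "民": 3000,
        "国人": 500, "中": 8000, "国": 7000, "人": 20000,
    }
    TOTAL = sum(WEB_FREQ.values())
    MAX_WORD_LEN = 4

    def word_logprob(word):
        # Unknown sequences get a small floor so every text stays segmentable.
        return math.log(WEB_FREQ.get(word, 0.5) / TOTAL)

    def segment(text):
        """Dynamic programme: best[i] = best log-probability of text[:i]."""
        n = len(text)
        best = [0.0] + [float("-inf")] * n
        back = [0] * (n + 1)
        for i in range(1, n + 1):
            for j in range(max(0, i - MAX_WORD_LEN), i):
                score = best[j] + word_logprob(text[j:i])
                if score > best[i]:
                    best[i], back[i] = score, j
        words, i = [], n
        while i > 0:
            words.append(text[back[i]:i])
            i = back[i]
        return list(reversed(words))

    print(segment("中国人民"))   # expected: ['中国', '人民'] under these counts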
  12. Liu, Y.; Zhang, M.; Cen, R.; Ru, L.; Ma, S.: Data cleansing for Web information retrieval using query independent features (2007) 0.02
    0.017743612 = product of:
      0.053230833 = sum of:
        0.053230833 = weight(_text_:resources in 607) [ClassicSimilarity], result of:
          0.053230833 = score(doc=607,freq=4.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.28518265 = fieldWeight in 607, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.0390625 = fieldNorm(doc=607)
      0.33333334 = coord(1/3)
    
    Abstract
    Understanding what kinds of Web pages are the most useful for Web search engine users is a critical task in Web information retrieval (IR). Most previous works used hyperlink analysis algorithms to solve this problem. However, little research has been focused on query-independent Web data cleansing for Web IR. In this paper, we first provide analysis of the differences between retrieval target pages and ordinary ones based on more than 30 million Web pages obtained from both the Text Retrieval Conference (TREC) and a widely used Chinese search engine, SOGOU (www.sogou.com). We further propose a learning-based data cleansing algorithm for reducing Web pages that are unlikely to be useful for user requests. We found that there exists a large proportion of low-quality Web pages in both the English and the Chinese Web page corpus, and retrieval target pages can be identified using query-independent features and cleansing algorithms. The experimental results showed that our algorithm is effective in reducing a large portion of Web pages with a small loss in retrieval target pages. It makes it possible for Web IR tools to meet a large fraction of users' needs with only a small part of pages on the Web. These results may help Web search engines make better use of their limited storage and computation resources to improve search performance.
    Footnote
    Contribution to a special topic section "Mining Web resources for enhancing information retrieval"
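    Entry 12 proposes learning-based cleansing with query-independent features. The sketch below shows the general pattern only (score pages with a small logistic-regression model over features that do not depend on any query, and drop low-scoring ones); the features, toy pages, and labels are invented for illustration and are not the paper's feature set or corpus.

    # Hedged sketch of query-independent data cleansing with a tiny
    # logistic-regression scorer. Features and training data are assumptions.
    import math

    def features(page):
        # page: dict with "url", "inlinks", "length" (words)
        depth = page["url"].rstrip("/").count("/") - 2   # path segments after the host
        return [1.0, float(depth),
                math.log(1 + page["inlinks"]),
                math.log(1 + page["length"])]

    def train(pages, labels, lr=0.1, epochs=500):
        w = [0.0] * 4
        for _ in range(epochs):
            for page, y in zip(pages, labels):
                x = features(page)
                p = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
                w = [wi + lr * (y - p) * xi for wi, xi in zip(w, x)]
        return w

    def keep(page, w, threshold=0.5):
        x = features(page)
        p = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
        return p >= threshold

    # Toy training set: 1 = likely retrieval target, 0 = likely low-quality page.
    train_pages = [
        {"url": "http://example.org/", "inlinks": 120, "length": 800},
        {"url": "http://example.org/products/", "inlinks": 40, "length": 600},
        {"url": "http://example.org/tmp/session/12/print", "inlinks": 0, "length": 30},
        {"url": "http://example.org/a/b/c/d/e/mirror", "inlinks": 1, "length": 25},
    ]
    train_labels = [1, 1, 0, 0]
    w = train(train_pages, train_labels)
    print([keep(p, w) for p in train_pages])   # should roughly recover the labels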
  13. Baeza-Yates, R.; Hurtado, C.; Mendoza, M.: Improving search engines by query clustering (2007) 0.02
    0.017565278 = product of:
      0.052695833 = sum of:
        0.052695833 = weight(_text_:resources in 601) [ClassicSimilarity], result of:
          0.052695833 = score(doc=601,freq=2.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.28231642 = fieldWeight in 601, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.0546875 = fieldNorm(doc=601)
      0.33333334 = coord(1/3)
    
    Footnote
    Contribution to a special topic section "Mining Web resources for enhancing information retrieval"
  14. Chowdhury, G.G.: Template mining for information extraction from digital documents (1999) 0.02
    0.01616512 = product of:
      0.04849536 = sum of:
        0.04849536 = product of:
          0.09699072 = sum of:
            0.09699072 = weight(_text_:22 in 4577) [ClassicSimilarity], result of:
              0.09699072 = score(doc=4577,freq=2.0), product of:
                0.17906146 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051133685 = queryNorm
                0.5416616 = fieldWeight in 4577, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4577)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    2. 4.2000 18:01:22
  15. Galal, G.M.; Cook, D.J.; Holder, L.B.: Exploiting parallelism in a structural scientific discovery system to improve scalability (1999) 0.02
    0.015055953 = product of:
      0.045167856 = sum of:
        0.045167856 = weight(_text_:resources in 2952) [ClassicSimilarity], result of:
          0.045167856 = score(doc=2952,freq=2.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.2419855 = fieldWeight in 2952, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.046875 = fieldNorm(doc=2952)
      0.33333334 = coord(1/3)
    
    Abstract
    The large amount of data collected today is quickly overwhelming researchers' abilities to interpret the data and discover interesting patterns. Knowledge discovery and data mining approaches hold the potential to automate the interpretation process, but these approaches frequently utilize computationally expensive algorithms. In particular, scientific discovery systems focus on the utilization of richer data representation, sometimes without regard for scalability. This research investigates approaches for scaling a particular knowledge discovery in databases (KDD) system, SUBDUE, using parallel and distributed resources. SUBDUE has been used to discover interesting and repetitive concepts in graph-based databases from a variety of domains, but requires a substantial amount of processing time. Experiments that demonstrate scalability of parallel versions of the SUBDUE system are performed using CAD circuit databases and artificially-generated databases, and potential achievements and obstacles are discussed
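    Entry 15 is about distributing a graph database over parallel resources to speed up substructure discovery. The sketch below shows only a generic data-parallel skeleton of that idea, with a trivially simple "substructure" (a labelled edge) counted per partition and then merged; SUBDUE itself searches far richer graph patterns, so this is not its algorithm.

    # Hypothetical sketch of data-parallel substructure counting over a
    # partitioned graph database. The toy database and the edge-only notion of
    # "substructure" are assumptions for illustration.
    from collections import Counter
    from multiprocessing import Pool

    def count_edges(graphs):
        """Count labelled edges in one partition of the graph database."""
        c = Counter()
        for edges in graphs:
            for a, b in edges:
                c[(a, b)] += 1
        return c

    def parallel_counts(graph_db, n_workers=2):
        chunks = [graph_db[i::n_workers] for i in range(n_workers)]
        with Pool(n_workers) as pool:
            partials = pool.map(count_edges, chunks)
        total = Counter()
        for part in partials:
            total.update(part)
        return total

    if __name__ == "__main__":
        # Toy database: each graph is a list of labelled edges.
        db = [
            [("C", "O"), ("C", "H"), ("C", "H")],
            [("C", "O"), ("O", "H")],
            [("N", "H"), ("C", "N"), ("C", "O")],
        ]
        print(parallel_counts(db).most_common(3))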
  16. Perugini, S.; Ramakrishnan, N.: Mining Web functional dependencies for flexible information access (2007) 0.02
    0.015055953 = product of:
      0.045167856 = sum of:
        0.045167856 = weight(_text_:resources in 602) [ClassicSimilarity], result of:
          0.045167856 = score(doc=602,freq=2.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.2419855 = fieldWeight in 602, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.046875 = fieldNorm(doc=602)
      0.33333334 = coord(1/3)
    
    Footnote
    Contribution to a special topic section "Mining Web resources for enhancing information retrieval"
  17. Sun, X.; Lin, H.: Topical community detection from mining user tagging behavior and interest (2013) 0.02
    0.015055953 = product of:
      0.045167856 = sum of:
        0.045167856 = weight(_text_:resources in 605) [ClassicSimilarity], result of:
          0.045167856 = score(doc=605,freq=2.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.2419855 = fieldWeight in 605, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.046875 = fieldNorm(doc=605)
      0.33333334 = coord(1/3)
    
    Abstract
    With the development of Web2.0, social tagging systems in which users can freely choose tags to annotate resources according to their interests have attracted much attention. In particular, literature on the emergence of collective intelligence in social tagging systems has increased. In this article, we propose a probabilistic generative model to detect latent topical communities among users. Social tags and resource contents are leveraged to model user interest in two similar and correlated ways. Our primary goal is to capture user tagging behavior and interest and discover the emergent topical community structure. The communities should be groups of users with frequent social interactions as well as similar topical interests, which would have important research implications for personalized information services. Experimental results on two real social tagging data sets with different genres have shown that the proposed generative model more accurately models user interest and detects high-quality and meaningful topical communities.
  18. Schwartz, F.; Fang, Y.C.: Citation data analysis on hydrogeology (2007) 0.01
    0.014194889 = product of:
      0.042584665 = sum of:
        0.042584665 = weight(_text_:resources in 433) [ClassicSimilarity], result of:
          0.042584665 = score(doc=433,freq=4.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.22814612 = fieldWeight in 433, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.03125 = fieldNorm(doc=433)
      0.33333334 = coord(1/3)
    
    Abstract
    This article explores the status of research in hydrogeology using data mining techniques. First we try to explain what citation analysis is and review some of the previous work on citation analysis. The main idea in this article is to address some common issues about citation numbers and the use of these data. To validate the use of citation numbers, we compare the citation patterns for Water Resources Research papers in the 1980s with those in the 1990s. The citation growths for highly cited authors from the 1980s are used to examine whether it is possible to predict the citation patterns for highly-cited authors in the 1990s. If the citation data prove to be steady and stable, these numbers then can be used to explore the evolution of science in hydrogeology. The famous quotation, "If you are not the lead dog, the scenery never changes," attributed to Lee Iacocca, points to the importance of an entrepreneurial spirit in all forms of endeavor. In the case of hydrogeological research, impact analysis makes it clear how important it is to be a pioneer. Statistical correlation coefficients are used to retrieve papers among a collection of 2,847 papers before and after 1991 sharing the same topics with 273 papers in 1991 in Water Resources Research. The numbers of papers before and after 1991 are then plotted against various levels of citations for papers in 1991 to compare the distributions of paper population before and after that year. The similarity metrics based on word counts can ensure that the "before" papers are like ancestors and "after" papers are descendants in the same type of research. This exercise gives us an idea of how many papers are populated before and after 1991 (1991 is chosen based on balanced numbers of papers before and after that year). In addition, the impact of papers is measured in terms of citation presented as "percentile," a relative measure based on rankings in one year, in order to minimize the effect of time.
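    Entry 18 mentions two building blocks that are easy to state concretely: word-count correlation to find papers on the same topic as a reference paper, and expressing a paper's citation count as a percentile within its year. The sketch below illustrates both; the vocabulary, counts, and citation numbers are invented for the example.

    # Hedged sketch of (1) word-count correlation between papers and
    # (2) citation counts expressed as percentiles. All numbers are toy data.
    import math

    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy) if sx and sy else 0.0

    def percentile(value, population):
        """Share of the population with a value less than or equal to `value`."""
        return 100.0 * sum(1 for v in population if v <= value) / len(population)

    vocab = ["aquifer", "tracer", "transport", "model", "well"]
    paper_1991 = [5, 2, 4, 3, 1]                      # word counts, reference paper
    candidates = {
        "A (1987)": [4, 1, 5, 2, 0],
        "B (1995)": [0, 0, 1, 6, 5],
        "C (1993)": [6, 3, 3, 4, 1],
    }
    related = {k: pearson(paper_1991, v) for k, v in candidates.items()}
    print({k: round(r, 2) for k, r in related.items()})

    citations_1991 = [0, 1, 1, 2, 3, 5, 8, 13, 40, 95]   # toy distribution for 1991
    print(percentile(40, citations_1991))                # 90.0 on this toy data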
  19. KDD : techniques and applications (1998) 0.01
    0.013855817 = product of:
      0.04156745 = sum of:
        0.04156745 = product of:
          0.0831349 = sum of:
            0.0831349 = weight(_text_:22 in 6783) [ClassicSimilarity], result of:
              0.0831349 = score(doc=6783,freq=2.0), product of:
                0.17906146 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051133685 = queryNorm
                0.46428138 = fieldWeight in 6783, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6783)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Footnote
    A special issue of selected papers from the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD'97), held in Singapore, 22-23 Feb 1997
  20. Lischka, K.: Spurensuche im Datenwust : Data-Mining-Software fahndet nach kriminellen Mitarbeitern, guten Kunden - und bald vielleicht auch nach Terroristen (2002) 0.01
    0.013346352 = product of:
      0.040039055 = sum of:
        0.040039055 = sum of:
          0.01925533 = weight(_text_:management in 1178) [ClassicSimilarity], result of:
            0.01925533 = score(doc=1178,freq=2.0), product of:
              0.17235184 = queryWeight, product of:
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.051133685 = queryNorm
              0.11172107 = fieldWeight in 1178, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1178)
          0.020783724 = weight(_text_:22 in 1178) [ClassicSimilarity], result of:
            0.020783724 = score(doc=1178,freq=2.0), product of:
              0.17906146 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051133685 = queryNorm
              0.116070345 = fieldWeight in 1178, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1178)
      0.33333334 = coord(1/3)
    
    Content
    "Ob man als Terrorist einen Anschlag gegen die Vereinigten Staaten plant, als Kassierer Scheine aus der Kasse unterschlägt oder für bestimmte Produkte besonders gerne Geld ausgibt - einen Unterschied macht Data-Mining-Software da nicht. Solche Programme analysieren riesige Daten- mengen und fällen statistische Urteile. Mit diesen Methoden wollen nun die For- scher des "Information Awaren in den Vereinigten Staaten Spuren von Terroristen in den Datenbanken von Behörden und privaten Unternehmen wie Kreditkartenfirmen finden. 200 Millionen Dollar umfasst der Jahresetat für die verschiedenen Forschungsprojekte. Dass solche Software in der Praxis funktioniert, zeigen die steigenden Umsätze der Anbieter so genannter Customer-Relationship-Management-Software. Im vergangenen Jahr ist das Potenzial für analytische CRM-Anwendungen laut dem Marktforschungsinstitut IDC weltweit um 22 Prozent gewachsen, bis zum Jahr 2006 soll es in Deutschland mit einem jährlichen Plus von 14,1 Prozent so weitergehen. Und das trotz schwacher Konjunktur - oder gerade deswegen. Denn ähnlich wie Data-Mining der USRegierung helfen soll, Terroristen zu finden, entscheiden CRM-Programme heute, welche Kunden für eine Firma profitabel sind. Und welche es künftig sein werden, wie Manuela Schnaubelt, Sprecherin des CRM-Anbieters SAP, beschreibt: "Die Kundenbewertung ist ein zentraler Bestandteil des analytischen CRM. Sie ermöglicht es Unternehmen, sich auf die für sie wichtigen und richtigen Kunden zu fokussieren. Darüber hinaus können Firmen mit speziellen Scoring- Verfahren ermitteln, welche Kunden langfristig in welchem Maße zum Unternehmenserfolg beitragen." Die Folgen der Bewertungen sind für die Betroffenen nicht immer positiv: Attraktive Kunden profitieren von individuellen Sonderangeboten und besonderer Zuwendung. Andere hängen vielleicht so lauge in der Warteschleife des Telefonservice, bis die profitableren Kunden abgearbeitet sind. So könnte eine praktische Umsetzung dessen aussehen, was SAP-Spreche-rin Schnaubelt abstrakt beschreibt: "In vielen Unternehmen wird Kundenbewertung mit der klassischen ABC-Analyse durchgeführt, bei der Kunden anhand von Daten wie dem Umsatz kategorisiert werden. A-Kunden als besonders wichtige Kunden werden anders betreut als C-Kunden." Noch näher am geplanten Einsatz von Data-Mining zur Terroristenjagd ist eine Anwendung, die heute viele Firmen erfolgreich nutzen: Sie spüren betrügende Mitarbeiter auf. Werner Sülzer vom großen CRM-Anbieter NCR Teradata beschreibt die Möglichkeiten so: "Heute hinterlässt praktisch jeder Täter - ob Mitarbeiter, Kunde oder Lieferant - Datenspuren bei seinen wirtschaftskriminellen Handlungen. Es muss vorrangig darum gehen, einzelne Spuren zu Handlungsmustern und Täterprofilen zu verdichten. Das gelingt mittels zentraler Datenlager und hoch entwickelter Such- und Analyseinstrumente." Von konkreten Erfolgen sprich: Entlas-sungen krimineller Mitarbeiter-nach Einsatz solcher Programme erzählen Unternehmen nicht gerne. Matthias Wilke von der "Beratungsstelle für Technologiefolgen und Qualifizierung" (BTQ) der Gewerkschaft Verdi weiß von einem Fall 'aus der Schweiz. Dort setzt die Handelskette "Pick Pay" das Programm "Lord Lose Prevention" ein. Zwei Monate nach Einfüh-rung seien Unterschlagungen im Wert von etwa 200 000 Franken ermittelt worden. Das kostete mehr als 50 verdächtige Kassiererinnen und Kassierer den Job.

Languages

  • e 38
  • d 10

Types

  • a 35
  • m 11
  • s 11
  • el 2