Search (47 results, page 1 of 3)

  • theme_ss:"Data Mining"
  • type_ss:"a"
  • year_i:[2000 TO 2010}
  1. Liu, Y.; Zhang, M.; Cen, R.; Ru, L.; Ma, S.: Data cleansing for Web information retrieval using query independent features (2007) 0.02
    0.0151066985 = product of:
      0.060426794 = sum of:
        0.012262309 = weight(_text_:information in 607) [ClassicSimilarity], result of:
          0.012262309 = score(doc=607,freq=8.0), product of:
            0.06322253 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.036014426 = queryNorm
            0.19395474 = fieldWeight in 607, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=607)
        0.048164483 = weight(_text_:retrieval in 607) [ClassicSimilarity], result of:
          0.048164483 = score(doc=607,freq=14.0), product of:
            0.10894058 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.036014426 = queryNorm
            0.442117 = fieldWeight in 607, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=607)
      0.25 = coord(2/8)
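    The nested figures above are Lucene "explain" output for a ClassicSimilarity (TF-IDF) score. As a minimal sketch, the arithmetic can be reproduced directly from the quoted factors; the numbers below are copied from the tree for this record, and the helper function is ours, not a Lucene API:

      from math import sqrt

      def term_score(freq, idf, query_norm, field_norm):
          """queryWeight * fieldWeight for one term, as in the explain tree."""
          query_weight = idf * query_norm               # idf(t) * queryNorm
          field_weight = sqrt(freq) * idf * field_norm  # tf(t) * idf(t) * fieldNorm
          return query_weight * field_weight

      query_norm, field_norm = 0.036014426, 0.0390625
      s_information = term_score(8.0, 1.7554779, query_norm, field_norm)   # ~0.0122623
      s_retrieval   = term_score(14.0, 3.024915, query_norm, field_norm)   # ~0.0481645
      # coord(2/8): only 2 of the 8 query clauses matched this document
      print((s_information + s_retrieval) * 2 / 8)  # ~0.0151067

    The same recipe (tf x idf x fieldNorm, times queryWeight, times the coord factor) reads through every scoring tree in this result list.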
    
    Abstract
    Understanding what kinds of Web pages are the most useful for Web search engine users is a critical task in Web information retrieval (IR). Most previous work has used hyperlink analysis algorithms to solve this problem. However, little research has focused on query-independent Web data cleansing for Web IR. In this paper, we first provide an analysis of the differences between retrieval target pages and ordinary ones, based on more than 30 million Web pages obtained from both the Text Retrieval Conference (TREC) and a widely used Chinese search engine, SOGOU (www.sogou.com). We further propose a learning-based data cleansing algorithm for removing Web pages that are unlikely to be useful for user requests. We found that a large proportion of low-quality Web pages exists in both the English and the Chinese Web page corpora, and that retrieval target pages can be identified using query-independent features and cleansing algorithms. The experimental results showed that our algorithm is effective in removing a large portion of Web pages with only a small loss of retrieval target pages. This makes it possible for Web IR tools to meet a large fraction of users' needs with only a small part of the pages on the Web. These results may help Web search engines make better use of their limited storage and computation resources to improve search performance.
    Footnote
    Contribution to a special topic section "Mining Web resources for enhancing information retrieval"
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.12, pp.1884-1898
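    The cleansing step described in this record's abstract is, at its core, a binary classifier over query-independent page features. A minimal sketch, assuming invented features and training data (the paper's actual feature set and learner are not reproduced here):

      # Illustrative sketch of learning-based, query-independent page cleansing.
      # The features (page length, in-link count, URL depth) are our own stand-ins.
      from sklearn.linear_model import LogisticRegression

      train_X = [[5400, 120, 1], [300, 2, 6], [8000, 340, 2], [150, 0, 7]]
      train_y = [1, 0, 1, 0]  # 1 = likely retrieval target, 0 = low-quality page

      clf = LogisticRegression(max_iter=1000).fit(train_X, train_y)

      def keep_page(features, threshold=0.2):
          """Drop pages whose predicted usefulness falls below the threshold;
          a low threshold trades corpus size against loss of target pages."""
          return clf.predict_proba([features])[0][1] >= threshold

      print(keep_page([4200, 80, 2]))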
  2. Lam, W.; Yang, C.C.; Menczer, F.: Introduction to the special topic section on mining Web resources for enhancing information retrieval (2007) 0.01
    0.0142671205 = product of:
      0.057068482 = sum of:
        0.02102548 = weight(_text_:information in 600) [ClassicSimilarity], result of:
          0.02102548 = score(doc=600,freq=12.0), product of:
            0.06322253 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.036014426 = queryNorm
            0.3325631 = fieldWeight in 600, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=600)
        0.036043 = weight(_text_:retrieval in 600) [ClassicSimilarity], result of:
          0.036043 = score(doc=600,freq=4.0), product of:
            0.10894058 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.036014426 = queryNorm
            0.33085006 = fieldWeight in 600, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=600)
      0.25 = coord(2/8)
    
    Abstract
    The amount of information on the Web has been expanding at an enormous pace. Web documents come in a variety of genres, such as news, reports, and reviews. Traditionally, the information displayed on Web sites has been static; recently, many Web sites have begun offering content that is dynamically generated and frequently updated. It is also common for Web sites to contain information in several languages, since many countries adopt more than one language. Moreover, content may exist in multimedia formats including text, images, video, and audio.
    Footnote
    Introduction to a special topic section "Mining Web resources for enhancing information retrieval"
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.12, pp.1791-1792
  3. Sánchez, D.; Chamorro-Martínez, J.; Vila, M.A.: Modelling subjectivity in visual perception of orientation for image retrieval (2003) 0.01
    0.012762025 = product of:
      0.0510481 = sum of:
        0.0073573855 = weight(_text_:information in 1067) [ClassicSimilarity], result of:
          0.0073573855 = score(doc=1067,freq=2.0), product of:
            0.06322253 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.036014426 = queryNorm
            0.116372846 = fieldWeight in 1067, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1067)
        0.043690715 = weight(_text_:retrieval in 1067) [ClassicSimilarity], result of:
          0.043690715 = score(doc=1067,freq=8.0), product of:
            0.10894058 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.036014426 = queryNorm
            0.40105087 = fieldWeight in 1067, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=1067)
      0.25 = coord(2/8)
    
    Abstract
    In this paper we combine computer vision and data mining techniques to model high-level concepts for image retrieval, on the basis of basic perceptual features of the human visual system. High-level concepts related to these features are learned and represented by means of a set of fuzzy association rules. The concepts so acquired can be used for image retrieval, with the advantage that no query image needs to be provided. Instead, a query is formulated using the labels that identify the learned concepts as search terms, and the retrieval process calculates the relevance of an image to the query by an inference mechanism. An additional feature of our methodology is that it can capture the user's subjectivity. For that purpose, fuzzy set theory is employed to measure the user's assessment of how well an image fulfils a concept.
    Source
    Information processing and management. 39(2003) no.2, pp.251-266
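    The retrieval step in the abstract amounts to fuzzy inference: a membership degree for a perceptual feature, combined with the confidence of a learned rule, yields a relevance score for a concept label. A toy sketch, with an invented membership function and rule confidence (not the paper's learned rules):

      def mu_horizontal(angle_deg):
          """Fuzzy membership of an image's dominant edge orientation in
          'horizontal': 1.0 at 0 degrees, falling linearly to 0.0 at 45."""
          d = min(abs(angle_deg) % 180, 180 - abs(angle_deg) % 180)
          return max(0.0, 1.0 - d / 45.0)

      RULES = {"landscape": (mu_horizontal, 0.8)}  # label -> (membership fn, confidence)

      def relevance(label, angle_deg):
          membership, confidence = RULES[label]
          return membership(angle_deg) * confidence  # simple product inference

      print(relevance("landscape", 10))  # ~0.62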
  4. Ku, L.-W.; Chen, H.-H.: Mining opinions from the Web : beyond relevance retrieval (2007) 0.01
    0.011938142 = product of:
      0.047752567 = sum of:
        0.01622151 = weight(_text_:information in 605) [ClassicSimilarity], result of:
          0.01622151 = score(doc=605,freq=14.0), product of:
            0.06322253 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.036014426 = queryNorm
            0.256578 = fieldWeight in 605, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=605)
        0.03153106 = weight(_text_:retrieval in 605) [ClassicSimilarity], result of:
          0.03153106 = score(doc=605,freq=6.0), product of:
            0.10894058 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.036014426 = queryNorm
            0.28943354 = fieldWeight in 605, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=605)
      0.25 = coord(2/8)
    
    Abstract
    Documents discussing public affairs, common themes, interesting products, and so on are reported and distributed on the Web. The positive and negative opinions embedded in such documents are useful references and feedback: for governments to improve their services, for companies to market their products, and for customers to guide their purchases. Web opinion mining aims to extract, summarize, and track various aspects of subjective information on the Web. Mining subjective information enables traditional information retrieval (IR) systems to retrieve more data from human viewpoints and to provide information at a finer granularity. Opinion extraction identifies opinion holders, extracts the relevant opinion sentences, and decides their polarities. Opinion summarization recognizes the major events embedded in documents and summarizes the supportive and nonsupportive evidence. Opinion tracking captures subjective information from various genres and monitors the development of opinions along spatial and temporal dimensions. To demonstrate and evaluate the proposed opinion mining algorithms, news and bloggers' articles are adopted. Documents in the evaluation corpora are tagged at different granularities, from words and sentences to documents. In the experiments, positive and negative sentiment words and their weights are mined on the basis of Chinese word structures. The f-measure is 73.18% for verbs and 63.75% for nouns. Utilizing the mined sentiment words together with topical words, we achieve f-measures of 62.16% at the sentence level and 74.37% at the document level.
    Footnote
    Contribution to a special topic section "Mining Web resources for enhancing information retrieval"
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.12, pp.1838-1850
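    Two of the moving parts in the abstract are easy to make concrete: lexicon-based polarity scoring and the f-measure used in the evaluation. A toy sketch with an invented lexicon (the paper's Chinese sentiment lexicon and weights are not reproduced here):

      # Polarity = sign of the summed sentiment weights of matched words.
      LEXICON = {"improve": 0.7, "useful": 0.5, "fail": -0.8, "poor": -0.6}

      def polarity(tokens):
          score = sum(LEXICON.get(t, 0.0) for t in tokens)
          return "positive" if score > 0 else "negative" if score < 0 else "neutral"

      def f_measure(precision, recall):
          """Harmonic mean of precision and recall (the paper's metric)."""
          return 2 * precision * recall / (precision + recall)

      print(polarity("the service will improve and is useful".split()))  # positive
      print(round(f_measure(0.75, 0.714), 4))  # 0.7316, with example P/R values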
  5. Baeza-Yates, R.; Hurtado, C.; Mendoza, M.: Improving search engines by query clustering (2007) 0.01
    0.010088378 = product of:
      0.04035351 = sum of:
        0.01486726 = weight(_text_:information in 601) [ClassicSimilarity], result of:
          0.01486726 = score(doc=601,freq=6.0), product of:
            0.06322253 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.036014426 = queryNorm
            0.23515764 = fieldWeight in 601, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=601)
        0.02548625 = weight(_text_:retrieval in 601) [ClassicSimilarity], result of:
          0.02548625 = score(doc=601,freq=2.0), product of:
            0.10894058 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.036014426 = queryNorm
            0.23394634 = fieldWeight in 601, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=601)
      0.25 = coord(2/8)
    
    Abstract
    In this paper, we present a framework for clustering Web search engine queries, whose aim is to identify groups of queries used to search for similar information on the Web. The framework is based on a novel term vector model of queries that integrates user selections and the content of selected documents extracted from the logs of a search engine. The resulting query representation allows us to treat query clustering much like standard document clustering. We study the application of the clustering framework to two problems: relevance ranking boosting and query recommendation. Finally, we evaluate the effectiveness of our approach experimentally.
    Footnote
    Contribution to a special topic section "Mining Web resources for enhancing information retrieval"
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.12, pp.1793-1804
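    The core idea in the abstract - represent each query by the terms of the documents users clicked for it, then cluster queries like documents - can be sketched as follows, with invented log data and off-the-shelf components standing in for the paper's own model:

      from sklearn.cluster import KMeans
      from sklearn.feature_extraction.text import TfidfVectorizer

      clicked_text = {  # query -> concatenated text of its clicked documents
          "cheap flights": "airline ticket fares booking travel",
          "plane tickets": "airline booking fares flight deals",
          "python tutorial": "programming language beginner code examples",
      }
      queries = list(clicked_text)
      X = TfidfVectorizer().fit_transform(clicked_text[q] for q in queries)
      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
      print(dict(zip(queries, labels)))  # the two flight queries should share a cluster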
  6. Budzik, J.; Hammond, K.J.; Birnbaum, L.: Information access in context (2001) 0.01
    0.0100361835 = product of:
      0.040144734 = sum of:
        0.017167233 = weight(_text_:information in 3835) [ClassicSimilarity], result of:
          0.017167233 = score(doc=3835,freq=2.0), product of:
            0.06322253 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.036014426 = queryNorm
            0.27153665 = fieldWeight in 3835, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=3835)
        0.022977501 = product of:
          0.0689325 = sum of:
            0.0689325 = weight(_text_:29 in 3835) [ClassicSimilarity], result of:
              0.0689325 = score(doc=3835,freq=2.0), product of:
                0.1266875 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.036014426 = queryNorm
                0.5441145 = fieldWeight in 3835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3835)
          0.33333334 = coord(1/3)
      0.25 = coord(2/8)
    
    Date
    29. 3.2002 17:31:17
  7. Perugini, S.; Ramakrishnan, N.: Mining Web functional dependencies for flexible information access (2007) 0.01
    0.009140032 = product of:
      0.03656013 = sum of:
        0.014714771 = weight(_text_:information in 602) [ClassicSimilarity], result of:
          0.014714771 = score(doc=602,freq=8.0), product of:
            0.06322253 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.036014426 = queryNorm
            0.23274569 = fieldWeight in 602, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=602)
        0.021845357 = weight(_text_:retrieval in 602) [ClassicSimilarity], result of:
          0.021845357 = score(doc=602,freq=2.0), product of:
            0.10894058 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.036014426 = queryNorm
            0.20052543 = fieldWeight in 602, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=602)
      0.25 = coord(2/8)
    
    Abstract
    We present an approach to enhancing information access through Web structure mining, in contrast to traditional approaches involving usage mining. Specifically, we mine the hardwired hierarchical hyperlink structure of Web sites to identify patterns of term-term co-occurrences we call Web functional dependencies (FDs). Intuitively, a Web FD x -> y declares that all paths through a site involving a hyperlink labeled x also contain a hyperlink labeled y. The complete set of FDs satisfied by a site helps characterize the (flexible and expressive) interaction paradigms supported by the site, where a paradigm is the set of explorable sequences therein. We describe algorithms for mining FDs, report results from mining several hierarchical Web sites, and present several interface designs that can exploit such FDs to provide compelling user experiences.
    Footnote
    Contribution to a special topic section "Mining Web resources for enhancing information retrieval"
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.12, pp.1805-1819
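    The Web FD definition in the abstract is precise enough to check directly: x -> y holds when every path containing a hyperlink labeled x also contains one labeled y. A minimal sketch over invented paths:

      def fd_holds(paths, x, y):
          """x -> y: every path whose labels include x also includes y."""
          return all(y in path for path in paths if x in path)

      paths = [
          ["products", "laptops", "specs"],
          ["products", "phones", "specs"],
          ["about", "contact"],
      ]
      print(fd_holds(paths, "products", "specs"))  # True
      print(fd_holds(paths, "specs", "laptops"))   # False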
  8. Wang, F.L.; Yang, C.C.: Mining Web data for Chinese segmentation (2007) 0.01
    0.009091117 = product of:
      0.03636447 = sum of:
        0.010619472 = weight(_text_:information in 604) [ClassicSimilarity], result of:
          0.010619472 = score(doc=604,freq=6.0), product of:
            0.06322253 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.036014426 = queryNorm
            0.16796975 = fieldWeight in 604, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=604)
        0.025744999 = weight(_text_:retrieval in 604) [ClassicSimilarity], result of:
          0.025744999 = score(doc=604,freq=4.0), product of:
            0.10894058 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.036014426 = queryNorm
            0.23632148 = fieldWeight in 604, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=604)
      0.25 = coord(2/8)
    
    Abstract
    Modern information retrieval systems use keywords within documents as indexing terms for the search of relevant documents. As Chinese is an ideographic, character-based language, the words in its texts are not delimited by white space, and indexing of Chinese documents is impossible without a proper segmentation algorithm. Many Chinese segmentation algorithms have been proposed in the past; traditional ones cannot operate without a large dictionary or a large corpus of training data. Nowadays, the Web has become the largest corpus, ideal for Chinese segmentation. Although most search engines have problems segmenting texts into proper words, they maintain huge databases of documents and of the frequencies of character sequences in those documents. These databases are important potential resources for segmentation. In this paper, we propose a segmentation algorithm that mines Web data with the help of search engines. In addition, the Romanized pinyin of the Chinese language indicates word boundaries in the text; our algorithm is the first to utilize Romanized pinyin for segmentation. It is the first unified segmentation algorithm for the Chinese language across different geographical areas, and it is also domain independent because of the nature of the Web. Experiments have been conducted on the datasets of a recent Chinese segmentation competition. The results show that our algorithm outperforms traditional algorithms in terms of precision and recall. Moreover, our algorithm can effectively deal with the problems of segmentation ambiguity, new-word (unknown-word) detection, and stop words.
    Footnote
    Contribution to a special topic section "Mining Web resources for enhancing information retrieval"
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.12, pp.1820-1837
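    The frequency-driven segmentation idea can be sketched as a dynamic program that picks the split maximizing the product of segment probabilities. The toy frequency table below stands in for the character-sequence counts the paper mines from search engine databases:

      from functools import lru_cache
      from math import log

      FREQ = {"数据": 900, "挖掘": 700, "数": 50, "据": 40, "挖": 30, "掘": 20}
      TOTAL = sum(FREQ.values())

      def segment(text):
          @lru_cache(maxsize=None)
          def best(i):
              """Best (log-probability, segmentation) for text[i:]."""
              if i == len(text):
                  return (0.0, ())
              candidates = [(float("-inf"), (text[i:],))]  # fallback for OOV spans
              for j in range(i + 1, len(text) + 1):
                  word = text[i:j]
                  if word in FREQ:
                      score, rest = best(j)
                      candidates.append((log(FREQ[word] / TOTAL) + score, (word,) + rest))
              return max(candidates, key=lambda c: c[0])
          return list(best(0)[1])

      print(segment("数据挖掘"))  # ['数据', '挖掘']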
  9. Bath, P.A.: Data mining in health and medical information (2003) 0.01
    0.008766372 = product of:
      0.035065487 = sum of:
        0.021935485 = weight(_text_:information in 4263) [ClassicSimilarity], result of:
          0.021935485 = score(doc=4263,freq=10.0), product of:
            0.06322253 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.036014426 = queryNorm
            0.3469568 = fieldWeight in 4263, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=4263)
        0.013130001 = product of:
          0.03939 = sum of:
            0.03939 = weight(_text_:29 in 4263) [ClassicSimilarity], result of:
              0.03939 = score(doc=4263,freq=2.0), product of:
                0.1266875 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.036014426 = queryNorm
                0.31092256 = fieldWeight in 4263, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4263)
          0.33333334 = coord(1/3)
      0.25 = coord(2/8)
    
    Abstract
    Data mining (DM) is part of a process by which information can be extracted from data or databases and used to inform decision making in a variety of contexts (Benoit, 2002; Michalski, Bratko & Kubat, 1997). DM includes a range of tools and methods for extracting information; their use in the commercial sector for knowledge extraction and discovery has been one of the main driving forces in their development (Adriaans & Zantinge, 1996; Benoit, 2002). DM has been developed and applied in numerous areas. This review describes its use in analyzing health and medical information.
    Date
    23.10.2005 18:29:03
    Source
    Annual review of information science and technology. 38(2004), pp.331-370
  10. Ohly, H.P.: Bibliometric mining : added value from document analysis and retrieval (2008) 0.01
    0.008062568 = product of:
      0.03225027 = sum of:
        0.010404914 = weight(_text_:information in 2386) [ClassicSimilarity], result of:
          0.010404914 = score(doc=2386,freq=4.0), product of:
            0.06322253 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.036014426 = queryNorm
            0.16457605 = fieldWeight in 2386, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2386)
        0.021845357 = weight(_text_:retrieval in 2386) [ClassicSimilarity], result of:
          0.021845357 = score(doc=2386,freq=2.0), product of:
            0.10894058 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.036014426 = queryNorm
            0.20052543 = fieldWeight in 2386, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=2386)
      0.25 = coord(2/8)
    
    Abstract
    Bibliometrics is understood as the statistical analysis of scientific structures and processes. The analyzed data result from information and administrative actions. The demand for quality judgments and for the discovery of new structures and information means that bibliometrics takes on an exploratory and decision-supporting role. To the extent that it has acquired important features of data mining, the analysis of text and Internet material can be viewed as an additional challenge. Understood as an evaluative approach, bibliometrics can also be seen to apply inference procedures as well as navigation tools.
  11. Shi, X.; Yang, C.C.: Mining related queries from Web search engine query logs using an improved association rule mining model (2007) 0.01
    0.007978535 = product of:
      0.03191414 = sum of:
        0.013709677 = weight(_text_:information in 597) [ClassicSimilarity], result of:
          0.013709677 = score(doc=597,freq=10.0), product of:
            0.06322253 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.036014426 = queryNorm
            0.21684799 = fieldWeight in 597, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=597)
        0.018204464 = weight(_text_:retrieval in 597) [ClassicSimilarity], result of:
          0.018204464 = score(doc=597,freq=2.0), product of:
            0.10894058 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.036014426 = queryNorm
            0.16710453 = fieldWeight in 597, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=597)
      0.25 = coord(2/8)
    
    Abstract
    With the overwhelming volume of information available, the task of finding relevant information on a given topic on the Web is becoming increasingly difficult. Web search engines have hence become one of the most popular solutions available on the Web. However, it has never been easy for novice users to organize and represent their information needs using simple queries; users have to keep modifying their input queries until they get the expected results. It is therefore often desirable for search engines to suggest related queries to users. Moreover, by identifying these related queries, search engines can perform optimizations on their systems, such as query expansion and file indexing. In this work we propose a method that suggests a list of related queries given an initial input query. The related queries are based on the query log of queries previously submitted by human users, and can be identified using an enhanced model of association rules. Users can utilize the suggested related queries to tune or redirect the search process. Our method not only discovers the related queries but also ranks them according to the degree of their relatedness. Unlike many rival techniques, it also performs reasonably well on less frequent input queries.
    Footnote
    Contribution to a special topic section "Mining Web resources for enhancing information retrieval"
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.12, pp.1871-1883
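    The association-rule view in the abstract can be sketched with plain session co-occurrence: queries appearing in the same user session form rules q -> q', ranked by confidence. The sessions, support threshold, and ranking below are illustrative, not the paper's enhanced model:

      from collections import Counter
      from itertools import permutations

      sessions = [
          {"jaguar", "jaguar car", "jaguar price"},
          {"jaguar", "jaguar animal"},
          {"jaguar", "jaguar car"},
      ]
      pair_count, query_count = Counter(), Counter()
      for s in sessions:
          query_count.update(s)
          pair_count.update(permutations(s, 2))  # count ordered co-occurrence pairs

      def related(query, min_support=2):
          """Rules query -> other, ranked by confidence = P(other | query)."""
          rules = [(other, n / query_count[query])
                   for (q, other), n in pair_count.items()
                   if q == query and n >= min_support]
          return sorted(rules, key=lambda r: -r[1])

      print(related("jaguar"))  # [('jaguar car', 0.666...)]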
  12. Fenstermacher, K.D.; Ginsburg, M.: Client-side monitoring for Web mining (2003) 0.01
    0.0073006856 = product of:
      0.029202743 = sum of:
        0.0073573855 = weight(_text_:information in 1611) [ClassicSimilarity], result of:
          0.0073573855 = score(doc=1611,freq=2.0), product of:
            0.06322253 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.036014426 = queryNorm
            0.116372846 = fieldWeight in 1611, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1611)
        0.021845357 = weight(_text_:retrieval in 1611) [ClassicSimilarity], result of:
          0.021845357 = score(doc=1611,freq=2.0), product of:
            0.10894058 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.036014426 = queryNorm
            0.20052543 = fieldWeight in 1611, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=1611)
      0.25 = coord(2/8)
    
    Footnote
    Part of a special issue "Web retrieval and mining: A machine learning perspective"
    Source
    Journal of the American Society for Information Science and Technology. 54(2003) no.7, pp.625-637
  13. Haravu, L.J.; Neelameghan, A.: Text mining and data mining in knowledge organization and discovery : the making of knowledge-based products (2003) 0.01
    0.0072059836 = product of:
      0.028823934 = sum of:
        0.010619472 = weight(_text_:information in 5653) [ClassicSimilarity], result of:
          0.010619472 = score(doc=5653,freq=6.0), product of:
            0.06322253 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.036014426 = queryNorm
            0.16796975 = fieldWeight in 5653, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5653)
        0.018204464 = weight(_text_:retrieval in 5653) [ClassicSimilarity], result of:
          0.018204464 = score(doc=5653,freq=2.0), product of:
            0.10894058 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.036014426 = queryNorm
            0.16710453 = fieldWeight in 5653, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5653)
      0.25 = coord(2/8)
    
    Abstract
    Discusses the importance of knowledge organization in the context of the information overload caused by the vast quantities of data and information accessible on the internal and external networks of an organization. Defines the characteristics of a knowledge-based product. Elaborates on the techniques and applications of text mining in developing knowledge products. Presents two approaches, as case studies, to the making of knowledge products: (1) the steps and processes in the planning, design and development of a composite multilingual multimedia CD product, with potential international, inter-cultural end users in view, and (2) the application of natural language processing software in text mining. Using text mining software, it is possible to link concept terms from a processed text to a related thesaurus, glossary, the schedules of a classification scheme, and facet-structured subject representations. Concludes that the products of text mining and data mining could be made more useful if the features of a faceted scheme for subject classification were incorporated into text mining techniques and products.
    Content
    Contribution to a special issue "Knowledge organization and classification in international information retrieval"
  14. Liu, Y.; Huang, X.; An, A.: Personalized recommendation with adaptive mixture of markov models (2007) 0.01
    0.0072059836 = product of:
      0.028823934 = sum of:
        0.010619472 = weight(_text_:information in 606) [ClassicSimilarity], result of:
          0.010619472 = score(doc=606,freq=6.0), product of:
            0.06322253 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.036014426 = queryNorm
            0.16796975 = fieldWeight in 606, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=606)
        0.018204464 = weight(_text_:retrieval in 606) [ClassicSimilarity], result of:
          0.018204464 = score(doc=606,freq=2.0), product of:
            0.10894058 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.036014426 = queryNorm
            0.16710453 = fieldWeight in 606, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=606)
      0.25 = coord(2/8)
    
    Abstract
    With more and more information available on the Internet, the task of making personalized recommendations to assist the user's navigation has become increasingly important. Considering that millions of users with different backgrounds may access a Web site every day, it is infeasible to build a separate recommendation system for each user. To address this problem, clustering techniques can first be employed to discover user groups; user navigation patterns for each group can then be discovered, allowing the adaptation of a Web site to the interests of each individual group. In this paper, we propose to model user access sequences as stochastic processes, and an approach based on a mixture of Markov models is taken to cluster users and to capture the sequential relationships inherent in user access histories. Several important issues that arise in constructing the Markov models are also addressed. The first issue lies in the complexity of the mixture of Markov models. To improve the efficiency of building and maintaining the mixture, we develop a lightweight adaptive algorithm that updates the model parameters without recomputing them from scratch. The second issue concerns the proper selection of training data for building the mixture of Markov models. We investigate two different training data selection strategies and perform extensive experiments to compare their effectiveness on a real dataset generated by a Web-based knowledge management system, Livelink.
    Footnote
    Contribution to a special topic section "Mining Web resources for enhancing information retrieval"
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.12, pp.1851-1870
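    A single component of the proposed model - a first-order Markov model over page visits, updated incrementally as new sessions arrive - can be sketched as follows. The data and update rule are illustrative, not the paper's mixture model or its adaptive algorithm:

      from collections import Counter, defaultdict

      counts = defaultdict(Counter)  # page -> Counter of next pages

      def update(session):
          """Fold one user session (a list of page ids) into the counts,
          without refitting anything from scratch."""
          for a, b in zip(session, session[1:]):
              counts[a][b] += 1

      def next_page_probs(page):
          total = sum(counts[page].values())
          return {b: n / total for b, n in counts[page].items()}

      update(["home", "search", "docs"])
      update(["home", "search", "pricing"])
      print(next_page_probs("search"))  # {'docs': 0.5, 'pricing': 0.5}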
  15. Lihui, C.; Lian, C.W.: Using Web structure and summarisation techniques for Web content mining (2005) 0.01
    0.0072059836 = product of:
      0.028823934 = sum of:
        0.010619472 = weight(_text_:information in 1046) [ClassicSimilarity], result of:
          0.010619472 = score(doc=1046,freq=6.0), product of:
            0.06322253 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.036014426 = queryNorm
            0.16796975 = fieldWeight in 1046, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1046)
        0.018204464 = weight(_text_:retrieval in 1046) [ClassicSimilarity], result of:
          0.018204464 = score(doc=1046,freq=2.0), product of:
            0.10894058 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.036014426 = queryNorm
            0.16710453 = fieldWeight in 1046, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1046)
      0.25 = coord(2/8)
    
    Abstract
    The dynamic nature and size of the Internet can make it difficult to find relevant information. Most users typically express their information need via short queries to search engines, and they often have to sift through the search results based on the relevance ranking set by the search engines, making the process of relevance judgement time-consuming. In this paper, we describe a novel representation technique that makes use of the Web structure together with summarisation techniques to better represent knowledge in actual Web documents. We name the proposed technique the Semantic Virtual Document (SVD). We discuss how the proposed SVD can be used together with a suitable clustering algorithm to achieve an automatic content-based categorization of similar Web documents. The auto-categorization facility, together with a "tree-like" graphical user interface (GUI) for post-retrieval document browsing, enhances the relevance judgement process for Internet users. Furthermore, we introduce how our cluster-biased automatic query expansion technique can be used to overcome the ambiguity of the short queries typically given by users. We outline our experimental design to evaluate the effectiveness of the proposed SVD for representation, and present a prototype called iSEARCH (Intelligent SEarch And Review of Cluster Hierarchy) for Web content mining. Our results confirm, quantify and extend previous research using Web structure and summarisation techniques, introducing novel techniques for knowledge representation to enhance Web content mining.
    Source
    Information processing and management. 41(2005) no.5, pp.1225-1242
  16. Srinivasan, P.: Text mining in biomedicine : challenges and opportunities (2006) 0.00
    0.0043012216 = product of:
      0.017204886 = sum of:
        0.0073573855 = weight(_text_:information in 1497) [ClassicSimilarity], result of:
          0.0073573855 = score(doc=1497,freq=2.0), product of:
            0.06322253 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.036014426 = queryNorm
            0.116372846 = fieldWeight in 1497, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1497)
        0.0098475 = product of:
          0.0295425 = sum of:
            0.0295425 = weight(_text_:29 in 1497) [ClassicSimilarity], result of:
              0.0295425 = score(doc=1497,freq=2.0), product of:
                0.1266875 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.036014426 = queryNorm
                0.23319192 = fieldWeight in 1497, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1497)
          0.33333334 = coord(1/3)
      0.25 = coord(2/8)
    
    Date
    29. 2.2008 17:14:09
    Source
    Knowledge organization, information systems and other essays: Professor A. Neelameghan Festschrift. Ed. by K.S. Raghavan and K.N. Prasad
  17. Raan, A.F.J. van; Noyons, E.C.M.: Discovery of patterns of scientific and technological development and knowledge transfer (2002) 0.00
    0.0042192535 = product of:
      0.016877014 = sum of:
        0.008670762 = weight(_text_:information in 3603) [ClassicSimilarity], result of:
          0.008670762 = score(doc=3603,freq=4.0), product of:
            0.06322253 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.036014426 = queryNorm
            0.13714671 = fieldWeight in 3603, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3603)
        0.008206251 = product of:
          0.024618752 = sum of:
            0.024618752 = weight(_text_:29 in 3603) [ClassicSimilarity], result of:
              0.024618752 = score(doc=3603,freq=2.0), product of:
                0.1266875 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.036014426 = queryNorm
                0.19432661 = fieldWeight in 3603, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3603)
          0.33333334 = coord(1/3)
      0.25 = coord(2/8)
    
    Source
    Gaining insight from research information (CRIS2002): Proceedings of the 6th International Conference on Current Research Information Systems, University of Kassel, August 29 - 31, 2002. Eds: W. Adamczak and A. Nase
  18. Chen, S.Y.; Liu, X.: The contribution of data mining to information science : making sense of it all (2005) 0.00
    0.0026012284 = product of:
      0.020809827 = sum of:
        0.020809827 = weight(_text_:information in 4655) [ClassicSimilarity], result of:
          0.020809827 = score(doc=4655,freq=4.0), product of:
            0.06322253 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.036014426 = queryNorm
            0.3291521 = fieldWeight in 4655, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=4655)
      0.125 = coord(1/8)
    
    Source
    Journal of information science. 30(2005) no.6, pp.550-
  19. Lischka, K.: Spurensuche im Datenwust : Data-Mining-Software fahndet nach kriminellen Mitarbeitern, guten Kunden - und bald vielleicht auch nach Terroristen (2002) 0.00
    0.0025204786 = product of:
      0.010081914 = sum of:
        0.005202457 = weight(_text_:information in 1178) [ClassicSimilarity], result of:
          0.005202457 = score(doc=1178,freq=4.0), product of:
            0.06322253 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.036014426 = queryNorm
            0.08228803 = fieldWeight in 1178, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1178)
        0.0048794574 = product of:
          0.014638372 = sum of:
            0.014638372 = weight(_text_:22 in 1178) [ClassicSimilarity], result of:
              0.014638372 = score(doc=1178,freq=2.0), product of:
                0.12611638 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036014426 = queryNorm
                0.116070345 = fieldWeight in 1178, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1178)
          0.33333334 = coord(1/3)
      0.25 = coord(2/8)
    
    Content
    "Ob man als Terrorist einen Anschlag gegen die Vereinigten Staaten plant, als Kassierer Scheine aus der Kasse unterschlägt oder für bestimmte Produkte besonders gerne Geld ausgibt - einen Unterschied macht Data-Mining-Software da nicht. Solche Programme analysieren riesige Daten- mengen und fällen statistische Urteile. Mit diesen Methoden wollen nun die For- scher des "Information Awaren in den Vereinigten Staaten Spuren von Terroristen in den Datenbanken von Behörden und privaten Unternehmen wie Kreditkartenfirmen finden. 200 Millionen Dollar umfasst der Jahresetat für die verschiedenen Forschungsprojekte. Dass solche Software in der Praxis funktioniert, zeigen die steigenden Umsätze der Anbieter so genannter Customer-Relationship-Management-Software. Im vergangenen Jahr ist das Potenzial für analytische CRM-Anwendungen laut dem Marktforschungsinstitut IDC weltweit um 22 Prozent gewachsen, bis zum Jahr 2006 soll es in Deutschland mit einem jährlichen Plus von 14,1 Prozent so weitergehen. Und das trotz schwacher Konjunktur - oder gerade deswegen. Denn ähnlich wie Data-Mining der USRegierung helfen soll, Terroristen zu finden, entscheiden CRM-Programme heute, welche Kunden für eine Firma profitabel sind. Und welche es künftig sein werden, wie Manuela Schnaubelt, Sprecherin des CRM-Anbieters SAP, beschreibt: "Die Kundenbewertung ist ein zentraler Bestandteil des analytischen CRM. Sie ermöglicht es Unternehmen, sich auf die für sie wichtigen und richtigen Kunden zu fokussieren. Darüber hinaus können Firmen mit speziellen Scoring- Verfahren ermitteln, welche Kunden langfristig in welchem Maße zum Unternehmenserfolg beitragen." Die Folgen der Bewertungen sind für die Betroffenen nicht immer positiv: Attraktive Kunden profitieren von individuellen Sonderangeboten und besonderer Zuwendung. Andere hängen vielleicht so lauge in der Warteschleife des Telefonservice, bis die profitableren Kunden abgearbeitet sind. So könnte eine praktische Umsetzung dessen aussehen, was SAP-Spreche-rin Schnaubelt abstrakt beschreibt: "In vielen Unternehmen wird Kundenbewertung mit der klassischen ABC-Analyse durchgeführt, bei der Kunden anhand von Daten wie dem Umsatz kategorisiert werden. A-Kunden als besonders wichtige Kunden werden anders betreut als C-Kunden." Noch näher am geplanten Einsatz von Data-Mining zur Terroristenjagd ist eine Anwendung, die heute viele Firmen erfolgreich nutzen: Sie spüren betrügende Mitarbeiter auf. Werner Sülzer vom großen CRM-Anbieter NCR Teradata beschreibt die Möglichkeiten so: "Heute hinterlässt praktisch jeder Täter - ob Mitarbeiter, Kunde oder Lieferant - Datenspuren bei seinen wirtschaftskriminellen Handlungen. Es muss vorrangig darum gehen, einzelne Spuren zu Handlungsmustern und Täterprofilen zu verdichten. Das gelingt mittels zentraler Datenlager und hoch entwickelter Such- und Analyseinstrumente." Von konkreten Erfolgen sprich: Entlas-sungen krimineller Mitarbeiter-nach Einsatz solcher Programme erzählen Unternehmen nicht gerne. Matthias Wilke von der "Beratungsstelle für Technologiefolgen und Qualifizierung" (BTQ) der Gewerkschaft Verdi weiß von einem Fall 'aus der Schweiz. Dort setzt die Handelskette "Pick Pay" das Programm "Lord Lose Prevention" ein. Zwei Monate nach Einfüh-rung seien Unterschlagungen im Wert von etwa 200 000 Franken ermittelt worden. Das kostete mehr als 50 verdächtige Kassiererinnen und Kassierer den Job.
    Jede Kasse schickt die Daten zu Stornos, Rückgaben, Korrekturen und dergleichen an eine zentrale Datenbank. Aus den Informationen errechnet das Programm Kassiererprofile. Wessen Arbeit stark Durchschnitt abweicht, macht sich verdächtig. Die Kriterien" legen im Einzelnen die Revisionsabteilungen fest, doch generell gilt: "Bei Auffälligkeiten wie überdurchschnittlichvielenStornierungen, Off nen der Kassenschublade ohne Verkauf nach einem Storno oder Warenrücknahmen ohne Kassenbon, können die Vorgänge nachträglich einzelnen Personen zugeordnet werden", sagt Rene Schiller, Marketing-Chef des Lord-Herstellers Logware. Ein Kündigungsgrund ist eine solche Datensammlung vor Gericht nicht. Doch auf der Basis können Unternehmen gezielt Detektive einsetzen. Oder sie konfrontieren die Mitarbeiter mit dem Material; woraufhin Schuldige meist gestehen. Wilke sieht Programme wie Lord kritisch:"Jeder, der in dem Raster auffällt, kann ein potenzieller Betrüger oder Dieb sein und verdient besondere Beobachtung." Dabei könne man vom Standard abweichen, weil man unausgeschlafen und deshalb unkonzentriert sei. Hier tut sich für Wilke die Gefahr technisierter Leistungskontrolle auf. "Es ist ja nicht schwierig, mit den Programmen zu berechnen, wie lange beispielsweise das Kassieren eines Samstagseinkaufs durchschnittlich dauert." Die Betriebsräte - ihre Zustimmung ist beim Einsatz technischer Kon trolleinrichtungen nötig - verurteilen die wertende Software weniger eindeutig. Im Gegenteil: Bei Kaufhof und Edeka haben sie dem Einsatz zugestimmt. Denn: "Die wollen ja nicht, dass ganze Abteilungen wegen Inventurverlusten oder dergleichen unter Generalverdacht fallen", erklärt Gewerkschaftler Wilke: "Angesichts der Leistungen kommerzieller Data-Mining-Programme verblüfft es, dass in den Vereinigten Staaten das "Information Awareness Office" noch drei Jahre für Forschung und Erprobung der eigenen Programme veranschlagt. 2005 sollen frühe Prototypen zur Terroristensuche einesgetz werden. Doch schon jetzt regt sich Protest. Datenschützer wie Marc Botenberg vom Informationszentrum für Daten schutz sprechen vom "ehrgeizigsten öffentlichen Überwachungssystem, das je vorgeschlagen wurde". Sie warnen besonders davor, Daten aus der Internetnutzung und private Mails auszuwerten. Das Verteidigungsministerium rudert zurück. Man denke nicht daran, über die Software im Inland aktiv zu werden. "Das werden die Geheimdienste, die Spionageabwehr und die Strafverfolger tun", sagt Unterstaatssekretär Edward Aldridge. Man werde während der Entwicklung und der Tests mit konstruierten und einigen - aus Sicht der Datenschützer unbedenklichen - realen Informationen arbeiten. Zu denken gibt jedoch Aldriges Antwort auf die Frage, warum so viel Geld für die Entwicklung von Übersetzungssoftware eingeplant ist: Damit man Datenbanken in anderen Sprachen nutzen könne - sofern man auf sie rechtmäßigen Zugriff bekommt."
  20. Keim, D.A.: Data Mining mit bloßem Auge (2002) 0.00
    0.002461875 = product of:
      0.019695 = sum of:
        0.019695 = product of:
          0.059085 = sum of:
            0.059085 = weight(_text_:29 in 1086) [ClassicSimilarity], result of:
              0.059085 = score(doc=1086,freq=2.0), product of:
                0.1266875 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.036014426 = queryNorm
                0.46638384 = fieldWeight in 1086, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1086)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Date
    31.12.1996 19:29:41

Languages

  • English: 37
  • German: 10