Search (132 results, page 1 of 7)

  • theme_ss:"Data Mining"
  1. Classification, automation, and new media : Proceedings of the 24th Annual Conference of the Gesellschaft für Klassifikation e.V., University of Passau, March 15 - 17, 2000 (2002) 0.07
    0.07491614 = product of:
      0.26220647 = sum of:
        0.009061059 = weight(_text_:information in 5997) [ClassicSimilarity], result of:
          0.009061059 = score(doc=5997,freq=4.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.13714671 = fieldWeight in 5997, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5997)
        0.2531454 = weight(_text_:kongress in 5997) [ClassicSimilarity], result of:
          0.2531454 = score(doc=5997,freq=16.0), product of:
            0.24693015 = queryWeight, product of:
              6.5610886 = idf(docFreq=169, maxDocs=44218)
              0.037635546 = queryNorm
            1.0251701 = fieldWeight in 5997, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              6.5610886 = idf(docFreq=169, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5997)
      0.2857143 = coord(2/7)
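The score breakdown above is Lucene's ClassicSimilarity (TF-IDF) explanation. A minimal Python sketch, assuming Lucene's classic formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)), reproduces the leaf values shown for this record:

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def tf(freq):
    # ClassicSimilarity: tf = sqrt(term frequency in the field)
    return math.sqrt(freq)

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    query_weight = idf(doc_freq, max_docs) * query_norm
    field_weight = tf(freq) * idf(doc_freq, max_docs) * field_norm
    return query_weight * field_weight

# the "kongress" clause above: freq=16, docFreq=169, maxDocs=44218
kongress = term_score(16.0, 169, 44218, 0.037635546, 0.0390625)
# the "information" clause: freq=4, docFreq=20772
information = term_score(4.0, 20772, 44218, 0.037635546, 0.0390625)
# coord(2/7): only 2 of the 7 query clauses matched this document
total = (kongress + information) * (2 / 7)
```

With these inputs, `kongress` comes out near 0.2531 and `total` near 0.0749, matching the explanation tree above.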
    
    Abstract
    Given the huge amount of information on the internet and in practically every domain of knowledge today, knowledge discovery calls for automation. The book deals with methods from classification and data analysis that respond effectively to this rapidly growing challenge. The interested reader will find new methodological insights as well as applications in economics, management science, finance, and marketing, and in pattern recognition, biology, health, and archaeology.
    Content
    Data Analysis, Statistics, and Classification.- Pattern Recognition and Automation.- Data Mining, Information Processing, and Automation.- New Media, Web Mining, and Automation.- Applications in Management Science, Finance, and Marketing.- Applications in Medicine, Biology, Archaeology, and Others.- Author Index.- Subject Index.
    RSWK
    Datenanalyse / Kongress / Passau <2000>
    Automatische Klassifikation / Kongress / Passau <2000>
    Data Mining / Kongress / Passau <2000>
    World Wide Web / Wissensorganisation / Kongress / Passau <2000>
    Subject
    Datenanalyse / Kongress / Passau <2000>
    Automatische Klassifikation / Kongress / Passau <2000>
    Data Mining / Kongress / Passau <2000>
    World Wide Web / Wissensorganisation / Kongress / Passau <2000>
  2. Hofstede, A.H.M. ter; Proper, H.A.; Van der Weide, T.P.: Exploiting fact verbalisation in conceptual information modelling (1997) 0.02
    Abstract
    Focuses on the information modelling side of conceptual modelling. Deals with the exploitation of fact verbalisations after the actual information system is finished. Verbalisations are used as input for the design of the so-called information model. Exploits these verbalisations in four directions: their use in a conceptual query language, the verbalisation of instances, the description of the contents of a database, and the verbalisation of queries in a computer-supported query environment. Provides an example session with an envisioned tool for end-user query formulation that exploits the verbalisations.
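As a toy illustration of instance verbalisation (the fact types and templates below are invented, not taken from the paper), a fact type can carry a sentence template that turns stored instances into natural-language sentences; the same templates could in principle drive a conceptual query language by matching user sentences back onto fact types:

```python
# Each hypothetical fact type carries a sentence template; instances are
# verbalised by filling in the slots.
TEMPLATES = {
    "works_for": "{0} works for {1}.",
    "located_in": "{0} is located in {1}.",
}

def verbalise(fact_type, *args):
    # fill the template for this fact type with the instance values
    return TEMPLATES[fact_type].format(*args)

sentence = verbalise("works_for", "Dr. Jones", "the university library")
```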
    Date
    5. 4.1996 15:29:15
    Source
    Information systems. 22(1997) nos.5/6, S.349-385
  3. Amir, A.; Feldman, R.; Kashi, R.: A new and versatile method for association generation (1997) 0.02
    Date
    5. 4.1996 15:29:15
    Source
    Information systems. 22(1997) nos.5/6, S.333-347
  4. Liu, Y.; Zhang, M.; Cen, R.; Ru, L.; Ma, S.: Data cleansing for Web information retrieval using query independent features (2007) 0.02
    Abstract
    Understanding what kinds of Web pages are the most useful for Web search engine users is a critical task in Web information retrieval (IR). Most previous work used hyperlink analysis algorithms to solve this problem, but little research has focused on query-independent Web data cleansing for Web IR. In this paper, we first analyze the differences between retrieval target pages and ordinary ones, based on more than 30 million Web pages obtained from both the Text Retrieval Conference (TREC) and a widely used Chinese search engine, SOGOU (www.sogou.com). We then propose a learning-based data cleansing algorithm for removing Web pages that are unlikely to be useful for user requests. We found that a large proportion of low-quality Web pages exists in both the English and the Chinese corpus, and that retrieval target pages can be identified using query-independent features and cleansing algorithms. The experimental results show that our algorithm is effective in removing a large portion of Web pages with only a small loss in retrieval target pages, making it possible for Web IR tools to meet a large fraction of users' needs with only a small part of the pages on the Web. These results may help Web search engines make better use of their limited storage and computation resources to improve search performance.
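A hedged sketch of the cleansing idea: score each page on query-independent features and discard low-scoring pages. The feature set and weights below are invented for illustration, not the ones learned in the article:

```python
# Query-independent cleansing features for a Web page (illustrative only).
def features(page):
    return {
        "doc_length": len(page["text"].split()),              # longer pages carry more content
        "in_links": page["in_links"],                         # link-analysis evidence
        "url_depth": page["url"].rstrip("/").count("/") - 2,  # shallow URLs preferred
    }

def keep_page(page, w=(0.001, 0.1, -0.5), bias=-1.0):
    # a linear score over the features; pages scoring <= 0 are cleansed away
    f = features(page)
    score = (w[0] * f["doc_length"] + w[1] * f["in_links"]
             + w[2] * f["url_depth"] + bias)
    return score > 0

page_good = {"text": "word " * 500, "in_links": 12, "url": "http://example.com/a"}
page_spam = {"text": "spam", "in_links": 0, "url": "http://example.com/a/b/c/d"}
```

In a real system the weights would be learned from labelled retrieval target pages rather than fixed by hand.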
    Footnote
    Contribution to a special topic section "Mining Web resources for enhancing information retrieval"
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.12, S.1884-1898
  5. Lam, W.; Yang, C.C.; Menczer, F.: Introduction to the special topic section on mining Web resources for enhancing information retrieval (2007) 0.02
    Abstract
    The amount of information on the Web has been expanding at an enormous pace. There is a variety of Web documents in different genres, such as news, reports, and reviews. Traditionally, the information displayed on Web sites has been static; recently, many Web sites offer content that is dynamically generated and frequently updated. It is also common for Web sites to contain information in several languages, since many countries use more than one language. Moreover, content may exist in multimedia formats including text, images, video, and audio.
    Footnote
    Introduction to a special topic section "Mining Web resources for enhancing information retrieval"
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.12, S.1791-1792
  6. Ayadi, H.; Torjmen-Khemakhem, M.; Daoud, M.; Huang, J.X.; Jemaa, M.B.: Mining correlations between medically dependent features and image retrieval models for query classification (2017) 0.02
    Abstract
    The abundance of medical resources has encouraged the development of systems that allow efficient searches of information in large medical image data sets. State-of-the-art image retrieval models fall into three categories: content-based (visual) models, textual models, and combined models. Content-based models use visual features to answer image queries, textual models use word matching to answer textual queries, and combined models use both textual and visual features. Nevertheless, most previous work in this field has used the same image retrieval model regardless of the query type. In this article, we define a list of generic and specific medical query features and exploit them in an association rule mining technique to discover correlations between query features and image retrieval models. Based on these rules, we propose an associative classifier (NaiveClass) to find the most suitable retrieval model for a new textual query. We also propose a second associative classifier (SmartClass) to select the most appropriate default class for the query. Experiments are performed on Medical ImageCLEF queries from 2008 to 2012 to evaluate the impact of the proposed query features on classification performance. The results show that combining our proposed specific and generic query features is effective for query classification.
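The rule-mining step can be sketched roughly as follows; the query features and training data here are invented, and real association-rule mining (and the paper's NaiveClass/SmartClass classifiers) is considerably richer:

```python
from collections import Counter, defaultdict

# Past queries labelled with the retrieval model that worked best; the
# feature names are hypothetical stand-ins for the paper's medical features.
history = [
    ({"has_modality", "has_anatomy"}, "visual"),
    ({"has_modality"}, "visual"),
    ({"has_disease"}, "textual"),
    ({"has_disease", "has_anatomy"}, "textual"),
]

def mine_rules(history, min_conf=0.6):
    # derive rules "query feature -> retrieval model", keeping only those
    # whose confidence (co-occurrence / feature support) clears the threshold
    support = Counter()
    joint = defaultdict(Counter)
    for feats, model in history:
        for f in feats:
            support[f] += 1
            joint[f][model] += 1
    rules = {}
    for f in support:
        model, n = joint[f].most_common(1)[0]
        conf = n / support[f]
        if conf >= min_conf:
            rules[f] = (model, conf)
    return rules

rules = mine_rules(history)
```

A new query is then routed to the model predicted by the rules its features fire.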
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.5, S.1323-1334
  7. Sánchez, D.; Chamorro-Martínez, J.; Vila, M.A.: Modelling subjectivity in visual perception of orientation for image retrieval (2003) 0.02
    Abstract
    In this paper we combine computer vision and data mining techniques to model high-level concepts for image retrieval on the basis of basic perceptual features of the human visual system. High-level concepts related to these features are learned and represented by means of a set of fuzzy association rules. The concepts so acquired can be used for image retrieval, with the advantage that an image need not be provided as a query. Instead, a query is formulated using the labels that identify the learned concepts as search terms, and the retrieval process calculates the relevance of an image to the query by an inference mechanism. An additional feature of our methodology is that it can capture the user's subjectivity: fuzzy set theory is employed to measure users' assessments of the fulfilment of a concept by an image.
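A minimal sketch of the fuzzy-set side of this approach, assuming an invented low-level feature "edge coherence" in [0, 1]: a trapezoidal membership function gives the degree to which an image fulfils a perceptual concept, and retrieval can rank images by that degree, so no query image is needed:

```python
def trapezoid(x, a, b, c, d):
    # standard trapezoidal membership function: 0 outside [a, d],
    # 1 on [b, c], linear ramps in between
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def oriented_degree(edge_coherence):
    # degree to which an image fulfils the concept "oriented"
    # (breakpoints are invented for illustration)
    return trapezoid(edge_coherence, 0.2, 0.5, 1.0, 1.01)

deg = oriented_degree(0.35)  # partially oriented
```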
    Source
    Information processing and management. 39(2003) no.2, S.251-266
  8. Sarnikar, S.; Zhang, Z.; Zhao, J.L.: Query-performance prediction for effective query routing in domain-specific repositories (2014) 0.02
    Abstract
    The effective use of corporate memory is becoming increasingly important because every aspect of e-business requires access to information repositories. Unfortunately, less-than-satisfying effectiveness in state-of-the-art information-retrieval techniques is well known, even for some of the best search engines such as Google. In this study, the authors resolve this retrieval ineffectiveness problem by developing a new framework for predicting query performance, which is the first step toward better retrieval effectiveness. Specifically, they examine the relationship between query performance and query context. A query context consists of the query itself, the document collection, and the interaction between the two. The authors first analyze the characteristics of query context and develop various features for predicting query performance. Then, they propose a context-sensitive model for predicting query performance based on the characteristics of the query and the document collection. Finally, they validate this model with respect to five real-world collections of documents and demonstrate its utility in routing queries to the correct repository with high accuracy.
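For flavour, one classic query-performance predictor is the clarity score: the KL divergence between a language model built from the query's top-ranked text and the whole-collection model. The article's context-sensitive model goes well beyond this sketch, which is included only to make the notion of "predicting query performance" concrete:

```python
import math
from collections import Counter

def clarity(top_ranked_text, collection_text):
    # KL divergence between the query model (estimated crudely from the
    # top-ranked text) and the collection model, with add-one smoothing
    q = Counter(top_ranked_text.lower().split())
    c = Counter(collection_text.lower().split())
    qn, cn = sum(q.values()), sum(c.values())
    score = 0.0
    for t in q:
        p_q = q[t] / qn
        p_c = (c.get(t, 0) + 1) / (cn + len(c))
        score += p_q * math.log2(p_q / p_c)
    return score

collection = "jaguar cat dog bird fish " * 20
focused = clarity("jaguar jaguar cat", collection)         # concentrated: high clarity
diffuse = clarity("dog bird fish cat jaguar", collection)  # collection-like: low clarity
```

A low-clarity query looks like the collection as a whole, which predicts poor retrieval and suggests routing it elsewhere.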
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.8, S.1597-1614
  9. Saz, J.T.: Perspectivas en recuperacion y explotacion de informacion electronica : el 'data mining' (1997) 0.01
    Footnote
    Translated title: Perspectives on the retrieval and exploitation of electronic information: data mining
  10. Knowledge management in fuzzy databases (2000) 0.01
    Abstract
    The volume presents recent developments in the introduction of fuzzy, probabilistic, and rough elements into basic components of fuzzy databases, and their use (notably in querying and information retrieval), from the point of view of data mining and knowledge discovery. The main novel aspect of the volume is that issues related to the use of fuzzy elements in databases, database querying, information retrieval, etc. are presented and discussed from the point of view, and for the purposes, of data mining and knowledge discovery, which have been hot topics in recent years.
  11. Ku, L.-W.; Chen, H.-H.: Mining opinions from the Web : beyond relevance retrieval (2007) 0.01
    Abstract
    Documents discussing public affairs, common themes, interesting products, and so on are reported and distributed on the Web. The positive and negative opinions embedded in such documents are useful references and feedback for governments improving their services, for companies marketing their products, and for customers making purchases. Web opinion mining aims to extract, summarize, and track various aspects of subjective information on the Web. Mining subjective information enables traditional information retrieval (IR) systems to retrieve more data from human viewpoints and to provide information with finer granularity. Opinion extraction identifies opinion holders, extracts the relevant opinion sentences, and decides their polarities. Opinion summarization recognizes the major events embedded in documents and summarizes the supportive and nonsupportive evidence. Opinion tracking captures subjective information from various genres and monitors the development of opinions along spatial and temporal dimensions. To demonstrate and evaluate the proposed opinion mining algorithms, news and blog articles are used. Documents in the evaluation corpora are tagged at different granularities, from words and sentences to documents. In the experiments, positive and negative sentiment words and their weights are mined on the basis of Chinese word structures. The f-measure is 73.18% for verbs and 63.75% for nouns. Utilizing the mined sentiment words together with topical words, we achieve an f-measure of 62.16% at the sentence level and 74.37% at the document level.
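The weight-based scoring idea can be sketched as follows, with an invented English lexicon standing in for the sentiment words the paper mines from Chinese word structures. Note the sketch ignores negation ("did not fail" still counts as negative), which real opinion-mining systems must handle:

```python
# Invented polarity weights in [-1, 1]; the paper mines such weights
# from Chinese word structures.
LEXICON = {"excellent": 0.9, "improve": 0.4, "poor": -0.8, "fail": -0.7}

def sentence_polarity(sentence):
    # a sentence's polarity is the sign of the summed word weights
    score = sum(LEXICON.get(w.strip(".,").lower(), 0.0)
                for w in sentence.split())
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def document_polarity(sentences):
    # majority vote over sentence polarities
    votes = [sentence_polarity(s) for s in sentences]
    pos, neg = votes.count("positive"), votes.count("negative")
    return "positive" if pos > neg else "negative" if neg > pos else "neutral"

doc = ["The service was excellent.", "Delivery did not fail."]
```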
    Footnote
    Contribution to a special topic section "Mining Web resources for enhancing information retrieval"
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.12, S.1838-1850
  12. Gaizauskas, R.; Wilks, Y.: Information extraction : beyond document retrieval (1998) 0.01
    Abstract
    In this paper we give a synoptic view of the growth of the text processing technology of information extraction (IE), whose function is to extract information about a pre-specified set of entities, relations, or events from natural language texts and to record this information in structured representations called templates. We describe the nature of the IE task, review the history of the area from its origins in AI work in the 1960s and 70s to the present, discuss the techniques being used to carry out the task, describe application areas where IE systems are or are about to be at work, and conclude with a discussion of the challenges facing the area. What emerges is a picture of an exciting new text processing technology with a host of new applications, both on its own and in conjunction with other technologies such as information retrieval, machine translation, and data mining.
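As a toy illustration of template filling (the pattern and relation below are invented), IE maps free text onto a structured record for a pre-specified relation; real systems use far richer linguistic processing than a single regular expression:

```python
import re

# Hypothetical "person joined org as role" relation extracted into a template.
PATTERN = re.compile(
    r"(?P<person>[A-Z][a-z]+ [A-Z][a-z]+) joined "
    r"(?P<org>[A-Z][A-Za-z ]+?) as (?P<role>[a-z ]+)[.,]"
)

def extract(text):
    # one filled template (a dict of slots) per pattern match
    return [m.groupdict() for m in PATTERN.finditer(text)]

templates = extract("Alice Smith joined Acme Corp as chief analyst.")
```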
  13. Survey of text mining : clustering, classification, and retrieval (2004) 0.01
    Abstract
    Extracting content from text continues to be an important research problem for information processing and management. Approaches to capture the semantics of text-based document collections may be based on Bayesian models, probability theory, vector space models, statistical models, or even graph theory. As the volume of digitized textual media continues to grow, so does the need for designing robust, scalable indexing and search strategies (software) to meet a variety of user needs. Knowledge extraction or creation from text requires systematic yet reliable processing that can be codified and adapted for changing needs and environments. This book will draw upon experts in both academia and industry to recommend practical approaches to the purification, indexing, and mining of textual information. It will address document identification, clustering and categorizing documents, cleaning text, and visualizing semantic models of text.
    LCSH
    Data mining ; Information retrieval
    Subject
    Data mining ; Information retrieval
  14. Biskri, I.; Rompré, L.: Using association rules for query reformulation (2012) 0.01
    0.013029033 = product of:
      0.045601614 = sum of:
        0.013316983 = weight(_text_:information in 92) [ClassicSimilarity], result of:
          0.013316983 = score(doc=92,freq=6.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.20156369 = fieldWeight in 92, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=92)
        0.032284632 = weight(_text_:retrieval in 92) [ClassicSimilarity], result of:
          0.032284632 = score(doc=92,freq=4.0), product of:
            0.11384433 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.037635546 = queryNorm
            0.2835858 = fieldWeight in 92, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=92)
      0.2857143 = coord(2/7)
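    The score breakdowns in this listing are Lucene "explain" trees for the classic TF-IDF similarity. A minimal sketch of how the printed factors combine; the formulas are Lucene's documented ClassicSimilarity, and the inputs are copied from the tree for the "information" term of entry 14 (doc 92) above:

```python
import math

def tf(freq):
    # term-frequency factor: square root of the raw in-field frequency
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    # inverse document frequency: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    query_weight = idf(doc_freq, max_docs) * query_norm              # queryWeight
    field_weight = tf(freq) * idf(doc_freq, max_docs) * field_norm   # fieldWeight
    return query_weight * field_weight

# Reproduce the weight(_text_:information in 92) leg shown above:
s = term_score(freq=6, doc_freq=20772, max_docs=44218,
               query_norm=0.037635546, field_norm=0.046875)
# s ≈ 0.013316983, matching the explain tree
```

    The per-term scores are then summed and multiplied by the coord factor (here 2/7, since two of seven query clauses matched) to give the document score.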
    
    Abstract
    In this paper the authors present research on the combination of two data mining methods: text classification and maximal association rules. Text classification has long been a focus of research; however, its results take the form of lists of words (classes) that users often do not know what to do with. Using maximal association rules brings a number of advantages: (1) the detection of dependencies and correlations between the relevant units of information (words) of different classes, and (2) the extraction of often-relevant hidden knowledge from large volumes of data. The authors show how this combination can improve the process of information retrieval.
    Source
    Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis et al.
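    The association rules referenced in entry 14 can be illustrated with a toy example. The transactions, terms, and the 0.5 confidence threshold below are invented for illustration; the paper's maximal association rules are a refinement of the plain support/confidence rules sketched here:

```python
from itertools import combinations

# Toy transactions: each set holds the index terms of one retrieved document.
docs = [
    {"data", "mining", "classification"},
    {"data", "mining", "rules"},
    {"data", "retrieval", "query"},
    {"mining", "rules", "query"},
]

def support(itemset, transactions):
    # fraction of transactions containing every item of the itemset
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(lhs, rhs, transactions):
    # estimated P(rhs | lhs) over the transactions
    return support(lhs | rhs, transactions) / support(lhs, transactions)

# Directed rules x -> y between single terms, kept when confident enough.
# (Only one direction per pair is tested here, to keep the sketch short.)
terms = ["data", "mining", "query", "rules"]
rules = [
    (x, y)
    for x, y in combinations(terms, 2)
    if confidence({x}, {y}, docs) >= 0.5
]
```

    A rule such as ("data", "mining") then suggests adding "mining" to a query containing "data" during reformulation.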
  15. Berry, M.W.; Esau, R.; Kiefer, B.: ¬The use of text mining techniques in electronic discovery for legal matters (2012) 0.01
    0.012330831 = product of:
      0.043157905 = sum of:
        0.010873271 = weight(_text_:information in 91) [ClassicSimilarity], result of:
          0.010873271 = score(doc=91,freq=4.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.16457605 = fieldWeight in 91, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=91)
        0.032284632 = weight(_text_:retrieval in 91) [ClassicSimilarity], result of:
          0.032284632 = score(doc=91,freq=4.0), product of:
            0.11384433 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.037635546 = queryNorm
            0.2835858 = fieldWeight in 91, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=91)
      0.2857143 = coord(2/7)
    
    Abstract
    Electronic discovery (eDiscovery) is the process of collecting and analyzing electronic documents to determine their relevance to a legal matter. Advances in office technology have eased the requirements for creating a document, and as a result the volume of data has outgrown the manual processes previously used to make relevance judgments. Methods of text mining and information retrieval have been put to use in eDiscovery to help tame the volume of data; however, the results have been uneven. This chapter looks at the historical bias of the collection process. The authors examine how tools such as classifiers, latent semantic analysis, and non-negative matrix factorization deal with nuances of the collection process.
    Source
    Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis et al.
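    Latent semantic analysis, one of the tools entry 15 examines, reduces a term-document matrix to a low-rank approximation via truncated SVD, so that documents can be compared in a small latent space. A minimal sketch with an invented toy matrix (the terms and counts are illustrative only):

```python
import numpy as np

# Toy term-document matrix (rows = terms, columns = documents).
A = np.array([
    [2.0, 0.0, 1.0],   # "contract"
    [1.0, 0.0, 1.0],   # "breach"
    [0.0, 3.0, 0.0],   # "email"
    [0.0, 1.0, 1.0],   # "invoice"
])

# Truncated SVD: keep the k strongest latent dimensions.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # best rank-k approximation of A

# Documents live in the k-dimensional latent space for similarity comparison.
doc_vectors = (np.diag(s[:k]) @ Vt[:k, :]).T
```

    In eDiscovery, cosine similarity between rows of `doc_vectors` can then surface documents that are topically related even when they share few literal terms.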
  16. Baeza-Yates, R.; Hurtado, C.; Mendoza, M.: Improving search engines by query clustering (2007) 0.01
    0.012048556 = product of:
      0.042169943 = sum of:
        0.015536481 = weight(_text_:information in 601) [ClassicSimilarity], result of:
          0.015536481 = score(doc=601,freq=6.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.23515764 = fieldWeight in 601, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=601)
        0.026633464 = weight(_text_:retrieval in 601) [ClassicSimilarity], result of:
          0.026633464 = score(doc=601,freq=2.0), product of:
            0.11384433 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.037635546 = queryNorm
            0.23394634 = fieldWeight in 601, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=601)
      0.2857143 = coord(2/7)
    
    Abstract
    In this paper, we present a framework for clustering Web search engine queries whose aim is to identify groups of queries used to search for similar information on the Web. The framework is based on a novel term vector model of queries that integrates user selections and the content of selected documents extracted from the logs of a search engine. The query representation obtained allows us to treat query clustering similarly to standard document clustering. We study the application of the clustering framework to two problems: relevance ranking boosting and query recommendation. Finally, we evaluate the effectiveness of our approach experimentally.
    Footnote
    Contribution to a special section on "Mining Web resources for enhancing information retrieval"
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.12, S.1793-1804
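    The term vector model of entry 16 represents each query by the content of the documents users clicked for it, so that clustering queries reduces to clustering documents. A simplified sketch; the click log below is invented, and the paper's actual model is richer (it weights terms by click popularity and term frequency):

```python
import math
from collections import Counter

# Toy click log: query -> index terms of the documents clicked for it.
clicks = {
    "laptop deals":    ["laptop", "notebook", "price", "sale"],
    "cheap notebooks": ["notebook", "laptop", "sale"],
    "jaguar speed":    ["animal", "cat", "speed"],
}

def vector(query):
    # query representation: term counts aggregated over clicked documents
    return Counter(clicks[query])

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# Queries that share clicked content end up close in this space:
sim_related = cosine(vector("laptop deals"), vector("cheap notebooks"))
sim_unrelated = cosine(vector("laptop deals"), vector("jaguar speed"))
```

    Any standard clustering algorithm over this similarity then yields the query groups used for ranking boosting and query recommendation.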
  17. Budzik, J.; Hammond, K.J.; Birnbaum, L.: Information access in context (2001) 0.01
    0.01198622 = product of:
      0.041951768 = sum of:
        0.017939983 = weight(_text_:information in 3835) [ClassicSimilarity], result of:
          0.017939983 = score(doc=3835,freq=2.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.27153665 = fieldWeight in 3835, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=3835)
        0.024011787 = product of:
          0.07203536 = sum of:
            0.07203536 = weight(_text_:29 in 3835) [ClassicSimilarity], result of:
              0.07203536 = score(doc=3835,freq=2.0), product of:
                0.13239008 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037635546 = queryNorm
                0.5441145 = fieldWeight in 3835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3835)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Date
    29. 3.2002 17:31:17
  18. Chowdhury, G.G.: Template mining for information extraction from digital documents (1999) 0.01
    0.011924506 = product of:
      0.04173577 = sum of:
        0.017939983 = weight(_text_:information in 4577) [ClassicSimilarity], result of:
          0.017939983 = score(doc=4577,freq=2.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.27153665 = fieldWeight in 4577, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=4577)
        0.023795787 = product of:
          0.07138736 = sum of:
            0.07138736 = weight(_text_:22 in 4577) [ClassicSimilarity], result of:
              0.07138736 = score(doc=4577,freq=2.0), product of:
                0.13179328 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037635546 = queryNorm
                0.5416616 = fieldWeight in 4577, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4577)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Date
    2. 4.2000 18:01:22
  19. Chen, Y.-L.; Liu, Y.-H.; Ho, W.-L.: ¬A text mining approach to assist the general public in the retrieval of legal documents (2013) 0.01
    0.011420914 = product of:
      0.039973196 = sum of:
        0.007688564 = weight(_text_:information in 521) [ClassicSimilarity], result of:
          0.007688564 = score(doc=521,freq=2.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.116372846 = fieldWeight in 521, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=521)
        0.032284632 = weight(_text_:retrieval in 521) [ClassicSimilarity], result of:
          0.032284632 = score(doc=521,freq=4.0), product of:
            0.11384433 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.037635546 = queryNorm
            0.2835858 = fieldWeight in 521, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=521)
      0.2857143 = coord(2/7)
    
    Abstract
    Applying text mining techniques to legal issues has been an emerging research topic in recent years. Although some previous studies focused on assisting professionals in the retrieval of related legal documents, they did not take into account the general public and their difficulty in describing legal problems in professional legal terms. Because this problem has not been addressed by previous research, this study aims to design a text-mining-based method that allows the general public to use everyday vocabulary to search for and retrieve criminal judgments. The experimental results indicate that our method can help the general public, who are not familiar with professional legal terms, to acquire relevant criminal judgments more accurately and effectively.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.2, S.280-290
  20. Perugini, S.; Ramakrishnan, N.: Mining Web functional dependencies for flexible information access (2007) 0.01
    0.010915946 = product of:
      0.03820581 = sum of:
        0.015377128 = weight(_text_:information in 602) [ClassicSimilarity], result of:
          0.015377128 = score(doc=602,freq=8.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.23274569 = fieldWeight in 602, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=602)
        0.022828683 = weight(_text_:retrieval in 602) [ClassicSimilarity], result of:
          0.022828683 = score(doc=602,freq=2.0), product of:
            0.11384433 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.037635546 = queryNorm
            0.20052543 = fieldWeight in 602, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=602)
      0.2857143 = coord(2/7)
    
    Abstract
    We present an approach to enhancing information access through Web structure mining in contrast to traditional approaches involving usage mining. Specifically, we mine the hardwired hierarchical hyperlink structure of Web sites to identify patterns of term-term co-occurrences we call Web functional dependencies (FDs). Intuitively, a Web FD x -> y declares that all paths through a site involving a hyperlink labeled x also contain a hyperlink labeled y. The complete set of FDs satisfied by a site help characterize (flexible and expressive) interaction paradigms supported by a site, where a paradigm is the set of explorable sequences therein. We describe algorithms for mining FDs and results from mining several hierarchical Web sites and present several interface designs that can exploit such FDs to provide compelling user experiences.
    Footnote
    Contribution to a special section on "Mining Web resources for enhancing information retrieval"
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.12, S.1805-1819
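    The Web functional dependencies defined in entry 20 (x -> y holds when every path containing hyperlink label x also contains label y) can be mined naively by checking all label pairs against the site's paths. A minimal sketch with invented toy paths:

```python
# Toy navigation paths: each is the sequence of hyperlink labels along one
# root-to-leaf path through a site's hierarchy (labels invented for illustration).
paths = [
    ["products", "books", "fiction"],
    ["products", "books", "history"],
    ["products", "music"],
    ["about"],
]

labels = {x for p in paths for x in p}

def holds(x, y):
    # Web FD x -> y: every path containing x also contains y.
    return all(y in p for p in paths if x in p)

fds = sorted((x, y) for x in labels for y in labels if x != y and holds(x, y))
```

    Here "fiction" -> "books" holds (every path through "fiction" goes through "books"), while "products" -> "books" does not, since the "music" path bypasses "books".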

Languages

  • e 109
  • d 22
  • sp 1

Types

  • a 111
  • m 16
  • s 14
  • el 5
  • x 1