Search (2500 results, page 2 of 125)

  • Active filter: language_ss:"e"
  1. Mining text data (2012) 0.09
    0.08546503 = product of:
      0.17093006 = sum of:
        0.17093006 = product of:
          0.34186012 = sum of:
            0.34186012 = weight(_text_:mining in 362) [ClassicSimilarity], result of:
              0.34186012 = score(doc=362,freq=46.0), product of:
                0.28585905 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.05066224 = queryNorm
                1.1959045 = fieldWeight in 362, product of:
                  6.78233 = tf(freq=46.0), with freq of:
                    46.0 = termFreq=46.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.03125 = fieldNorm(doc=362)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
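    The indented block above is Lucene's "explain" output for the ClassicSimilarity (TF-IDF) scoring that produced each entry's relevance score. As a rough guide to reading these trees, the following minimal Python sketch (not Lucene code; the function and variable names are illustrative) reproduces the arithmetic for entry 1:
      import math

      def idf(doc_freq, max_docs):
          # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
          # queryWeight = idf * queryNorm
          # fieldWeight = tf * idf * fieldNorm, with tf = sqrt(freq)
          i = idf(doc_freq, max_docs)
          return (i * query_norm) * (math.sqrt(freq) * i * field_norm)

      # Entry 1: freq=46, docFreq=425, maxDocs=44218,
      # queryNorm=0.05066224, fieldNorm=0.03125
      w = term_weight(46, 425, 44218, 0.05066224, 0.03125)
      print(w)              # ~0.34186 = weight(_text_:mining in 362)
      # The two nested coord(1/2) factors halve the weight twice, because only
      # one clause matched at each level of the boolean query.
      print(w * 0.5 * 0.5)  # ~0.085465, displayed as 0.09
    For entries where two clauses match (e.g. entries 5-7 below), the two term weights are summed before the single coord(1/2) factor is applied.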
    
    Abstract
    Text mining applications have experienced tremendous advances because of Web 2.0 and social networking applications. Recent advances in hardware and software technology have led to a number of unique scenarios where text mining algorithms are learned. Mining Text Data introduces an important niche in the text analytics field and is an edited volume contributed by leading international researchers and practitioners focused on social networks & data mining. This book covers a wide swath of topics across social networks & data mining. Each chapter contains a comprehensive survey of the key research on the topic and the future directions of research in the field. There is a special focus on Text Embedded with Heterogeneous and Multimedia Data, which makes the mining process much more challenging. A number of methods, such as transfer learning and cross-lingual mining, have been designed for such cases. Mining Text Data simplifies the content so that advanced-level students, practitioners and researchers in computer science can benefit from this book. Academic and corporate libraries, as well as ACM, IEEE, and Management Science communities focused on information security, electronic commerce, databases, data mining, machine learning, and statistics, are the primary buyers for this reference book.
    Content
    Contents: An Introduction to Text Mining.- Information Extraction from Text.- A Survey of Text Summarization Techniques.- A Survey of Text Clustering Algorithms.- Dimensionality Reduction and Topic Modeling.- A Survey of Text Classification Algorithms.- Transfer Learning for Text Mining.- Probabilistic Models for Text Mining.- Mining Text Streams.- Translingual Mining from Text Data.- Text Mining in Multimedia.- Text Analytics in Social Media.- A Survey of Opinion Mining and Sentiment Analysis.- Biomedical Text Mining: A Survey of Recent Progress.- Index.
    LCSH
    Data mining
    RSWK
    Text Mining / Aufsatzsammlung
    Subject
    Text Mining / Aufsatzsammlung
    Data mining
    Theme
    Data Mining
  2. Kulathuramaiyer, N.; Maurer, H.: Implications of emerging data mining (2009) 0.08
    0.084530964 = product of:
      0.16906193 = sum of:
        0.16906193 = product of:
          0.33812386 = sum of:
            0.33812386 = weight(_text_:mining in 3144) [ClassicSimilarity], result of:
              0.33812386 = score(doc=3144,freq=20.0), product of:
                0.28585905 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.05066224 = queryNorm
                1.1828341 = fieldWeight in 3144, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3144)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Data mining describes a technology that discovers non-trivial hidden patterns in a large collection of data. Although this technology has a tremendous impact on our lives, the invaluable contributions of this invisible technology often go unnoticed. This paper discusses advances in data mining while focusing on the emerging data mining capability. Such data mining applications perform multidimensional mining on a wide variety of heterogeneous data sources, providing solutions to many unresolved problems. This paper also highlights the advantages and disadvantages arising from the ever-expanding scope of data mining. Data mining augments human intelligence by equipping us with a wealth of knowledge and by empowering us to perform our daily tasks better. As the mining scope and capacity increase, users and organizations become more willing to compromise privacy. The huge data stores of the 'master miners' allow them to gain deep insights into individual lifestyles and their social and behavioural patterns. The capability to integrate and analyse data, combining business and financial trends with the ability to deterministically track market changes, will drastically affect our lives.
    Theme
    Data Mining
  3. Raghavan, V.V.; Deogun, J.S.; Sever, H.: Knowledge discovery and data mining : introduction (1998) 0.08
    0.082510956 = product of:
      0.16502191 = sum of:
        0.16502191 = product of:
          0.33004382 = sum of:
            0.33004382 = weight(_text_:mining in 2899) [ClassicSimilarity], result of:
              0.33004382 = score(doc=2899,freq=14.0), product of:
                0.28585905 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.05066224 = queryNorm
                1.1545684 = fieldWeight in 2899, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2899)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Defines knowledge discovery and database mining. The challenge for knowledge discovery in databases (KDD) is to automatically process large quantities of raw data, identify the most significant and meaningful patterns, and present these as knowledge appropriate for achieving a user's goals. Data mining is the process of deriving useful knowledge from real-world databases through the application of pattern extraction techniques. Explains the goals of, and motivation for, research work on data mining. Discusses the nature of database contents, along with problems within the field of data mining.
    Footnote
    Contribution to a special issue devoted to knowledge discovery and data mining
    Theme
    Data Mining
  4. Zhou, L.; Chaovalit, P.: Ontology-supported polarity mining (2008) 0.08
    0.082510956 = product of:
      0.16502191 = sum of:
        0.16502191 = product of:
          0.33004382 = sum of:
            0.33004382 = weight(_text_:mining in 1343) [ClassicSimilarity], result of:
              0.33004382 = score(doc=1343,freq=14.0), product of:
                0.28585905 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.05066224 = queryNorm
                1.1545684 = fieldWeight in 1343, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1343)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Polarity mining provides an in-depth analysis of semantic orientations of text information. Motivated by its success in the area of topic mining, we propose an ontology-supported polarity mining (OSPM) approach. The approach aims to enhance polarity mining with ontology by providing detailed topic-specific information. OSPM was evaluated in the movie review domain using both supervised and unsupervised techniques. Results revealed that OSPM outperformed the baseline method without ontology support. The findings of this study not only advance the state of polarity mining research but also shed light on future research directions.
    Theme
    Data Mining
  5. Ku, L.-W.; Ho, H.-W.; Chen, H.-H.: Opinion mining and relationship discovery using CopeOpi opinion analysis system (2009) 0.08
    0.080165744 = product of:
      0.16033149 = sum of:
        0.16033149 = sum of:
          0.12601131 = weight(_text_:mining in 2938) [ClassicSimilarity], result of:
            0.12601131 = score(doc=2938,freq=4.0), product of:
              0.28585905 = queryWeight, product of:
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.05066224 = queryNorm
              0.44081625 = fieldWeight in 2938, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2938)
          0.034320172 = weight(_text_:22 in 2938) [ClassicSimilarity], result of:
            0.034320172 = score(doc=2938,freq=2.0), product of:
              0.17741053 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05066224 = queryNorm
              0.19345059 = fieldWeight in 2938, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2938)
      0.5 = coord(1/2)
    
    Abstract
    We present CopeOpi, an opinion-analysis system, which extracts from the Web opinions about specific targets, summarizes the polarity and strength of these opinions, and tracks opinion variations over time. Objects that yield similar opinion tendencies over a certain time period may be correlated due to latent causal events. CopeOpi discovers relationships among objects based on their opinion-tracking plots and collocations. Event bursts are detected from the tracking plots, and the strength of opinion relationships is determined by the coverage of these plots. To evaluate opinion mining, we use the NTCIR corpus annotated with opinion information at sentence and document levels. CopeOpi achieves sentence- and document-level f-measures of 62% and 74%. For relationship discovery, we collected 1.3M economics-related documents from 93 Web sources over 22 months, and analyzed collocation-based, opinion-based, and hybrid models. We consider company pairs that demonstrate similar stock-price variations to be correlated, and selected these as the gold standard for evaluation. Results show that opinion-based and collocation-based models complement each other, and that integrated models perform the best. The top 25, 50, and 100 pairs discovered achieve precision rates of 1, 0.92, and 0.79, respectively.
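    The precision figures reported above are precision-at-k values against the gold standard of correlated company pairs. A minimal sketch of that computation (the function name and toy data are illustrative, not taken from the paper):
      def precision_at_k(ranked_pairs, gold_pairs, k):
          # fraction of the top-k discovered pairs that appear in the gold standard
          top_k = ranked_pairs[:k]
          return sum(1 for pair in top_k if pair in gold_pairs) / k

      gold = {("A", "B"), ("C", "D"), ("E", "F")}
      ranked = [("A", "B"), ("C", "D"), ("X", "Y"), ("E", "F")]
      print(precision_at_k(ranked, gold, 2))  # 1.0  (both top-2 pairs are correct)
      print(precision_at_k(ranked, gold, 4))  # 0.75 (one miss in the top 4)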
  6. Vaughan, L.; Chen, Y.: Data mining from web search queries : a comparison of Google trends and Baidu index (2015) 0.08
    0.080165744 = product of:
      0.16033149 = sum of:
        0.16033149 = sum of:
          0.12601131 = weight(_text_:mining in 1605) [ClassicSimilarity], result of:
            0.12601131 = score(doc=1605,freq=4.0), product of:
              0.28585905 = queryWeight, product of:
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.05066224 = queryNorm
              0.44081625 = fieldWeight in 1605, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1605)
          0.034320172 = weight(_text_:22 in 1605) [ClassicSimilarity], result of:
            0.034320172 = score(doc=1605,freq=2.0), product of:
              0.17741053 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05066224 = queryNorm
              0.19345059 = fieldWeight in 1605, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1605)
      0.5 = coord(1/2)
    
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.1, S.13-22
    Theme
    Data Mining
  7. Luo, L.; Ju, J.; Li, Y.-F.; Haffari, G.; Xiong, B.; Pan, S.: ChatRule: mining logical rules with large language models for knowledge graph reasoning (2023) 0.08
    0.080165744 = product of:
      0.16033149 = sum of:
        0.16033149 = sum of:
          0.12601131 = weight(_text_:mining in 1171) [ClassicSimilarity], result of:
            0.12601131 = score(doc=1171,freq=4.0), product of:
              0.28585905 = queryWeight, product of:
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.05066224 = queryNorm
              0.44081625 = fieldWeight in 1171, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1171)
          0.034320172 = weight(_text_:22 in 1171) [ClassicSimilarity], result of:
            0.034320172 = score(doc=1171,freq=2.0), product of:
              0.17741053 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05066224 = queryNorm
              0.19345059 = fieldWeight in 1171, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1171)
      0.5 = coord(1/2)
    
    Abstract
    Logical rules are essential for uncovering the logical connections between relations, which could improve the reasoning performance and provide interpretable results on knowledge graphs (KGs). Although there have been many efforts to mine meaningful logical rules over KGs, existing methods suffer from computationally intensive searches over the rule space and a lack of scalability for large-scale KGs. Besides, they often ignore the semantics of relations, which is crucial for uncovering logical connections. Recently, large language models (LLMs) have shown impressive performance in the field of natural language processing and various applications, owing to their emergent ability and generalizability. In this paper, we propose a novel framework, ChatRule, unleashing the power of large language models for mining logical rules over knowledge graphs. Specifically, the framework is initiated with an LLM-based rule generator, leveraging both the semantic and structural information of KGs to prompt LLMs to generate logical rules. To refine the generated rules, a rule ranking module estimates the rule quality by incorporating facts from existing KGs. Last, a rule validator harnesses the reasoning ability of LLMs to validate the logical correctness of ranked rules through chain-of-thought reasoning. ChatRule is evaluated on four large-scale KGs, w.r.t. different rule quality metrics and downstream tasks, showing the effectiveness and scalability of our method.
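    The abstract describes a rule-ranking module that estimates rule quality from facts already in the KG, without spelling out the metric here. One common way to do this is support/confidence over the triples; the sketch below is illustrative only (the rule format, function names, and toy KG are assumptions, not taken from the ChatRule paper):
      def rule_support_confidence(triples, body, head):
          # Score a two-hop rule r1(x,y) & r2(y,z) -> head(x,z) against KG facts.
          facts = set(triples)
          r1, r2 = body
          by_rel = {}
          for h, r, t in triples:
              by_rel.setdefault(r, []).append((h, t))
          # all (x, z) pairs reachable via the rule body
          body_pairs = {(x, z)
                        for x, y1 in by_rel.get(r1, [])
                        for y2, z in by_rel.get(r2, [])
                        if y1 == y2}
          support = sum(1 for x, z in body_pairs if (x, head, z) in facts)
          confidence = support / len(body_pairs) if body_pairs else 0.0
          return support, confidence

      kg = [("alice", "born_in", "paris"), ("paris", "located_in", "france"),
            ("alice", "citizen_of", "france"),
            ("bob", "born_in", "berlin"), ("berlin", "located_in", "germany")]
      print(rule_support_confidence(kg, ("born_in", "located_in"), "citizen_of"))  # (1, 0.5)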
    Date
    23.11.2023 19:07:22
  8. Fayyad, U.M.: Data mining and knowledge discovery : making sense out of data (1996) 0.08
    0.077165864 = product of:
      0.15433173 = sum of:
        0.15433173 = product of:
          0.30866346 = sum of:
            0.30866346 = weight(_text_:mining in 7007) [ClassicSimilarity], result of:
              0.30866346 = score(doc=7007,freq=6.0), product of:
                0.28585905 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.05066224 = queryNorm
                1.079775 = fieldWeight in 7007, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.078125 = fieldNorm(doc=7007)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Defines knowledge discovery and data mining (KDD) as the overall process of extracting high-level knowledge from low-level data. Outlines the KDD process. Explains how KDD is related to the fields of: statistics, pattern recognition, machine learning, artificial intelligence, databases and data warehouses.
    Theme
    Data Mining
  9. Al-Khatib, K.; Ghosa, T.; Hou, Y.; Waard, A. de; Freitag, D.: Argument mining for scholarly document processing : taking stock and looking ahead (2021) 0.08
    0.076390296 = product of:
      0.15278059 = sum of:
        0.15278059 = product of:
          0.30556118 = sum of:
            0.30556118 = weight(_text_:mining in 568) [ClassicSimilarity], result of:
              0.30556118 = score(doc=568,freq=12.0), product of:
                0.28585905 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.05066224 = queryNorm
                1.0689225 = fieldWeight in 568, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=568)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Argument mining targets structures in natural language related to interpretation and persuasion. Most scholarly discourse involves interpreting experimental evidence and attempting to persuade other scientists to adopt the same conclusions, which could benefit from argument mining techniques. However, while various argument mining studies have addressed student essays and news articles, those that target scientific discourse are still scarce. This paper surveys existing work in argument mining of scholarly discourse, and provides an overview of current models, data, tasks, and applications. We identify a number of key challenges confronting argument mining in the scientific domain, and suggest some possible solutions and future directions.
  10. Data mining, data warehousing and client/server databases : Proceedings of the 8th International Hong Kong Computer Society Database Workshop (Academic Stream) (1997) 0.08
    0.075606786 = product of:
      0.15121357 = sum of:
        0.15121357 = product of:
          0.30242714 = sum of:
            0.30242714 = weight(_text_:mining in 977) [ClassicSimilarity], result of:
              0.30242714 = score(doc=977,freq=4.0), product of:
                0.28585905 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.05066224 = queryNorm
                1.057959 = fieldWeight in 977, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.09375 = fieldNorm(doc=977)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Theme
    Data Mining
  11. Data mining, data warehousing and client/server databases : Proceedings of the 8th International Hong Kong Computer Society Database Workshop (Industrial Stream) (1997) 0.08
    0.075606786 = product of:
      0.15121357 = sum of:
        0.15121357 = product of:
          0.30242714 = sum of:
            0.30242714 = weight(_text_:mining in 2301) [ClassicSimilarity], result of:
              0.30242714 = score(doc=2301,freq=4.0), product of:
                0.28585905 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.05066224 = queryNorm
                1.057959 = fieldWeight in 2301, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2301)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Theme
    Data Mining
  12. Cios, K.J.; Pedrycz, W.; Swiniarksi, R.: Data mining methods for knowledge discovery (1998) 0.08
    0.075606786 = product of:
      0.15121357 = sum of:
        0.15121357 = product of:
          0.30242714 = sum of:
            0.30242714 = weight(_text_:mining in 6075) [ClassicSimilarity], result of:
              0.30242714 = score(doc=6075,freq=4.0), product of:
                0.28585905 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.05066224 = queryNorm
                1.057959 = fieldWeight in 6075, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6075)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Theme
    Data Mining
  13. Advances in knowledge discovery and data mining (1996) 0.08
    0.075606786 = product of:
      0.15121357 = sum of:
        0.15121357 = product of:
          0.30242714 = sum of:
            0.30242714 = weight(_text_:mining in 413) [ClassicSimilarity], result of:
              0.30242714 = score(doc=413,freq=4.0), product of:
                0.28585905 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.05066224 = queryNorm
                1.057959 = fieldWeight in 413, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.09375 = fieldNorm(doc=413)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Theme
    Data Mining
  14. Intelligent information processing and web mining : Proceedings of the International IIS: IIPWM'03 Conference held in Zakopane, Poland, June 2-5, 2003 (2003) 0.08
    0.075606786 = product of:
      0.15121357 = sum of:
        0.15121357 = product of:
          0.30242714 = sum of:
            0.30242714 = weight(_text_:mining in 4642) [ClassicSimilarity], result of:
              0.30242714 = score(doc=4642,freq=4.0), product of:
                0.28585905 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.05066224 = queryNorm
                1.057959 = fieldWeight in 4642, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4642)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Theme
    Data Mining
  15. Chen, S.Y.; Liu, X.: The contribution of data mining to information science : making sense of it all (2005) 0.08
    0.075606786 = product of:
      0.15121357 = sum of:
        0.15121357 = product of:
          0.30242714 = sum of:
            0.30242714 = weight(_text_:mining in 4655) [ClassicSimilarity], result of:
              0.30242714 = score(doc=4655,freq=4.0), product of:
                0.28585905 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.05066224 = queryNorm
                1.057959 = fieldWeight in 4655, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4655)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Theme
    Data Mining
  16. Toldo, L.; Rippmann, F.: Integrated bioinformatics application for automated target discovery (2005) 0.07
    0.074054174 = product of:
      0.14810835 = sum of:
        0.14810835 = sum of:
          0.10692415 = weight(_text_:mining in 5260) [ClassicSimilarity], result of:
            0.10692415 = score(doc=5260,freq=2.0), product of:
              0.28585905 = queryWeight, product of:
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.05066224 = queryNorm
              0.37404498 = fieldWeight in 5260, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.046875 = fieldNorm(doc=5260)
          0.0411842 = weight(_text_:22 in 5260) [ClassicSimilarity], result of:
            0.0411842 = score(doc=5260,freq=2.0), product of:
              0.17741053 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05066224 = queryNorm
              0.23214069 = fieldWeight in 5260, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5260)
      0.5 = coord(1/2)
    
    Abstract
    In this article we present an in silico method that automatically assigns putative functions to DNA sequences. The annotations are at an increasingly conceptual level, up to identifying general biomedical fields to which the sequences could contribute. This bioinformatics data-mining system makes substantial use of several resources: a locally stored MEDLINE® database; a manually built classification system; the MeSH® taxonomy; relational technology; and bioinformatics methods. Knowledge is generated from various data sources by using well-defined semantics, and by exploiting direct links between them. A two-dimensional Concept Map(TM) displays the knowledge graph, which allows causal connections to be followed. The use of this method has been valuable and has saved considerable time in our in-house projects, and can be generally exploited for any sequence-annotation or knowledge-condensation task.
    Date
    22. 7.2006 14:31:06
  17. Arbelaitz, O.; Martínez-Otzeta, J.M.; Muguerza, J.: User modeling in a social network for cognitively disabled people (2016) 0.07
    0.074054174 = product of:
      0.14810835 = sum of:
        0.14810835 = sum of:
          0.10692415 = weight(_text_:mining in 2639) [ClassicSimilarity], result of:
            0.10692415 = score(doc=2639,freq=2.0), product of:
              0.28585905 = queryWeight, product of:
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.05066224 = queryNorm
              0.37404498 = fieldWeight in 2639, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.046875 = fieldNorm(doc=2639)
          0.0411842 = weight(_text_:22 in 2639) [ClassicSimilarity], result of:
            0.0411842 = score(doc=2639,freq=2.0), product of:
              0.17741053 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05066224 = queryNorm
              0.23214069 = fieldWeight in 2639, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2639)
      0.5 = coord(1/2)
    
    Abstract
    Online communities are becoming an important tool in the communication and participation processes in our society. However, the most widespread applications are difficult to use for people with disabilities, or may involve some risks if no previous training has been undertaken. This work describes a novel social network for cognitively disabled people along with a clustering-based method for modeling activity and socialization processes of its users in a noninvasive way. This closed social network, called Guremintza, is specifically designed for people with cognitive disabilities and provides the network administrators (e.g., social workers) with two types of reports: summary statistics of the network usage and behavior patterns discovered by a data mining process. Experiments conducted in an initial stage of the network show that the discovered patterns are meaningful to the social workers and that they find them useful in monitoring the progress of the users.
    Date
    22. 1.2016 12:02:26
  18. Priss, U.: Description logic and faceted knowledge representation (1999) 0.07
    0.074054174 = product of:
      0.14810835 = sum of:
        0.14810835 = sum of:
          0.10692415 = weight(_text_:mining in 2655) [ClassicSimilarity], result of:
            0.10692415 = score(doc=2655,freq=2.0), product of:
              0.28585905 = queryWeight, product of:
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.05066224 = queryNorm
              0.37404498 = fieldWeight in 2655, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.642448 = idf(docFreq=425, maxDocs=44218)
                0.046875 = fieldNorm(doc=2655)
          0.0411842 = weight(_text_:22 in 2655) [ClassicSimilarity], result of:
            0.0411842 = score(doc=2655,freq=2.0), product of:
              0.17741053 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05066224 = queryNorm
              0.23214069 = fieldWeight in 2655, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2655)
      0.5 = coord(1/2)
    
    Abstract
    The term "facet" was introduced into the field of library classification systems by Ranganathan in the 1930's [Ranganathan, 1962]. A facet is a viewpoint or aspect. In contrast to traditional classification systems, faceted systems are modular in that a domain is analyzed in terms of baseline facets which are then synthesized. In this paper, the term "facet" is used in a broader meaning. Facets can describe different aspects on the same level of abstraction or the same aspect on different levels of abstraction. The notion of facets is related to database views, multicontexts and conceptual scaling in formal concept analysis [Ganter and Wille, 1999], polymorphism in object-oriented design, aspect-oriented programming, views and contexts in description logic and semantic networks. This paper presents a definition of facets in terms of faceted knowledge representation that incorporates the traditional narrower notion of facets and potentially facilitates translation between different knowledge representation formalisms. A goal of this approach is a modular, machine-aided knowledge base design mechanism. A possible application is faceted thesaurus construction for information retrieval and data mining. Reasoning complexity depends on the size of the modules (facets). A more general analysis of complexity will be left for future research.
    Date
    22. 1.2016 17:30:31
  19. Varathan, K.D.; Giachanou, A.; Crestani, F.: Comparative opinion mining : a review (2017) 0.07
    0.07388068 = product of:
      0.14776136 = sum of:
        0.14776136 = product of:
          0.29552272 = sum of:
            0.29552272 = weight(_text_:mining in 3540) [ClassicSimilarity], result of:
              0.29552272 = score(doc=3540,freq=22.0), product of:
                0.28585905 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.05066224 = queryNorm
                1.0338057 = fieldWeight in 3540, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3540)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Opinion mining refers to the use of natural language processing, text analysis, and computational linguistics to identify and extract subjective information in textual material. Opinion mining, also known as sentiment analysis, has received a lot of attention in recent times, as it provides a number of tools to analyze public opinion on a number of different topics. Comparative opinion mining is a subfield of opinion mining which deals with identifying and extracting information that is expressed in a comparative form (e.g., "paper X is better than paper Y"). Comparative opinion mining plays a very important role when one tries to evaluate something because it provides a reference point for the comparison. This paper provides a review of the area of comparative opinion mining. It is the first review that covers this topic specifically, as all previous reviews dealt mostly with general opinion mining. This survey covers comparative opinion mining from two different angles: one from the perspective of techniques and the other from the perspective of comparative opinion elements. It also incorporates preprocessing tools as well as data sets that were used by past researchers and that can be useful to future researchers in the field of comparative opinion mining.
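    As a naive illustration of the comparative expressions the review targets (this is not one of the techniques surveyed; the pattern and names are invented), a simple regular expression can already pick out "X is better than Y"-style statements and their compared entities:
      import re

      COMPARATIVE = re.compile(
          r"(?P<e1>\w+(?: \w+)?) is (?P<cmp>better|worse|faster|cheaper) than (?P<e2>\w+(?: \w+)?)",
          re.IGNORECASE)

      def extract_comparisons(text):
          # returns (entity1, comparative keyword, entity2) triples
          return [(m["e1"], m["cmp"], m["e2"]) for m in COMPARATIVE.finditer(text)]

      print(extract_comparisons("Paper X is better than paper Y."))
      # [('Paper X', 'better', 'paper Y')]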
    Theme
    Data Mining
  20. Malaise, V.; Zweigenbaum, P.; Bachimont, B.: Mining defining contexts to help structuring differential ontologies (2005) 0.07
    0.07128276 = product of:
      0.14256552 = sum of:
        0.14256552 = product of:
          0.28513104 = sum of:
            0.28513104 = weight(_text_:mining in 6598) [ClassicSimilarity], result of:
              0.28513104 = score(doc=6598,freq=2.0), product of:
                0.28585905 = queryWeight, product of:
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.05066224 = queryNorm
                0.9974533 = fieldWeight in 6598, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.642448 = idf(docFreq=425, maxDocs=44218)
                  0.125 = fieldNorm(doc=6598)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    

Types

  • a 2180
  • m 186
  • s 130
  • el 88
  • b 31
  • r 11
  • x 8
  • i 3
  • n 2
  • p 2
  • h 1
