Search (141 results, page 1 of 8)

  • theme_ss:"Data Mining"
  1. Medien-Informationsmanagement : Archivarische, dokumentarische, betriebswirtschaftliche, rechtliche und Berufsbild-Aspekte ; [Frühjahrstagung der Fachgruppe 7 im Jahr 2000 in Weimar und Folgetagung 2001 in Köln] (2003) 0.06
    0.06092712 = product of:
      0.18278135 = sum of:
        0.012109872 = weight(_text_:web in 1833) [ClassicSimilarity], result of:
          0.012109872 = score(doc=1833,freq=2.0), product of:
            0.111951075 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03430388 = queryNorm
            0.108171105 = fieldWeight in 1833, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1833)
        0.0070079383 = weight(_text_:information in 1833) [ClassicSimilarity], result of:
          0.0070079383 = score(doc=1833,freq=8.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.116372846 = fieldWeight in 1833, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1833)
        0.011278915 = weight(_text_:system in 1833) [ClassicSimilarity], result of:
          0.011278915 = score(doc=1833,freq=2.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.104393914 = fieldWeight in 1833, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1833)
        0.15238462 = sum of:
          0.13844152 = weight(_text_:aufsatzsammlung in 1833) [ClassicSimilarity], result of:
            0.13844152 = score(doc=1833,freq=16.0), product of:
              0.2250708 = queryWeight, product of:
                6.5610886 = idf(docFreq=169, maxDocs=44218)
                0.03430388 = queryNorm
              0.61510205 = fieldWeight in 1833, product of:
                4.0 = tf(freq=16.0), with freq of:
                  16.0 = termFreq=16.0
                6.5610886 = idf(docFreq=169, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1833)
          0.013943106 = weight(_text_:22 in 1833) [ClassicSimilarity], result of:
            0.013943106 = score(doc=1833,freq=2.0), product of:
              0.120126344 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03430388 = queryNorm
              0.116070345 = fieldWeight in 1833, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1833)
      0.33333334 = coord(4/12)
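    The nested "product of / sum of" lines above are Lucene explain output for the ClassicSimilarity (TF-IDF) ranking formula: each matching term contributes queryWeight x fieldWeight, and the document score is the sum of those contributions scaled by the coord factor. Below is a minimal Python sketch of one term's contribution, using the constants from the "web" clause of this record; the helper name and rounding comments are illustrative, not part of Lucene's API.

      import math

      def classic_term_score(freq, doc_freq, max_docs, query_norm, field_norm, boost=1.0):
          # ClassicSimilarity pieces as shown in the explain tree above
          tf = math.sqrt(freq)                             # 1.4142135 for freq=2.0
          idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # ~3.2635 for docFreq=4597
          query_weight = idf * boost * query_norm          # ~0.1120
          field_weight = tf * idf * field_norm             # ~0.1082
          return query_weight * field_weight               # ~0.01211

      web_clause = classic_term_score(freq=2.0, doc_freq=4597, max_docs=44218,
                                      query_norm=0.03430388, field_norm=0.0234375)
      # Document score = (sum of all matching clauses) * coord(4/12) ~ 0.0609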
    
    Abstract
    When, in the 1970s, the title "information manager" came to be promoted more and more often for people who had until then gone by the name of documentalists, this was occasionally smiled at in the established circles of archivists and librarians and read as a sign of an identity crisis, or at least of an unsettling of the professional profile so labelled. For the profession of media archivists/media documentalists, organized since 1960 in Fachgruppe 7 of the Verein, later Verband deutscher Archivare (VdA), this positioning, in the face of new substantive challenges (the information flood) and new technologies (EDP), was on the contrary an early and self-evident part of everyday professional life. "Stop, it doesn't work without us!" ran the headline of an article in the association's journal "Info 7" dealing with the installation of ever more powerful cable networks and ever faster data highways. Information, information society: at the time these terms were understood almost exclusively in a technical sense. The informatized, not the informed, society stood in the foreground - which in turn brought critics onto the scene, from Joseph Weizenbaum in the USA to the information ecologists in Bremen. In the national, sometimes merely regional, projects and pilot schemes with data highways - including the early Btx - it had never become quite clear which contents, in what form, were to be sent through these networks and roads, and who was actually supposed to select, portion, position, in short: manage those contents. With the World Wide Web at the latest, these projects became obsolete, at least as far as hardware and software were concerned. What remained is the topic of content. And, ever more pressing in a more than merely technical sense, the topic of information management. MedienInformationsManagement was the title of the spring meeting of Fachgruppe 7 in Weimar in 2000, and the follow-up meeting in Cologne in 2001, which set a documentary pragmatism against multimedia production, likewise dealt with the business field of content and with content management systems. The lectures and discussion contributions from these two meetings, collected in this sixth volume of the series Beiträge zur Mediendokumentation, examine the title topic from the most varied angles: archival, documentary, commercial, professional and legal. What becomes clear is that the job title media archivist/media documentalist stands fairly precisely for everything that is done today with so-called old and new media in an organizational, that is ordering and mediating, sense. This applies in particular to the Internet and to the intranets born of it. Both need the ordering hand that has been trained on the old media - on books, newspapers, sound recordings, film and so on - for they live to a large extent on them. That the Internet is nevertheless a medium sui generis and confronts the traditional information professions with entirely new challenges - this, too, runs through the contributions from Weimar and Cologne.
    Content
    Contains, among others, the following contributions (documentation aspects): Günter Perers/Volker Gaese: Das DocCat-System in der Textdokumentation von Gr+J (Weimar 2000); Thomas Gerick: Finden statt suchen. Knowledge Retrieval in Wissensbanken. Mit organisiertem Wissen zu mehr Erfolg (Weimar 2000); Winfried Gödert: Aufbereitung und Rezeption von Information (Weimar 2000); Elisabeth Damen: Klassifikation als Ordnungssystem im elektronischen Pressearchiv (Köln 2001); Clemens Schlenkrich: Aspekte neuer Regelwerksarbeit - Multimediales Datenmodell für ARD und ZDF (Köln 2001); Josef Wandeler: Comprenez-vous only Bahnhof? - Mehrsprachigkeit in der Mediendokumentation (Köln 2001)
    Date
    11. 5.2008 19:49:22
    LCSH
    Information technology / Management / Congresses
    RSWK
    Mediendokumentation / Aufsatzsammlung
    Medien / Informationsmanagement / Aufsatzsammlung
    Pressearchiv / Aufsatzsammlung (HBZ)
    Rundfunkarchiv / Aufsatzsammlung (HBZ)
    Subject
    Mediendokumentation / Aufsatzsammlung
    Medien / Informationsmanagement / Aufsatzsammlung
    Pressearchiv / Aufsatzsammlung (HBZ)
    Rundfunkarchiv / Aufsatzsammlung (HBZ)
    Information technology / Management / Congresses
  2. Chowdhury, G.G.: Template mining for information extraction from digital documents (1999) 0.06
    0.05904416 = product of:
      0.23617664 = sum of:
        0.016351856 = weight(_text_:information in 4577) [ClassicSimilarity], result of:
          0.016351856 = score(doc=4577,freq=2.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.27153665 = fieldWeight in 4577, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=4577)
        0.18729086 = weight(_text_:extraction in 4577) [ClassicSimilarity], result of:
          0.18729086 = score(doc=4577,freq=2.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.9189739 = fieldWeight in 4577, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.109375 = fieldNorm(doc=4577)
        0.032533914 = product of:
          0.06506783 = sum of:
            0.06506783 = weight(_text_:22 in 4577) [ClassicSimilarity], result of:
              0.06506783 = score(doc=4577,freq=2.0), product of:
                0.120126344 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03430388 = queryNorm
                0.5416616 = fieldWeight in 4577, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4577)
          0.5 = coord(1/2)
      0.25 = coord(3/12)
    
    Date
    2. 4.2000 18:01:22
  3. Chen, H.; Chau, M.: Web mining : machine learning for Web applications (2003) 0.04
    0.04313182 = product of:
      0.17252728 = sum of:
        0.076589555 = weight(_text_:web in 4242) [ClassicSimilarity], result of:
          0.076589555 = score(doc=4242,freq=20.0), product of:
            0.111951075 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03430388 = queryNorm
            0.6841342 = fieldWeight in 4242, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4242)
        0.015670227 = weight(_text_:information in 4242) [ClassicSimilarity], result of:
          0.015670227 = score(doc=4242,freq=10.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.2602176 = fieldWeight in 4242, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4242)
        0.08026751 = weight(_text_:extraction in 4242) [ClassicSimilarity], result of:
          0.08026751 = score(doc=4242,freq=2.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.39384598 = fieldWeight in 4242, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.046875 = fieldNorm(doc=4242)
      0.25 = coord(3/12)
    
    Abstract
    With more than two billion pages created by millions of Web page authors and organizations, the World Wide Web is a tremendously rich knowledge base. The knowledge comes not only from the content of the pages themselves, but also from the unique characteristics of the Web, such as its hyperlink structure and its diversity of content and languages. Analysis of these characteristics often reveals interesting patterns and new knowledge. Such knowledge can be used to improve users' efficiency and effectiveness in searching for information on the Web, and also for applications unrelated to the Web, such as support for decision making or business management. The Web's size and its unstructured and dynamic content, as well as its multilingual nature, make the extraction of useful knowledge a challenging research problem. Furthermore, the Web generates a large amount of data in other formats that contain valuable information. For example, information about user access patterns in Web server logs can be used for information personalization or for improving Web page design.
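    The last point of the abstract, mining Web server logs for user access patterns, can be illustrated with a short sketch. The snippet below assumes Apache Common Log Format and counts successful page requests per client; the regular expression and function name are illustrative and not taken from the book.

      import re
      from collections import defaultdict

      LOG_LINE = re.compile(r'(?P<host>\S+) \S+ \S+ \[[^\]]+\] '
                            r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+')

      def access_counts(log_path):
          # host -> path -> number of successful (2xx) requests
          counts = defaultdict(lambda: defaultdict(int))
          with open(log_path) as fh:
              for line in fh:
                  m = LOG_LINE.match(line)
                  if m and m.group("status").startswith("2"):
                      counts[m.group("host")][m.group("path")] += 1
          return counts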
    Source
    Annual review of information science and technology. 38(2004), S.289-330
  4. Mining text data (2012) 0.04
    0.040804163 = product of:
      0.12241249 = sum of:
        0.016146496 = weight(_text_:web in 362) [ClassicSimilarity], result of:
          0.016146496 = score(doc=362,freq=2.0), product of:
            0.111951075 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03430388 = queryNorm
            0.14422815 = fieldWeight in 362, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=362)
        0.0066071474 = weight(_text_:information in 362) [ClassicSimilarity], result of:
          0.0066071474 = score(doc=362,freq=4.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.10971737 = fieldWeight in 362, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=362)
        0.053511675 = weight(_text_:extraction in 362) [ClassicSimilarity], result of:
          0.053511675 = score(doc=362,freq=2.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.26256397 = fieldWeight in 362, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03125 = fieldNorm(doc=362)
        0.04614717 = product of:
          0.09229434 = sum of:
            0.09229434 = weight(_text_:aufsatzsammlung in 362) [ClassicSimilarity], result of:
              0.09229434 = score(doc=362,freq=4.0), product of:
                0.2250708 = queryWeight, product of:
                  6.5610886 = idf(docFreq=169, maxDocs=44218)
                  0.03430388 = queryNorm
                0.41006804 = fieldWeight in 362, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.5610886 = idf(docFreq=169, maxDocs=44218)
                  0.03125 = fieldNorm(doc=362)
          0.5 = coord(1/2)
      0.33333334 = coord(4/12)
    
    Abstract
    Text mining applications have experienced tremendous advances because of Web 2.0 and social networking applications. Recent advances in hardware and software technology have led to a number of unique scenarios where text mining algorithms are learned. Mining Text Data introduces an important niche in the text analytics field, and is an edited volume contributed by leading international researchers and practitioners focused on social networks & data mining. This book contains a wide swath of topics across social networks & data mining. Each chapter contains a comprehensive survey including the key research content on the topic, and the future directions of research in the field. There is a special focus on Text Embedded with Heterogeneous and Multimedia Data which makes the mining process much more challenging. A number of methods, such as transfer learning and cross-lingual mining, have been designed for such cases. Mining Text Data simplifies the content, so that advanced-level students, practitioners and researchers in computer science can benefit from this book. Academic and corporate libraries, as well as ACM, IEEE, and Management Science focused on information security, electronic commerce, databases, data mining, machine learning, and statistics are the primary buyers for this reference book.
    Content
    Contents: An Introduction to Text Mining.- Information Extraction from Text.- A Survey of Text Summarization Techniques.- A Survey of Text Clustering Algorithms.- Dimensionality Reduction and Topic Modeling.- A Survey of Text Classification Algorithms.- Transfer Learning for Text Mining.- Probabilistic Models for Text Mining.- Mining Text Streams.- Translingual Mining from Text Data.- Text Mining in Multimedia.- Text Analytics in Social Media.- A Survey of Opinion Mining and Sentiment Analysis.- Biomedical Text Mining: A Survey of Recent Progress.- Index.
    RSWK
    Text Mining / Aufsatzsammlung
    Subject
    Text Mining / Aufsatzsammlung
  5. Cardie, C.: Empirical methods in information extraction (1997) 0.04
    0.03878909 = product of:
      0.23273453 = sum of:
        0.018687837 = weight(_text_:information in 3246) [ClassicSimilarity], result of:
          0.018687837 = score(doc=3246,freq=8.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.3103276 = fieldWeight in 3246, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=3246)
        0.2140467 = weight(_text_:extraction in 3246) [ClassicSimilarity], result of:
          0.2140467 = score(doc=3246,freq=8.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            1.0502559 = fieldWeight in 3246, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.0625 = fieldNorm(doc=3246)
      0.16666667 = coord(2/12)
    
    Abstract
    Surveys the use of empirical, machine-learning methods for information extraction. Presents a generic architecture for information extraction systems and surveys the learning algorithms that have been developed to address the problems of accuracy, portability, and knowledge acquisition for each component of the architecture
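    As a loose illustration of what such an architecture does at its simplest (this is not Cardie's architecture, and the patterns and slot names below are invented), the sketch fills a flat template with dates and dollar amounts found by regular expressions; in the surveyed systems, learned components replace such hand-written patterns.

      import re

      DATE = re.compile(r"\b\d{1,2} \w+ \d{4}\b")
      MONEY = re.compile(r"\$\d[\d,]*")

      def extract(text):
          # One "template" per document: each slot holds the matched strings.
          return {"dates": DATE.findall(text), "amounts": MONEY.findall(text)}

      print(extract("The deal closed on 3 March 1997 for $2,500,000."))
      # {'dates': ['3 March 1997'], 'amounts': ['$2,500,000']}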
    Footnote
    Contribution to a special section reviewing recent research in empirical methods in speech recognition, syntactic parsing, semantic processing, information extraction and machine translation
  6. Survey of text mining : clustering, classification, and retrieval (2004) 0.03
    0.03406336 = product of:
      0.13625345 = sum of:
        0.011679897 = weight(_text_:information in 804) [ClassicSimilarity], result of:
          0.011679897 = score(doc=804,freq=8.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.19395474 = fieldWeight in 804, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=804)
        0.06688959 = weight(_text_:extraction in 804) [ClassicSimilarity], result of:
          0.06688959 = score(doc=804,freq=2.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.32820496 = fieldWeight in 804, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.0390625 = fieldNorm(doc=804)
        0.057683963 = product of:
          0.11536793 = sum of:
            0.11536793 = weight(_text_:aufsatzsammlung in 804) [ClassicSimilarity], result of:
              0.11536793 = score(doc=804,freq=4.0), product of:
                0.2250708 = queryWeight, product of:
                  6.5610886 = idf(docFreq=169, maxDocs=44218)
                  0.03430388 = queryNorm
                0.51258504 = fieldWeight in 804, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.5610886 = idf(docFreq=169, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=804)
          0.5 = coord(1/2)
      0.25 = coord(3/12)
    
    Abstract
    Extracting content from text continues to be an important research problem for information processing and management. Approaches to capture the semantics of text-based document collections may be based on Bayesian models, probability theory, vector space models, statistical models, or even graph theory. As the volume of digitized textual media continues to grow, so does the need for designing robust, scalable indexing and search strategies (software) to meet a variety of user needs. Knowledge extraction or creation from text requires systematic yet reliable processing that can be codified and adapted for changing needs and environments. This book will draw upon experts in both academia and industry to recommend practical approaches to the purification, indexing, and mining of textual information. It will address document identification, clustering and categorizing documents, cleaning text, and visualizing semantic models of text.
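    A minimal sketch of the vector-space route mentioned above, using scikit-learn: TF-IDF document vectors followed by k-means clustering. The toy documents and parameter choices are illustrative only.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.cluster import KMeans

      docs = [
          "data mining for knowledge discovery in databases",
          "clustering and classification of text documents",
          "information retrieval and web search engines",
          "support vector machines for text classification",
      ]
      X = TfidfVectorizer(stop_words="english").fit_transform(docs)   # sparse TF-IDF matrix
      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
      print(labels)   # cluster id per document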
    LCSH
    Data mining ; Information retrieval
    RSWK
    Text Mining / Aufsatzsammlung
    Subject
    Text Mining / Aufsatzsammlung
    Data mining ; Information retrieval
  7. Matson, L.D.; Bonski, D.J.: Do digital libraries need librarians? (1997) 0.03
    0.03321625 = product of:
      0.099648744 = sum of:
        0.032292992 = weight(_text_:web in 1737) [ClassicSimilarity], result of:
          0.032292992 = score(doc=1737,freq=2.0), product of:
            0.111951075 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03430388 = queryNorm
            0.2884563 = fieldWeight in 1737, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=1737)
        0.018687837 = weight(_text_:information in 1737) [ClassicSimilarity], result of:
          0.018687837 = score(doc=1737,freq=8.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.3103276 = fieldWeight in 1737, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=1737)
        0.030077105 = weight(_text_:system in 1737) [ClassicSimilarity], result of:
          0.030077105 = score(doc=1737,freq=2.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.27838376 = fieldWeight in 1737, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=1737)
        0.01859081 = product of:
          0.03718162 = sum of:
            0.03718162 = weight(_text_:22 in 1737) [ClassicSimilarity], result of:
              0.03718162 = score(doc=1737,freq=2.0), product of:
                0.120126344 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03430388 = queryNorm
                0.30952093 = fieldWeight in 1737, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1737)
          0.5 = coord(1/2)
      0.33333334 = coord(4/12)
    
    Abstract
    Defines digital libraries and discusses the effects of new technology on librarians. Examines the different viewpoints of librarians and information technologists on digital libraries. Describes the development of a digital library at the National Drug Intelligence Center, USA, which was carried out in collaboration with information technology experts. The system is based on Web-enabled search technology to find information, data visualization and data mining to visualize it, and SGML as an information standard to store it.
    Date
    22.11.1998 18:57:22
  8. Miao, Q.; Li, Q.; Zeng, D.: Fine-grained opinion mining by integrating multiple review sources (2010) 0.03
    0.03251943 = product of:
      0.13007772 = sum of:
        0.02825637 = weight(_text_:web in 4104) [ClassicSimilarity], result of:
          0.02825637 = score(doc=4104,freq=2.0), product of:
            0.111951075 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03430388 = queryNorm
            0.25239927 = fieldWeight in 4104, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4104)
        0.008175928 = weight(_text_:information in 4104) [ClassicSimilarity], result of:
          0.008175928 = score(doc=4104,freq=2.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.13576832 = fieldWeight in 4104, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4104)
        0.09364543 = weight(_text_:extraction in 4104) [ClassicSimilarity], result of:
          0.09364543 = score(doc=4104,freq=2.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.45948696 = fieldWeight in 4104, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4104)
      0.25 = coord(3/12)
    
    Abstract
    With the rapid development of Web 2.0, online reviews have become extremely valuable sources for mining customers' opinions. Fine-grained opinion mining has attracted more and more attention of both applied and theoretical research. In this article, the authors study how to automatically mine product features and opinions from multiple review sources. Specifically, they propose an integration strategy to solve the issue. Within the integration strategy, the authors mine domain knowledge from semistructured reviews and then exploit the domain knowledge to assist product feature extraction and sentiment orientation identification from unstructured reviews. Finally, feature-opinion tuples are generated. Experimental results on real-world datasets show that the proposed approach is effective.
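    A much-simplified sketch of producing feature-opinion tuples follows; the article's actual approach integrates domain knowledge mined from semistructured reviews, whereas the lexicon and the adjacency heuristic below are invented purely for illustration.

      OPINION = {"great": "+", "excellent": "+", "poor": "-", "terrible": "-"}

      def feature_opinion_tuples(sentence):
          # Pair each opinion word with the word that follows it: (feature, opinion, polarity)
          tokens = sentence.lower().strip(".!?").split()
          return [(tokens[i + 1], tok, OPINION[tok])
                  for i, tok in enumerate(tokens)
                  if tok in OPINION and i + 1 < len(tokens)]

      print(feature_opinion_tuples("The camera has excellent battery life but poor autofocus."))
      # [('battery', 'excellent', '+'), ('autofocus', 'poor', '-')]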
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.11, S.2288-2299
  9. Ku, L.-W.; Chen, H.-H.: Mining opinions from the Web : beyond relevance retrieval (2007) 0.03
    0.03186787 = product of:
      0.12747148 = sum of:
        0.04513083 = weight(_text_:web in 605) [ClassicSimilarity], result of:
          0.04513083 = score(doc=605,freq=10.0), product of:
            0.111951075 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03430388 = queryNorm
            0.40312994 = fieldWeight in 605, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=605)
        0.015451051 = weight(_text_:information in 605) [ClassicSimilarity], result of:
          0.015451051 = score(doc=605,freq=14.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.256578 = fieldWeight in 605, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=605)
        0.06688959 = weight(_text_:extraction in 605) [ClassicSimilarity], result of:
          0.06688959 = score(doc=605,freq=2.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.32820496 = fieldWeight in 605, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.0390625 = fieldNorm(doc=605)
      0.25 = coord(3/12)
    
    Abstract
    Documents discussing public affairs, common themes, interesting products, and so on, are reported and distributed on the Web. Positive and negative opinions embedded in documents are useful references and feedbacks for governments to improve their services, for companies to market their products, and for customers to purchase their objects. Web opinion mining aims to extract, summarize, and track various aspects of subjective information on the Web. Mining subjective information enables traditional information retrieval (IR) systems to retrieve more data from human viewpoints and provide information with finer granularity. Opinion extraction identifies opinion holders, extracts the relevant opinion sentences, and decides their polarities. Opinion summarization recognizes the major events embedded in documents and summarizes the supportive and the nonsupportive evidence. Opinion tracking captures subjective information from various genres and monitors the developments of opinions from spatial and temporal dimensions. To demonstrate and evaluate the proposed opinion mining algorithms, news and bloggers' articles are adopted. Documents in the evaluation corpora are tagged in different granularities from words, sentences to documents. In the experiments, positive and negative sentiment words and their weights are mined on the basis of Chinese word structures. The f-measure is 73.18% and 63.75% for verbs and nouns, respectively. Utilizing the sentiment words mined together with topical words, we achieve f-measure 62.16% at the sentence level and 74.37% at the document level.
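    In the same spirit, sentence-level polarity can be read off a weighted sentiment lexicon. The weights below are invented; the system described above derives its weights from Chinese word structures, so this is only a schematic sketch of the aggregation step.

      SENTIMENT = {"useful": 0.8, "helpful": 0.7, "improve": 0.5,
                   "negative": -0.6, "problem": -0.7, "fail": -0.9}

      def sentence_polarity(sentence):
          # Sum lexicon weights over the tokens and classify by sign.
          score = sum(SENTIMENT.get(w.strip(",.!?"), 0.0) for w in sentence.lower().split())
          return "positive" if score > 0 else "negative" if score < 0 else "neutral"

      print(sentence_polarity("The feedback was useful and helped improve the service."))
      # positive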
    Footnote
    Contribution to a special topic section "Mining Web resources for enhancing information retrieval"
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.12, S.1838-1850
  10. Zhang, Z.; Li, Q.; Zeng, D.; Ga, H.: Extracting evolutionary communities in community question answering (2014) 0.03
    0.031223597 = product of:
      0.12489439 = sum of:
        0.02018312 = weight(_text_:web in 1286) [ClassicSimilarity], result of:
          0.02018312 = score(doc=1286,freq=2.0), product of:
            0.111951075 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03430388 = queryNorm
            0.18028519 = fieldWeight in 1286, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1286)
        0.010115089 = weight(_text_:information in 1286) [ClassicSimilarity], result of:
          0.010115089 = score(doc=1286,freq=6.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.16796975 = fieldWeight in 1286, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1286)
        0.09459618 = weight(_text_:extraction in 1286) [ClassicSimilarity], result of:
          0.09459618 = score(doc=1286,freq=4.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.46415195 = fieldWeight in 1286, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1286)
      0.25 = coord(3/12)
    
    Abstract
    With the rapid growth of Web 2.0, community question answering (CQA) has become a prevalent information-seeking channel, in which users form interactive communities by posting questions and providing answers. Communities may evolve over time because of changes in users' interests and activities, and because new users join the network. To better understand user interactions in CQA communities, it is necessary to analyze the community structures and track community evolution over time. Existing work in CQA focuses on question searching or content quality detection, and the important problems of community extraction and evolutionary pattern detection have not been studied. In this article, we propose a probabilistic community model (PCM) to extract overlapping community structures and capture their evolution patterns in CQA. The empirical results show that our algorithm appears to improve the community extraction quality. We show empirically, using the iPhone data set, that interesting community evolution patterns can be discovered, with each evolution pattern reflecting the variation of users' interests over time. Our analysis suggests that individual users could benefit by tracking the transition of products to gain comprehensive information. We also show that the communities provide a decision-making basis for business.
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.6, S.1170-1186
  11. Liu, B.: Web data mining : exploring hyperlinks, contents, and usage data (2011) 0.03
    0.02995519 = product of:
      0.11982076 = sum of:
        0.05821702 = weight(_text_:web in 354) [ClassicSimilarity], result of:
          0.05821702 = score(doc=354,freq=26.0), product of:
            0.111951075 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03430388 = queryNorm
            0.520022 = fieldWeight in 354, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=354)
        0.008092071 = weight(_text_:information in 354) [ClassicSimilarity], result of:
          0.008092071 = score(doc=354,freq=6.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.1343758 = fieldWeight in 354, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=354)
        0.053511675 = weight(_text_:extraction in 354) [ClassicSimilarity], result of:
          0.053511675 = score(doc=354,freq=2.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.26256397 = fieldWeight in 354, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03125 = fieldNorm(doc=354)
      0.25 = coord(3/12)
    
    Abstract
    Web mining aims to discover useful information and knowledge from the Web hyperlink structure, page contents, and usage data. Although Web mining uses many conventional data mining techniques, it is not purely an application of traditional data mining due to the semistructured and unstructured nature of the Web data and its heterogeneity. It has also developed many of its own algorithms and techniques. Liu has written a comprehensive text on Web data mining. Key topics of structure mining, content mining, and usage mining are covered both in breadth and in depth. His book brings together all the essential concepts and algorithms from related areas such as data mining, machine learning, and text processing to form an authoritative and coherent text. The book offers a rich blend of theory and practice, addressing seminal research ideas, as well as examining the technology from a practical point of view. It is suitable for students, researchers and practitioners interested in Web mining both as a learning text and a reference book. Lecturers can readily use it for classes on data mining, Web mining, and Web search. Additional teaching materials such as lecture slides, datasets, and implemented algorithms are available online.
    Content
    Contents: 1. Introduction 2. Association Rules and Sequential Patterns 3. Supervised Learning 4. Unsupervised Learning 5. Partially Supervised Learning 6. Information Retrieval and Web Search 7. Social Network Analysis 8. Web Crawling 9. Structured Data Extraction: Wrapper Generation 10. Information Integration
    RSWK
    World Wide Web / Data Mining
    Subject
    World Wide Web / Data Mining
  12. Wu, T.; Pottenger, W.M.: ¬A semi-supervised active learning algorithm for information extraction from textual data (2005) 0.03
    0.027312431 = product of:
      0.16387458 = sum of:
        0.014304894 = weight(_text_:information in 3237) [ClassicSimilarity], result of:
          0.014304894 = score(doc=3237,freq=12.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.23754507 = fieldWeight in 3237, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3237)
        0.14956969 = weight(_text_:extraction in 3237) [ClassicSimilarity], result of:
          0.14956969 = score(doc=3237,freq=10.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.7338887 = fieldWeight in 3237, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3237)
      0.16666667 = coord(2/12)
    
    Abstract
    In this article we present a semi-supervised active learning algorithm for pattern discovery in information extraction from textual data. The patterns are reduced regular expressions composed of various characteristics of features useful in information extraction. Our major contribution is a semi-supervised learning algorithm that extracts information from a set of examples labeled as relevant or irrelevant to a given attribute. The approach is semi-supervised because it does not require precise labeling of the exact location of features in the training data. This significantly reduces the effort needed to develop a training set. An active learning algorithm is used to assist the semi-supervised learning algorithm to further reduce the training set development effort. The active learning algorithm is seeded with a single positive example of a given attribute. The context of the seed is used to automatically identify candidates for additional positive examples of the given attribute. Candidate examples are manually pruned during the active learning phase, and our semi-supervised learning algorithm automatically discovers reduced regular expressions for each attribute. We have successfully applied this learning technique in the extraction of textual features from police incident reports, university crime reports, and patents. The performance of our algorithm compares favorably with competitive extraction systems being used in criminal justice information systems.
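    A loose illustration of the seed-context idea (this is not the article's reduced-regular-expression algorithm): the words around a labelled span are turned into a pattern that can pull candidate spans out of new sentences. All example data here are invented.

      import re

      def pattern_from_seed(sentence, span):
          left, right = sentence.split(span, 1)
          left_ctx = re.escape(" ".join(left.split()[-2:]))   # two words of left context
          right_ctx = re.escape(right.split()[0])              # one word of right context
          return re.compile(rf"{left_ctx}\s+(.+?)\s+{right_ctx}")

      seed = pattern_from_seed("The incident occurred at 1400 Main Street on Monday.",
                               "1400 Main Street")
      print(seed.findall("The incident occurred at 75 Oak Avenue on Friday."))
      # ['75 Oak Avenue']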
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.3, S.258-271
  13. Short, M.: Text mining and subject analysis for fiction; or, using machine learning and information extraction to assign subject headings to dime novels (2019) 0.02
    0.023435093 = product of:
      0.14061056 = sum of:
        0.008175928 = weight(_text_:information in 5481) [ClassicSimilarity], result of:
          0.008175928 = score(doc=5481,freq=2.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.13576832 = fieldWeight in 5481, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5481)
        0.13243464 = weight(_text_:extraction in 5481) [ClassicSimilarity], result of:
          0.13243464 = score(doc=5481,freq=4.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.6498127 = fieldWeight in 5481, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5481)
      0.16666667 = coord(2/12)
    
    Abstract
    This article describes multiple experiments in text mining at Northern Illinois University that were undertaken to improve the efficiency and accuracy of cataloging. It focuses narrowly on subject analysis of dime novels, a format of inexpensive fiction that was popular in the United States between 1860 and 1915. NIU holds more than 55,000 dime novels in its collections, which it is in the process of comprehensively digitizing. Classification, keyword extraction, named-entity recognition, clustering, and topic modeling are discussed as means of assigning subject headings to improve their discoverability by researchers and to increase the productivity of digitization workflows.
  14. Fenstermacher, K.D.; Ginsburg, M.: Client-side monitoring for Web mining (2003) 0.02
    0.0234113 = product of:
      0.0936452 = sum of:
        0.064079426 = weight(_text_:web in 1611) [ClassicSimilarity], result of:
          0.064079426 = score(doc=1611,freq=14.0), product of:
            0.111951075 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03430388 = queryNorm
            0.57238775 = fieldWeight in 1611, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1611)
        0.0070079383 = weight(_text_:information in 1611) [ClassicSimilarity], result of:
          0.0070079383 = score(doc=1611,freq=2.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.116372846 = fieldWeight in 1611, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1611)
        0.02255783 = weight(_text_:system in 1611) [ClassicSimilarity], result of:
          0.02255783 = score(doc=1611,freq=2.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.20878783 = fieldWeight in 1611, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=1611)
      0.25 = coord(3/12)
    
    Abstract
    "Garbage in, garbage out" is a well-known phrase in computer analysis, and one that comes to mind when mining Web data to draw conclusions about Web users. The challenge is that data analysts wish to infer patterns of client-side behavior from server-side data. However, because only a fraction of the user's actions ever reaches the Web server, analysts must rely an incomplete data. In this paper, we propose a client-side monitoring system that is unobtrusive and supports flexible data collection. Moreover, the proposed framework encompasses client-side applications beyond the Web browser. Expanding monitoring beyond the browser to incorporate standard office productivity tools enables analysts to derive a much richer and more accurate picture of user behavior an the Web.
    Footnote
    Part of a special issue: "Web retrieval and mining: A machine learning perspective"
    Source
    Journal of the American Society for Information Science and Technology. 54(2003) no.7, S.625-637
  15. Bath, P.A.: Data mining in health and medical information (2003) 0.02
    0.0213195 = product of:
      0.12791699 = sum of:
        0.020893635 = weight(_text_:information in 4263) [ClassicSimilarity], result of:
          0.020893635 = score(doc=4263,freq=10.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.3469568 = fieldWeight in 4263, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=4263)
        0.10702335 = weight(_text_:extraction in 4263) [ClassicSimilarity], result of:
          0.10702335 = score(doc=4263,freq=2.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.52512795 = fieldWeight in 4263, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.0625 = fieldNorm(doc=4263)
      0.16666667 = coord(2/12)
    
    Abstract
    Data mining (DM) is part of a process by which information can be extracted from data or databases and used to inform decision making in a variety of contexts (Benoit, 2002; Michalski, Bratka & Kubat, 1997). DM includes a range of tools and methods for extracting information; their use in the commercial sector for knowledge extraction and discovery has been one of the main driving forces in their development (Adriaans & Zantinge, 1996; Benoit, 2002). DM has been developed and applied in numerous areas. This review describes its use in analyzing health and medical information.
    Source
    Annual review of information science and technology. 38(2004), S.331-370
  16. Tonkin, E.L.; Tourte, G.J.L.: Working with text. tools, techniques and approaches for text mining (2016) 0.02
    0.021185271 = product of:
      0.084741086 = sum of:
        0.008258934 = weight(_text_:information in 4019) [ClassicSimilarity], result of:
          0.008258934 = score(doc=4019,freq=4.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.13714671 = fieldWeight in 4019, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4019)
        0.018798191 = weight(_text_:system in 4019) [ClassicSimilarity], result of:
          0.018798191 = score(doc=4019,freq=2.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.17398985 = fieldWeight in 4019, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4019)
        0.057683963 = product of:
          0.11536793 = sum of:
            0.11536793 = weight(_text_:aufsatzsammlung in 4019) [ClassicSimilarity], result of:
              0.11536793 = score(doc=4019,freq=4.0), product of:
                0.2250708 = queryWeight, product of:
                  6.5610886 = idf(docFreq=169, maxDocs=44218)
                  0.03430388 = queryNorm
                0.51258504 = fieldWeight in 4019, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.5610886 = idf(docFreq=169, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4019)
          0.5 = coord(1/2)
      0.25 = coord(3/12)
    
    Abstract
    What is text mining, and how can it be used? What relevance do these methods have to everyday work in information science and the digital humanities? How does one develop competences in text mining? Working with Text provides a series of cross-disciplinary perspectives on text mining and its applications. As text mining raises legal and ethical issues, the legal background of text mining and the responsibilities of the engineer are discussed in this book. Chapters provide an introduction to the use of the popular GATE text mining package with data drawn from social media, the use of text mining to support semantic search, the development of an authority system to support content tagging, and recent techniques in automatic language evaluation. Focused studies describe text mining on historical texts, automated indexing using constrained vocabularies, and the use of natural language processing to explore the climate science literature. Interviews are included that offer a glimpse into the real-life experience of working within commercial and academic text mining.
    RSWK
    Text Mining / Aufsatzsammlung
    Series
    Chandos Information Professional Series
    Subject
    Text Mining / Aufsatzsammlung
  17. Benoit, G.: Data mining (2002) 0.02
    0.020942254 = product of:
      0.12565352 = sum of:
        0.012138106 = weight(_text_:information in 4296) [ClassicSimilarity], result of:
          0.012138106 = score(doc=4296,freq=6.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.20156369 = fieldWeight in 4296, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4296)
        0.113515414 = weight(_text_:extraction in 4296) [ClassicSimilarity], result of:
          0.113515414 = score(doc=4296,freq=4.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.55698234 = fieldWeight in 4296, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.046875 = fieldNorm(doc=4296)
      0.16666667 = coord(2/12)
    
    Abstract
    Data mining (DM) is a multistaged process of extracting previously unanticipated knowledge from large databases, and applying the results to decision making. Data mining tools detect patterns from the data and infer associations and rules from them. The extracted information may then be applied to prediction or classification models by identifying relations within the data records or between databases. Those patterns and rules can then guide decision making and forecast the effects of those decisions. However, this definition may be applied equally to "knowledge discovery in databases" (KDD). Indeed, in the recent literature of DM and KDD, a source of confusion has emerged, making it difficult to determine the exact parameters of both. KDD is sometimes viewed as the broader discipline, of which data mining is merely a component, specifically pattern extraction, evaluation, and cleansing methods (Raghavan, Deogun, & Sever, 1998, p. 397). Thurasingham (1999, p. 2) remarked that "knowledge discovery," "pattern discovery," "data dredging," "information extraction," and "knowledge mining" are all employed as synonyms for DM. Trybula, in his ARIST chapter on text mining, observed that the "existing work [in KDD] is confusing because the terminology is inconsistent and poorly defined."
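    A tiny, self-contained example of the kind of association pattern such tools infer; the transactions and thresholds below are invented for illustration, and the computation is just support and confidence over item pairs.

      from itertools import combinations

      transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]

      def pair_rules(transactions, min_conf=0.6):
          n = len(transactions)
          items = sorted(set().union(*transactions))
          rules = []
          for x, y in combinations(items, 2):
              support_xy = sum(1 for t in transactions if {x, y} <= t) / n
              support_x = sum(1 for t in transactions if x in t) / n
              confidence = support_xy / support_x if support_x else 0.0
              if confidence >= min_conf:
                  rules.append((f"{x} -> {y}", round(support_xy, 2), round(confidence, 2)))
          return rules

      print(pair_rules(transactions))
      # [('a -> b', 0.6, 0.75), ('a -> c', 0.6, 0.75), ('b -> c', 0.6, 0.75)]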
    Source
    Annual review of information science and technology. 36(2002), S.265-312
  18. Liu, Y.; Huang, X.; An, A.: Personalized recommendation with adaptive mixture of markov models (2007) 0.02
    0.019266497 = product of:
      0.07706599 = sum of:
        0.04036624 = weight(_text_:web in 606) [ClassicSimilarity], result of:
          0.04036624 = score(doc=606,freq=8.0), product of:
            0.111951075 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03430388 = queryNorm
            0.36057037 = fieldWeight in 606, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=606)
        0.010115089 = weight(_text_:information in 606) [ClassicSimilarity], result of:
          0.010115089 = score(doc=606,freq=6.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.16796975 = fieldWeight in 606, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=606)
        0.026584659 = weight(_text_:system in 606) [ClassicSimilarity], result of:
          0.026584659 = score(doc=606,freq=4.0), product of:
            0.10804188 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03430388 = queryNorm
            0.24605882 = fieldWeight in 606, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=606)
      0.25 = coord(3/12)
    
    Abstract
    With more and more information available on the Internet, the task of making personalized recommendations to assist the user's navigation has become increasingly important. Considering there might be millions of users with different backgrounds accessing a Web site every day, it is infeasible to build a separate recommendation system for each user. To address this problem, clustering techniques can first be employed to discover user groups. Then, user navigation patterns for each group can be discovered, to allow the adaptation of a Web site to the interests of each individual group. In this paper, we propose to model user access sequences as stochastic processes, and a mixture-of-Markov-models-based approach is taken to cluster users and to capture the sequential relationships inherent in user access histories. Several important issues that arise in constructing the Markov models are also addressed. The first issue lies in the complexity of the mixture of Markov models. To improve the efficiency of building/maintaining the mixture of Markov models, we develop a lightweight adaptive algorithm to update the model parameters without recomputing them from scratch. The second issue concerns the proper selection of training data for building the mixture of Markov models. We investigate two different training data selection strategies and perform extensive experiments to compare their effectiveness on a real dataset that is generated by a Web-based knowledge management system, Livelink.
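    The basic building block of the approach, a first-order Markov model of page-to-page transitions estimated from user sessions, can be sketched in a few lines. The mixture/EM layer that clusters users over several such models is omitted here, and the session data are invented.

      from collections import defaultdict

      def transition_probs(sessions):
          # Count src -> dst transitions over all sessions, then normalise per source page.
          counts = defaultdict(lambda: defaultdict(int))
          for session in sessions:
              for src, dst in zip(session, session[1:]):
                  counts[src][dst] += 1
          return {src: {dst: n / sum(dsts.values()) for dst, n in dsts.items()}
                  for src, dsts in counts.items()}

      sessions = [["home", "search", "doc"], ["home", "doc"], ["search", "doc", "home"]]
      print(transition_probs(sessions))
      # {'home': {'search': 0.5, 'doc': 0.5}, 'search': {'doc': 1.0}, 'doc': {'home': 1.0}}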
    Footnote
    Contribution to a special topic section "Mining Web resources for enhancing information retrieval"
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.12, S.1851-1870
  19. Suakkaphong, N.; Zhang, Z.; Chen, H.: Disease named entity recognition using semisupervised learning and conditional random fields (2011) 0.02
    0.017942451 = product of:
      0.1076547 = sum of:
        0.013058522 = weight(_text_:information in 4367) [ClassicSimilarity], result of:
          0.013058522 = score(doc=4367,freq=10.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.21684799 = fieldWeight in 4367, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4367)
        0.09459618 = weight(_text_:extraction in 4367) [ClassicSimilarity], result of:
          0.09459618 = score(doc=4367,freq=4.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.46415195 = fieldWeight in 4367, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4367)
      0.16666667 = coord(2/12)
    
    Abstract
    Information extraction is an important text-mining task that aims at extracting prespecified types of information from large text collections and making them available in structured representations such as databases. In the biomedical domain, information extraction can be applied to help biologists make the most use of their digital-literature archives. Currently, there are large amounts of biomedical literature that contain rich information about biomedical substances. Extracting such knowledge requires a good named entity recognition technique. In this article, we combine conditional random fields (CRFs), a state-of-the-art sequence-labeling algorithm, with two semisupervised learning techniques, bootstrapping and feature sampling, to recognize disease names from biomedical literature. Two data-processing strategies for each technique also were analyzed: one sequentially processing unlabeled data partitions and another one processing unlabeled data partitions in a round-robin fashion. The experimental results showed the advantage of semisupervised learning techniques given limited labeled training data. Specifically, CRFs with bootstrapping implemented in sequential fashion outperformed strictly supervised CRFs for disease name recognition. The project was supported by NIH/NLM Grant R33 LM07299-01, 2002-2005.
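    The bootstrapping (self-training) loop at the core of the semisupervised setup can be sketched generically. Here a plain scikit-learn classifier and toy numeric features stand in for the CRF sequence model and the text features, so this illustrates only the loop, not the article's system.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def self_train(X_lab, y_lab, X_unlab, rounds=3, threshold=0.8):
          X, y, pool = X_lab, y_lab, X_unlab
          for _ in range(rounds):
              if len(pool) == 0:
                  break
              clf = LogisticRegression().fit(X, y)
              proba = clf.predict_proba(pool)
              confident = proba.max(axis=1) >= threshold
              if not confident.any():
                  break
              # Add confidently self-labelled examples to the training set.
              X = np.vstack([X, pool[confident]])
              y = np.concatenate([y, clf.classes_[proba[confident].argmax(axis=1)]])
              pool = pool[~confident]
          return LogisticRegression().fit(X, y)

      X_lab = np.array([[0.0], [0.2], [0.8], [1.0]]); y_lab = np.array([0, 0, 1, 1])
      X_unlab = np.random.default_rng(0).uniform(0.0, 1.0, size=(50, 1))
      model = self_train(X_lab, y_lab, X_unlab)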
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.4, S.727-737
  20. Raghavan, V.V.; Deogun, J.S.; Sever, H.: Knowledge discovery and data mining : introduction (1998) 0.02
    0.016970228 = product of:
      0.10182136 = sum of:
        0.008175928 = weight(_text_:information in 2899) [ClassicSimilarity], result of:
          0.008175928 = score(doc=2899,freq=2.0), product of:
            0.060219705 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03430388 = queryNorm
            0.13576832 = fieldWeight in 2899, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2899)
        0.09364543 = weight(_text_:extraction in 2899) [ClassicSimilarity], result of:
          0.09364543 = score(doc=2899,freq=2.0), product of:
            0.20380433 = queryWeight, product of:
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.03430388 = queryNorm
            0.45948696 = fieldWeight in 2899, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.941145 = idf(docFreq=315, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2899)
      0.16666667 = coord(2/12)
    
    Abstract
    Defines knowledge discovery and database mining. The challenge for knowledge discovery in databases (KDD) is to automatically process large quantities of raw data, identify the most significant and meaningful patterns, and present these as knowledge appropriate for achieving a user's goals. Data mining is the process of deriving useful knowledge from real-world databases through the application of pattern extraction techniques. Explains the goals of, and motivation for, research work on data mining. Discusses the nature of database contents, along with problems within the field of data mining.
    Source
    Journal of the American Society for Information Science. 49(1998) no.5, S.397-402

Languages

  • e 115
  • d 25
  • sp 1

Types

  • a 119
  • m 18
  • s 14
  • el 7
  • x 1