Search (5 results, page 1 of 1)

  • Filter: subject_ss:"Data mining"
  1. Next generation search engines : advanced models for information retrieval (2012) 0.11
    0.10650112 = product of:
      0.15975167 = sum of:
        0.090335175 = weight(_text_:search in 357) [ClassicSimilarity], result of:
          0.090335175 = score(doc=357,freq=58.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.51699156 = fieldWeight in 357, product of:
              7.615773 = tf(freq=58.0), with freq of:
                58.0 = termFreq=58.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.01953125 = fieldNorm(doc=357)
        0.06941649 = product of:
          0.13883299 = sum of:
            0.13883299 = weight(_text_:engines in 357) [ClassicSimilarity], result of:
              0.13883299 = score(doc=357,freq=30.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.5435314 = fieldWeight in 357, product of:
                  5.477226 = tf(freq=30.0), with freq of:
                    30.0 = termFreq=30.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=357)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
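    The tree above is Lucene's ClassicSimilarity (TF-IDF) explanation for the displayed 0.11. As a minimal sketch, the arithmetic can be reproduced as follows, taking queryNorm and the document statistics from the tree itself and assuming the standard ClassicSimilarity formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)); the same pattern applies to the explanation trees of the other results below.

    ```python
    import math

    # Reproduce the explain tree for result 1 (doc 357) under standard
    # ClassicSimilarity definitions; constants are copied from the tree above.

    QUERY_NORM = 0.05027291          # shared query normalization factor

    def tf(freq):                    # term-frequency component
        return math.sqrt(freq)       # sqrt(58) = 7.615773, sqrt(30) = 5.477226

    def idf(doc_freq, max_docs):     # inverse document frequency
        return 1.0 + math.log(max_docs / (doc_freq + 1.0))

    def clause_weight(freq, doc_freq, max_docs, field_norm):
        idf_v = idf(doc_freq, max_docs)
        query_weight = idf_v * QUERY_NORM             # e.g. 0.1747324 for "search"
        field_weight = tf(freq) * idf_v * field_norm  # e.g. 0.51699156 for "search"
        return query_weight * field_weight

    search_w  = clause_weight(58, 3718, 44218, 0.01953125)  # ~0.090335
    engines_w = clause_weight(30,  746, 44218, 0.01953125)  # ~0.138833
    engines_w *= 0.5                            # coord(1/2): one of two nested clauses matched
    score = (search_w + engines_w) * 2.0 / 3.0  # coord(2/3): two of three query clauses matched
    print(round(score, 8))                      # ~0.10650112, shown as 0.11 in the result list
    ```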
    
    Abstract
    The main goal of this book is to transfer new research results from the fields of advanced computer science and information science to the design of new search engines. Readers will gain a better idea of the new trends in applied research. Obtaining relevant, organized, sorted, and workable answers - to name but a few qualities - from a search is becoming a daily need for enterprises and organizations, and, to a greater extent, for anyone. It does not consist of getting access to structured information as in standard databases; nor does it consist of searching for information strictly through a combination of keywords. It goes far beyond that. Whatever its modality, the information sought should be identified by the topics it contains, that is to say by its textual, audio, video or graphical contents. This is not a new issue. However, recent technological advances have completely changed the techniques being used. New Web technologies, the emergence of Intranet systems and the abundance of information on the Internet have created the need for efficient search and information access tools.
    Recent technological progress in computer science, Web technologies, and the constantly evolving information available on the Internet has drastically changed the landscape of search and access to information. Web search has evolved significantly in recent years. In the beginning, web search engines such as Google and Yahoo! provided search only over text documents. Aggregated search was one of the first steps to go beyond text search, and it marked the beginning of a new era for information seeking and retrieval. These days, new web search engines support aggregated search over a number of verticals, and blend different types of documents (e.g., images, videos) in their search results. New search engines employ advanced techniques involving machine learning, computational linguistics and psychology, user interaction and modeling, information visualization, Web engineering, artificial intelligence, distributed systems, social networks, statistical analysis, semantic analysis, and techniques that operate over query sessions. Documents no longer exist on their own; they are connected to other documents, they are associated with users and their position in a social network, and they can be mapped onto a variety of ontologies. Similarly, retrieval tasks have become more interactive and are solidly embedded in a user's geospatial, social, and historical context. It is conjectured that new breakthroughs in information retrieval will not come from smarter algorithms that better exploit existing information sources, but from new retrieval algorithms that can intelligently use and combine new sources of contextual metadata.
    With the rapid growth of web-based applications such as search engines, Facebook, and Twitter, the development of effective and personalized information retrieval techniques and user interfaces is essential. The amount of shared information and the number of social networks have also grown considerably, requiring metadata for new sources of information such as Wikipedia and the ODP. These metadata have to provide classification information for a wide range of topics, as well as for social networking sites like Twitter and Facebook, each of which provides additional preferences, tagging information and social contexts. With the explosion of social networks and other metadata sources, it is an opportune time to identify ways to exploit such metadata in IR tasks such as user modeling, query understanding, and personalization, to name a few. Although the use of traditional metadata such as HTML text, web page titles, and anchor text is fairly well understood, the use of category information, user behavior data, and geographical information is just beginning to be studied. This book is intended for scientists and decision-makers who wish to gain working knowledge about search engines in order to evaluate available solutions and to engage in dialogue with software and data providers.
    Content
    Contains the following contributions: Das, A., A. Jain: Indexing the World Wide Web: the journey so far. Ke, W.: Decentralized search and the clustering paradox in large scale information networks. Roux, M.: Metadata for search engines: what can be learned from e-Sciences? Fluhr, C.: Crosslingual access to photo databases. Djioua, B., J.-P. Desclés and M. Alrahabi: Searching and mining with semantic categories. Ghorbel, H., A. Bahri and R. Bouaziz: Fuzzy ontologies building platform for Semantic Web: FOB platform. Lassalle, E., E. Lassalle: Semantic models in information retrieval. Berry, M.W., R. Esau and B. Kiefer: The use of text mining techniques in electronic discovery for legal matters. Sleem-Amer, M., I. Bigorgne, S. Brizard et al.: Intelligent semantic search engines for opinion and sentiment mining. Hoeber, O.: Human-centred Web search.
    Vert, S.: Extensions of Web browsers useful to knowledge workers. Chen, L.-C.: Next generation search engine for the result clustering technology. Biskri, I., L. Rompré: Using association rules for query reformulation. Habernal, I., M. Konopík and O. Rohlík: Question answering. Grau, B.: Finding answers to questions, in text collections or Web, in open domain or specialty domains. Berri, J., R. Benlamri: Context-aware mobile search engine. Bouidghaghen, O., L. Tamine: Spatio-temporal based personalization for mobile search. Chaudiron, S., M. Ihadjadene: Studying Web search engines from a user perspective: key concepts and main approaches. Karaman, F.: Artificial intelligence enabled search engines (AIESE) and the implications. Lewandowski, D.: A framework for evaluating the retrieval effectiveness of search engines.
    Footnote
    Cf.: http://www.igi-global.com/book/next-generation-search-engines/59723.
    LCSH
    Search engines
    Subject
    Search engines
  2. Stuart, D.: Web metrics for library and information professionals (2014) 0.05
    0.050775357 = product of:
      0.07616303 = sum of:
        0.040676784 = weight(_text_:search in 2274) [ClassicSimilarity], result of:
          0.040676784 = score(doc=2274,freq=6.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.23279473 = fieldWeight in 2274, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2274)
        0.035486247 = product of:
          0.070972495 = sum of:
            0.070972495 = weight(_text_:engines in 2274) [ClassicSimilarity], result of:
              0.070972495 = score(doc=2274,freq=4.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.27785745 = fieldWeight in 2274, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=2274)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Content
    1. Introduction. Metrics -- Indicators -- Web metrics and Ranganathan's laws of library science -- Web metrics for the library and information professional -- The aim of this book -- The structure of the rest of this book -- 2. Bibliometrics, webometrics and web metrics. Web metrics -- Information science metrics -- Web analytics -- Relational and evaluative metrics -- Evaluative web metrics -- Relational web metrics -- Validating the results -- 3. Data collection tools. The anatomy of a URL, web links and the structure of the web -- Search engines 1.0 -- Web crawlers -- Search engines 2.0 -- Post search engine 2.0: fragmentation -- 4. Evaluating impact on the web. Websites -- Blogs -- Wikis -- Internal metrics -- External metrics -- A systematic approach to content analysis -- 5. Evaluating social media impact. Aspects of social network sites -- Typology of social network sites -- Research and tools for specific sites and services -- Other social network sites -- URL shorteners: web analytic links on any site -- General social media impact -- Sentiment analysis -- 6. Investigating relationships between actors. Social network analysis methods -- Sources for relational network analysis -- 7. Exploring traditional publications in a new environment. More bibliographic items -- Full text analysis -- Greater context -- 8. Web metrics and the web of data. The web of data -- Building the semantic web -- Implications of the web of data for web metrics -- Investigating the web of data today -- SPARQL -- Sindice -- LDSpider: an RDF web crawler -- 9. The future of web metrics and the library and information professional. How far we have come -- The future of web metrics -- The future of the library and information professional and web metrics.
  3. Semantic applications (2018) 0.02
    0.019369897 = product of:
      0.058109686 = sum of:
        0.058109686 = weight(_text_:search in 5204) [ClassicSimilarity], result of:
          0.058109686 = score(doc=5204,freq=6.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.33256388 = fieldWeight in 5204, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5204)
      0.33333334 = coord(1/3)
    
    Abstract
    This book describes proven methodologies for developing semantic applications: software applications that explicitly or implicitly use the semantics (i.e., the meaning) of a domain terminology in order to improve usability, correctness, and completeness. An example is semantic search, where synonyms and related terms are used to enrich the results of a simple text-based search. Ontologies, thesauri or controlled vocabularies are the centerpiece of semantic applications. The book includes technological and architectural best practices for corporate use.
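    As a hedged illustration of the semantic-search idea mentioned above (not taken from the book), the sketch below enriches a plain keyword query with synonyms and related terms from a small thesaurus; the vocabulary entries and the Boolean query syntax are hypothetical.

    ```python
    # Minimal illustration of semantic search via query enrichment:
    # each query term is expanded with synonyms and related terms
    # drawn from a (hypothetical) controlled vocabulary.

    THESAURUS = {
        "car": {"synonyms": ["automobile"], "related": ["vehicle"]},
        "retrieval": {"synonyms": ["search"], "related": ["information access"]},
    }

    def expand_query(query: str) -> str:
        clauses = []
        for term in query.lower().split():
            entry = THESAURUS.get(term, {})
            variants = [term] + entry.get("synonyms", []) + entry.get("related", [])
            # Quote multi-word terms and OR the variants together.
            quoted = [f'"{v}"' if " " in v else v for v in variants]
            clauses.append("(" + " OR ".join(quoted) + ")")
        return " AND ".join(clauses)

    print(expand_query("car retrieval"))
    # (car OR automobile OR vehicle) AND (retrieval OR search OR "information access")
    ```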
    Content
    Introduction.- Ontology Development.- Compliance using Metadata.- Variety Management for Big Data.- Text Mining in Economics.- Generation of Natural Language Texts.- Sentiment Analysis.- Building Concise Text Corpora from Web Contents.- Ontology-Based Modelling of Web Content.- Personalized Clinical Decision Support for Cancer Care.- Applications of Temporal Conceptual Semantic Systems.- Context-Aware Documentation in the Smart Factory.- Knowledge-Based Production Planning for Industry 4.0.- Information Exchange in Jurisdiction.- Supporting Automated License Clearing.- Managing cultural assets: Implementing typical cultural heritage archive's usage scenarios via Semantic Web technologies.- Semantic Applications for Process Management.- Domain-Specific Semantic Search Applications.
  4. Tonkin, E.L.; Tourte, G.J.L.: Working with text : tools, techniques and approaches for text mining (2016) 0.01
    0.011183213 = product of:
      0.03354964 = sum of:
        0.03354964 = weight(_text_:search in 4019) [ClassicSimilarity], result of:
          0.03354964 = score(doc=4019,freq=2.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.19200584 = fieldWeight in 4019, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4019)
      0.33333334 = coord(1/3)
    
    Abstract
    What is text mining, and how can it be used? What relevance do these methods have to everyday work in information science and the digital humanities? How does one develop competences in text mining? Working with Text provides a series of cross-disciplinary perspectives on text mining and its applications. As text mining raises legal and ethical issues, the legal background of text mining and the responsibilities of the engineer are discussed in this book. Chapters provide an introduction to the use of the popular GATE text mining package with data drawn from social media, the use of text mining to support semantic search, the development of an authority system to support content tagging, and recent techniques in automatic language evaluation. Focused studies describe text mining on historical texts, automated indexing using constrained vocabularies, and the use of natural language processing to explore the climate science literature. Interviews are included that offer a glimpse into the real-life experience of working within commercial and academic text mining.
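    One of the techniques named above, automated indexing using constrained vocabularies, can be sketched roughly as follows; this is an illustrative assumption rather than the book's method, and the vocabulary, preferred labels and sample text are hypothetical.

    ```python
    # Minimal sketch of automated indexing against a constrained (controlled)
    # vocabulary: surface forms found in the text are mapped to preferred terms.

    import re

    CONTROLLED_VOCABULARY = {
        # surface form -> preferred index term
        "text mining": "Text mining",
        "data mining": "Data mining",
        "natural language processing": "Natural language processing",
        "nlp": "Natural language processing",
    }

    def index_document(text: str) -> set[str]:
        """Return the preferred terms whose surface forms occur in the text."""
        found = set()
        lowered = text.lower()
        for surface, preferred in CONTROLLED_VOCABULARY.items():
            if re.search(r"\b" + re.escape(surface) + r"\b", lowered):
                found.add(preferred)
        return found

    sample = "The chapter applies NLP and text mining to the climate science literature."
    print(sorted(index_document(sample)))
    # ['Natural language processing', 'Text mining']
    ```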
  5. Information visualization in data mining and knowledge discovery (2002) 0.00
    0.0022704287 = product of:
      0.006811286 = sum of:
        0.006811286 = product of:
          0.013622572 = sum of:
            0.013622572 = weight(_text_:22 in 1789) [ClassicSimilarity], result of:
              0.013622572 = score(doc=1789,freq=2.0), product of:
                0.17604718 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05027291 = queryNorm
                0.07738023 = fieldWeight in 1789, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1789)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    23. 3.2008 19:10:22
