Search (35 results, page 1 of 2)

  • classification_ss:"06.74 / Informationssysteme"
  1. Research and advanced technology for digital libraries : 10th European conference, ECDL 2006, Alicante, Spain, September 17 - 22, 2006 ; proceedings (2006) 0.02
    0.021334989 = product of:
      0.032002483 = sum of:
        0.01847945 = weight(_text_:on in 2428) [ClassicSimilarity], result of:
          0.01847945 = score(doc=2428,freq=6.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.16835764 = fieldWeight in 2428, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=2428)
        0.013523032 = product of:
          0.027046064 = sum of:
            0.027046064 = weight(_text_:22 in 2428) [ClassicSimilarity], result of:
              0.027046064 = score(doc=2428,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.15476047 = fieldWeight in 2428, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2428)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
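    The indented breakdown above is Lucene's "explain" output for ClassicSimilarity (TF-IDF) ranking. As a minimal sketch, using only the constants printed in that breakdown (docFreq, maxDocs, termFreq, fieldNorm, queryNorm) and the classic TF-IDF formulas, the 0.0213 score of this first hit can be recomputed in plain Python; the helper name term_score is ours for illustration, not part of any Lucene API:

      import math

      # Recompute the ClassicSimilarity breakdown shown for result 1 (doc 2428).
      # Constants are copied from the explain output above; this illustrates the
      # classic TF-IDF composition and does not query the live index.
      MAX_DOCS = 44218
      QUERY_NORM = 0.04990557

      def term_score(doc_freq, term_freq, field_norm):
          idf = 1.0 + math.log(MAX_DOCS / (doc_freq + 1))         # idf(docFreq, maxDocs)
          query_weight = idf * QUERY_NORM                         # idf * queryNorm
          field_weight = math.sqrt(term_freq) * idf * field_norm  # tf * idf * fieldNorm
          return query_weight * field_weight                      # weight(_text_:term)

      on_part = term_score(doc_freq=13325, term_freq=6.0, field_norm=0.03125)        # ~0.01847945
      t22_part = 0.5 * term_score(doc_freq=3622, term_freq=2.0, field_norm=0.03125)  # coord(1/2) applied
      total = (on_part + t22_part) * 2.0 / 3.0                                       # coord(2/3)
      print(round(total, 9))                                                         # ~0.021334989

    The same composition (per-term TF-IDF weights, coordination factors, and a shared queryNorm) underlies every score breakdown in this result list; only docFreq, termFreq, and fieldNorm vary from record to record.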
    
    Abstract
    This book constitutes the refereed proceedings of the 10th European Conference on Research and Advanced Technology for Digital Libraries, ECDL 2006, held in Alicante, Spain in September 2006. The 36 revised full papers presented together with the extended abstracts of 18 demo papers and 15 revised poster papers were carefully reviewed and selected from a total of 159 submissions. The papers are organized in topical sections on architectures, preservation, retrieval, applications, methodology, metadata, evaluation, user studies, modeling, audiovisual content, and language technologies.
    Content
    Contents include: Architectures I Preservation Retrieval - The Use of Summaries in XML Retrieval / Zoltán Szlávik, Anastasios Tombros, Mounia Lalmas - An Enhanced Search Interface for Information Discovery from Digital Libraries / Georgia Koutrika, Alkis Simitsis - The TIP/Greenstone Bridge: A Service for Mobile Location-Based Access to Digital Libraries / Annika Hinze, Xin Gao, David Bainbridge Architectures II Applications Methodology Metadata Evaluation User Studies Modeling Audiovisual Content Language Technologies - Incorporating Cross-Document Relationships Between Sentences for Single Document Summarizations / Xiaojun Wan, Jianwu Yang, Jianguo Xiao - Semantic Web Techniques for Multiple Views on Heterogeneous Collections: A Case Study / Marjolein van Gendt, Antoine Isaac, Lourens van der Meij, Stefan Schlobach Posters - A Tool for Converting from MARC to FRBR / Trond Aalberg, Frank Berg Haugen, Ole Husby
  2. Multimedia content and the Semantic Web : methods, standards, and tools (2005) 0.02
    0.020648528 = product of:
      0.03097279 = sum of:
        0.016333679 = weight(_text_:on in 150) [ClassicSimilarity], result of:
          0.016333679 = score(doc=150,freq=12.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.14880852 = fieldWeight in 150, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.01953125 = fieldNorm(doc=150)
        0.014639112 = product of:
          0.029278224 = sum of:
            0.029278224 = weight(_text_:22 in 150) [ClassicSimilarity], result of:
              0.029278224 = score(doc=150,freq=6.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.16753313 = fieldWeight in 150, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=150)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Classification
    006.7 22
    Date
    7. 3.2007 19:30:22
    DDC
    006.7 22
    Footnote
    Review in: JASIST 58(2007) no.3, pp.457-458 (A.M.A. Ahmad): "The concept of the semantic web has emerged because search engines and text-based searching are no longer adequate, as these approaches involve an extensive information retrieval process. The deployed searching and retrieving descriptors are naturally subjective and their deployment is often restricted to the specific application domain for which the descriptors were configured. The new era of information technology imposes different kinds of requirements and challenges. Automatically extracted audiovisual features are required, as these features are more objective, domain-independent, and more native to audiovisual content. This book is a useful guide for researchers, experts, students, and practitioners; it is a very valuable reference and can lead them through their exploration and research in multimedia content and the semantic web. The book is well organized, and introduces the concept of the semantic web and multimedia content analysis to the reader through a logical sequence from standards and hypotheses through system examples, presenting relevant tools and methods. But in some chapters readers will need a good technical background to understand some of the details. Readers may attain sufficient knowledge here to start projects or research related to the book's theme; recent results and articles related to the active research area of integrating multimedia with semantic web technologies are included. This book includes full descriptions of approaches to specific problem domains such as content search, indexing, and retrieval. This book will be very useful to researchers in the multimedia content analysis field who wish to explore the benefits of emerging semantic web technologies in applying multimedia content approaches. The first part of the book covers the definition of the two basic terms multimedia content and semantic web. The Moving Picture Experts Group standards MPEG7 and MPEG21 are quoted extensively. In addition, the means of multimedia content description are elaborated upon and schematically drawn. This extensive description is introduced by authors who are actively involved in those standards and have been participating in the work of the International Organization for Standardization (ISO)/MPEG for many years. On the other hand, this results in bias against the ad hoc or nonstandard tools for multimedia description in favor of the standard approaches. This is a general book for multimedia content; more emphasis on the general multimedia description and extraction could be provided.
    Semantic web technologies are explained, and ontology representation is emphasized. There is an excellent summary of the fundamental theory behind applying a knowledge-engineering approach to vision problems. This summary represents the concept of the semantic web and multimedia content analysis. A definition of the fuzzy knowledge representation that can be used for realization in multimedia content applications has been provided, with a comprehensive analysis. The second part of the book introduces the multimedia content analysis approaches and applications. In addition, some examples of methods applicable to multimedia content analysis are presented. Multimedia content analysis is a very diverse field and concerns many other research fields at the same time; this creates strong diversity issues, as everything from low-level features (e.g., colors, DCT coefficients, motion vectors, etc.) up to the very high and semantic level (e.g., Object, Events, Tracks, etc.) is involved. The second part includes topics on structure identification (e.g., shot detection for video sequences), and object-based video indexing. These conventional analysis methods are supplemented by results on semantic multimedia analysis, including three detailed chapters on the development and use of knowledge models for automatic multimedia analysis. Starting from object-based indexing and continuing with machine learning, these three chapters are very logically organized. Because of the diversity of this research field, including several chapters of recent research results is not sufficient to cover the state of the art of multimedia. The editors of the book should write an introductory chapter about multimedia content analysis approaches, basic problems, and technical issues and challenges, and try to survey the state of the art of the field and thus introduce the field to the reader.
    The final part of the book discusses research in multimedia content management systems and the semantic web, and presents examples and applications for semantic multimedia analysis in search and retrieval systems. These chapters describe example systems in which current projects have been implemented, and include extensive results and real demonstrations. For example, real case scenarios such as e-commerce, medical applications, and Web services have been introduced. Topics in natural language, speech and image processing techniques and their application for multimedia indexing, and content-based retrieval have been elaborated upon with extensive examples and deployment methods. The editors of the book themselves provide the readers with a chapter about their latest research results on knowledge-based multimedia content indexing and retrieval. Some interesting applications for multimedia content and the semantic web are introduced. Applications that have taken advantage of the metadata provided by MPEG7 in order to realize advance-access services for multimedia content have been provided. The applications discussed in the third part of the book provide useful guidance to researchers and practitioners planning to implement semantic multimedia analysis techniques in new research and development projects in both academia and industry. A fourth part should be added to this book: performance measurements for integrated approaches of multimedia analysis and the semantic web. Performance of the semantic approach is a very sophisticated issue and requires extensive elaboration and effort. Measuring the semantic search is an ongoing research area; several chapters concerning performance measurement and analysis would be required to adequately cover this area and introduce it to readers."
  3. Research and advanced technology for digital libraries : 7th European conference, ECDL 2003, Trondheim, Norway, August 17-22, 2003. Proceedings (2003) 0.02
    0.019074293 = product of:
      0.028611438 = sum of:
        0.015088406 = weight(_text_:on in 2426) [ClassicSimilarity], result of:
          0.015088406 = score(doc=2426,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.13746344 = fieldWeight in 2426, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=2426)
        0.013523032 = product of:
          0.027046064 = sum of:
            0.027046064 = weight(_text_:22 in 2426) [ClassicSimilarity], result of:
              0.027046064 = score(doc=2426,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.15476047 = fieldWeight in 2426, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2426)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This book constitutes the refereed proceedings of the 7th European Conference on Research and Advanced Technology for Digital Libraries, ECDL 2003, held in Trondheim, Norway in August 2003. The 39 revised full papers and 8 revised short papers presented were carefully reviewed and selected from 161 submissions. The papers are organized in topical sections on uses, users, and user interfaces; metadata applications; annotation and recommendation; automatic classification and indexing; Web technologies; topical crawling and subject gateways; architectures and systems; knowledge organization; collection building and management; information retrieval; digital preservation; and indexing and searching of special documents and collection information.
  4. IEEE symposium on information visualization 2003 : Seattle, Washington, October 19 - 21, 2003 ; InfoVis 2003. Proceedings (2003) 0.01
    0.010058938 = product of:
      0.030176813 = sum of:
        0.030176813 = weight(_text_:on in 1455) [ClassicSimilarity], result of:
          0.030176813 = score(doc=1455,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.27492687 = fieldWeight in 1455, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0625 = fieldNorm(doc=1455)
      0.33333334 = coord(1/3)
    
    Issue
    Sponsored by IEEE Computer Society Technical Committee on Visualization and Graphics.
  5. Information visualization in data mining and knowledge discovery (2002) 0.01
    0.008064049 = product of:
      0.0120960735 = sum of:
        0.005334557 = weight(_text_:on in 1789) [ClassicSimilarity], result of:
          0.005334557 = score(doc=1789,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.048600662 = fieldWeight in 1789, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.015625 = fieldNorm(doc=1789)
        0.006761516 = product of:
          0.013523032 = sum of:
            0.013523032 = weight(_text_:22 in 1789) [ClassicSimilarity], result of:
              0.013523032 = score(doc=1789,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.07738023 = fieldWeight in 1789, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1789)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    23. 3.2008 19:10:22
    Footnote
    In 13 chapters, Part Two provides an introduction to KDD, an overview of data mining techniques, and examples of the usefulness of data model visualizations. The importance of visualization throughout the KDD process is stressed in many of the chapters. In particular, the need for measures of visualization effectiveness, benchmarking for identifying best practices, and the use of standardized sample data sets is convincingly presented. Many of the important data mining approaches are discussed in this complementary context. Cluster and outlier detection, classification techniques, and rule discovery algorithms are presented as the basic techniques common to the KDD process. The potential effectiveness of using visualization in the data modeling process is illustrated in chapters focused on using visualization for helping users understand the KDD process, ask questions and form hypotheses about their data, and evaluate the accuracy and veracity of their results. The 11 chapters of Part Three provide an overview of the KDD process and successful approaches to integrating KDD, data mining, and visualization in complementary domains. Rhodes (Chapter 21) begins this section with an excellent overview of the relation between the KDD process and data mining techniques. He states that the "primary goals of data mining are to describe the existing data and to predict the behavior or characteristics of future data of the same type" (p. 281). These goals are met by data mining tasks such as classification, regression, clustering, summarization, dependency modeling, and change or deviation detection. Subsequent chapters demonstrate how visualization can aid users in the interactive process of knowledge discovery by graphically representing the results from these iterative tasks. Finally, examples of the usefulness of integrating visualization and data mining tools in the domain of business, imagery and text mining, and massive data sets are provided. This text concludes with a thorough and useful 17-page index and a lengthy yet integrating 17-page summary of the academic and industrial backgrounds of the contributing authors. A 16-page set of color inserts provides a better representation of the visualizations discussed, and a URL provided suggests that readers may view all the book's figures in color on-line, although as of this submission date it only provides access to a summary of the book and its contents. The overall contribution of this work is its focus on bridging two distinct areas of research, making it a valuable addition to the Morgan Kaufmann Series in Database Management Systems. The editors of this text have met their main goal of providing the first textbook integrating knowledge discovery, data mining, and visualization. Although it contributes greatly to our understanding of the development and current state of the field, a major weakness of this text is that there is no concluding chapter to discuss the contributions of the sum of these contributed papers or give direction to possible future areas of research. "Integration of expertise between two different disciplines is a difficult process of communication and reeducation. Integrating data mining and visualization is particularly complex because each of these fields in itself must draw on a wide range of research experience" (p. 300).
    Although this work contributes to the cross-disciplinary communication needed to advance visualization in KDD, a more formal call for an interdisciplinary research agenda in a concluding chapter would have provided a more satisfying conclusion to a very good introductory text.
  6. Berry, M.W.; Browne, M.: Understanding search engines : mathematical modeling and text retrieval (2005) 0.01
    0.007112743 = product of:
      0.021338228 = sum of:
        0.021338228 = weight(_text_:on in 7) [ClassicSimilarity], result of:
          0.021338228 = score(doc=7,freq=8.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.19440265 = fieldWeight in 7, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=7)
      0.33333334 = coord(1/3)
    
    Abstract
    The second edition of Understanding Search Engines: Mathematical Modeling and Text Retrieval follows the basic premise of the first edition by discussing many of the key design issues for building search engines and emphasizing the important role that applied mathematics can play in improving information retrieval. The authors discuss important data structures, algorithms, and software as well as user-centered issues such as interfaces, manual indexing, and document preparation. Significant changes bring the text up to date on current information retrieval methods: for example the addition of a new chapter on link-structure algorithms used in search engines such as Google. The chapter on user interface has been rewritten to specifically focus on search engine usability. In addition the authors have added new recommendations for further reading and expanded the bibliography, and have updated and streamlined the index to make it more reader friendly.
  7. Grossman, D.A.; Frieder, O.: Information retrieval : algorithms and heuristics (2004) 0.01
    0.007112743 = product of:
      0.021338228 = sum of:
        0.021338228 = weight(_text_:on in 1486) [ClassicSimilarity], result of:
          0.021338228 = score(doc=1486,freq=8.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.19440265 = fieldWeight in 1486, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=1486)
      0.33333334 = coord(1/3)
    
    Abstract
    Interested in how an efficient search engine works? Want to know what algorithms are used to rank resulting documents in response to user requests? The authors answer these and other key information retrieval design and implementation questions. This book is not yet another high level text. Instead, algorithms are thoroughly described, making this book ideally suited for both computer science students and practitioners who work on search-related applications. As stated in the foreword, this book provides a current, broad, and detailed overview of the field and is the only one that does so. Examples are used throughout to illustrate the algorithms. The authors explain how a query is ranked against a document collection using either a single or a combination of retrieval strategies, and how an assortment of utilities are integrated into the query processing scheme to improve these rankings. Methods for building and compressing text indexes, querying and retrieving documents in multiple languages, and using parallel or distributed processing to expedite the search are likewise described. This edition is a major expansion of the one published in 1998. New edition 2005: Besides updating the entire book with current techniques, it includes new sections on language models, cross-language information retrieval, peer-to-peer processing, XML search, mediators, and duplicate document detection.
    Series
    Kluwer international series on information retrieval ; 15
  8. Braun, E.: The Internet directory : [the guide with the most complete listings for: 1500+ Internet and Bitnet mailing lists, 2700+ Usenet newsgroups, 1000+ On-line library catalogs (OPACs) ...] (1994) 0.01
    0.007112743 = product of:
      0.021338228 = sum of:
        0.021338228 = weight(_text_:on in 1549) [ClassicSimilarity], result of:
          0.021338228 = score(doc=1549,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.19440265 = fieldWeight in 1549, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0625 = fieldNorm(doc=1549)
      0.33333334 = coord(1/3)
    
  9. Levy, S.: In the plex : how Google thinks, works, and shapes our lives (2011) 0.01
    0.0069582528 = product of:
      0.020874757 = sum of:
        0.020874757 = weight(_text_:on in 9) [ClassicSimilarity], result of:
          0.020874757 = score(doc=9,freq=10.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.19018018 = fieldWeight in 9, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.02734375 = fieldNorm(doc=9)
      0.33333334 = coord(1/3)
    
    Abstract
    Few companies in history have ever been as successful and as admired as Google, the company that has transformed the Internet and become an indispensable part of our lives. How has Google done it? Veteran technology reporter Steven Levy was granted unprecedented access to the company, and in this revelatory book he takes readers inside Google headquarters, the Googleplex, to show how Google works. While they were still students at Stanford, Google cofounders Larry Page and Sergey Brin revolutionized Internet search. They followed this brilliant innovation with another, as two of Google's earliest employees found a way to do what no one else had: make billions of dollars from Internet advertising. With this cash cow (until Google's IPO nobody other than Google management had any idea how lucrative the company's ad business was), Google was able to expand dramatically and take on other transformative projects: more efficient data centers, open-source cell phones, free Internet video (YouTube), cloud computing, digitizing books, and much more. The key to Google's success in all these businesses, Levy reveals, is its engineering mind-set and adoption of such Internet values as speed, openness, experimentation, and risk taking. After its unapologetically elitist approach to hiring, Google pampers its engineers (free food and dry cleaning, on-site doctors and masseuses) and gives them all the resources they need to succeed. Even today, with a workforce of more than 23,000, Larry Page signs off on every hire. But has Google lost its innovative edge? It stumbled badly in China (Levy discloses what went wrong and how Brin disagreed with his peers on the China strategy), and now with its newest initiative, social networking, Google is chasing a successful competitor for the first time. Some employees are leaving the company for smaller, nimbler start-ups. Can the company that famously decided not to be evil still compete? No other book has ever turned Google inside out as Levy does with In the Plex.
    Content
    The world according to Google: biography of a search engine -- Googlenomics: cracking the code on internet profits -- Don't be evil: how Google built its culture -- Google's cloud: how Google built data centers and killed the hard drive -- Outside the box: the Google phone company and the Google TV company -- GuGe: Google's moral dilemma in China -- Google.gov: is what's good for Google, good for government or the public? -- Epilogue: chasing tail lights: trying to crack the social code.
  10. Thissen, F.: Screen-Design-Handbuch : Effektiv informieren und kommunizieren mit Multimedia (2001) 0.01
    0.0067615155 = product of:
      0.020284547 = sum of:
        0.020284547 = product of:
          0.040569093 = sum of:
            0.040569093 = weight(_text_:22 in 1781) [ClassicSimilarity], result of:
              0.040569093 = score(doc=1781,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.23214069 = fieldWeight in 1781, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1781)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 3.2008 14:35:21
  11. White, R.W.; Roth, R.A.: Exploratory search : beyond the query-response paradigm (2009) 0.01
    0.0062868367 = product of:
      0.01886051 = sum of:
        0.01886051 = weight(_text_:on in 0) [ClassicSimilarity], result of:
          0.01886051 = score(doc=0,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.1718293 = fieldWeight in 0, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=0)
      0.33333334 = coord(1/3)
    
    Abstract
    As information becomes more ubiquitous and the demands that searchers have on search systems grow, there is a need to support search behaviors beyond simple lookup. Information seeking is the process or activity of attempting to obtain information in both human and technological contexts. Exploratory search describes an information-seeking problem context that is open-ended, persistent, and multifaceted, and information-seeking processes that are opportunistic, iterative, and multitactical. Exploratory searchers aim to solve complex problems and develop enhanced mental capacities. Exploratory search systems support this through symbiotic human-machine relationships that provide guidance in exploring unfamiliar information landscapes. Exploratory search has gained prominence in recent years. There is an increased interest from the information retrieval, information science, and human-computer interaction communities in moving beyond the traditional turn-taking interaction model supported by major Web search engines, and toward support for human intelligence amplification and information use. In this lecture, we introduce exploratory search, relate it to relevant extant research, outline the features of exploratory search systems, discuss the evaluation of these systems, and suggest some future directions for supporting exploratory search. Exploratory search is a new frontier in the search domain and is becoming increasingly important in shaping our future world.
    Series
    Synthesis lectures on information concepts, retrieval & services; 3
  12. Thissen, F.: Screen-Design-Manual : Communicating Effectively Through Multimedia (2003) 0.01
    0.0056345966 = product of:
      0.01690379 = sum of:
        0.01690379 = product of:
          0.03380758 = sum of:
            0.03380758 = weight(_text_:22 in 1397) [ClassicSimilarity], result of:
              0.03380758 = score(doc=1397,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.19345059 = fieldWeight in 1397, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1397)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 3.2008 14:29:25
  13. Bleuel, J.: Online Publizieren im Internet : elektronische Zeitschriften und Bücher (1995) 0.01
    0.0056345966 = product of:
      0.01690379 = sum of:
        0.01690379 = product of:
          0.03380758 = sum of:
            0.03380758 = weight(_text_:22 in 1708) [ClassicSimilarity], result of:
              0.03380758 = score(doc=1708,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.19345059 = fieldWeight in 1708, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1708)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 3.2008 16:15:37
  14. Weinberger, D.: Everything is miscellaneous : the power of the new digital disorder (2007) 0.01
    0.0053898394 = product of:
      0.016169518 = sum of:
        0.016169518 = weight(_text_:on in 2862) [ClassicSimilarity], result of:
          0.016169518 = score(doc=2862,freq=6.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.14731294 = fieldWeight in 2862, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2862)
      0.33333334 = coord(1/3)
    
    Footnote
    Review in: Publishers Weekly, May 2007: "In a high-minded twist on the Internet-has-changed-everything book, Weinberger (Small Pieces Loosely Joined) joins the ranks of social thinkers striving to construct new theories around the success of Google and Wikipedia. Organization or, rather, lack of it, is the key: the author insists that "we have to get rid of the idea that there's a best way of organizing the world." Building on his earlier works' discussions of the Internet-driven shift in power to users and consumers, Weinberger notes that "our homespun ways of maintaining order are going to break-they're already breaking-in the digital world." Today's avalanche of fresh information, Weinberger writes, requires relinquishing control of how we organize pretty much everything; he envisions an ever-changing array of "useful, powerful and beautiful ways to make sense of our world." Perhaps carried away by his thesis, the author gets into extended riffs on topics like the history of classification and the Dewey Decimal System. At the point where readers may want to turn his musings into strategies for living or doing business, he serves up intriguing but not exactly helpful epigrams about "the third order of order" and "useful miscellaneousness." But the book's call to embrace complexity will influence thinking about "the newly miscellanized world.""
  15. Sherman, C.: Google power : Unleash the full potential of Google (2005) 0.01
    0.0053345575 = product of:
      0.016003672 = sum of:
        0.016003672 = weight(_text_:on in 3185) [ClassicSimilarity], result of:
          0.016003672 = score(doc=3185,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.14580199 = fieldWeight in 3185, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=3185)
      0.33333334 = coord(1/3)
    
    Abstract
    With this title, readers learn to push the search engine to its limits and extract the best content from Google, without having to learn complicated code. "Google Power" takes Google users under the hood, and teaches them a wide range of advanced web search techniques, through practical examples. Its content is organised by topic, so the reader learns how to conduct in-depth searches on the most popular search topics, from health to government listings to people.
  16. Research and advanced technology for digital libraries : 9th European conference, ECDL 2005, Vienna, Austria, September 18 - 23, 2005 ; proceedings (2005) 0.01
    0.005029469 = product of:
      0.015088406 = sum of:
        0.015088406 = weight(_text_:on in 2423) [ClassicSimilarity], result of:
          0.015088406 = score(doc=2423,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.13746344 = fieldWeight in 2423, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=2423)
      0.33333334 = coord(1/3)
    
    Abstract
    This book constitutes the refereed proceedings of the 9th European Conference on Research and Advanced Technology for Digital Libraries, ECDL 2005, held in Vienna, Austria in September 2005. The 41 revised full papers presented together with 2 panel papers and 30 revised poster papers were carefully reviewed and selected from a total of 162 submissions. The papers are organized in topical sections on digital library models and architectures, multimedia and hypermedia digital libraries, XML, building digital libraries, user studies, digital preservation, metadata, digital libraries and e-learning, text classification in digital libraries, searching, and text digital libraries.
  17. Research and advanced technology for digital libraries : 8th European conference, ECDL 2004, Bath, UK, September 12-17, 2004 : proceedings (2004) 0.01
    0.005029469 = product of:
      0.015088406 = sum of:
        0.015088406 = weight(_text_:on in 2427) [ClassicSimilarity], result of:
          0.015088406 = score(doc=2427,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.13746344 = fieldWeight in 2427, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=2427)
      0.33333334 = coord(1/3)
    
    Abstract
    This book constitutes the refereed proceedings of the 8th European Conference on Research and Advanced Technology for Digital Libraries, ECDL 2004, held in Bath, UK in September 2004. The 47 revised full papers presented were carefully reviewed and selected from a total of 148 submissions. The papers are organized in topical sections on digital library architectures, evaluation and usability, user interfaces and presentation, new approaches to information retrieval, interoperability, enhanced indexing and search methods, personalization and applications, music digital libraries, personal digital libraries, innovative technologies, open archive initiative, new models and tools, and user-centered design.
  18. Research and advanced technology for digital libraries : 11th European conference, ECDL 2007 / Budapest, Hungary, September 16-21, 2007, proceedings (2007) 0.01
    0.005029469 = product of:
      0.015088406 = sum of:
        0.015088406 = weight(_text_:on in 2430) [ClassicSimilarity], result of:
          0.015088406 = score(doc=2430,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.13746344 = fieldWeight in 2430, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=2430)
      0.33333334 = coord(1/3)
    
    Abstract
    This book constitutes the refereed proceedings of the 11th European Conference on Research and Advanced Technology for Digital Libraries, ECDL 2007, held in Budapest, Hungary, in September 2007. The 36 revised full papers presented together with the extended abstracts of 36 revised poster, demo papers and 2 panel descriptions were carefully reviewed and selected from a total of 153 submissions. The papers are organized in topical sections on ontologies, digital libraries and the web, models, multimedia and multilingual DLs, grid and peer-to-peer, preservation, user interfaces, document linking, information retrieval, personal information management, new DL applications, and user studies.
  19. Belew, R.K.: Finding out about : a cognitive perspective on search engine technology and the WWW (2001) 0.01
    0.005029469 = product of:
      0.015088406 = sum of:
        0.015088406 = weight(_text_:on in 3346) [ClassicSimilarity], result of:
          0.015088406 = score(doc=3346,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.13746344 = fieldWeight in 3346, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=3346)
      0.33333334 = coord(1/3)
    
    Abstract
    The World Wide Web is rapidly filling with more text than anyone could have imagined even a short time ago, but the task of isolating relevant parts of this vast information has become just that much more daunting. Richard Belew brings a cognitive perspective to the study of information retrieval as a discipline within computer science. He introduces the idea of Finding Out About (FOA) as the process of actively seeking out information relevant to a topic of interest and describes its many facets - ranging from creating a good characterization of what the user seeks, to what documents actually mean, to methods of inferring semantic clues about each document, to the problem of evaluating whether our search engines are performing as we have intended. Finding Out About explains how to build the tools that are useful for searching collections of text and other media. In the process it takes a close look at the properties of textual documents that do not become clear until very large collections of them are brought together and shows that the construction of effective search engines requires knowledge of the statistical and mathematical properties of linguistic phenomena, as well as an appreciation for the cognitive foundation we bring to the task as language users. The unique approach of this book is its even handling of the phenomena of both numbers and words, making it accessible to a wide audience. The textbook is usable in both undergraduate and graduate classes on information retrieval, library science, and computational linguistics. The text is accompanied by a CD-ROM that contains a hypertext version of the book, including additional topics and notes not present in the printed edition. In addition, the CD contains the full text of C.J. "Keith" van Rijsbergen's famous textbook, Information Retrieval (now out of print). Many active links from Belew's to van Rijsbergen's hypertexts help to unite the material. Several test corpora and indexing tools are provided, to support the design of your own search engine. Additional exercises using these corpora and code are available to instructors. Also supporting this book is a Web site that will include recent additions to the book, as well as links to sites of new topics and methods.
  20. Proceedings of the Second ACM/IEEE-CS Joint Conference on Digital Libraries : July 14 - 18, 2002, Portland, Oregon, USA. (2002) 0.00
    0.004704638 = product of:
      0.014113913 = sum of:
        0.014113913 = weight(_text_:on in 172) [ClassicSimilarity], result of:
          0.014113913 = score(doc=172,freq=14.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.12858528 = fieldWeight in 172, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.015625 = fieldNorm(doc=172)
      0.33333334 = coord(1/3)
    
    Content
    Contents:
    SESSION: Building and using cultural digital libraries - Primarily history: historians and the search for primary source materials (Helen R. Tibbo) - Using the Gamera framework for the recognition of cultural heritage materials (Michael Droettboom, Ichiro Fujinaga, Karl MacMillan, G. Sayeed Choudhury, Tim DiLauro, Mark Patton, Teal Anderson) - Supporting access to large digital oral history archives (Samuel Gustman, Dagobert Soergel, Douglas Oard, William Byrne, Michael Picheny, Bhuvana Ramabhadran, Douglas Greenberg)
    SESSION: Summarization and question answering - Using sentence-selection heuristics to rank text segments in TXTRACTOR (Daniel McDonald, Hsinchun Chen) - Using librarian techniques in automatic text summarization for information retrieval (Min-Yen Kan, Judith L. Klavans) - QuASM: a system for question answering using semi-structured data (David Pinto, Michael Branstein, Ryan Coleman, W. Bruce Croft, Matthew King, Wei Li, Xing Wei)
    SESSION: Studying users - Reading-in-the-small: a study of reading on small form factor devices (Catherine C. Marshall, Christine Ruotolo) - A graph-based recommender system for digital library (Zan Huang, Wingyan Chung, Thian-Huat Ong, Hsinchun Chen) - The effects of topic familiarity on information search behavior (Diane Kelly, Colleen Cool)
    SESSION: Classification and browsing - A language modelling approach to relevance profiling for document browsing (David J. Harper, Sara Coulthard, Sun Yixing) - Compound descriptors in context: a matching function for classifications and thesauri (Douglas Tudhope, Ceri Binding, Dorothee Blocks, Daniel Cunliffe) - Structuring keyword-based queries for web databases (Rodrigo C. Vieira, Pavel Calado, Altigran S. da Silva, Alberto H. F. Laender, Berthier A. Ribeiro-Neto) - An approach to automatic classification of text for information retrieval (Hong Cui, P. Bryan Heidorn, Hong Zhang)
    SESSION: Digital libraries for education - Middle school children's use of the ARTEMIS digital library (June Abbas, Cathleen Norris, Elliott Soloway) - Partnership reviewing: a cooperative approach for peer review of complex educational resources (John Weatherley, Tamara Sumner, Michael Khoo, Michael Wright, Marcel Hoffmann) - A digital library for geography examination resources (Lian-Heong Chua, Dion Hoe-Lian Goh, Ee-Peng Lim, Zehua Liu, Rebecca Pei-Hui Ang) - Digital library services for authors of learning materials (Flora McMartin, Youki Terada)
    SESSION: Novel search environments - Integration of simultaneous searching and reference linking across bibliographic resources on the web (William H. Mischo, Thomas G. Habing, Timothy W. Cole) - Exploring discussion lists: steps and directions (Paula S. Newman) - Comparison of two approaches to building a vertical search tool: a case study in the nanotechnology domain (Michael Chau, Hsinchun Chen, Jialun Qin, Yilu Zhou, Yi Qin, Wai-Ki Sung, Daniel McDonald)
    SESSION: Video and multimedia digital libraries - A multilingual, multimodal digital video library system (Michael R. Lyu, Edward Yau, Sam Sze) - A digital library data model for music (Natalia Minibayeva, Jon W. Dunn) - Video-cuebik: adapting image search to video shots (Alexander G. Hauptmann, Norman D. Papernick) - Virtual multimedia libraries built from the web (Neil C. Rowe) - Multi-modal information retrieval from broadcast video using OCR and speech recognition (Alexander G. Hauptmann, Rong Jin, Tobun Dorbin Ng)
    SESSION: OAI application - Extending SDARTS: extracting metadata from web databases and interfacing with the open archives initiative (Panagiotis G. Ipeirotis, Tom Barry, Luis Gravano) - Using the open archives initiative protocols with EAD (Christopher J. Prom, Thomas G. Habing) - Preservation and transition of NCSTRL using an OAI-based architecture (H. Anan, X. Liu, K. Maly, M. Nelson, M. Zubair, J. C. French, E. Fox, P. Shivakumar) - Integrating harvesting into digital library content (David A. Smith, Anne Mahoney, Gregory Crane)
    SESSION: Searching across language, time, and space - Harvesting translingual vocabulary mappings for multilingual digital libraries (Ray R. Larson, Fredric Gey, Aitao Chen) - Detecting events with date and place information in unstructured text (David A. Smith) - Using sharable ontology to retrieve historical images (Von-Wun Soo, Chen-Yu Lee, Jaw Jium Yeh, Ching-chih Chen) - Towards an electronic variorum edition of Cervantes' Don Quixote: visualizations that support preparation (Rajiv Kochumman, Carlos Monroy, Richard Furuta, Arpita Goenka, Eduardo Urbina, Erendira Melgoza)
    SESSION: Federating and harvesting metadata - DP9: an OAI gateway service for web crawlers (Xiaoming Liu, Kurt Maly, Mohammad Zubair, Michael L. Nelson) - The Greenstone plugin architecture (Ian H. Witten, David Bainbridge, Gordon Paynter, Stefan Boddie) - Building FLOW: federating libraries on the web (Anna Keller Gold, Karen S. Baker, Jean-Yves LeMeur, Kim Baldridge) - JAFER ToolKit project: interfacing Z39.50 and XML (Antony Corfield, Matthew Dovey, Richard Mawby, Colin Tatham) - Schema extraction from XML collections (Boris Chidlovskii) - Mirroring an OAI archive on the I2-DSI channel (Ashwini Pande, Malini Kothapalli, Ryan Richardson, Edward A. Fox)
    SESSION: Music digital libraries - HMM-based musical query retrieval (Jonah Shifrin, Bryan Pardo, Colin Meek, William Birmingham) - A comparison of melodic database retrieval techniques using sung queries (Ning Hu, Roger B. Dannenberg) - Enhancing access to the Levy sheet music collection: reconstructing full-text lyrics from syllables (Brian Wingenroth, Mark Patton, Tim DiLauro) - Evaluating automatic melody segmentation aimed at music information retrieval (Massimo Melucci, Nicola Orio)
    SESSION: Preserving, securing, and assessing digital libraries - A methodology and system for preserving digital data (Raymond A. Lorie) - Modeling web data (James C. French) - An evaluation model for a digital library services tool (Jim Dorward, Derek Reinke, Mimi Recker) - Why watermark?: the copyright need for an engineering solution (Michael Seadle, J. R. Deller, Jr., Aparna Gurijala)
    SESSION: Image and cultural digital libraries - Time as essence for photo browsing through personal digital libraries (Adrian Graham, Hector Garcia-Molina, Andreas Paepcke, Terry Winograd) - Toward a distributed terabyte text retrieval system in China-US million book digital library (Bin Liu, Wen Gao, Ling Zhang, Tie-jun Huang, Xiao-ming Zhang, Jun Cheng) - Enhanced perspectives for historical and cultural documentaries using Informedia technologies (Howard D. Wactlar, Ching-chih Chen) - Interfaces for palmtop image search (Mark Derthick)
    SESSION: Digital libraries for spatial data - The ADEPT digital library architecture (Greg Janée, James Frew) - G-Portal: a map-based digital library for distributed geospatial and georeferenced resources (Ee-Peng Lim, Dion Hoe-Lian Goh, Zehua Liu, Wee-Keong Ng, Christopher Soo-Guan Khoo, Susan Ellen Higgins)
    PANEL SESSION: Panels - You mean I have to do what with whom: statewide museum/library DIGI collaborative digitization projects - the experiences of California, Colorado & North Carolina (Nancy Allen, Liz Bishoff, Robin Chandler, Kevin Cherry) - Overcoming impediments to effective health and biomedical digital libraries (William Hersh, Jan Velterop, Alexa McCray, Gunther Eysenbach, Mark Boguski) - The challenges of statistical digital libraries (Cathryn Dippo, Patricia Cruse, Ann Green, Carol Hert) - Biodiversity and biocomplexity informatics: policy and implementation science versus citizen science (P. Bryan Heidorn) - Panel on digital preservation (Joyce Ray, Robin Dale, Reagan Moore, Vicky Reich, William Underwood, Alexa T. McCray) - NSDL: from prototype to production to transformational national resource (William Y. Arms, Edward Fox, Jeanne Narum, Ellen Hoffman) - How important is metadata? (Hector Garcia-Molina, Diane Hillmann, Carl Lagoze, Elizabeth Liddy, Stuart Weibel) - Planning for future digital libraries programs (Stephen M. Griffin)
    DEMONSTRATION SESSION: Demonstrations, including: FACET: thesaurus retrieval with semantic term expansion (Douglas Tudhope, Ceri Binding, Dorothee Blocks, Daniel Cunliffe) - MedTextus: an intelligent web-based medical meta-search system (Bin Zhu, Gondy Leroy, Hsinchun Chen, Yongchi Chen)
    POSTER SESSION: Posters
    TUTORIAL SESSION: Tutorials, including: Thesauri and ontologies in digital libraries: 1. structure and use in knowledge-based assistance to users (Dagobert Soergel) - How to build a digital library using open-source software (Ian H. Witten) - Thesauri and ontologies in digital libraries: 2. design, evaluation, and development (Dagobert Soergel)
    WORKSHOP SESSION: Workshops - Document search interface design for large-scale collections and intelligent access (Javed Mostafa) - Visual interfaces to digital libraries (Katy Börner, Chaomei Chen) - Text retrieval conference (TREC) genomics pre-track workshop (William Hersh)

Languages

  • e 30
  • d 4

Types

  • m 35
  • s 18
  • i 1