Search (19 results, page 1 of 1)

  • Active filter: classification_ss:"06.74 / Informationssysteme"
  1. Survey of text mining : clustering, classification, and retrieval (2004) 0.08
    0.07873234 = product of:
      0.11809851 = sum of:
        0.08966068 = weight(_text_:systematic in 804) [ClassicSimilarity], result of:
          0.08966068 = score(doc=804,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 804, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=804)
        0.028437834 = product of:
          0.05687567 = sum of:
            0.05687567 = weight(_text_:indexing in 804) [ClassicSimilarity], result of:
              0.05687567 = score(doc=804,freq=4.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.29905218 = fieldWeight in 804, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=804)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
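The indented tree above appears to be Lucene's "explain" output for its ClassicSimilarity (TF-IDF) ranking formula. As a rough illustration only, and not part of the catalogue software, the following sketch recomputes the 0.08 shown for this first hit from the factors listed above (tf = sqrt(freq), idf, queryNorm, fieldNorm, and the two coord factors); the constants are copied from the explain output.

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1.0))

def field_weight(freq, idf_value, field_norm):
    # fieldWeight = tf * idf * fieldNorm, with tf(freq) = sqrt(freq)
    return math.sqrt(freq) * idf_value * field_norm

query_norm = 0.049684696                         # queryNorm from the explain output

# term "systematic" in doc 804: docFreq=395, freq=2, fieldNorm=0.0390625
idf_sys = idf(395, 44218)                        # ~5.715473
w_sys = (idf_sys * query_norm) * field_weight(2.0, idf_sys, 0.0390625)   # ~0.08966068

# term "indexing" in doc 804: docFreq=2614, freq=4, fieldNorm=0.0390625
idf_idx = idf(2614, 44218)                       # ~3.8278677
w_idx = (idf_idx * query_norm) * field_weight(4.0, idf_idx, 0.0390625)   # ~0.05687567
w_idx *= 0.5                                     # inner coord(1/2)

score = (w_sys + w_idx) * (2.0 / 3.0)            # outer coord(2/3)
print(round(score, 8))                           # ~0.07873234, the 0.08 shown above
```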
    
    Abstract
    Extracting content from text continues to be an important research problem for information processing and management. Approaches to capture the semantics of text-based document collections may be based on Bayesian models, probability theory, vector space models, statistical models, or even graph theory. As the volume of digitized textual media continues to grow, so does the need for designing robust, scalable indexing and search strategies (software) to meet a variety of user needs. Knowledge extraction or creation from text requires systematic yet reliable processing that can be codified and adapted for changing needs and environments. This book will draw upon experts in both academia and industry to recommend practical approaches to the purification, indexing, and mining of textual information. It will address document identification, clustering and categorizing documents, cleaning text, and visualizing semantic models of text.
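Since the abstract above mentions clustering and categorizing documents with vector space and statistical models, here is a deliberately tiny sketch of that idea. The four "documents", the choice k = 2, and the use of plain k-means over length-normalised term counts are illustrative assumptions, not methods taken from the book.

```python
import numpy as np

# Toy corpus; in practice the vectors would come from a cleaned, weighted index.
docs = ["text mining and clustering", "classification of text documents",
        "pasta and pizza recipes", "cooking pasta at home"]
vocab = sorted({w for d in docs for w in d.split()})
X = np.array([[d.split().count(t) for t in vocab] for d in docs], dtype=float)
X /= np.linalg.norm(X, axis=1, keepdims=True)      # length-normalise each document vector

rng = np.random.default_rng(0)
centroids = X[rng.choice(len(docs), size=2, replace=False)]
for _ in range(20):                                # plain Lloyd-style k-means iterations
    dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    labels = dists.argmin(axis=1)
    for c in range(2):
        if np.any(labels == c):
            centroids[c] = X[labels == c].mean(axis=0)
print(list(zip(docs, labels)))                     # cluster id assigned to each document
```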
  2. Research and advanced technology for digital libraries : 7th European conference, ECDL2003 Trondheim, Norway, August 17-22, 2003. Proceedings (2003) 0.03
    0.032956343 = product of:
      0.098869026 = sum of:
        0.098869026 = sum of:
          0.071942665 = weight(_text_:indexing in 2426) [ClassicSimilarity], result of:
            0.071942665 = score(doc=2426,freq=10.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.3782744 = fieldWeight in 2426, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.03125 = fieldNorm(doc=2426)
          0.026926363 = weight(_text_:22 in 2426) [ClassicSimilarity], result of:
            0.026926363 = score(doc=2426,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.15476047 = fieldWeight in 2426, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2426)
      0.33333334 = coord(1/3)
    
    Abstract
    This book constitutes the refereed proceedings of the 7th European Conference on Research and Advanced Technology for Digital Libraries, ECDL 2003, held in Trondheim, Norway in August 2003. The 39 revised full papers and 8 revised short papers presented were carefully reviewed and selected from 161 submissions. The papers are organized in topical sections on uses, users, and user interfaces; metadata applications; annotation and recommendation; automatic classification and indexing; Web technologies; topical crawling and subject gateways; architectures and systems; knowledge organization; collection building and management; information retrieval; digital preservation; and indexing and searching of special documents and collection information.
    Content
    Contents (selection, by topical section):
      Uses, Users, and User Interaction
      Metadata Applications - Semantic Browsing / Alexander Faaborg, Carl Lagoze
      Annotation and Recommendation
      Automatic Classification and Indexing - Cross-Lingual Text Categorization / Nuria Bel, Cornelis H.A. Koster, Marta Villegas - Automatic Multi-label Subject Indexing in a Multilingual Environment / Boris Lauser, Andreas Hotho
      Web Technologies
      Topical Crawling, Subject Gateways - VASCODA: A German Scientific Portal for Cross-Searching Distributed Digital Resource Collections / Heike Neuroth, Tamara Pianos
      Architectures and Systems
      Knowledge Organization: Concepts - The ADEPT Concept-Based Digital Learning Environment / T.R. Smith, D. Ancona, O. Buchel, M. Freeston, W. Heller, R. Nottrott, T. Tierney, A. Ushakov - A User Evaluation of Hierarchical Phrase Browsing / Katrina D. Edgar, David M. Nichols, Gordon W. Paynter, Kirsten Thomson, Ian H. Witten - Visual Semantic Modeling of Digital Libraries / Qinwei Zhu, Marcos Andre Gongalves, Rao Shen, Lillian Cassell, Edward A. Fox
      Collection Building and Management
      Knowledge Organization: Authorities and Works - Automatic Conversion from MARC to FRBR / Christian Monch, Trond Aalberg
      Information Retrieval in Different Application Areas
      Digital Preservation
      Indexing and Searching of Special Document and Collection Information
  3. Multimedia content and the Semantic Web : methods, standards, and tools (2005) 0.02
    0.02470427 = product of:
      0.07411281 = sum of:
        0.07411281 = sum of:
          0.044964164 = weight(_text_:indexing in 150) [ClassicSimilarity], result of:
            0.044964164 = score(doc=150,freq=10.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.23642151 = fieldWeight in 150, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.01953125 = fieldNorm(doc=150)
          0.029148644 = weight(_text_:22 in 150) [ClassicSimilarity], result of:
            0.029148644 = score(doc=150,freq=6.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.16753313 = fieldWeight in 150, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.01953125 = fieldNorm(doc=150)
      0.33333334 = coord(1/3)
    
    Classification
    006.7 22
    Date
    7. 3.2007 19:30:22
    DDC
    006.7 22
    Footnote
    Review in: JASIST 58(2007) no.3, pp. 457-458 (A.M.A. Ahmad): "The concept of the semantic web has emerged because search engines and text-based searching are no longer adequate, as these approaches involve an extensive information retrieval process. The deployed searching and retrieving descriptors are naturally subjective and their deployment is often restricted to the specific application domain for which the descriptors were configured. The new era of information technology imposes different kinds of requirements and challenges. Automatically extracted audiovisual features are required, as these features are more objective, domain-independent, and more native to audiovisual content. This book is a useful guide for researchers, experts, students, and practitioners; it is a very valuable reference and can lead them through their exploration and research in multimedia content and the semantic web. The book is well organized, and introduces the concept of the semantic web and multimedia content analysis to the reader through a logical sequence from standards and hypotheses through system examples, presenting relevant tools and methods. But in some chapters readers will need a good technical background to understand some of the details. Readers may attain sufficient knowledge here to start projects or research related to the book's theme; recent results and articles related to the active research area of integrating multimedia with semantic web technologies are included. This book includes full descriptions of approaches to specific problem domains such as content search, indexing, and retrieval. This book will be very useful to researchers in the multimedia content analysis field who wish to explore the benefits of emerging semantic web technologies in applying multimedia content approaches. The first part of the book covers the definition of the two basic terms multimedia content and semantic web. The Moving Picture Experts Group standards MPEG7 and MPEG21 are quoted extensively. In addition, the means of multimedia content description are elaborated upon and schematically drawn. This extensive description is introduced by authors who are actively involved in those standards and have been participating in the work of the International Organization for Standardization (ISO)/MPEG for many years. On the other hand, this results in bias against the ad hoc or nonstandard tools for multimedia description in favor of the standard approaches. This is a general book for multimedia content; more emphasis on the general multimedia description and extraction could be provided.
    Semantic web technologies are explained, and ontology representation is emphasized. There is an excellent summary of the fundamental theory behind applying a knowledge-engineering approach to vision problems. This summary represents the concept of the semantic web and multimedia content analysis. A definition of the fuzzy knowledge representation that can be used for realization in multimedia content applications has been provided, with a comprehensive analysis. The second part of the book introduces the multimedia content analysis approaches and applications. In addition, some examples of methods applicable to multimedia content analysis are presented. Multimedia content analysis is a very diverse field and concerns many other research fields at the same time; this creates strong diversity issues, as everything from low-level features (e.g., colors, DCT coefficients, motion vectors, etc.) up to the very high and semantic level (e.g., Object, Events, Tracks, etc.) are involved. The second part includes topics on structure identification (e.g., shot detection for video sequences), and object-based video indexing. These conventional analysis methods are supplemented by results on semantic multimedia analysis, including three detailed chapters on the development and use of knowledge models for automatic multimedia analysis. Starting from object-based indexing and continuing with machine learning, these three chapters are very logically organized. Because of the diversity of this research field, including several chapters of recent research results is not sufficient to cover the state of the art of multimedia. The editors of the book should write an introductory chapter about multimedia content analysis approaches, basic problems, and technical issues and challenges, and try to survey the state of the art of the field and thus introduce the field to the reader.
    The final part of the book discusses research in multimedia content management systems and the semantic web, and presents examples and applications for semantic multimedia analysis in search and retrieval systems. These chapters describe example systems in which current projects have been implemented, and include extensive results and real demonstrations. For example, real case scenarios such as ECommerce medical applications and Web services have been introduced. Topics in natural language, speech and image processing techniques and their application for multimedia indexing, and content-based retrieval have been elaborated upon with extensive examples and deployment methods. The editors of the book themselves provide the readers with a chapter about their latest research results on knowledge-based multimedia content indexing and retrieval. Some interesting applications for multimedia content and the semantic web are introduced. Applications that have taken advantage of the metadata provided by MPEG7 in order to realize advance-access services for multimedia content have been provided. The applications discussed in the third part of the book provide useful guidance to researchers and practitioners properly planning to implement semantic multimedia analysis techniques in new research and development projects in both academia and industry. A fourth part should be added to this book: performance measurements for integrated approaches of multimedia analysis and the semantic web. Performance of the semantic approach is a very sophisticated issue and requires extensive elaboration and effort. Measuring the semantic search is an ongoing research area; several chapters concerning performance measurement and analysis would be required to adequately cover this area and introduce it to readers."
  4. Chu, H.: Information representation and retrieval in the digital age (2010) 0.01
    0.011954757 = product of:
      0.03586427 = sum of:
        0.03586427 = weight(_text_:systematic in 92) [ClassicSimilarity], result of:
          0.03586427 = score(doc=92,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.1262953 = fieldWeight in 92, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.015625 = fieldNorm(doc=92)
      0.33333334 = coord(1/3)
    
    Footnote
    Review in: JASIST 56(2005) no.2, pp. 215-216 (A. Heath): "What is small, thoroughly organized, and easy to understand? Well, it's Heting Chu's latest book on information retrieval. A very welcome release, this small literary addition to the field (only 248 pages) contains a concise and well-organized discussion of every major topic in information retrieval. The often-complex field of information retrieval is presented from its origin in the early 1950s to the present day. The organization of this text is top-notch, thus making this an easy read for even the novice. Unlike other titles in this area, Chu's user-friendly style of writing is done on purpose to properly introduce newcomers to the field in a less intimidating way. As stated by the author in the Preface, the purpose of the book is to "present a systematic, thorough yet nontechnical view of the field by using plain language to explain complex subjects." Chu has definitely struck up the right combination of ingredients. In a field so broad and complex, a well-organized presentation of topics that don't trip over themselves is essential. The use of plain language where possible is also a good choice for this topic because it allows one to absorb topics that are, by nature, not as easy to grasp. For instance, Chapters 6 and 7, which cover retrieval approaches and techniques, an often painstaking topic for many students and teachers, are deftly handled with the use of tables that can be used to compare and contrast the various models discussed. I particularly loved Chu's use of Koll's 2000 article from the Bulletin of the American Society for Information Science to explain subject searching at the beginning of Chapter 6, which discusses the differences between browsing and searching. The Koll article uses the task of finding a needle in a haystack as an analogy.
  5. Broughton, V.: Essential thesaurus construction (2006) 0.01
    0.011954757 = product of:
      0.03586427 = sum of:
        0.03586427 = weight(_text_:systematic in 2924) [ClassicSimilarity], result of:
          0.03586427 = score(doc=2924,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.1262953 = fieldWeight in 2924, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.015625 = fieldNorm(doc=2924)
      0.33333334 = coord(1/3)
    
    Abstract
    Many information professionals working in small units today fail to find the published tools for subject-based organization that are appropriate to their local needs, whether they are archivists, special librarians, information officers, or knowledge or content managers. Large established standards for document description and organization are too unwieldy, unnecessarily detailed, or too expensive to install and maintain. In other cases the available systems are insufficient for a specialist environment, or don't bring things together in a helpful way. A purpose-built, in-house system would seem to be the answer, but too often the skills necessary to create one are lacking. This practical text examines the criteria relevant to the selection of a subject-management system, describes the characteristics of some common types of subject tool, and takes the novice step by step through the process of creating a system for a specialist environment. The methodology employed is a standard technique for the building of a thesaurus that incidentally creates a compatible classification or taxonomy, both of which may be used in a variety of ways for document or information management. Key areas covered are: what a thesaurus is; tools for subject access and retrieval; what a thesaurus is used for; why use a thesaurus; examples of thesauri; the structure of a thesaurus; thesaural relationships; practical thesaurus construction; the vocabulary of the thesaurus; building the systematic structure; conversion to alphabetic format; forms of entry in the thesaurus; maintaining the thesaurus; thesaurus software; and the wider environment. Essential for the practising information professional, this guide is also valuable for students of library and information science.
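As a loose illustration of the construction steps listed above (thesaural relationships, building the systematic structure, conversion to alphabetic format), the sketch below models a handful of invented terms with the conventional BT/NT/RT/UF relationship labels and prints a simple alphabetical display; it is not an example from the book.

```python
from dataclasses import dataclass, field

@dataclass
class ThesaurusTerm:
    label: str
    broader: set = field(default_factory=set)    # BT (broader terms)
    narrower: set = field(default_factory=set)   # NT (narrower terms)
    related: set = field(default_factory=set)    # RT (related terms)
    used_for: set = field(default_factory=set)   # UF (non-preferred synonyms)

def add_hierarchy(terms, narrow, broad):
    """Record one hierarchical pair and keep BT/NT reciprocal."""
    terms[narrow].broader.add(broad)
    terms[broad].narrower.add(narrow)

def alphabetical_display(terms):
    """Convert the systematic structure into a simple alphabetical format."""
    lines = []
    for label in sorted(terms):
        t = terms[label]
        lines.append(label)
        for tag, rels in (("UF", t.used_for), ("BT", t.broader),
                          ("NT", t.narrower), ("RT", t.related)):
            lines.extend(f"  {tag} {rel}" for rel in sorted(rels))
    return "\n".join(lines)

# Invented sample vocabulary, purely for illustration.
terms = {name: ThesaurusTerm(name)
         for name in ("indexing", "subject indexing", "information retrieval")}
add_hierarchy(terms, "subject indexing", "indexing")
terms["indexing"].related.add("information retrieval")
terms["information retrieval"].related.add("indexing")
terms["indexing"].used_for.add("index creation")
print(alphabetical_display(terms))
```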
  6. Berry, M.W.; Browne, M.: Understanding search engines : mathematical modeling and text retrieval (1999) 0.01
    0.011375135 = product of:
      0.034125403 = sum of:
        0.034125403 = product of:
          0.068250805 = sum of:
            0.068250805 = weight(_text_:indexing in 5777) [ClassicSimilarity], result of:
              0.068250805 = score(doc=5777,freq=4.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.3588626 = fieldWeight in 5777, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5777)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This book discusses many of the key design issues for building search engines and emphasizes the important role that applied mathematics can play in improving information retrieval. The authors discuss not only important data structures, algorithms, and software but also user-centered issues such as interfaces, manual indexing, and document preparation. They also present some of the current problems in information retrieval that may not be familiar to applied mathematicians and computer scientists, and some of the driving computational methods (SVD, SDD) for automated conceptual indexing.
  7. Berry, M.W.; Browne, M.: Understanding search engines : mathematical modeling and text retrieval (2005) 0.01
    0.01072458 = product of:
      0.032173738 = sum of:
        0.032173738 = product of:
          0.064347476 = sum of:
            0.064347476 = weight(_text_:indexing in 7) [ClassicSimilarity], result of:
              0.064347476 = score(doc=7,freq=8.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.3383389 = fieldWeight in 7, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.03125 = fieldNorm(doc=7)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The second edition of Understanding Search Engines: Mathematical Modeling and Text Retrieval follows the basic premise of the first edition by discussing many of the key design issues for building search engines and emphasizing the important role that applied mathematics can play in improving information retrieval. The authors discuss important data structures, algorithms, and software as well as user-centered issues such as interfaces, manual indexing, and document preparation. Significant changes bring the text up to date on current information retrieval methods: for example the addition of a new chapter on link-structure algorithms used in search engines such as Google. The chapter on user interface has been rewritten to specifically focus on search engine usability. In addition the authors have added new recommendations for further reading and expanded the bibliography, and have updated and streamlined the index to make it more reader friendly.
    Content
    Contents (chapter overview):
      Introduction: Document File Preparation - Manual Indexing - Information Extraction - Vector Space Modeling - Matrix Decompositions - Query Representations - Ranking and Relevance Feedback - Searching by Link Structure - User Interface - Book Format
      Document File Preparation: Document Purification and Analysis - Text Formatting - Validation - Manual Indexing - Automatic Indexing - Item Normalization - Inverted File Structures - Document File - Dictionary List - Inversion List - Other File Structures
      Vector Space Models: Construction - Term-by-Document Matrices - Simple Query Matching - Design Issues - Term Weighting - Sparse Matrix Storage - Low-Rank Approximations
      Matrix Decompositions: QR Factorization - Singular Value Decomposition - Low-Rank Approximations - Query Matching - Software - Semidiscrete Decomposition - Updating Techniques
      Query Management: Query Binding - Types of Queries - Boolean Queries - Natural Language Queries - Thesaurus Queries - Fuzzy Queries - Term Searches - Probabilistic Queries
      Ranking and Relevance Feedback: Performance Evaluation - Precision - Recall - Average Precision - Genetic Algorithms - Relevance Feedback
      Searching by Link Structure: HITS Method - HITS Implementation - HITS Summary - PageRank Method - PageRank Adjustments - PageRank Implementation - PageRank Summary
      User Interface Considerations: General Guidelines - Search Engine Interfaces - Form Fill-in - Display Considerations - Progress Indication - No Penalties for Error - Results - Test and Retest - Final Considerations
      Further Reading
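The chapter overview above runs from term-by-document matrices through the singular value decomposition and low-rank approximations to query matching. Purely as a sketch of that pipeline (the toy corpus, raw term counts, and k = 2 are assumptions, not taken from the book), the following code builds a small term-by-document matrix, truncates its SVD, folds a query into the reduced space, and ranks the documents by cosine similarity.

```python
import numpy as np

docs = ["search engine ranking", "vector space retrieval model",
        "matrix decomposition for retrieval", "cooking pasta recipes"]
vocab = sorted({w for d in docs for w in d.split()})

# Term-by-document matrix A of raw counts (the book also discusses term weighting).
A = np.array([[d.split().count(t) for d in docs] for t in vocab], dtype=float)

# Rank-k approximation via the SVD: A ~ U_k S_k V_k^T, here with k = 2.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]

def rank_documents(query):
    q = np.array([query.split().count(t) for t in vocab], dtype=float)
    q_hat = (Uk.T @ q) / sk          # fold the query into the k-dimensional space
    d_hat = Vtk.T                    # one row of reduced coordinates per document
    sims = d_hat @ q_hat / (np.linalg.norm(d_hat, axis=1) * np.linalg.norm(q_hat) + 1e-12)
    return sorted(zip(docs, sims), key=lambda pair: -pair[1])

for doc, sim in rank_documents("retrieval model"):
    print(f"{sim:6.3f}  {doc}")
```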
  8. ¬The history and heritage of scientific and technological information systems : Proceedings of the 2002 Conference (2004) 0.01
    0.0080434345 = product of:
      0.024130303 = sum of:
        0.024130303 = product of:
          0.048260607 = sum of:
            0.048260607 = weight(_text_:indexing in 5897) [ClassicSimilarity], result of:
              0.048260607 = score(doc=5897,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.2537542 = fieldWeight in 5897, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5897)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Content
    Contains, among others, the contributions: Fugmann, R.: Learning the lessons of the past; Davis, C.H.: Indexing and index editing at Chemical Abstracts before the Registry System; Roe, E.M.: Abstracts and indexes to branded full text: what's in a name?; Lynch, M.F.: Introduction of computers in chemical structure information systems, or what is not recorded in the annals; Baatz, S.: Medical science and medical informatics: The visible human project, 1986-2000.
  9. Research and advanced technology for digital libraries : 8th European conference, ECDL 2004, Bath, UK, September 12-17, 2004 : proceedings (2004) 0.01
    0.0075834226 = product of:
      0.022750268 = sum of:
        0.022750268 = product of:
          0.045500536 = sum of:
            0.045500536 = weight(_text_:indexing in 2427) [ClassicSimilarity], result of:
              0.045500536 = score(doc=2427,freq=4.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.23924173 = fieldWeight in 2427, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2427)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This book constitutes the refereed proceedings of the 8th European Conference on Research and Advanced Technology for Digital Libraries, ECDL 2004, held in Bath, UK in September 2004. The 47 revised full papers presented were carefully reviewed and selected from a total of 148 submissions. The papers are organized in topical sections on digital library architectures, evaluation and usability, user interfaces and presentation, new approaches to information retrieval, interoperability, enhanced indexing and search methods, personalization and applications, music digital libraries, personal digital libraries, innovative technologies, open archive initiative, new models and tools, and user-centered design.
    Content
    Contents (selection, by topical section):
      Digital Library Architectures
      Evaluation and Usability
      User Interfaces and Presentation
      New Approaches to Information Retrieval - From Abstract to Virtual Entities: Implementation of Work-Based Searching in a Multimedia Digital Library / Mark Notess, Jenn Riley, and Harriette Hemmasi
      Interoperability
      Enhanced Indexing and Searching Methods
      Personalisation and Annotation
      Music Digital Libraries
      Personal Digital Libraries
      Innovative Technologies for Digital Libraries
      Open Archives Initiative
      New Models and Tools
      User-Centred Design - Evaluating Strategic Support for Information Access in the DAFFODIL System / Claus-Peter Klas, Norbert Fuhr, and Andre Schaefer
      Innovative Technologies for Digital Libraries
  10. Thissen, F.: Screen-Design-Handbuch : Effektiv informieren und kommunizieren mit Multimedia (2001) 0.01
    0.0067315903 = product of:
      0.02019477 = sum of:
        0.02019477 = product of:
          0.04038954 = sum of:
            0.04038954 = weight(_text_:22 in 1781) [ClassicSimilarity], result of:
              0.04038954 = score(doc=1781,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.23214069 = fieldWeight in 1781, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1781)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 3.2008 14:35:21
  11. Weller, K.: Knowledge representation in the Social Semantic Web (2010) 0.01
    0.0066354945 = product of:
      0.019906484 = sum of:
        0.019906484 = product of:
          0.039812967 = sum of:
            0.039812967 = weight(_text_:indexing in 4515) [ClassicSimilarity], result of:
              0.039812967 = score(doc=4515,freq=4.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.20933652 = fieldWeight in 4515, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4515)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The main purpose of this book is to sum up the vital and highly topical research issue of knowledge representation on the Web and to discuss novel solutions by combining the benefits of folksonomies and Web 2.0 approaches with ontologies and semantic technologies. The book contains an overview of knowledge representation approaches past, present and future, an introduction to ontologies and Web indexing, and, above all, novel approaches to developing ontologies. It combines aspects of knowledge representation for both the Semantic Web (ontologies) and the Web 2.0 (folksonomies); currently no monographic book provides a combined overview of these topics. A particular focus is the use of knowledge representation methods for document indexing purposes. For this purpose, considerations from classical librarian interests in knowledge representation (thesauri, classification schemes, etc.) are included; these do not appear in most other books on the subject, which have a stronger computer science background.
  12. Thissen, F.: Screen-Design-Manual : Communicating Effectively Through Multimedia (2003) 0.01
    0.005609659 = product of:
      0.016828977 = sum of:
        0.016828977 = product of:
          0.033657953 = sum of:
            0.033657953 = weight(_text_:22 in 1397) [ClassicSimilarity], result of:
              0.033657953 = score(doc=1397,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.19345059 = fieldWeight in 1397, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1397)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 3.2008 14:29:25
  13. Bleuel, J.: Online Publizieren im Internet : elektronische Zeitschriften und Bücher (1995) 0.01
    0.005609659 = product of:
      0.016828977 = sum of:
        0.016828977 = product of:
          0.033657953 = sum of:
            0.033657953 = weight(_text_:22 in 1708) [ClassicSimilarity], result of:
              0.033657953 = score(doc=1708,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.19345059 = fieldWeight in 1708, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1708)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 3.2008 16:15:37
  14. Research and advanced technology for digital libraries : 9th European conference, ECDL 2005, Vienna, Austria, September 18 - 23, 2005 ; proceedings (2005) 0.01
    0.00536229 = product of:
      0.016086869 = sum of:
        0.016086869 = product of:
          0.032173738 = sum of:
            0.032173738 = weight(_text_:indexing in 2423) [ClassicSimilarity], result of:
              0.032173738 = score(doc=2423,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.16916946 = fieldWeight in 2423, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2423)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Content
    Contents include: - Digital Library Models and Architectures - Multimedia and Hypermedia Digital Libraries - XML - Building Digital Libraries - User Studies - Digital Preservation - Metadata - Digital Libraries and e-Learning - Text Classification in Digital Libraries - Searching - - Focused Crawling Using Latent Semantic Indexing - An Application for Vertical Search Engines / George Almpanidis, Constantine Kotropoulos, Ioannis Pitas - - Active Support for Query Formulation in Virtual Digital Libraries: A Case Study with DAFFODIL / Andre Schaefer, Matthias Jordan, Claus-Peter Klas, Norbert Fuhr - - Expression of Z39.50 Supported Search Capabilities by Applying Formal Descriptions / Michalis Sfakakis, Sarantos Kapidakis - Text Digital Libraries
  15. Belew, R.K.: Finding out about : a cognitive perspective on search engine technology and the WWW (2001) 0.01
    0.00536229 = product of:
      0.016086869 = sum of:
        0.016086869 = product of:
          0.032173738 = sum of:
            0.032173738 = weight(_text_:indexing in 3346) [ClassicSimilarity], result of:
              0.032173738 = score(doc=3346,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.16916946 = fieldWeight in 3346, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3346)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The World Wide Web is rapidly filling with more text than anyone could have imagined even a short time ago, but the task of isolating relevant parts of this vast information has become just that much more daunting. Richard Belew brings a cognitive perspective to the study of information retrieval as a discipline within computer science. He introduces the idea of Finding Out About (FOA) as the process of actively seeking out information relevant to a topic of interest and describes its many facets - ranging from creating a good characterization of what the user seeks, to what documents actually mean, to methods of inferring semantic clues about each document, to the problem of evaluating whether our search engines are performing as we have intended. Finding Out About explains how to build the tools that are useful for searching collections of text and other media. In the process it takes a close look at the properties of textual documents that do not become clear until very large collections of them are brought together and shows that the construction of effective search engines requires knowledge of the statistical and mathematical properties of linguistic phenomena, as well as an appreciation for the cognitive foundation we bring to the task as language users. The unique approach of this book is its even handling of the phenomena of both numbers and words, making it accessible to a wide audience. The textbook is usable in both undergraduate and graduate classes on information retrieval, library science, and computational linguistics. The text is accompanied by a CD-ROM that contains a hypertext version of the book, including additional topics and notes not present in the printed edition. In addition, the CD contains the full text of C.J. "Keith" van Rijsbergen's famous textbook, Information Retrieval (now out of print). Many active links from Belew's to van Rijsbergen's hypertexts help to unite the material. Several test corpora and indexing tools are provided, to support the design of your own search engine. Additional exercises using these corpora and code are available to instructors. Also supporting this book is a Web site that will include recent additions to the book, as well as links to sites of new topics and methods.
  16. Medienkompetenz : wie lehrt und lernt man Medienkompetenz? (2003) 0.00
    0.0044877273 = product of:
      0.013463181 = sum of:
        0.013463181 = product of:
          0.026926363 = sum of:
            0.026926363 = weight(_text_:22 in 2249) [ClassicSimilarity], result of:
              0.026926363 = score(doc=2249,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.15476047 = fieldWeight in 2249, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2249)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 3.2008 18:05:16
  17. Research and advanced technology for digital libraries : 10th European conference ; proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006 ; proceedings (2006) 0.00
    0.0044877273 = product of:
      0.013463181 = sum of:
        0.013463181 = product of:
          0.026926363 = sum of:
            0.026926363 = weight(_text_:22 in 2428) [ClassicSimilarity], result of:
              0.026926363 = score(doc=2428,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.15476047 = fieldWeight in 2428, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2428)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  18. Langville, A.N.; Meyer, C.D.: Google's PageRank and beyond : the science of search engine rankings (2006) 0.00
    0.0040217172 = product of:
      0.012065152 = sum of:
        0.012065152 = product of:
          0.024130303 = sum of:
            0.024130303 = weight(_text_:indexing in 6) [ClassicSimilarity], result of:
              0.024130303 = score(doc=6,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.1268771 = fieldWeight in 6, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=6)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Content
    Contents:
      Chapter 1. Introduction to Web Search Engines: 1.1 A Short History of Information Retrieval - 1.2 An Overview of Traditional Information Retrieval - 1.3 Web Information Retrieval
      Chapter 2. Crawling, Indexing, and Query Processing: 2.1 Crawling - 2.2 The Content Index - 2.3 Query Processing
      Chapter 3. Ranking Webpages by Popularity: 3.1 The Scene in 1998 - 3.2 Two Theses - 3.3 Query-Independence
      Chapter 4. The Mathematics of Google's PageRank: 4.1 The Original Summation Formula for PageRank - 4.2 Matrix Representation of the Summation Equations - 4.3 Problems with the Iterative Process - 4.4 A Little Markov Chain Theory - 4.5 Early Adjustments to the Basic Model - 4.6 Computation of the PageRank Vector - 4.7 Theorem and Proof for Spectrum of the Google Matrix
      Chapter 5. Parameters in the PageRank Model: 5.1 The alpha Factor - 5.2 The Hyperlink Matrix H - 5.3 The Teleportation Matrix E
      Chapter 6. The Sensitivity of PageRank: 6.1 Sensitivity with respect to alpha - 6.2 Sensitivity with respect to H - 6.3 Sensitivity with respect to vT - 6.4 Other Analyses of Sensitivity - 6.5 Sensitivity Theorems and Proofs
      Chapter 7. The PageRank Problem as a Linear System: 7.1 Properties of (I - alphaS) - 7.2 Properties of (I - alphaH) - 7.3 Proof of the PageRank Sparse Linear System
      Chapter 8. Issues in Large-Scale Implementation of PageRank: 8.1 Storage Issues - 8.2 Convergence Criterion - 8.3 Accuracy - 8.4 Dangling Nodes - 8.5 Back Button Modeling
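Chapter 4 above moves from the PageRank summation formula to its matrix form and the power-method computation of the PageRank vector. The sketch below is only a rough illustration of that computation (the four-page link graph and alpha = 0.85 are assumptions, not an example from the book): it builds a column-stochastic hyperlink matrix, repairs dangling pages, and iterates the Google matrix G = alpha*S + (1 - alpha)(1/n)ee^T to a stationary ranking vector.

```python
import numpy as np

# Hypothetical 4-page web; the link structure and alpha are illustrative only.
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n, alpha = 4, 0.85

# Column-stochastic hyperlink matrix, with any dangling page's probability
# spread uniformly over all pages (yielding the stochastic matrix S).
S = np.zeros((n, n))
for page, outlinks in links.items():
    if outlinks:
        S[outlinks, page] = 1.0 / len(outlinks)
    else:
        S[:, page] = 1.0 / n

# Power iteration on the Google matrix; since pi sums to 1, the teleportation
# term reduces to the constant (1 - alpha) / n added to every component.
pi = np.full(n, 1.0 / n)
for _ in range(100):
    pi = alpha * (S @ pi) + (1.0 - alpha) / n
print(pi / pi.sum())     # stationary PageRank vector, components sum to 1
```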
  19. Information visualization in data mining and knowledge discovery (2002) 0.00
    0.0022438637 = product of:
      0.0067315907 = sum of:
        0.0067315907 = product of:
          0.013463181 = sum of:
            0.013463181 = weight(_text_:22 in 1789) [ClassicSimilarity], result of:
              0.013463181 = score(doc=1789,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.07738023 = fieldWeight in 1789, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1789)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    23. 3.2008 19:10:22

Languages

  • e 16
  • d 3

Types

  • m 19
  • s 9
