Search (47 results, page 1 of 3)

  • classification_ss:"06.74 / Informationssysteme"
  1. Floridi, L.: Philosophy and computing : an introduction (1999) 0.08
    0.08036062 = product of:
      0.20090155 = sum of:
        0.18597113 = weight(_text_:philosophy in 823) [ClassicSimilarity], result of:
          0.18597113 = score(doc=823,freq=14.0), product of:
            0.23055021 = queryWeight, product of:
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.04177434 = queryNorm
            0.80664045 = fieldWeight in 823, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.0390625 = fieldNorm(doc=823)
        0.014930432 = weight(_text_:of in 823) [ClassicSimilarity], result of:
          0.014930432 = score(doc=823,freq=14.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.22855641 = fieldWeight in 823, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=823)
      0.4 = coord(2/5)
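The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output: tf = sqrt(termFreq), idf = ln(maxDocs / (docFreq + 1)) + 1, fieldWeight = tf * idf * fieldNorm, and each term's score is queryWeight * fieldWeight. A minimal Python sketch that reproduces the `philosophy` leg of the score for doc 823 from the figures shown:

```python
import math

# Values taken directly from the explain output for weight(_text_:philosophy in 823)
freq, doc_freq, max_docs = 14.0, 481, 44218
query_norm, field_norm = 0.04177434, 0.0390625

tf = math.sqrt(freq)                             # 3.7416575 = tf(freq=14.0)
idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 5.5189433 = idf(docFreq=481, maxDocs=44218)
query_weight = idf * query_norm                  # 0.23055021
field_weight = tf * idf * field_norm             # 0.80664045 = fieldWeight in 823
score = query_weight * field_weight              # 0.18597113
```

Summing both term scores (0.18597113 + 0.014930432 = 0.20090155) and applying coord(2/5) = 0.4, because only two of five query terms matched, yields the displayed 0.08036062.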
    
    Abstract
    Philosophy and Computing explores each of the following areas of technology: the digital revolution; the computer; the Internet and the Web; CD-ROMs and Multimedia; databases, textbases, and hypertexts; Artificial Intelligence; the future of computing. Luciano Floridi shows us how the relationship between philosophy and computing provokes a wide range of philosophical questions: is there a philosophy of information? What can be achieved by a classic computer? How can we define complexity? What are the limits of quantum computers? Is the Internet an intellectual space or a polluted environment? What is the paradox in the Strong Artificial Intelligence program? Philosophy and Computing is essential reading for anyone wishing to fully understand both the development and history of information and communication technology as well as the philosophical issues it ultimately raises.
    LCSH
    Computer science / Philosophy
    Subject
    Computer science / Philosophy
  2. Hars, A.: From publishing to knowledge networks : reinventing online knowledge infrastructures (2003) 0.05
    0.04529146 = product of:
      0.11322865 = sum of:
        0.09940575 = weight(_text_:philosophy in 1634) [ClassicSimilarity], result of:
          0.09940575 = score(doc=1634,freq=4.0), product of:
            0.23055021 = queryWeight, product of:
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.04177434 = queryNorm
            0.43116745 = fieldWeight in 1634, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1634)
        0.013822895 = weight(_text_:of in 1634) [ClassicSimilarity], result of:
          0.013822895 = score(doc=1634,freq=12.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.21160212 = fieldWeight in 1634, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1634)
      0.4 = coord(2/5)
    
    Abstract
    Today's publishing infrastructure is rapidly changing. As electronic journals, digital libraries, collaboratories, logic servers, and other knowledge infrastructures emerge on the internet, the key aspects of this transformation need to be identified. Knowledge is becoming increasingly dynamic and integrated. Instead of writing self-contained articles, authors are turning to the new practice of embedding their findings into dynamic networks of knowledge. Here, the author details the implications that this transformation is having on the creation, dissemination and organization of academic knowledge. The author shows that many established publishing principles need to be given up in order to facilitate this transformation. The text provides valuable insights for knowledge managers, designers of internet-based knowledge infrastructures, and professionals in the publishing industry. Researchers will find the scenarios and implications for research processes stimulating and thought-provoking.
    LCSH
    Science / Philosophy
    Subject
    Science / Philosophy
  3. Langville, A.N.; Meyer, C.D.: Google's PageRank and beyond : the science of search engine rankings (2006) 0.04
    0.03760769 = product of:
      0.06267948 = sum of:
        0.004989027 = product of:
          0.024945134 = sum of:
            0.024945134 = weight(_text_:problem in 6) [ClassicSimilarity], result of:
              0.024945134 = score(doc=6,freq=2.0), product of:
                0.17731056 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.04177434 = queryNorm
                0.14068612 = fieldWeight in 6, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=6)
          0.2 = coord(1/5)
        0.042174287 = weight(_text_:philosophy in 6) [ClassicSimilarity], result of:
          0.042174287 = score(doc=6,freq=2.0), product of:
            0.23055021 = queryWeight, product of:
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.04177434 = queryNorm
            0.18292886 = fieldWeight in 6, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.0234375 = fieldNorm(doc=6)
        0.01551616 = weight(_text_:of in 6) [ClassicSimilarity], result of:
          0.01551616 = score(doc=6,freq=42.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.23752278 = fieldWeight in 6, product of:
              6.4807405 = tf(freq=42.0), with freq of:
                42.0 = termFreq=42.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0234375 = fieldNorm(doc=6)
      0.6 = coord(3/5)
    
    Abstract
    Why doesn't your home page appear on the first page of search results, even when you query your own name? How do other Web pages always appear at the top? What creates these powerful rankings, and how? The first book ever about the science of Web page rankings, "Google's PageRank and Beyond" supplies the answers to these and other questions. The book serves two very different audiences: the curious science reader and the technical computational reader. The chapters build in mathematical sophistication, so that the first five are accessible to the general academic reader. While other chapters are much more mathematical in nature, each one contains something for both audiences. For example, the authors include entertaining asides such as how search engines make money and how the Great Firewall of China influences research. The book includes an extensive background chapter designed to help readers learn more about the mathematics of search engines, and it contains several MATLAB codes and links to sample Web data sets. The philosophy throughout is to encourage readers to experiment with the ideas and algorithms in the text. Any business seriously interested in improving its rankings in the major search engines can benefit from the clear examples, sample code, and list of resources provided. It includes: many illustrative examples and entertaining asides; MATLAB code; an accessible and informal style; and a complete and self-contained section for mathematics review.
    Content
    Contents: Chapter 1. Introduction to Web Search Engines: 1.1 A Short History of Information Retrieval - 1.2 An Overview of Traditional Information Retrieval - 1.3 Web Information Retrieval Chapter 2. Crawling, Indexing, and Query Processing: 2.1 Crawling - 2.2 The Content Index - 2.3 Query Processing Chapter 3. Ranking Webpages by Popularity: 3.1 The Scene in 1998 - 3.2 Two Theses - 3.3 Query-Independence Chapter 4. The Mathematics of Google's PageRank: 4.1 The Original Summation Formula for PageRank - 4.2 Matrix Representation of the Summation Equations - 4.3 Problems with the Iterative Process - 4.4 A Little Markov Chain Theory - 4.5 Early Adjustments to the Basic Model - 4.6 Computation of the PageRank Vector - 4.7 Theorem and Proof for Spectrum of the Google Matrix Chapter 5. Parameters in the PageRank Model: 5.1 The alpha Factor - 5.2 The Hyperlink Matrix H - 5.3 The Teleportation Matrix E Chapter 6. The Sensitivity of PageRank: 6.1 Sensitivity with respect to alpha - 6.2 Sensitivity with respect to H - 6.3 Sensitivity with respect to v^T - 6.4 Other Analyses of Sensitivity - 6.5 Sensitivity Theorems and Proofs Chapter 7. The PageRank Problem as a Linear System: 7.1 Properties of (I - alpha S) - 7.2 Properties of (I - alpha H) - 7.3 Proof of the PageRank Sparse Linear System Chapter 8. Issues in Large-Scale Implementation of PageRank: 8.1 Storage Issues - 8.2 Convergence Criterion - 8.3 Accuracy - 8.4 Dangling Nodes - 8.5 Back Button Modeling
    Chapter 9. Accelerating the Computation of PageRank: 9.1 An Adaptive Power Method - 9.2 Extrapolation - 9.3 Aggregation - 9.4 Other Numerical Methods Chapter 10. Updating the PageRank Vector: 10.1 The Two Updating Problems and their History - 10.2 Restarting the Power Method - 10.3 Approximate Updating Using Approximate Aggregation - 10.4 Exact Aggregation - 10.5 Exact vs. Approximate Aggregation - 10.6 Updating with Iterative Aggregation - 10.7 Determining the Partition - 10.8 Conclusions Chapter 11. The HITS Method for Ranking Webpages: 11.1 The HITS Algorithm - 11.2 HITS Implementation - 11.3 HITS Convergence - 11.4 HITS Example - 11.5 Strengths and Weaknesses of HITS - 11.6 HITS's Relationship to Bibliometrics - 11.7 Query-Independent HITS - 11.8 Accelerating HITS - 11.9 HITS Sensitivity Chapter 12. Other Link Methods for Ranking Webpages: 12.1 SALSA - 12.2 Hybrid Ranking Methods - 12.3 Rankings based on Traffic Flow Chapter 13. The Future of Web Information Retrieval: 13.1 Spam - 13.2 Personalization - 13.3 Clustering - 13.4 Intelligent Agents - 13.5 Trends and Time-Sensitive Search - 13.6 Privacy and Censorship - 13.7 Library Classification Schemes - 13.8 Data Fusion Chapter 14. Resources for Web Information Retrieval: 14.1 Resources for Getting Started - 14.2 Resources for Serious Study Chapter 15. The Mathematics Guide: 15.1 Linear Algebra - 15.2 Perron-Frobenius Theory - 15.3 Markov Chains - 15.4 Perron Complementation - 15.5 Stochastic Complementation - 15.6 Censoring - 15.7 Aggregation - 15.8 Disaggregation
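The basic model of Chapter 4, computed with the power method of section 4.6 and the damping parameter alpha of Chapter 5, fits in a few lines. A toy pure-Python sketch (not the book's MATLAB code; the graph and function name are invented for illustration), where `links[i]` lists the pages that page i links to and dangling nodes teleport uniformly:

```python
def pagerank(links, alpha=0.85, tol=1e-12, max_iter=1000):
    """Power-method PageRank on a small link graph (illustrative sketch)."""
    n = len(links)
    pr = [1.0 / n] * n                       # start from the uniform vector
    for _ in range(max_iter):
        new = [(1.0 - alpha) / n] * n        # teleportation mass for every page
        for i, outs in enumerate(links):
            if outs:                         # spread rank evenly over out-links
                share = alpha * pr[i] / len(outs)
                for j in outs:
                    new[j] += share
            else:                            # dangling node: spread uniformly
                for j in range(n):
                    new[j] += alpha * pr[i] / n
        if sum(abs(a - b) for a, b in zip(new, pr)) < tol:
            pr = new
            break
        pr = new
    return pr

# 3-page web: page 0 -> {1, 2}, page 1 -> {2}, page 2 -> {0}
ranks = pagerank([[1, 2], [2], [0]])
```

On the 3-page example the result stays a probability distribution (the entries sum to 1), and page 2, which receives two in-links, outranks page 1.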
  4. Multimedia content and the Semantic Web : methods, standards, and tools (2005) 0.02
    0.01957214 = product of:
      0.032620233 = sum of:
        0.004157522 = product of:
          0.020787612 = sum of:
            0.020787612 = weight(_text_:problem in 150) [ClassicSimilarity], result of:
              0.020787612 = score(doc=150,freq=2.0), product of:
                0.17731056 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.04177434 = queryNorm
                0.11723843 = fieldWeight in 150, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=150)
          0.2 = coord(1/5)
        0.01620878 = weight(_text_:of in 150) [ClassicSimilarity], result of:
          0.01620878 = score(doc=150,freq=66.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.2481255 = fieldWeight in 150, product of:
              8.124039 = tf(freq=66.0), with freq of:
                66.0 = termFreq=66.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.01953125 = fieldNorm(doc=150)
        0.012253928 = product of:
          0.024507856 = sum of:
            0.024507856 = weight(_text_:22 in 150) [ClassicSimilarity], result of:
              0.024507856 = score(doc=150,freq=6.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.16753313 = fieldWeight in 150, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=150)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Classification
    006.7 22
    Date
    7. 3.2007 19:30:22
    DDC
    006.7 22
    Footnote
    Rez. in: JASIST 58(2007) no.3, S.457-458 (A.M.A. Ahmad): "The concept of the semantic web has emerged because search engines and text-based searching are no longer adequate, as these approaches involve an extensive information retrieval process. The deployed searching and retrieving descriptors are naturally subjective and their deployment is often restricted to the specific application domain for which the descriptors were configured. The new era of information technology imposes different kinds of requirements and challenges. Automatic extracted audiovisual features are required, as these features are more objective, domain-independent, and more native to audiovisual content. This book is a useful guide for researchers, experts, students, and practitioners; it is a very valuable reference and can lead them through their exploration and research in multimedia content and the semantic web. The book is well organized, and introduces the concept of the semantic web and multimedia content analysis to the reader through a logical sequence from standards and hypotheses through system examples, presenting relevant tools and methods. But in some chapters readers will need a good technical background to understand some of the details. Readers may attain sufficient knowledge here to start projects or research related to the book's theme; recent results and articles related to the active research area of integrating multimedia with semantic web technologies are included. This book includes full descriptions of approaches to specific problem domains such as content search, indexing, and retrieval. This book will be very useful to researchers in the multimedia content analysis field who wish to explore the benefits of emerging semantic web technologies in applying multimedia content approaches. The first part of the book covers the definition of the two basic terms multimedia content and semantic web.
The Moving Picture Experts Group standards MPEG7 and MPEG21 are quoted extensively. In addition, the means of multimedia content description are elaborated upon and schematically drawn. This extensive description is introduced by authors who are actively involved in those standards and have been participating in the work of the International Organization for Standardization (ISO)/MPEG for many years. On the other hand, this results in bias against the ad hoc or nonstandard tools for multimedia description in favor of the standard approaches. This is a general book for multimedia content; more emphasis on the general multimedia description and extraction could be provided.
    Semantic web technologies are explained, and ontology representation is emphasized. There is an excellent summary of the fundamental theory behind applying a knowledge-engineering approach to vision problems. This summary represents the concept of the semantic web and multimedia content analysis. A definition of the fuzzy knowledge representation that can be used for realization in multimedia content applications has been provided, with a comprehensive analysis. The second part of the book introduces the multimedia content analysis approaches and applications. In addition, some examples of methods applicable to multimedia content analysis are presented. Multimedia content analysis is a very diverse field and concerns many other research fields at the same time; this creates strong diversity issues, as everything from low-level features (e.g., colors, DCT coefficients, motion vectors, etc.) up to the very high and semantic level (e.g., Object, Events, Tracks, etc.) are involved. The second part includes topics on structure identification (e.g., shot detection for video sequences), and object-based video indexing. These conventional analysis methods are supplemented by results on semantic multimedia analysis, including three detailed chapters on the development and use of knowledge models for automatic multimedia analysis. Starting from object-based indexing and continuing with machine learning, these three chapters are very logically organized. Because of the diversity of this research field, including several chapters of recent research results is not sufficient to cover the state of the art of multimedia. The editors of the book should write an introductory chapter about multimedia content analysis approaches, basic problems, and technical issues and challenges, and try to survey the state of the art of the field and thus introduce the field to the reader.
    The final part of the book discusses research in multimedia content management systems and the semantic web, and presents examples and applications for semantic multimedia analysis in search and retrieval systems. These chapters describe example systems in which current projects have been implemented, and include extensive results and real demonstrations. For example, real case scenarios such as e-commerce, medical applications, and Web services have been introduced. Topics in natural language, speech and image processing techniques and their application for multimedia indexing, and content-based retrieval have been elaborated upon with extensive examples and deployment methods. The editors of the book themselves provide the readers with a chapter about their latest research results on knowledge-based multimedia content indexing and retrieval. Some interesting applications for multimedia content and the semantic web are introduced. Applications that have taken advantage of the metadata provided by MPEG7 in order to realize advance-access services for multimedia content have been provided. The applications discussed in the third part of the book provide useful guidance to researchers and practitioners planning to implement semantic multimedia analysis techniques in new research and development projects in both academia and industry. A fourth part should be added to this book: performance measurements for integrated approaches of multimedia analysis and the semantic web. Performance of the semantic approach is a very sophisticated issue and requires extensive elaboration and effort. Measuring the semantic search is an ongoing research area; several chapters concerning performance measurement and analysis would be required to adequately cover this area and introduce it to readers."
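Structure identification of the kind the reviewer describes, e.g. shot-boundary detection over low-level features, is classically done by thresholding the distance between grey-level histograms of consecutive frames. A hypothetical minimal sketch (the frame data, bin count, and threshold are invented for illustration):

```python
def histogram(frame, bins=8, levels=256):
    """Coarse grey-level histogram of one frame, normalised to sum to 1."""
    h = [0] * bins
    for px in frame:
        h[px * bins // levels] += 1
    total = len(frame)
    return [count / total for count in h]

def shot_boundaries(frames, threshold=0.5):
    """Indices where the L1 histogram distance between consecutive frames
    jumps above the threshold, i.e. candidate hard cuts."""
    cuts = []
    prev = histogram(frames[0])
    for i in range(1, len(frames)):
        cur = histogram(frames[i])
        if sum(abs(a - b) for a, b in zip(prev, cur)) > threshold:
            cuts.append(i)
        prev = cur
    return cuts

# Two dark frames, then a hard cut to two bright frames (64 pixels each)
frames = [[10] * 64, [12] * 64, [240] * 64, [242] * 64]
```

`shot_boundaries(frames)` reports the hard cut at frame index 2, where the histogram mass jumps from the darkest bin to the brightest one; gradual transitions (fades, dissolves) need more elaborate models, which is exactly why the book's knowledge-based chapters go further.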
  5. Levy, S.: In the plex : how Google thinks, works, and shapes our lives (2011) 0.02
    0.016456759 = product of:
      0.041141897 = sum of:
        0.009676025 = weight(_text_:of in 9) [ClassicSimilarity], result of:
          0.009676025 = score(doc=9,freq=12.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.14812148 = fieldWeight in 9, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.02734375 = fieldNorm(doc=9)
        0.031465873 = product of:
          0.062931746 = sum of:
            0.062931746 = weight(_text_:mind in 9) [ClassicSimilarity], result of:
              0.062931746 = score(doc=9,freq=2.0), product of:
                0.2607373 = queryWeight, product of:
                  6.241566 = idf(docFreq=233, maxDocs=44218)
                  0.04177434 = queryNorm
                0.24136074 = fieldWeight in 9, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.241566 = idf(docFreq=233, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=9)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Few companies in history have ever been as successful and as admired as Google, the company that has transformed the Internet and become an indispensable part of our lives. How has Google done it? Veteran technology reporter Steven Levy was granted unprecedented access to the company, and in this revelatory book he takes readers inside Google headquarters-the Googleplex-to show how Google works. While they were still students at Stanford, Google cofounders Larry Page and Sergey Brin revolutionized Internet search. They followed this brilliant innovation with another, as two of Google's earliest employees found a way to do what no one else had: make billions of dollars from Internet advertising. With this cash cow (until Google's IPO nobody other than Google management had any idea how lucrative the company's ad business was), Google was able to expand dramatically and take on other transformative projects: more efficient data centers, open-source cell phones, free Internet video (YouTube), cloud computing, digitizing books, and much more. The key to Google's success in all these businesses, Levy reveals, is its engineering mind-set and adoption of such Internet values as speed, openness, experimentation, and risk taking. With its unapologetically elitist approach to hiring, Google pampers its engineers-free food and dry cleaning, on-site doctors and masseuses-and gives them all the resources they need to succeed. Even today, with a workforce of more than 23,000, Larry Page signs off on every hire. But has Google lost its innovative edge? It stumbled badly in China-Levy discloses what went wrong and how Brin disagreed with his peers on the China strategy-and now with its newest initiative, social networking, Google is chasing a successful competitor for the first time. Some employees are leaving the company for smaller, nimbler start-ups. Can the company that famously decided not to be evil still compete? No other book has ever turned Google inside out as Levy does with In the Plex.
    Content
    The world according to Google: biography of a search engine -- Googlenomics: cracking the code on internet profits -- Don't be evil: how Google built its culture -- Google's cloud: how Google built data centers and killed the hard drive -- Outside the box: the Google phone company and the Google TV company -- Guge: Google's moral dilemma in China -- Google.gov: is what's good for Google good for government or the public? -- Epilogue: chasing tail lights: trying to crack the social code.
  6. Chu, H.: Information representation and retrieval in the digital age (2010) 0.01
    0.01400202 = product of:
      0.035005048 = sum of:
        0.009576782 = weight(_text_:of in 92) [ClassicSimilarity], result of:
          0.009576782 = score(doc=92,freq=36.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.14660224 = fieldWeight in 92, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.015625 = fieldNorm(doc=92)
        0.025428267 = product of:
          0.050856534 = sum of:
            0.050856534 = weight(_text_:mind in 92) [ClassicSimilarity], result of:
              0.050856534 = score(doc=92,freq=4.0), product of:
                0.2607373 = queryWeight, product of:
                  6.241566 = idf(docFreq=233, maxDocs=44218)
                  0.04177434 = queryNorm
                0.19504894 = fieldWeight in 92, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.241566 = idf(docFreq=233, maxDocs=44218)
                  0.015625 = fieldNorm(doc=92)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Content
    Information representation and retrieval : an overview -- Information representation I : basic approaches -- Information representation II : related topics -- Language in information representation and retrieval -- Retrieval techniques and query representation -- Retrieval approaches -- Information retrieval models -- Information retrieval systems -- Retrieval of information unique in content or format -- The user dimension in information representation and retrieval -- Evaluation of information representation and retrieval -- Artificial intelligence in information representation and retrieval.
    Footnote
    Rez. in: JASIST 56(2005) no.2, S.215-216 (A. Heath): "What is small, thoroughly organized, and easy to understand? Well, it's Heting Chu's latest book on information retrieval. A very welcome release, this small literary addition to the field (only 248 pages) contains a concise and well-organized discussion of every major topic in information retrieval. The often-complex field of information retrieval is presented from its origin in the early 1950s to the present day. The organization of this text is top-notch, thus making this an easy read for even the novice. Unlike other titles in this area, Chu's user-friendly style of writing is done on purpose to properly introduce newcomers to the field in a less intimidating way. As stated by the author in the Preface, the purpose of the book is to "present a systematic, thorough yet nontechnical view of the field by using plain language to explain complex subjects." Chu has definitely struck up the right combination of ingredients. In a field so broad and complex, a well-organized presentation of topics that don't trip over themselves is essential. The use of plain language where possible is also a good choice for this topic because it allows one to absorb topics that are, by nature, not as easy to grasp. For instance, Chapters 6 and 7, which cover retrieval approaches and techniques, an often painstaking topic for many students and teachers, are deftly handled with the use of tables that can be used to compare and contrast the various models discussed. I particularly loved Chu's use of Koll's 2000 article from the Bulletin of the American Society for Information Science to explain subject searching at the beginning of Chapter 6, which discusses the differences between browsing and searching. The Koll article uses the task of finding a needle in a haystack as an analogy.
    Chu's intent with this book is clear throughout the entire text. With this presentation, she writes with the novice in mind or, as she puts it in the Preface, "to anyone who is interested in learning about the field, particularly those who are new to it." After reading the text, I found that this book is also an appropriate reference book for those who are somewhat advanced in the field. I found the chapters on information retrieval models and techniques, metadata, and AI very informative in that they contain information that is often rather densely presented in other texts. Although, I must say, the metadata section in Chapter 3 is pretty basic and contains more questions about the area than information. . . . It is an excellent book to have in the classroom, on your bookshelf, etc. It reads very well and is written with the reader in mind. If you are in need of a more advanced or technical text on the subject, this is not the book for you. But, if you are looking for a comprehensive manual that can be used as a "flip-through," then you are in luck."
    Further review in: nfd 55(2004) H.4, S.252 (D. Lewandowski): "There is no shortage of books on information retrieval, and several titles are available in German as well. Nevertheless, a new (English-language) book on the topic is reviewed here. It stands out for its brevity (only about 230 pages of text) and its clarity, which makes it particularly suitable for students in their first semesters. Heting Chu has taught at the Palmer School of Library and Information Science of Long Island University, New York, since 1994. The book clearly benefits from the experience the author has gained presenting the material in her information retrieval courses. It is written in clear, understandable language and introduces the foundations of knowledge representation and information retrieval. The textbook treats these topics as a single complex and thus goes beyond the scope of similar books, which as a rule confine themselves to retrieval. The book is divided into twelve chapters; the first gives an overview of the topics to be covered and introduces the reader in a simple way to the basic concepts and the history of IRR. Besides a brief chronological account of the development of IRR systems, four pioneers of the field are honoured: Mortimer Taube, Hans Peter Luhn, Calvin N. Mooers, and Gerard Salton. This lends a human dimension to material that students sometimes find dry. The second and third chapters are devoted to knowledge representation, first covering basic approaches such as indexing, classification, and abstracting, followed by knowledge representation by means of metadata, with particular attention to newer approaches such as Dublin Core and RDF. Further subchapters address the representation of full texts and of multimedia information.
    The role of language in IRR is treated in a chapter of its own. Various forms of controlled vocabulary and the essential features that distinguish it from natural language are briefly explained, and the suitability of the two forms of representation for different IRR purposes is discussed from several angles.
  7. Thissen, F.: Screen-Design-Manual : Communicating Effectively Through Multimedia (2003) 0.01
    0.012797958 = product of:
      0.031994894 = sum of:
        0.017845279 = weight(_text_:of in 1397) [ClassicSimilarity], result of:
          0.017845279 = score(doc=1397,freq=20.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.27317715 = fieldWeight in 1397, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1397)
        0.0141496165 = product of:
          0.028299233 = sum of:
            0.028299233 = weight(_text_:22 in 1397) [ClassicSimilarity], result of:
              0.028299233 = score(doc=1397,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.19345059 = fieldWeight in 1397, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1397)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
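The nested breakdowns attached to each hit are Lucene "explain" output for its ClassicSimilarity (TF-IDF) scoring. As a minimal sketch of that arithmetic (helper names are mine, not Lucene's API), the constants displayed above for entry 7 reproduce its total score:

```python
import math

# ClassicSimilarity building blocks, matching the labels in the explain tree:
#   tf(freq)    = sqrt(freq)
#   idf(df, N)  = 1 + ln(N / (df + 1))
#   queryWeight = idf * queryNorm
#   fieldWeight = tf * idf * fieldNorm
#   term score  = queryWeight * fieldWeight, scaled by coord factors

def tf(freq: float) -> float:
    return math.sqrt(freq)

def idf(doc_freq: int, max_docs: int) -> float:
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm, coord=1.0):
    i = idf(doc_freq, max_docs)
    return coord * (i * query_norm) * (tf(freq) * i * field_norm)

QUERY_NORM, MAX_DOCS, FIELD_NORM = 0.04177434, 44218, 0.0390625

s_of = term_score(20, 25162, MAX_DOCS, QUERY_NORM, FIELD_NORM)           # _text_:of
s_22 = term_score(2, 3622, MAX_DOCS, QUERY_NORM, FIELD_NORM, coord=0.5)  # _text_:22, coord(1/2)
score = 0.4 * (s_of + s_22)  # top-level coord(2/5); close to the 0.012797958 shown above
```

The same helpers reproduce the other breakdowns on this page, since every entry shares the same queryNorm and maxDocs.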
    
    Abstract
    The "Screen Design Manual" provides designers of interactive media with a practical working guide for preparing and presenting information that is suitable for both their target groups and the media they are using. It describes background information and relationships, clarifies them with the help of examples, and encourages further development of the language of digital media. In addition to the basics of the psychology of perception and learning, ergonomics, communication theory, imagery research, and aesthetics, the book also explores the design of navigation and orientation elements. Guidelines and checklists, along with the unique presentation of the book, support the application of information in practice.
    Content
    From the contents: Basics of screen design - Navigation and orientation - Information - Screen layout - Interaction - Motivation - Innovative prospects - Appendix - Glossary - Literature - Index
    Date
    22. 3.2008 14:29:25
  8. Belew, R.K.: Finding out about : a cognitive perspective on search engine technology and the WWW (2001) 0.01
    0.011321201 = product of:
      0.028303001 = sum of:
        0.0066520358 = product of:
          0.033260178 = sum of:
            0.033260178 = weight(_text_:problem in 3346) [ClassicSimilarity], result of:
              0.033260178 = score(doc=3346,freq=2.0), product of:
                0.17731056 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.04177434 = queryNorm
                0.1875815 = fieldWeight in 3346, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3346)
          0.2 = coord(1/5)
        0.021650964 = weight(_text_:of in 3346) [ClassicSimilarity], result of:
          0.021650964 = score(doc=3346,freq=46.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.33143494 = fieldWeight in 3346, product of:
              6.78233 = tf(freq=46.0), with freq of:
                46.0 = termFreq=46.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=3346)
      0.4 = coord(2/5)
    
    Abstract
    The World Wide Web is rapidly filling with more text than anyone could have imagined even a short time ago, but the task of isolating relevant parts of this vast information has become just that much more daunting. Richard Belew brings a cognitive perspective to the study of information retrieval as a discipline within computer science. He introduces the idea of Finding Out About (FOA) as the process of actively seeking out information relevant to a topic of interest and describes its many facets - ranging from creating a good characterization of what the user seeks, to what documents actually mean, to methods of inferring semantic clues about each document, to the problem of evaluating whether our search engines are performing as we have intended. Finding Out About explains how to build the tools that are useful for searching collections of text and other media. In the process it takes a close look at the properties of textual documents that do not become clear until very large collections of them are brought together and shows that the construction of effective search engines requires knowledge of the statistical and mathematical properties of linguistic phenomena, as well as an appreciation for the cognitive foundation we bring to the task as language users. The unique approach of this book is its even-handed handling of the phenomena of both numbers and words, making it accessible to a wide audience. The textbook is usable in both undergraduate and graduate classes on information retrieval, library science, and computational linguistics. The text is accompanied by a CD-ROM that contains a hypertext version of the book, including additional topics and notes not present in the printed edition. In addition, the CD contains the full text of C.J. "Keith" van Rijsbergen's famous textbook, Information Retrieval (now out of print). Many active links from Belew's to van Rijsbergen's hypertexts help to unite the material.
Several test corpora and indexing tools are provided, to support the design of your own search engine. Additional exercises using these corpora and code are available to instructors. Also supporting this book is a Web site that will include recent additions to the book, as well as links to sites of new topics and methods.
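The kind of search tooling the book walks readers through can be illustrated with a minimal inverted index - a hypothetical sketch with a toy corpus, not code from the book's CD-ROM:

```python
from collections import defaultdict

# An inverted index maps each term to the set of document ids containing it;
# conjunctive (AND) queries are then set intersections over the posting sets.

def build_index(corpus: dict) -> dict:
    index = defaultdict(set)
    for doc_id, text in corpus.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index: dict, *terms: str) -> set:
    """Return ids of documents containing every query term (AND query)."""
    postings = [index.get(t, set()) for t in terms]
    return set.intersection(*postings) if postings else set()

corpus = {"d1": "finding out about search engines",
          "d2": "cognitive perspectives on search",
          "d3": "evaluating search engines"}
index = build_index(corpus)
hits = search(index, "search", "engines")  # only d1 and d3 contain both terms
```

Real engines add tokenization, stemming, and per-posting term frequencies on top of this skeleton, but the data structure is the same.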
  9. Information visualization in data mining and knowledge discovery (2002) 0.01
    0.010083349 = product of:
      0.025208373 = sum of:
        0.019548526 = weight(_text_:of in 1789) [ClassicSimilarity], result of:
          0.019548526 = score(doc=1789,freq=150.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.2992506 = fieldWeight in 1789, product of:
              12.247449 = tf(freq=150.0), with freq of:
                150.0 = termFreq=150.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.015625 = fieldNorm(doc=1789)
        0.0056598466 = product of:
          0.011319693 = sum of:
            0.011319693 = weight(_text_:22 in 1789) [ClassicSimilarity], result of:
              0.011319693 = score(doc=1789,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.07738023 = fieldWeight in 1789, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1789)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    23. 3.2008 19:10:22
    Footnote
    Rez. in: JASIST 54(2003) no.9, S.905-906 (C.A. Badurek): "Visual approaches for knowledge discovery in very large databases are a prime research need for information scientists focused on extracting meaningful information from the ever-growing stores of data from a variety of domains, including business, the geosciences, and satellite and medical imagery. This work presents a summary of research efforts in the fields of data mining, knowledge discovery, and data visualization with the goal of aiding the integration of research approaches and techniques from these major fields. The editors, leading computer scientists from academia and industry, present a collection of 32 papers from contributors who are incorporating visualization and data mining techniques through academic research as well as application development in industry and government agencies. Information Visualization focuses upon techniques to enhance the natural abilities of humans to visually understand data, in particular, large-scale data sets. It is primarily concerned with developing interactive graphical representations to enable users to more intuitively make sense of multidimensional data as part of the data exploration process. It includes research from computer science, psychology, human-computer interaction, statistics, and information science. Knowledge Discovery in Databases (KDD) most often refers to the process of mining databases for previously unknown patterns and trends in data. Data mining refers to the particular computational methods or algorithms used in this process. The data mining research field is most related to computational advances in database theory, artificial intelligence and machine learning. This work compiles research summaries from these main research areas in order to provide "a reference work containing the collection of thoughts and ideas of noted researchers from the fields of data mining and data visualization" (p. 8).
It addresses these areas in three main sections: the first on data visualization, the second on KDD and model visualization, and the last on using visualization in the knowledge discovery process. The seven chapters of Part One focus upon methodologies and successful techniques from the field of Data Visualization. Hoffman and Grinstein (Chapter 2) give a particularly good overview of the field of data visualization and its potential application to data mining. An introduction to the terminology of data visualization, relation to perceptual and cognitive science, and discussion of the major visualization display techniques are presented. Discussion and illustration explain the usefulness and proper context of such data visualization techniques as scatter plots, 2D and 3D isosurfaces, glyphs, parallel coordinates, and radial coordinate visualizations. Remaining chapters present the need for standardization of visualization methods, discussion of user requirements in the development of tools, and examples of using information visualization in addressing research problems.
    In 13 chapters, Part Two provides an introduction to KDD, an overview of data mining techniques, and examples of the usefulness of data model visualizations. The importance of visualization throughout the KDD process is stressed in many of the chapters. In particular, the need for measures of visualization effectiveness, benchmarking for identifying best practices, and the use of standardized sample data sets is convincingly presented. Many of the important data mining approaches are discussed in this complementary context. Cluster and outlier detection, classification techniques, and rule discovery algorithms are presented as the basic techniques common to the KDD process. The potential effectiveness of using visualization in the data modeling process is illustrated in chapters focused on using visualization for helping users understand the KDD process, ask questions and form hypotheses about their data, and evaluate the accuracy and veracity of their results. The 11 chapters of Part Three provide an overview of the KDD process and successful approaches to integrating KDD, data mining, and visualization in complementary domains. Rhodes (Chapter 21) begins this section with an excellent overview of the relation between the KDD process and data mining techniques. He states that the "primary goals of data mining are to describe the existing data and to predict the behavior or characteristics of future data of the same type" (p. 281). These goals are met by data mining tasks such as classification, regression, clustering, summarization, dependency modeling, and change or deviation detection. Subsequent chapters demonstrate how visualization can aid users in the interactive process of knowledge discovery by graphically representing the results from these iterative tasks. Finally, examples of the usefulness of integrating visualization and data mining tools in the domain of business, imagery and text mining, and massive data sets are provided.
This text concludes with a thorough and useful 17-page index and a lengthy yet integrating 17-page summary of the academic and industrial backgrounds of the contributing authors. A 16-page set of color inserts provides a better representation of the visualizations discussed, and a URL provided suggests that readers may view all the book's figures in color on-line, although as of this submission date it only provides access to a summary of the book and its contents. The overall contribution of this work is its focus on bridging two distinct areas of research, making it a valuable addition to the Morgan Kaufmann Series in Database Management Systems. The editors of this text have met their main goal of providing the first textbook integrating knowledge discovery, data mining, and visualization. Although it contributes greatly to our understanding of the development and current state of the field, a major weakness of this text is that there is no concluding chapter to discuss the contributions of the sum of these contributed papers or give direction to possible future areas of research. "Integration of expertise between two different disciplines is a difficult process of communication and reeducation. Integrating data mining and visualization is particularly complex because each of these fields in itself must draw on a wide range of research experience" (p. 300). Although this work contributes to the cross-disciplinary communication needed to advance visualization in KDD, a more formal call for an interdisciplinary research agenda in a concluding chapter would have provided a more satisfying conclusion to a very good introductory text.
    With contributors almost exclusively from the computer science field, the intended audience of this work is heavily slanted towards a computer science perspective. However, it is highly readable and provides introductory material that would be useful to information scientists from a variety of domains. Yet, much interesting work in information visualization from other fields could have been included, giving the work more of an interdisciplinary perspective to complement their goals of integrating work in this area. Unfortunately, many of the application chapters are thin, shallow, and lack complementary illustrations of visualization techniques or user interfaces used. However, they do provide insight into the many applications being developed in this rapidly expanding field. The authors have successfully put together a highly useful reference text for the data mining and information visualization communities. Those interested in a good introduction and overview of complementary research areas in these fields will be satisfied with this collection of papers. The focus upon integrating data visualization with data mining complements texts in each of these fields, such as Advances in Knowledge Discovery and Data Mining (Fayyad et al., MIT Press) and Readings in Information Visualization: Using Vision to Think (Card et al., Morgan Kaufmann). This unique work is a good starting point for future interaction between researchers in the fields of data visualization and data mining and makes a good accompaniment for a course focused on integrating these areas or to the main reference texts in these fields."
  10. Net effects : how librarians can manage the unintended consequences of the Internet (2003) 0.01
    0.00947094 = product of:
      0.02367735 = sum of:
        0.0057608327 = product of:
          0.028804163 = sum of:
            0.028804163 = weight(_text_:problem in 1796) [ClassicSimilarity], result of:
              0.028804163 = score(doc=1796,freq=6.0), product of:
                0.17731056 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.04177434 = queryNorm
                0.16245036 = fieldWeight in 1796, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1796)
          0.2 = coord(1/5)
        0.017916517 = weight(_text_:of in 1796) [ClassicSimilarity], result of:
          0.017916517 = score(doc=1796,freq=126.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.27426767 = fieldWeight in 1796, product of:
              11.224972 = tf(freq=126.0), with freq of:
                126.0 = termFreq=126.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.015625 = fieldNorm(doc=1796)
      0.4 = coord(2/5)
    
    Abstract
    In this collection of nearly 50 articles written by librarians, computer specialists, and other information professionals, the reader finds 10 chapters, each devoted to a problem or a side effect that has emerged since the introduction of the Internet: control over selection, survival of the book, training users, adapting to users' expectations, access issues, cost of technology, continuous retraining, legal issues, disappearing data, and how to avoid becoming blindsided. After stating a problem, each chapter offers solutions that are subsequently supported by articles. The editor's comments, which appear throughout the text, are an added bonus, as are the sections concluding the book, among them a listing of useful URLs, a works-cited section, and a comprehensive index. This book has much to recommend it, especially the articles, which are not only informative, thought-provoking, and interesting but highly readable and accessible as well. An indispensable tool for all librarians.
    Footnote
    Rez. in: JASIST 55(2004) no.11, S.1025-1026 (D.E. Agosto): ""Did you ever feel as though the Internet has caused you to lose control of your library?" So begins the introduction to this volume of over 50 articles, essays, library policies, and other documents from a variety of sources, most of which are library journals aimed at practitioners. Volume editor Block has a long history of library service as well as an active career as an online journalist. From 1977 to 1999 she was the Associate Director of Public Services at the St. Ambrose University library in Davenport, Iowa. She was also a Fox News Online weekly columnist from 1998 to 2000. She currently writes for and publishes the weekly ezine Exlibris, which focuses on the use of computers, the Internet, and digital databases to improve library services. Despite the promising premise of this book, the final product is largely a disappointment because of the superficial coverage of its issues. A listing of the most frequently represented sources serves to express the general level and style of the entries: nine articles are reprinted from Computers in Libraries, five from Library Journal, four from Library Journal NetConnect, four from ExLibris, four from American Libraries, three from College & Research Libraries News, two from Online, and two from The Chronicle of Higher Education. Most of the authors included contributed only one item, although Roy Tennant (manager of the California Digital Library) authored three of the pieces, and Janet L. Balas (library information systems specialist at the Monroeville Public Library in Pennsylvania) and Karen G. Schneider (coordinator of lii.org, the Librarians' Index to the Internet) each wrote two. Volume editor Block herself wrote six of the entries, most of which have been reprinted from ExLibris. Reading the volume is much like reading an issue of one of these journals - a pleasant experience that discusses issues in the field without presenting much research.
Net Effects doesn't offer much in the way of theory or research, but then again it doesn't claim to. Instead, it claims to be an "idea book" (p. 5) with practical solutions to Internet-generated library problems. While the idea is a good one, little of the material is revolutionary or surprising (or even very creative), and most of the solutions offered will already be familiar to most of the book's intended audience.
    Unlike much of the professional library literature, Net Effects is not an open-armed embrace of technology. Block even suggests that it is helpful to have a Luddite or two on each library staff to identify the setbacks associated with technological advances in the library. Each of the book's 10 chapters deals with one Internet-related problem, such as "Chapter 4-The Shifted Librarian: Adapting to the Changing Expectations of Our Wired (and Wireless) Users," or "Chapter 8-Up to Our Ears in Lawyers: Legal Issues Posed by the Net." For each of these 10 problems, multiple solutions are offered. For example, for "Chapter 9-Disappearing Data," four solutions are offered. These include "Link-checking," "Have a technological disaster plan," "Advise legislators on the impact proposed laws will have," and "Standards for preservation of digital information." One article is given to explicate each of these four solutions. A short bibliography of recommended further reading is also included for each chapter. Block provides a short introduction to each chapter, and she comments on many of the entries. Some of these comments seem to be intended to provide a research basis for the proposed solutions, but they tend to be vague generalizations without citations, such as, "We know from research that students would rather ask each other for help than go to adults. We can use that (p. 91)." The original publication dates of the entries range from 1997 to 2002, with the bulk falling into the 2000-2002 range. At up to 6 years old, some of the articles seem outdated, such as a 2000 news brief announcing the creation of the first "customizable" public library Web site (www.brarydog.net). These critiques are not intended to dismiss the volume entirely. Some of the entries are likely to find receptive audiences, such as a nuts-and-bolts instructive article for making Web sites accessible to people with disabilities. "Providing Equitable Access," by Cheryl H.
Kirkpatrick and Catherine Buck Morgan, offers very specific instructions, such as how to renovate OPAL workstations to suit users with "a wide range of functional impairments." It also includes a useful list of 15 things to do to make a Web site readable to most people with disabilities, such as, "You can use empty (alt) tags (alt="") for images that serve a purely decorative function. Screen readers will skip empty (alt) tags" (p. 157). Information at this level of specificity can be helpful to those who are faced with creating a technological solution for which they lack sufficient technical knowledge or training.
    Some of the pieces are more captivating than others and less "how-to" in nature, providing contextual discussions as well as pragmatic advice. For example, Darlene Fichter's "Blogging Your Life Away" is an interesting discussion about creating and maintaining blogs. (For those unfamiliar with the term, blogs are frequently updated Web pages that list thematically tied annotated links or lists, such as a blog of "Great Websites of the Week" or of "Fun Things to Do This Month in Patterson, New Jersey.") Fichter's article includes descriptions of sample blogs and a comparison of commercially available blog creation software. Another article of note is Kelly Broughton's detailed account of her library's experiences in initiating Web-based reference in an academic library. "Our Experiment in Online Real-Time Reference" details the decisions and issues that the Jerome Library staff at Bowling Green State University faced in setting up a chat reference service. It might be useful to those finding themselves in the same situation. This volume is at its best when it eschews pragmatic information and delves into the deeper, less ephemeral library-related issues created by the rise of the Internet and of the Web. One of the most thought-provoking topics covered is the issue of "the serials pricing crisis," or the increase in subscription prices to journals that publish scholarly work. The pros and cons of moving toward a more free-access Web-based system for the dissemination of peer-reviewed material and of using university Web sites to house scholars' other works are discussed. However, deeper discussions such as these are few, leaving the volume subject to rapid aging, and leaving it with an audience limited to librarians looking for fast technological fixes."
  11. Survey of text mining : clustering, classification, and retrieval (2004) 0.01
    0.008855176 = product of:
      0.02213794 = sum of:
        0.008315044 = product of:
          0.041575223 = sum of:
            0.041575223 = weight(_text_:problem in 804) [ClassicSimilarity], result of:
              0.041575223 = score(doc=804,freq=2.0), product of:
                0.17731056 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.04177434 = queryNorm
                0.23447686 = fieldWeight in 804, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=804)
          0.2 = coord(1/5)
        0.013822895 = weight(_text_:of in 804) [ClassicSimilarity], result of:
          0.013822895 = score(doc=804,freq=12.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.21160212 = fieldWeight in 804, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=804)
      0.4 = coord(2/5)
    
    Abstract
    Extracting content from text continues to be an important research problem for information processing and management. Approaches to capture the semantics of text-based document collections may be based on Bayesian models, probability theory, vector space models, statistical models, or even graph theory. As the volume of digitized textual media continues to grow, so does the need for designing robust, scalable indexing and search strategies (software) to meet a variety of user needs. Knowledge extraction or creation from text requires systematic yet reliable processing that can be codified and adapted for changing needs and environments. This book will draw upon experts in both academia and industry to recommend practical approaches to the purification, indexing, and mining of textual information. It will address document identification, clustering and categorizing documents, cleaning text, and visualizing semantic models of text.
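As a minimal illustration of the vector space models the survey covers (toy data and function names are mine, not the book's), documents become term-frequency vectors and are compared by cosine similarity, the geometry underlying both clustering and retrieval:

```python
import math
from collections import Counter

# Each document is reduced to a bag-of-words term-frequency vector;
# similarity is the cosine of the angle between two such vectors.

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)           # Counter returns 0 for absent terms
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = ["text mining extracts knowledge from text",
        "clustering groups similar documents",
        "mining text collections for knowledge"]
vecs = [Counter(d.split()) for d in docs]
# Documents 0 and 2 share vocabulary, so their similarity exceeds that of 0 and 1.
```

Weighting the raw counts by inverse document frequency, as the statistical models in the book do, changes the vectors but not this basic comparison machinery.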
  12. White, R.W.; Roth, R.A.: Exploratory search : beyond the query-response paradigm (2009) 0.01
    0.008855176 = product of:
      0.02213794 = sum of:
        0.008315044 = product of:
          0.041575223 = sum of:
            0.041575223 = weight(_text_:problem in 0) [ClassicSimilarity], result of:
              0.041575223 = score(doc=0,freq=2.0), product of:
                0.17731056 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.04177434 = queryNorm
                0.23447686 = fieldWeight in 0, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=0)
          0.2 = coord(1/5)
        0.013822895 = weight(_text_:of in 0) [ClassicSimilarity], result of:
          0.013822895 = score(doc=0,freq=12.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.21160212 = fieldWeight in 0, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=0)
      0.4 = coord(2/5)
    
    Abstract
    As information becomes more ubiquitous and the demands that searchers have on search systems grow, there is a need to support search behaviors beyond simple lookup. Information seeking is the process or activity of attempting to obtain information in both human and technological contexts. Exploratory search describes an information-seeking problem context that is open-ended, persistent, and multifaceted, and information-seeking processes that are opportunistic, iterative, and multitactical. Exploratory searchers aim to solve complex problems and develop enhanced mental capacities. Exploratory search systems support this through symbiotic human-machine relationships that provide guidance in exploring unfamiliar information landscapes. Exploratory search has gained prominence in recent years. There is an increased interest from the information retrieval, information science, and human-computer interaction communities in moving beyond the traditional turn-taking interaction model supported by major Web search engines, and toward support for human intelligence amplification and information use. In this lecture, we introduce exploratory search, relate it to relevant extant research, outline the features of exploratory search systems, discuss the evaluation of these systems, and suggest some future directions for supporting exploratory search. Exploratory search is a new frontier in the search domain and is becoming increasingly important in shaping our future world.
    Content
    Table of Contents: Introduction / Defining Exploratory Search / Related Work / Features of Exploratory Search Systems / Evaluation of Exploratory Search Systems / Future Directions and Concluding Remarks
  13. Research and advanced technology for digital libraries : 7th European conference, ECDL2003 Trondheim, Norway, August 17-22, 2003. Proceedings (2003) 0.01
    0.008565803 = product of:
      0.021414507 = sum of:
        0.010094815 = weight(_text_:of in 2426) [ClassicSimilarity], result of:
          0.010094815 = score(doc=2426,freq=10.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.15453234 = fieldWeight in 2426, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=2426)
        0.011319693 = product of:
          0.022639386 = sum of:
            0.022639386 = weight(_text_:22 in 2426) [ClassicSimilarity], result of:
              0.022639386 = score(doc=2426,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.15476047 = fieldWeight in 2426, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2426)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This book constitutes the refereed proceedings of the 7th European Conference on Research and Advanced Technology for Digital Libraries, ECDL 2003, held in Trondheim, Norway in August 2003. The 39 revised full papers and 8 revised short papers presented were carefully reviewed and selected from 161 submissions. The papers are organized in topical sections on uses, users, and user interfaces; metadata applications; annotation and recommendation; automatic classification and indexing; Web technologies; topical crawling and subject gateways; architectures and systems; knowledge organization; collection building and management; information retrieval; digital preservation; and indexing and searching of special documents and collection information.
    Content
    Contents: Uses, Users, and User Interaction Metadata Applications - Semantic Browsing / Alexander Faaborg, Carl Lagoze Annotation and Recommendation Automatic Classification and Indexing - Cross-Lingual Text Categorization / Nuria Bel, Cornelis H.A. Koster, Marta Villegas - Automatic Multi-label Subject Indexing in a Multilingual Environment / Boris Lauser, Andreas Hotho Web Technologies Topical Crawling, Subject Gateways - VASCODA: A German Scientific Portal for Cross-Searching Distributed Digital Resource Collections / Heike Neuroth, Tamara Pianos Architectures and Systems Knowledge Organization: Concepts - The ADEPT Concept-Based Digital Learning Environment / T.R. Smith, D. Ancona, O. Buchel, M. Freeston, W. Heller, R. Nottrott, T. Tierney, A. Ushakov - A User Evaluation of Hierarchical Phrase Browsing / Katrina D. Edgar, David M. Nichols, Gordon W. Paynter, Kirsten Thomson, Ian H. Witten - Visual Semantic Modeling of Digital Libraries / Qinwei Zhu, Marcos Andre Gongalves, Rao Shen, Lillian Cassell, Edward A. Fox Collection Building and Management Knowledge Organization: Authorities and Works - Automatic Conversion from MARC to FRBR / Christian Monch, Trond Aalberg Information Retrieval in Different Application Areas Digital Preservation Indexing and Searching of Special Document and Collection Information
  14. Medienkompetenz : wie lehrt und lernt man Medienkompetenz? (2003) 0.01
    Abstract
    Teaching information literacy will in future have to be one of the core tasks of libraries. This is also one of the reviewer's main fields of work, and from his own practice he sees the difficulties involved: many clients do not recognize their own information need, cannot separate a subject problem from an informational one, are unable to identify potential information sources for their specific problem, and above all struggle to bridge the gap between the electronic and the printed world, which thus exist side by side with practically no connection (cf. Rainer Strzolka: Vermittlung von Informationskompetenz als Informationsdienstleistung? Lecture, FH Köln, Fakultät für Informations- und Kommunikationswissenschaften, Institut für Informationswissenschaft, 31 October 2003). Bridging these two worlds is one of the tasks of professional information intermediaries, who must be competent not only in the digital world, but there as well. Not least, the information found must be used in a results-oriented way and critically evaluated, and the answers obtained must be applied to solving the problem at hand. With its various knowledge marketplaces and information spaces, the information landscape has meanwhile become so complex that a small handbook seems well suited, above all, to rethinking the information intermediary's own position. Moreover, active information brokering is still a barren field in Germany. The present small study of practical experience sets out to change this. The approach assumes that everyone who teaches media literacy is teacher and learner at once; as with all BibSpider publications, the volume is internationally oriented.
Der Band is completely bilingual and brings together experience reports from Germany, the USA, and South Africa, which are intended more as a means of raising awareness than as working instructions. The volume opens with a terminological derivation of the concept from Anglo-Saxon usage and the various levels of meaning conditioned by different educational and information cultures. Various fields of work and practical experiences are touched upon.
    Date
    22. 3.2008 18:05:16
  15. Research and advanced technology for digital libraries : 10th European Conference, ECDL 2006, Alicante, Spain, September 17-22, 2006 ; proceedings (2006) 0.01
    Abstract
    This book constitutes the refereed proceedings of the 10th European Conference on Research and Advanced Technology for Digital Libraries, ECDL 2006, held in Alicante, Spain in September 2006. The 36 revised full papers presented together with the extended abstracts of 18 demo papers and 15 revised poster papers were carefully reviewed and selected from a total of 159 submissions. The papers are organized in topical sections on architectures, preservation, retrieval, applications, methodology, metadata, evaluation, user studies, modeling, audiovisual content, and language technologies.
    Content
    Contents (selection): Architectures I Preservation Retrieval - The Use of Summaries in XML Retrieval / Zoltán Szlávik, Anastasios Tombros, Mounia Lalmas - An Enhanced Search Interface for Information Discovery from Digital Libraries / Georgia Koutrika, Alkis Simitsis - The TIP/Greenstone Bridge: A Service for Mobile Location-Based Access to Digital Libraries / Annika Hinze, Xin Gao, David Bainbridge Architectures II Applications Methodology Metadata Evaluation User Studies Modeling Audiovisual Content Language Technologies - Incorporating Cross-Document Relationships Between Sentences for Single Document Summarizations / Xiaojun Wan, Jianwu Yang, Jianguo Xiao - Semantic Web Techniques for Multiple Views on Heterogeneous Collections: A Case Study / Marjolein van Gendt, Antoine Isaac, Lourens van der Meij, Stefan Schlobach Posters - A Tool for Converting from MARC to FRBR / Trond Aalberg, Frank Berg Haugen, Ole Husby
  16. Context: nature, impact, and role : 5th International Conference on Conceptions of Library and Information Science, CoLIS 2005, Glasgow 2005; Proceedings (2005) 0.00
    Footnote
    Rez. in: Mitt. VÖB 59(2006) H.3, S.100-103 (O. Oberhauser): "Published as volume 3507 of the well-known Springer series Lecture Notes in Computer Science (LNCS), which has appeared since 1973, this book collects the papers of the fifth conference 'Conceptions of Library and Information Science'. Over the past decade and a half, CoLIS has established itself as an international forum for the presentation and reception of research in the fields of computer science and information science. The first conference, held in 1992 in Tampere (Finland) on the occasion of the twentieth anniversary of the local department of information science, was followed by further meetings in Copenhagen (1996), Dubrovnik (1999), and Seattle, WA (2002). The most recent conference, held at Strathclyde University in Glasgow (2005), was devoted to the theme of 'context' in information-related research, a complex, dynamic, and multidimensional concept of great importance for the behaviour and interaction of humans and machines. . . .
    The most interesting and important contribution, to my mind, is the programmatic article by Peter Ingwersen and Kalervo Järvelin (Copenhagen/Tampere), The sense of information: Understanding the cognitive conditional information concept in relation to information acquisition (pp. 7-19). Here the authors attempt, by means of an expanded model, to extend the concept of 'conditional cognitive information', originally proposed by Ingwersen1 and at that time used exclusively in connection with interactive information retrieval, not only to the entire field of information seeking and retrieval (IS&R), but also to human information acquisition through sensory perception, for example in everyday life or in scientific inquiry. Alternative concepts of information and the relationship between information and meaning are also discussed. Another approach going back to Ingwersen is taken up in the contribution by Birger Larsen (Copenhagen), which deals with Ingwersen's Principle of Polyrepresentation, published2 more than ten years ago. This principle rests on the hypothesis that the overlap between different cognitive representations - namely those of the information seeker's situation and those of the documents - can be exploited to reduce the uncertainty inherent in a retrieval situation and thus to improve the performance of the IR system. The principle places the documents, their authors and indexers, but also the IT solution that makes them accessible, in a comprehensive and coherent theoretical framework that seeks to integrate the user-oriented research tradition of information seeking with system-oriented IR research.
On the basis of theoretical considerations and the (few) available empirical studies, however, Larsen regards the model, which Ingwersen intended for both exact-match and best-match IR, as fundamentally 'Boolean' (i.e. exact-match oriented) in its basic outline, and proposes a 'polyrepresentation continuum' as a possible improvement.
    Several contributions deal with the problem of relevance. Erica Cosijn and Theo Bothma (Pretoria) argue that, beyond topical relevance, various other relevance dimensions also play a role in user behaviour, and propose, on the basis of an extended relevance model (again going back to Ingwersen), that IR systems should also offer the possibility of giving cognitive, situational, and socio-cognitive relevance judgements. Elaine Toms et al. (Canada) report on a study that attempted to operationalize the five relevance dimensions formulated by Tefko Saracevic3 thirty years ago (cognitive, motivational, situational, topical, and algorithmic) and to examine them in searches with a web search engine. The results showed that these five dimensions can be merged into three types representing user, system, and task. Olof Sundin and Jenny Johannison (Borås, Sweden) approach the topic of relevance from an entirely different angle, choosing a communication-oriented, neo-pragmatist approach (after Richard Rorty) to analyse information seeking and relevance, drawing also on the work of Michel Foucault. Further interesting articles deal with Bradford's Law of Scattering (Hjørland & Nicolaisen), Information Sharing and Timing (Widén-Wulff & Davenport), Annotations as Context for Searching Documents (Agosti & Ferro), and the usefulness of new information sources such as web links, newsgroups, and blogs for research in the social and information sciences (Thelwall & Wouters). In sum, this is an interesting and demanding book - naturally not exactly uniform or self-contained in content, but that cannot be expected of a conference volume anyway. Some of the contributions are certainly not easy to read, but they repay the effort.
There is also something here for practitioners in libraries and information services, provided they are interested in the scholarly basis of their work. Special libraries in the relevant fields and larger general libraries should therefore definitely acquire this work."
  17. Weinberger, D.: Everything is miscellaneous : the power of the new digital disorder (2007) 0.00
    Abstract
    Human beings are information omnivores: we are constantly collecting, labeling, and organizing data. But today, the shift from the physical to the digital is mixing, burning, and ripping our lives apart. In the past, everything had its one place--the physical world demanded it--but now everything has its places: multiple categories, multiple shelves. Simply put, everything is suddenly miscellaneous. In Everything Is Miscellaneous, David Weinberger charts the new principles of digital order that are remaking business, education, politics, science, and culture. In his rollicking tour of the rise of the miscellaneous, he examines why the Dewey decimal system is stretched to the breaking point, how Rand McNally decides what information not to include in a physical map (and why Google Earth is winning that battle), how Staples stores emulate online shopping to increase sales, why your children's teachers will stop having them memorize facts, and how the shift to digital music stands as the model for the future in virtually every industry. Finally, he shows how by "going miscellaneous," anyone can reap rewards from the deluge of information in modern work and life. From A to Z, Everything Is Miscellaneous will completely reshape the way you think--and what you know--about the world.
    Content
    Contents: The new order of order -- Alphabetization and its discontents -- The geography of knowledge -- Lumps and splits -- The laws of the jungle -- Smart leaves -- Social knowing -- What nothing says -- Messiness as a virtue -- The work of knowledge.
    Footnote
    Rez. in: Publishers Weekly. May 2007: "In a high-minded twist on the Internet-has-changed-everything book, Weinberger (Small Pieces Loosely Joined) joins the ranks of social thinkers striving to construct new theories around the success of Google and Wikipedia. Organization or, rather, lack of it, is the key: the author insists that "we have to get rid of the idea that there's a best way of organizing the world." Building on his earlier works' discussions of the Internet-driven shift in power to users and consumers, Weinberger notes that "our homespun ways of maintaining order are going to break-they're already breaking-in the digital world." Today's avalanche of fresh information, Weinberger writes, requires relinquishing control of how we organize pretty much everything; he envisions an ever-changing array of "useful, powerful and beautiful ways to make sense of our world." Perhaps carried away by his thesis, the author gets into extended riffs on topics like the history of classification and the Dewey Decimal System. At the point where readers may want to turn his musings into strategies for living or doing business, he serves up intriguing but not exactly helpful epigrams about "the third order of order" and "useful miscellaneousness." But the book's call to embrace complexity will influence thinking about "the newly miscellanized world.""
  18. Lavrenko, V.: ¬A generative theory of relevance (2009) 0.00
    Abstract
    A modern information retrieval system must have the capability to find, organize and present very different manifestations of information - such as text, pictures, videos or database records - any of which may be of relevance to the user. However, the concept of relevance, while seemingly intuitive, is actually hard to define, and it's even harder to model in a formal way. Lavrenko does not attempt to bring forth a new definition of relevance, nor provide arguments as to why any particular definition might be theoretically superior or more complete. Instead, he takes a widely accepted, albeit somewhat conservative definition, makes several assumptions, and from them develops a new probabilistic model that explicitly captures that notion of relevance. With this book, he makes two major contributions to the field of information retrieval: first, a new way to look at topical relevance, complementing the two dominant models, i.e., the classical probabilistic model and the language modeling approach, and which explicitly combines documents, queries, and relevance in a single formalism; second, a new method for modeling exchangeable sequences of discrete random variables which does not make any structural assumptions about the data and which can also handle rare events. Thus his book is of major interest to researchers and graduate students in information retrieval who specialize in relevance modeling, ranking algorithms, and language modeling.
  19. Semantic digital libraries (2009) 0.00
    Abstract
    Libraries have always been an inspiration for the standards and technologies developed by semantic web activities. However, except for the Dublin Core specification, semantic web and social networking technologies have not been widely adopted and further developed by major digital library initiatives and projects. Yet semantic technologies offer a new level of flexibility, interoperability, and relationships for digital repositories. Kruk and McDaniel present semantic web-related aspects of current digital library activities, and introduce their functionality; they show examples ranging from general architectural descriptions to detailed usages of specific ontologies, and thus stimulate the awareness of researchers, engineers, and potential users of those technologies. Their presentation is completed by chapters on existing prototype systems such as JeromeDL, BRICKS, and Greenstone, as well as a look into the possible future of semantic digital libraries. This book is aimed at researchers and graduate students in areas like digital libraries, the semantic web, social networks, and information retrieval. This audience will benefit from detailed descriptions of both today's possibilities and also the shortcomings of applying semantic web technologies to large digital repositories of often unstructured data.
    Content
    Contents: Introduction to Digital Libraries and Semantic Web: Introduction / Bill McDaniel and Sebastian Ryszard Kruk - Digital Libraries and Knowledge Organization / Dagobert Soergel - Semantic Web and Ontologies / Marcin Synak, Maciej Dabrowski and Sebastian Ryszard Kruk - Social Semantic Information Spaces / John G. Breslin A Vision of Semantic Digital Libraries: Goals of Semantic Digital Libraries / Sebastian Ryszard Kruk and Bill McDaniel - Architecture of Semantic Digital Libraries / Sebastian Ryszard Kruk, Adam Westerki and Ewelina Kruk - Long-time Preservation / Markus Reis Ontologies for Semantic Digital Libraries: Bibliographic Ontology / Maciej Dabrowski, Marcin Synak and Sebastian Ryszard Kruk - Community-aware Ontologies / Slawomir Grzonkowski, Sebastian Ryszard Kruk, Adam Gzella, Jakub Demczuk and Bill McDaniel Prototypes of Semantic Digital Libraries: JeromeDL: The Social Semantic Digital Library / Sebastian Ryszard Kruk, Mariusz Cygan, Adam Gzella, Tomasz Woroniecki and Maciej Dabrowski - The BRICKS Digital Library Infrastructure / Bernhard Haslhofer and Predrag Knežević - Semantics in Greenstone / Annika Hinze, George Buchanan, David Bainbridge and Ian Witten Building the Future - Semantic Digital Libraries in Use: Hyperbooks / Gilles Falquet, Luka Nerima and Jean-Claude Ziswiler - Semantic Digital Libraries for Archiving / Bill McDaniel - Evaluation of Semantic and Social Technologies for Digital Libraries / Sebastian Ryszard Kruk, Ewelina Kruk and Katarzyna Stankiewicz - Conclusions: The Future of Semantic Digital Libraries / Sebastian Ryszard Kruk and Bill McDaniel
  20. Thissen, F.: Screen-Design-Handbuch : Effektiv informieren und kommunizieren mit Multimedia (2001) 0.00
    Date
    22. 3.2008 14:35:21
