Search (83 results, page 4 of 5)

  • × classification_ss:"06.74 / Informationssysteme"
  1. Die wunderbare Wissensvermehrung : wie Open-Innovation unsere Welt revolutioniert (2006) 0.00
    0.0019960706 = product of:
      0.010978388 = sum of:
        0.00924237 = product of:
          0.03696948 = sum of:
            0.03696948 = weight(_text_:o in 115) [ClassicSimilarity], result of:
              0.03696948 = score(doc=115,freq=2.0), product of:
                0.13338262 = queryWeight, product of:
                  5.017288 = idf(docFreq=795, maxDocs=44218)
                  0.026584605 = queryNorm
                0.27716863 = fieldWeight in 115, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.017288 = idf(docFreq=795, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=115)
          0.25 = coord(1/4)
        0.0017360178 = weight(_text_:s in 115) [ClassicSimilarity], result of:
          0.0017360178 = score(doc=115,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.060061958 = fieldWeight in 115, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0390625 = fieldNorm(doc=115)
      0.18181819 = coord(2/11)
    
    Editor
    Drossou, O. u.a.
    Pages
    VI, 186 S
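    The indented trees under each hit are Lucene's ClassicSimilarity "explain" output: for a single term, queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm with tf = sqrt(termFreq), and the term score is their product. A minimal sketch that recomputes the weight(_text_:o in 115) figures above (the helper name is ours, not Lucene's):

```python
import math

def classic_similarity(term_freq, idf, query_norm, field_norm):
    """Recompute one Lucene ClassicSimilarity term score:
    queryWeight = idf * queryNorm
    fieldWeight = sqrt(termFreq) * idf * fieldNorm
    score       = queryWeight * fieldWeight
    """
    tf = math.sqrt(term_freq)           # 1.4142135 for termFreq=2.0
    query_weight = idf * query_norm     # 0.13338262 with the values below
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

# figures taken from weight(_text_:o in 115) in the first result:
score = classic_similarity(term_freq=2.0, idf=5.017288,
                           query_norm=0.026584605, field_norm=0.0390625)
print(round(score, 8))  # ≈ 0.03696948, the reported term score
```

    The coord(1/4) and coord(2/11) lines are the separate coordination factors Lucene multiplies in afterwards for the fraction of query terms a document matches.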
  2. Langville, A.N.; Meyer, C.D.: Google's PageRank and beyond : the science of search engine rankings (2006) 0.00
    0.0019719787 = product of:
      0.0072305882 = sum of:
        0.0038459331 = product of:
          0.0076918663 = sum of:
            0.0076918663 = weight(_text_:h in 6) [ClassicSimilarity], result of:
              0.0076918663 = score(doc=6,freq=4.0), product of:
                0.0660481 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.026584605 = queryNorm
                0.11645855 = fieldWeight in 6, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=6)
          0.5 = coord(1/2)
        0.0023430442 = weight(_text_:a in 6) [ClassicSimilarity], result of:
          0.0023430442 = score(doc=6,freq=8.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.07643694 = fieldWeight in 6, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0234375 = fieldNorm(doc=6)
        0.0010416106 = weight(_text_:s in 6) [ClassicSimilarity], result of:
          0.0010416106 = score(doc=6,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.036037173 = fieldWeight in 6, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0234375 = fieldNorm(doc=6)
      0.27272728 = coord(3/11)
    
    Content
    Contents: Chapter 1. Introduction to Web Search Engines: 1.1 A Short History of Information Retrieval - 1.2 An Overview of Traditional Information Retrieval - 1.3 Web Information Retrieval Chapter 2. Crawling, Indexing, and Query Processing: 2.1 Crawling - 2.2 The Content Index - 2.3 Query Processing Chapter 3. Ranking Webpages by Popularity: 3.1 The Scene in 1998 - 3.2 Two Theses - 3.3 Query-Independence Chapter 4. The Mathematics of Google's PageRank: 4.1 The Original Summation Formula for PageRank - 4.2 Matrix Representation of the Summation Equations - 4.3 Problems with the Iterative Process - 4.4 A Little Markov Chain Theory - 4.5 Early Adjustments to the Basic Model - 4.6 Computation of the PageRank Vector - 4.7 Theorem and Proof for Spectrum of the Google Matrix Chapter 5. Parameters in the PageRank Model: 5.1 The alpha Factor - 5.2 The Hyperlink Matrix H - 5.3 The Teleportation Matrix E Chapter 6. The Sensitivity of PageRank: 6.1 Sensitivity with respect to alpha - 6.2 Sensitivity with respect to H - 6.3 Sensitivity with respect to vT - 6.4 Other Analyses of Sensitivity - 6.5 Sensitivity Theorems and Proofs Chapter 7. The PageRank Problem as a Linear System: 7.1 Properties of (I - alphaS) - 7.2 Properties of (I - alphaH) - 7.3 Proof of the PageRank Sparse Linear System Chapter 8. Issues in Large-Scale Implementation of PageRank: 8.1 Storage Issues - 8.2 Convergence Criterion - 8.3 Accuracy - 8.4 Dangling Nodes - 8.5 Back Button Modeling
    Pages
    X, 224 S
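    The core of Langville and Meyer's Chapter 4 (the summation formula, its matrix form, and the power-method computation of the PageRank vector) can be illustrated with a short sketch. This is a generic pure-Python power iteration with uniform teleportation and dangling-node handling, not code from the book, and the three-page graph at the bottom is an invented example:

```python
def pagerank(links, alpha=0.85, iters=100):
    """Power-method PageRank sketch: repeatedly redistribute rank,
    damping by alpha and teleporting the remaining (1 - alpha) mass
    uniformly, with dangling pages spread over all pages."""
    n = len(links)
    pi = [1.0 / n] * n                      # start from the uniform vector
    for _ in range(iters):
        new = [(1.0 - alpha) / n] * n       # teleportation contribution
        for page, outs in enumerate(links):
            if outs:                        # spread rank over out-links
                share = alpha * pi[page] / len(outs)
                for q in outs:
                    new[q] += share
            else:                           # dangling node: spread uniformly
                share = alpha * pi[page] / n
                for q in range(n):
                    new[q] += share
        pi = new
    return pi

# tiny invented web: page 0 -> 1, page 1 -> 0 and 2, page 2 -> 0
ranks = pagerank([[1], [0, 2], [0]])
print([round(r, 3) for r in ranks])
```

    Page 0, with the most in-links, ends up ranked highest; the ranks sum to 1, as for any probability distribution over the pages.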
  3. Net effects : how librarians can manage the unintended consequences of the Internet (2003) 0.00
    0.001969536 = product of:
      0.007221632 = sum of:
        0.0018129903 = product of:
          0.0036259806 = sum of:
            0.0036259806 = weight(_text_:h in 1796) [ClassicSimilarity], result of:
              0.0036259806 = score(doc=1796,freq=2.0), product of:
                0.0660481 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.026584605 = queryNorm
                0.05489909 = fieldWeight in 1796, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1796)
          0.5 = coord(1/2)
        0.0042058933 = weight(_text_:a in 1796) [ClassicSimilarity], result of:
          0.0042058933 = score(doc=1796,freq=58.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.1372085 = fieldWeight in 1796, product of:
              7.615773 = tf(freq=58.0), with freq of:
                58.0 = termFreq=58.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.015625 = fieldNorm(doc=1796)
        0.0012027485 = weight(_text_:s in 1796) [ClassicSimilarity], result of:
          0.0012027485 = score(doc=1796,freq=6.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.04161215 = fieldWeight in 1796, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.015625 = fieldNorm(doc=1796)
      0.27272728 = coord(3/11)
    
    Abstract
    In this collection of nearly 50 articles written by librarians, computer specialists, and other information professionals, the reader finds 10 chapters, each devoted to a problem or a side effect that has emerged since the introduction of the Internet: control over selection, survival of the book, training users, adapting to users' expectations, access issues, cost of technology, continuous retraining, legal issues, disappearing data, and how to avoid becoming blindsided. After stating a problem, each chapter offers solutions that are subsequently supported by articles. The editor's comments, which appear throughout the text, are an added bonus, as are the sections concluding the book, among them a listing of useful URLs, a works-cited section, and a comprehensive index. This book has much to recommend it, especially the articles, which are not only informative, thought-provoking, and interesting but highly readable and accessible as well. An indispensable tool for all librarians.
    Footnote
    Rez. in: JASIST 55(2004) no.11, S.1025-1026 (D.E. Agosto): ""Did you ever feel as though the Internet has caused you to lose control of your library?" So begins the introduction to this volume of over 50 articles, essays, library policies, and other documents from a variety of sources, most of which are library journals aimed at practitioners. Volume editor Block has a long history of library service as well as an active career as an online journalist. From 1977 to 1999 she was the Associate Director of Public Services at the St. Ambrose University library in Davenport, Iowa. She was also a Fox News Online weekly columnist from 1998 to 2000. She currently writes for and publishes the weekly ezine Exlibris, which focuses on the use of computers, the Internet, and digital databases to improve library services. Despite the promising premise of this book, the final product is largely a disappointment because of the superficial coverage of its issues. A listing of the most frequently represented sources serves to express the general level and style of the entries: nine articles are reprinted from Computers in Libraries, five from Library Journal, four from Library Journal NetConnect, four from ExLibris, four from American Libraries, three from College & Research Libraries News, two from Online, and two from The Chronicle of Higher Education. Most of the authors included contributed only one item, although Roy Tennant (manager of the California Digital Library) authored three of the pieces, and Janet L. Balas (library information systems specialist at the Monroeville Public Library in Pennsylvania) and Karen G. Schneider (coordinator of lii.org, the Librarians' Index to the Internet) each wrote two. Volume editor Block herself wrote six of the entries, most of which have been reprinted from ExLibris. Reading the volume is much like reading an issue of one of these journals: a pleasant experience that discusses issues in the field without presenting much research.
Net Effects doesn't offer much in the way of theory or research, but then again it doesn't claim to. Instead, it claims to be an "idea book" (p. 5) with practical solutions to Internet-generated library problems. While the idea is a good one, little of the material is revolutionary or surprising (or even very creative), and most of the solutions offered will already be familiar to most of the book's intended audience.
    Unlike much of the professional library literature, Net Effects is not an open-armed embrace of technology. Block even suggests that it is helpful to have a Luddite or two on each library staff to identify the setbacks associated with technological advances in the library. Each of the book's 10 chapters deals with one Internet-related problem, such as "Chapter 4-The Shifted Librarian: Adapting to the Changing Expectations of Our Wired (and Wireless) Users," or "Chapter 8-Up to Our Ears in Lawyers: Legal Issues Posed by the Net." For each of these 10 problems, multiple solutions are offered. For example, for "Chapter 9-Disappearing Data," four solutions are offered. These include "Link-checking," "Have a technological disaster plan," "Advise legislators on the impact proposed laws will have," and "Standards for preservation of digital information." One article is given to explicate each of these four solutions. A short bibliography of recommended further reading is also included for each chapter. Block provides a short introduction to each chapter, and she comments on many of the entries. Some of these comments seem to be intended to provide a research basis for the proposed solutions, but they tend to be vague generalizations without citations, such as, "We know from research that students would rather ask each other for help than go to adults. We can use that" (p. 91). The original publication dates of the entries range from 1997 to 2002, with the bulk falling into the 2000-2002 range. At up to 6 years old, some of the articles seem outdated, such as a 2000 news brief announcing the creation of the first "customizable" public library Web site (www.brarydog.net). These critiques are not intended to dismiss the volume entirely. Some of the entries are likely to find receptive audiences, such as a nuts-and-bolts instructive article for making Web sites accessible to people with disabilities. "Providing Equitable Access," by Cheryl H. Kirkpatrick and Catherine Buck Morgan, offers very specific instructions, such as how to renovate OPAL workstations to suit users with "a wide range of functional impairments." It also includes a useful list of 15 things to do to make a Web site readable to most people with disabilities, such as, "You can use empty (alt) tags (alt="") for images that serve a purely decorative function. Screen readers will skip empty (alt) tags" (p. 157). Information at this level of specificity can be helpful to those who are faced with creating a technological solution for which they lack sufficient technical knowledge or training.
    Some of the pieces are more captivating than others and less "how-to" in nature, providing contextual discussions as well as pragmatic advice. For example, Darlene Fichter's "Blogging Your Life Away" is an interesting discussion about creating and maintaining blogs. (For those unfamiliar with the term, blogs are frequently updated Web pages that list thematically tied annotated links or lists, such as a blog of "Great Websites of the Week" or of "Fun Things to Do This Month in Patterson, New Jersey.") Fichter's article includes descriptions of sample blogs and a comparison of commercially available blog creation software. Another article of note is Kelly Broughton's detailed account of her library's experiences in initiating Web-based reference in an academic library. "Our Experiment in Online Real-Time Reference" details the decisions and issues that the Jerome Library staff at Bowling Green State University faced in setting up a chat reference service. It might be useful to those finding themselves in the same situation. This volume is at its best when it eschews pragmatic information and delves into the deeper, less ephemeral library-related issues created by the rise of the Internet and of the Web. One of the most thought-provoking topics covered is the issue of "the serials pricing crisis," or the increase in subscription prices to journals that publish scholarly work. The pros and cons of moving toward a more free-access Web-based system for the dissemination of peer-reviewed material and of using university Web sites to house scholars' other works are discussed. However, deeper discussions such as these are few, leaving the volume subject to rapid aging, and leaving it with an audience limited to librarians looking for fast technological fixes."
    Pages
    xiii, 380 S
    Type
    s
  4. Hare, C.E.; McLeod, J.: How to manage records in the e-environment : 2nd ed. (2006) 0.00
    0.001302741 = product of:
      0.0071650753 = sum of:
        0.0047346503 = weight(_text_:a in 1749) [ClassicSimilarity], result of:
          0.0047346503 = score(doc=1749,freq=6.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.1544581 = fieldWeight in 1749, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1749)
        0.0024304248 = weight(_text_:s in 1749) [ClassicSimilarity], result of:
          0.0024304248 = score(doc=1749,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.08408674 = fieldWeight in 1749, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1749)
      0.18181819 = coord(2/11)
    
    Abstract
    A practical approach to developing and operating an effective programme to manage hybrid records within an organization. This title positions records management as an integral business function linked to the organisation's business aims and objectives. The authors also address the records requirements of new and significant pieces of legislation, such as data protection and freedom of information, as well as exploring strategies for managing electronic records. Bullet points, checklists and examples assist the reader throughout, making this a one-stop resource for information in this area.
    Footnote
    1st ed. published under the title: Developing a records management programme
    Pages
    X, 174 S
  5. Floridi, L.: Philosophy and computing : an introduction (1999) 0.00
    0.0011094587 = product of:
      0.006102023 = sum of:
        0.0043660053 = weight(_text_:a in 823) [ClassicSimilarity], result of:
          0.0043660053 = score(doc=823,freq=10.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.14243183 = fieldWeight in 823, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=823)
        0.0017360178 = weight(_text_:s in 823) [ClassicSimilarity], result of:
          0.0017360178 = score(doc=823,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.060061958 = fieldWeight in 823, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0390625 = fieldNorm(doc=823)
      0.18181819 = coord(2/11)
    
    Abstract
    Philosophy and Computing explores each of the following areas of technology: the digital revolution; the computer; the Internet and the Web; CD-ROMs and multimedia; databases, textbases, and hypertexts; Artificial Intelligence; the future of computing. Luciano Floridi shows us how the relationship between philosophy and computing provokes a wide range of philosophical questions: is there a philosophy of information? What can be achieved by a classic computer? How can we define complexity? What are the limits of quantum computers? Is the Internet an intellectual space or a polluted environment? What is the paradox in the Strong Artificial Intelligence program? Philosophy and Computing is essential reading for anyone wishing to fully understand both the development and history of information and communication technology as well as the philosophical issues it ultimately raises.
    Pages
    XIV, 242 S
  6. Research and advanced technology for digital libraries : 9th European conference, ECDL 2005, Vienna, Austria, September 18 - 23, 2005 ; proceedings (2005) 0.00
    8.4901723E-4 = product of:
      0.0046695946 = sum of:
        0.0027055144 = weight(_text_:a in 2423) [ClassicSimilarity], result of:
          0.0027055144 = score(doc=2423,freq=6.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.088261776 = fieldWeight in 2423, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=2423)
        0.00196408 = weight(_text_:s in 2423) [ClassicSimilarity], result of:
          0.00196408 = score(doc=2423,freq=4.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.06795235 = fieldWeight in 2423, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.03125 = fieldNorm(doc=2423)
      0.18181819 = coord(2/11)
    
    Abstract
    This book constitutes the refereed proceedings of the 9th European Conference on Research and Advanced Technology for Digital Libraries, ECDL 2005, held in Vienna, Austria in September 2005. The 41 revised full papers presented together with 2 panel papers and 30 revised poster papers were carefully reviewed and selected from a total of 162 submissions. The papers are organized in topical sections on digital library models and architectures, multimedia and hypermedia digital libraries, XML, building digital libraries, user studies, digital preservation, metadata, digital libraries and e-learning, text classification in digital libraries, searching, and text digital libraries.
    Content
    Contents include: - Digital Library Models and Architectures - Multimedia and Hypermedia Digital Libraries - XML - Building Digital Libraries - User Studies - Digital Preservation - Metadata - Digital Libraries and e-Learning - Text Classification in Digital Libraries - Searching - - Focused Crawling Using Latent Semantic Indexing - An Application for Vertical Search Engines / George Almpanidis, Constantine Kotropoulos, Ioannis Pitas - - Active Support for Query Formulation in Virtual Digital Libraries: A Case Study with DAFFODIL / Andre Schaefer, Matthias Jordan, Claus-Peter Klas, Norbert Fuhr - - Expression of Z39.50 Supported Search Capabilities by Applying Formal Descriptions / Michalis Sfakakis, Sarantos Kapidakis - Text Digital Libraries
    Editor
    Rauber, A. et.al.
    Pages
    XVIII, 545 S
    Type
    s
  7. Colomb, R.M.: Information spaces : the architecture of cyberspace (2002) 0.00
    8.176949E-4 = product of:
      0.004497322 = sum of:
        0.0027613041 = weight(_text_:a in 262) [ClassicSimilarity], result of:
          0.0027613041 = score(doc=262,freq=4.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.090081796 = fieldWeight in 262, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=262)
        0.0017360178 = weight(_text_:s in 262) [ClassicSimilarity], result of:
          0.0017360178 = score(doc=262,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.060061958 = fieldWeight in 262, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0390625 = fieldNorm(doc=262)
      0.18181819 = coord(2/11)
    
    Abstract
    The Architecture of Cyberspace is aimed at students taking information management as a minor in their course as well as those who manage document collections but who are not professional librarians. The first part of this book looks at how users find documents and the problems they have; the second part discusses how to manage the information space using various tools such as classification and controlled vocabularies. It also explores the general issues of publishing, including legal considerations, as well as the main issues of creating and managing archives. Supported by exercises and discussion questions at the end of each chapter, the book includes some sample assignments suitable for use with students of this subject. A glossary is also provided to help readers understand the specialised vocabulary and the key concepts in the design and assessment of information spaces.
    Pages
    XVI, 256 S
  8. Sherman, C.: Google power : Unleash the full potential of Google (2005) 0.00
    8.047755E-4 = product of:
      0.004426265 = sum of:
        0.0023430442 = weight(_text_:a in 3185) [ClassicSimilarity], result of:
          0.0023430442 = score(doc=3185,freq=2.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.07643694 = fieldWeight in 3185, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3185)
        0.0020832212 = weight(_text_:s in 3185) [ClassicSimilarity], result of:
          0.0020832212 = score(doc=3185,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.072074346 = fieldWeight in 3185, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=3185)
      0.18181819 = coord(2/11)
    
    Abstract
    With this title, readers learn to push the search engine to its limits and extract the best content from Google, without having to learn complicated code. "Google Power" takes Google users under the hood and teaches them a wide range of advanced web search techniques through practical examples. Its content is organised by topic, so readers learn how to conduct in-depth searches on the most popular search topics, from health to government listings to people.
    Pages
    XXII, 434 S
  9. Kompendium Informationsdesign (2008) 0.00
    8.047755E-4 = product of:
      0.004426265 = sum of:
        0.0023430442 = weight(_text_:a in 183) [ClassicSimilarity], result of:
          0.0023430442 = score(doc=183,freq=2.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.07643694 = fieldWeight in 183, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=183)
        0.0020832212 = weight(_text_:s in 183) [ClassicSimilarity], result of:
          0.0020832212 = score(doc=183,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.072074346 = fieldWeight in 183, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=183)
      0.18181819 = coord(2/11)
    
    Content
    With contributions by: Remo A. Burkhard, Michael Burmester, Gerhard M. Buurman, Josef Gründler, Frank Hartmann, Christian Jaquet, Roland Mangold, Daniel Perrin, Maja Pivec, Peter Simlinger, Karl Stocker, Frank Thissen, Erika Thümmel, Andreas Uebele, Stefano M. Vannotti, Wibke Weber, Jörg Westbomke
    Pages
    556 S
  10. Survey of text mining : clustering, classification, and retrieval (2004) 0.00
    8.0138847E-4 = product of:
      0.0044076364 = sum of:
        0.0019525366 = weight(_text_:a in 804) [ClassicSimilarity], result of:
          0.0019525366 = score(doc=804,freq=2.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.06369744 = fieldWeight in 804, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=804)
        0.0024550997 = weight(_text_:s in 804) [ClassicSimilarity], result of:
          0.0024550997 = score(doc=804,freq=4.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.08494043 = fieldWeight in 804, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0390625 = fieldNorm(doc=804)
      0.18181819 = coord(2/11)
    
    Abstract
    Extracting content from text continues to be an important research problem for information processing and management. Approaches to capture the semantics of text-based document collections may be based on Bayesian models, probability theory, vector space models, statistical models, or even graph theory. As the volume of digitized textual media continues to grow, so does the need for designing robust, scalable indexing and search strategies (software) to meet a variety of user needs. Knowledge extraction or creation from text requires systematic yet reliable processing that can be codified and adapted for changing needs and environments. This book will draw upon experts in both academia and industry to recommend practical approaches to the purification, indexing, and mining of textual information. It will address document identification, clustering and categorizing documents, cleaning text, and visualizing semantic models of text.
    Pages
    XVII, 244 S
    Type
    s
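    The abstract above names vector space models among the approaches to capturing document semantics. As a toy illustration of that family of techniques (not a method from the book; the documents and helper names are invented), tf-idf weighting with cosine similarity looks like this:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Toy vector-space model: one tf-idf weighted vector per document."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    # document frequency: in how many documents each term occurs
    df = Counter(term for toks in tokenized for term in set(toks))
    vocab = sorted(df)
    idf = {t: math.log(n / df[t]) for t in vocab}
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        vecs.append([tf[t] * idf[t] for t in vocab])
    return vocab, vecs

def cosine(u, v):
    """Cosine similarity between two term-weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

docs = ["text mining and clustering",
        "clustering of text documents",
        "graph theory of networks"]
_, vecs = tfidf_vectors(docs)
print(round(cosine(vecs[0], vecs[1]), 3), round(cosine(vecs[0], vecs[2]), 3))
```

    Clustering and retrieval in this model both reduce to such similarity comparisons: the first two documents share vocabulary and score well above zero, while the third shares nothing with the first and scores zero.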
  11. Alby, T.: Web 2.0 : Konzepte, Anwendungen, Technologien; [ajax, api, atom, blog, folksonomy, feeds, long tail, mashup, permalink, podcast, rich user experience, rss, social software, tagging] (2007) 0.00
    7.6228095E-4 = product of:
      0.004192545 = sum of:
        0.0027194852 = product of:
          0.0054389704 = sum of:
            0.0054389704 = weight(_text_:h in 296) [ClassicSimilarity], result of:
              0.0054389704 = score(doc=296,freq=2.0), product of:
                0.0660481 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.026584605 = queryNorm
                0.08234863 = fieldWeight in 296, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=296)
          0.5 = coord(1/2)
        0.00147306 = weight(_text_:s in 296) [ClassicSimilarity], result of:
          0.00147306 = score(doc=296,freq=4.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.050964262 = fieldWeight in 296, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0234375 = fieldNorm(doc=296)
      0.18181819 = coord(2/11)
    
    Footnote
    Rez. in: Mitt VÖB 60(2007) H.3, S.85-86 (M. Buzinkay): "Ein aktuelles Thema der Informationsbranche ist Web 2.0. Für die einen Hype, für andere Web-Realität, ist das Web 2.0 seit wenigen Jahren das "neue Web". Der Autor, Tom Alby, versucht daher im ersten Kapitel auch einen Unterschied zum Vorgänger-Web aufzubauen: Was ist so anders im Web 2.0? In weiterer Folge handelt Alby alle Themen ab, die mit Web 2.0 in Verbindung gebracht werden: Blogging, Podcasting, Social Software, Folksonomies, das Web als Plattform und diverse Web 2.0 typische Technologien. Ein Ausblick auf das Web 3.0 darf auch nicht fehlen. Das Buch liefert hier die notwendigen Einführungen und "Brücken", um auch als Laie zumindest ansatzweise Verständnis für diese neuen Entwicklungen aufzubringen. Daher ist es nur konsequent und sehr passend, dass Alby neben seinem technischen Fachjargon auch leicht verständliche Einführungsbeispiele bereithält. Denn es geht Alby weniger um Technologie und Tools (diese werden aber auch behandelt, eben beispielhaft), sondern vor allem um Konzepte: Was will das Web 2.0 überhaupt und was macht seinen Erfolg aus? Das Buch ist einfach zu lesen, mit zahlreichen Illustrationen bebildert und listet eine Unmenge an online Quellen für eine weitere Vertiefung auf. Doch mit Büchern über das Web ist es genauso wie dem Web selbst: die Halbwertszeit ist sehr kurz. Das gilt insbesondere für die Technik und für mögliche Dienste. Alby hat diesen technischen Zweig der Web 2.0-Geschichte so umfangreich wie für das Verständnis nötig, aus Gründen der Aktualität aber so gering wie möglich ausfallen lassen. Und das ist gut so: dieses Buch können Sie getrost auch in drei Jahren in die Hand nehmen. Es wird zwar andere Dienste geben als im Buch angegeben, und manche Links werden vielleicht nicht mehr funktionieren, die Prinzipien des Web 2.0 bleiben aber dieselben. Sollten Sie sich geändert haben, dann haben wir schon Web 2.x oder gar Web 3.0. 
But that is another story, which Tom Alby may perhaps pass on to us when the time is right. A bonus, in my view, are the numerous interviews Tom Alby conducted with well-known German Web 2.0 figureheads. They give a good sense of the standing Web 2.0 has attained in the meantime: no longer just in a niche of web freaks, but in the world of communication at large. And that is us."
    Pages
    XIV, 245 S
  12. Jeanneney, J.-N.: Googles Herausforderung : Für eine europäische Bibliothek (2006) 0.00
    7.0840213E-4 = product of:
      0.0038962115 = sum of:
        0.0018129903 = product of:
          0.0036259806 = sum of:
            0.0036259806 = weight(_text_:h in 46) [ClassicSimilarity], result of:
              0.0036259806 = score(doc=46,freq=2.0), product of:
                0.0660481 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.026584605 = queryNorm
                0.05489909 = fieldWeight in 46, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.015625 = fieldNorm(doc=46)
          0.5 = coord(1/2)
        0.0020832212 = weight(_text_:s in 46) [ClassicSimilarity], result of:
          0.0020832212 = score(doc=46,freq=18.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.072074346 = fieldWeight in 46, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.015625 = fieldNorm(doc=46)
      0.18181819 = coord(2/11)
    
    Footnote
    Rez. in: Frankfurter Rundschau. Nr.66 vom 18.3.2006, S.17. (M. Meister): "It is a small combative tract, a pamphlet, written with a hot French pen. Yet in Germany it is already being waved aside: as a continuation of the idle culture war between France and America, as 'culture-critical resentment'. 'Where is the scandal?' ran the headline in the Süddeutsche Zeitung when Jean-Noël Jeanneney presented his book Googles Herausforderung at the French embassy in Berlin recently. In this country no one can see anything wicked in the American company Google, together with four renowned American libraries and the university library of Oxford, setting out to digitize 15 million books within a few years. 'So what,' sigh the Germans, while the French take to the barricades. From the French perspective the project announced in the winter of 2004, named 'Google Print', in fact conceals a cultural horror scenario whose consequences were widely debated in public. There was talk of cultural hegemony, of the eternal dominance of the Americans over the Europeans, that is, of money over culture. The writer Alberto Manguel, who lives in France, even saw the nightmare of his colleague Jorge Luis Borges come true, who had dreamed of precisely this in his story The Library of Babel: a library in which everything exists, so many books that no single one can ever be found again. Where is the scandal? Nowhere, Jeanneney would answer. For that is truly not what he is after. He pleads instead for accepting the challenge and not leaving the field to Google alone. Jeanneney, head of the French national library and without doubt an expert in the matter, therefore describes vividly the consequences of digitizing the written cultural heritage under an American commercial monopoly. 
He has used this short combative tract, just published in German by Wagenbach, as a kind of message in a bottle, meant to shake up those responsible in other countries and win them over to a joint project.
    Weitere Rez. in: ZfBB 53(2006) H.3/4, S.215-217 (M. Hollender): "The aversion of the president of the French national library, Jean-Noël Jeanneney, to Google's mass-digitization plans can be assumed to be known at least in outline after the broad discussion in the daily press. His 'combative tract' (p. 7), written in March 2005, is now also available in a German translation, updated and furnished with an afterword by Klaus-Dieter Lehmann. This much up front: rarely does one get so little and at the same time so much for 9.90 euros: so much polemic, self-exposure and emphasis, and so few concrete, strategically productive ideas. Sooner or later the reader is unpleasantly struck above all by the crude anti-Americanism underlying the entire booklet. Jeanneney complains of the 'inevitable American self-centredness' (p. 9). But who can blame Google for launching its project first with Anglo-American libraries? The willingness of the British Bodleian Library to have its excellent pre-1900 holdings digitized by Google as well is commented on by Jeanneney in the style of a conspiracy theory: 'Once again we were treated to the old familiar Anglo-American solidarity.' (p. 19) With the same emphasis one could be pleased that Google is securing the holdings of highly important research libraries; not so Jeanneney. His conclusion: the 'US dominance, which goes hand in hand with a more or less conscious arrogance', has the effect that 'everything contradicting the American world view is sorted out' (p. 23). Whoever approaches the Google plans as prejudiced as Jeanneney blocks his own path to a constructive and cooperative solution of the Google problem. ...
    It is advisable to approach the Google projects with a good measure of open-mindedness, to expect no miracles of a project still in its infancy, but also to acknowledge undeniable achievements as such. ... Europe, still drowsy if not outright asleep in matters of digitization, has without doubt first been woken by Google and then alarmed by Jeanneney. Jeanneney has turned what at first looked like a harmless private-sector venture into a political issue; that in doing so he repeatedly overshoots his noble goal of a European counter-offensive can only enliven the debate. He opposes the neoliberal belief that the forces of the free capitalist market can do justice to all sides, and demands a dominant state sector that must at least act in a complementary role to curb Google's excesses. Where Jeanneney leaves anti-American scolding behind and sketches the European answer, his strengths show. Google does cooperate with libraries, but whether the intensity of that cooperation is high enough to realize established library standards is questionable at best. The search form allows no refined queries; the formal cataloguing of the digitized works is wholly inadequate; subject indexing does not exist. Here the European librarians could indeed bring in their specific expertise and, in place of the provision of 'disconnected fragments of knowledge' (p. 14) that Jeanneney criticizes, offer digitized texts enriched with metadata that filter the sea of data. 
But whoever wants the exact cataloguing of the digitized works and their integration into library catalogues (surely uncontroversial within the library world), so that the books are accessible not only via Google but also via portals and union catalogues, should approach Google rather than provoke Google.
    See also: Jeanneney, J.-N., M. Meister: Ein Kind der kommerziellen Logik: Der Präsident der Pariser Bibliothèque Nationale de France, Jean-Noël Jeanneney, über "Google print" und eine virtuelle, europäische Bibliothek. [Interview]. In: Frankfurter Rundschau. Nr.208 vom 7.9.2005, S.17.
    Pages
    115 S
  13. Research and advanced technology for digital libraries : 11th European conference, ECDL 2007 / Budapest, Hungary, September 16-21, 2007, proceedings (2007) 0.00
    6.411108E-4 = product of:
      0.0035261093 = sum of:
        0.0015620294 = weight(_text_:a in 2430) [ClassicSimilarity], result of:
          0.0015620294 = score(doc=2430,freq=2.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.050957955 = fieldWeight in 2430, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=2430)
        0.00196408 = weight(_text_:s in 2430) [ClassicSimilarity], result of:
          0.00196408 = score(doc=2430,freq=4.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.06795235 = fieldWeight in 2430, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.03125 = fieldNorm(doc=2430)
      0.18181819 = coord(2/11)
    
    Abstract
    This book constitutes the refereed proceedings of the 11th European Conference on Research and Advanced Technology for Digital Libraries, ECDL 2007, held in Budapest, Hungary, in September 2007. The 36 revised full papers, presented together with the extended abstracts of 36 revised poster and demo papers and 2 panel descriptions, were carefully reviewed and selected from a total of 153 submissions. The papers are organized in topical sections on ontologies, digital libraries and the web, models, multimedia and multilingual DLs, grid and peer-to-peer, preservation, user interfaces, document linking, information retrieval, personal information management, new DL applications, and user studies.
    Pages
    XVII, 585 S
    Type
    s
  14. Berry, M.W.; Browne, M.: Understanding search engines : mathematical modeling and text retrieval (2005) 0.00
    5.36517E-4 = product of:
      0.0029508434 = sum of:
        0.0015620294 = weight(_text_:a in 7) [ClassicSimilarity], result of:
          0.0015620294 = score(doc=7,freq=2.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.050957955 = fieldWeight in 7, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=7)
        0.0013888142 = weight(_text_:s in 7) [ClassicSimilarity], result of:
          0.0013888142 = score(doc=7,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.048049565 = fieldWeight in 7, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.03125 = fieldNorm(doc=7)
      0.18181819 = coord(2/11)
    
    Abstract
    The second edition of Understanding Search Engines: Mathematical Modeling and Text Retrieval follows the basic premise of the first edition by discussing many of the key design issues for building search engines and emphasizing the important role that applied mathematics can play in improving information retrieval. The authors discuss important data structures, algorithms, and software, as well as user-centered issues such as interfaces, manual indexing, and document preparation. Significant changes bring the text up to date on current information retrieval methods: for example, the addition of a new chapter on link-structure algorithms used in search engines such as Google. The chapter on user interfaces has been rewritten to focus specifically on search engine usability. In addition, the authors have added new recommendations for further reading, expanded the bibliography, and updated and streamlined the index to make it more reader-friendly.
    Pages
    XVII, 117 S
  15. Kempa, S.: Qualität von Online-Fachinformation (2002) 0.00
    2.678291E-4 = product of:
      0.00294612 = sum of:
        0.00294612 = weight(_text_:s in 1743) [ClassicSimilarity], result of:
          0.00294612 = score(doc=1743,freq=4.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.101928525 = fieldWeight in 1743, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=1743)
      0.09090909 = coord(1/11)
    
    Pages
    214 S
  16. Borlund, P.: Evaluation of interactive information retrieval systems (2000) 0.00
    2.5251167E-4 = product of:
      0.0027776284 = sum of:
        0.0027776284 = weight(_text_:s in 2556) [ClassicSimilarity], result of:
          0.0027776284 = score(doc=2556,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.09609913 = fieldWeight in 2556, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=2556)
      0.09090909 = coord(1/11)
    
    Pages
    276 S
  17. Braun, E.: ¬The Internet directory : [the guide with the most complete listings for: 1500+ Internet and Bitnet mailing lists, 2700+ Usenet newsgroups, 1000+ On-line library catalogs (OPACs) ...] (1994) 0.00
    2.5251167E-4 = product of:
      0.0027776284 = sum of:
        0.0027776284 = weight(_text_:s in 1549) [ClassicSimilarity], result of:
          0.0027776284 = score(doc=1549,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.09609913 = fieldWeight in 1549, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=1549)
      0.09090909 = coord(1/11)
    
    Pages
    xxii, 704 S
  18. Prestipino, M.: ¬Die virtuelle Gemeinschaft als Informationssystem : Informationsqualität nutzergenerierter Inhalte in der Domäne Tourismus (2010) 0.00
    2.5251167E-4 = product of:
      0.0027776284 = sum of:
        0.0027776284 = weight(_text_:s in 30) [ClassicSimilarity], result of:
          0.0027776284 = score(doc=30,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.09609913 = fieldWeight in 30, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=30)
      0.09090909 = coord(1/11)
    
    Pages
    300 S
  19. Verfügbarkeit von Informationen : 60. Jahrestagung der DGI, Frankfurt am Main, 15. bis 17. Oktober 2008 / 30. Online-Tagung der DGI. Hrsg. von Marlies Ockenfeld. DGI, Deutsche Gesellschaft für Informationswissenschaft und Informationspraxis (2008) 0.00
    2.2319089E-4 = product of:
      0.0024550997 = sum of:
        0.0024550997 = weight(_text_:s in 2470) [ClassicSimilarity], result of:
          0.0024550997 = score(doc=2470,freq=4.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.08494043 = fieldWeight in 2470, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2470)
      0.09090909 = coord(1/11)
    
    Pages
    296 S
    Type
    s
  20. Berry, M.W.; Browne, M.: Understanding search engines : mathematical modeling and text retrieval (1999) 0.00
    1.8938375E-4 = product of:
      0.0020832212 = sum of:
        0.0020832212 = weight(_text_:s in 5777) [ClassicSimilarity], result of:
          0.0020832212 = score(doc=5777,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.072074346 = fieldWeight in 5777, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=5777)
      0.09090909 = coord(1/11)
    
    Pages
    XIII, 116 S
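
    The score trees in the listing above all instantiate Lucene's ClassicSimilarity. As a minimal sketch, assuming the standard composition the explain labels indicate (tf = sqrt(termFreq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, summed per matching term and scaled by coord), the arithmetic of entry 12 (doc 46) can be reproduced in a few lines of Python:

```python
import math

def term_score(freq, idf, query_norm, field_norm):
    """One term's contribution in Lucene ClassicSimilarity:
    queryWeight * fieldWeight, as shown in the explain trees."""
    tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq)
    query_weight = idf * query_norm       # queryWeight = idf * queryNorm
    field_weight = tf * idf * field_norm  # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight

QUERY_NORM = 0.026584605  # queryNorm as reported in the explain output

# Entry 12 (doc 46): term "h" (freq=2, inner coord 1/2) and term "s" (freq=18),
# both in a field with fieldNorm 0.015625, final coord(2/11)
h = term_score(2.0, 2.4844491, QUERY_NORM, 0.015625) * 0.5  # ~0.0018129903
s = term_score(18.0, 1.0872376, QUERY_NORM, 0.015625)       # ~0.0020832212
total = (h + s) * (2 / 11)                                  # ~7.0840213E-4

print(total)
```

    The reproduced values agree with the explain output to the printed precision; the coord(2/11) factor reflects that 2 of 11 query terms matched the document.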

Languages

  • e 47
  • d 37

Types

  • m 82
  • s 26
  • i 2
  • d 1
  • el 1
  • r 1
