Search (442 results, page 1 of 23)

  • Filter: type_ss:"el"
  1. Herwijnen, E. van: SGML tutorial (1993) 0.13
    0.1254094 = product of:
      0.18811409 = sum of:
        0.1542928 = weight(_text_:book in 8747) [ClassicSimilarity], result of:
          0.1542928 = score(doc=8747,freq=4.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.68970716 = fieldWeight in 8747, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.078125 = fieldNorm(doc=8747)
        0.033821285 = product of:
          0.06764257 = sum of:
            0.06764257 = weight(_text_:search in 8747) [ClassicSimilarity], result of:
              0.06764257 = score(doc=8747,freq=2.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.3840117 = fieldWeight in 8747, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.078125 = fieldNorm(doc=8747)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
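The explain tree above is Lucene's ClassicSimilarity (TF-IDF) breakdown: each term contributes queryWeight (idf × queryNorm) times fieldWeight (√tf × idf × fieldNorm), and partial sums are scaled by coord factors. A minimal sketch that reproduces the top score from the numbers shown (the function and variable names are mine, not Lucene's API):

```python
import math

def term_weight(freq, idf, query_norm, field_norm):
    """One term's ClassicSimilarity contribution: queryWeight * fieldWeight."""
    query_weight = idf * query_norm                    # idf(t) * queryNorm
    field_weight = math.sqrt(freq) * idf * field_norm  # tf * idf * fieldNorm
    return query_weight * field_weight

# Numbers taken from the explain tree for doc 8747 above.
book   = term_weight(4.0, 4.414126, 0.050679956, 0.078125)
search = term_weight(2.0, 3.475677, 0.050679956, 0.078125)

# coord(1/2) on the 'search' branch, coord(2/3) on the total.
score = (book + search * 0.5) * (2.0 / 3.0)
```

Evaluating this gives book ≈ 0.1542928, search ≈ 0.06764257, and score ≈ 0.1254094, matching the tree line by line.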
    
    Abstract
    Contains extensive beginner and advanced interactive tutorials and exercises to teach SGML, and uses DynaText software to manage, browse, and search the text, thus demonstrating the features of one of the most widely known programs available for SGML marked-up text.
    Footnote
    Electronic edition of van Herwijnen's book 'Practical SGML'
    Imprint
    Providence, RI : Electronic Book Technologies
  2. Lavoie, B.; Connaway, L.S.; Dempsey, L.: Anatomy of aggregate collections : the example of Google print for libraries (2005) 0.11
    0.10890546 = product of:
      0.16335818 = sum of:
        0.12246612 = weight(_text_:book in 1184) [ClassicSimilarity], result of:
          0.12246612 = score(doc=1184,freq=28.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.5474381 = fieldWeight in 1184, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1184)
        0.040892072 = sum of:
          0.02029277 = weight(_text_:search in 1184) [ClassicSimilarity], result of:
            0.02029277 = score(doc=1184,freq=2.0), product of:
              0.17614716 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.050679956 = queryNorm
              0.1152035 = fieldWeight in 1184, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1184)
          0.020599304 = weight(_text_:22 in 1184) [ClassicSimilarity], result of:
            0.020599304 = score(doc=1184,freq=2.0), product of:
              0.17747258 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050679956 = queryNorm
              0.116070345 = fieldWeight in 1184, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1184)
      0.6666667 = coord(2/3)
    
    Abstract
    Google's December 2004 announcement of its intention to collaborate with five major research libraries - Harvard University, the University of Michigan, Stanford University, the University of Oxford, and the New York Public Library - to digitize and surface their print book collections in the Google searching universe has, predictably, stirred conflicting opinion, with some viewing the project as a welcome opportunity to enhance the visibility of library collections in new environments, and others wary of Google's prospective role as gateway to these collections. The project has been vigorously debated on discussion lists and blogs, with the participating libraries commonly referred to as "the Google 5". One point most observers seem to concede is that the questions raised by this initiative are both timely and significant. The Google Print Library Project (GPLP) has galvanized a long overdue, multi-faceted discussion about library print book collections. The print book is core to library identity and practice, but in an era of zero-sum budgeting, it is almost inevitable that print book budgets will decline as budgets for serials, digital resources, and other materials expand. As libraries re-allocate resources to accommodate changing patterns of user needs, print book budgets may be adversely impacted. Of course, the degree of impact will depend on a library's perceived mission. A public library may expect books to justify their shelf-space, with de-accession the consequence of minimal use. A national library, on the other hand, has a responsibility to the scholarly and cultural record and may seek to collect comprehensively within particular areas, with the attendant obligation to secure the long-term retention of its print book collections. 
The combination of limited budgets, changing user needs, and differences in library collection strategies underscores the need to think about a collective, or system-wide, print book collection - in particular, how can an inter-institutional system be organized to achieve goals that would be difficult, and/or prohibitively expensive, for any one library to undertake individually [4]? Mass digitization programs like GPLP cast new light on these and other issues surrounding the future of library print book collections, but at this early stage, it is light that illuminates only dimly. It will be some time before GPLP's implications for libraries and library print book collections can be fully appreciated and evaluated. But the strong interest and lively debate generated by this initiative suggest that some preliminary analysis - premature though it may be - would be useful, if only to undertake a rough mapping of the terrain over which GPLP potentially will extend. At the least, some early perspective helps shape interesting questions for the future, when the boundaries of GPLP become settled, workflows for producing and managing the digitized materials become systematized, and usage patterns within the GPLP framework begin to emerge.
    This article offers some perspectives on GPLP in light of what is known about library print book collections in general, and those of the Google 5 in particular, from information in OCLC's WorldCat bibliographic database and holdings file. Questions addressed include: * Coverage: What proportion of the system-wide print book collection will GPLP potentially cover? What is the degree of holdings overlap across the print book collections of the five participating libraries? * Language: What is the distribution of languages associated with the print books held by the GPLP libraries? Which languages are predominant? * Copyright: What proportion of the GPLP libraries' print book holdings are out of copyright? * Works: How many distinct works are represented in the holdings of the GPLP libraries? How does a focus on works impact coverage and holdings overlap? * Convergence: What are the effects on coverage of using a different set of five libraries? What are the effects of adding the holdings of additional libraries to those of the GPLP libraries, and how do these effects vary by library type? These questions certainly do not exhaust the analytical possibilities presented by GPLP. More in-depth analysis might look at Google 5 coverage in particular subject areas; it also would be interesting to see how many books covered by the GPLP have already been digitized in other contexts. However, these questions are left to future studies. The purpose here is to explore a few basic questions raised by GPLP, and in doing so, provide an empirical context for the debate that is sure to continue for some time to come. A secondary objective is to lay some groundwork for a general set of questions that could be used to explore the implications of any mass digitization initiative. A suggested list of questions is provided in the conclusion of the article.
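The coverage and overlap questions the article lists reduce to set arithmetic over holdings data. A toy sketch with invented work identifiers (the five library names are real; every holding below is fabricated for illustration):

```python
from collections import Counter

# Hypothetical holdings: each library maps to a set of distinct work IDs.
holdings = {
    "Harvard":  {"w1", "w2", "w3", "w4"},
    "Michigan": {"w2", "w3", "w5"},
    "Stanford": {"w1", "w3", "w6"},
    "Oxford":   {"w3", "w7"},
    "NYPL":     {"w4", "w5", "w8"},
}

# Coverage: the system-wide collection the five libraries jointly cover.
union = set().union(*holdings.values())

# Overlap: how many libraries hold each distinct work.
counts = Counter(w for works in holdings.values() for w in works)
uniquely_held = sorted(w for w, n in counts.items() if n == 1)
overlap_share = 1 - len(uniquely_held) / len(union)
```

With this toy data the joint collection holds 8 works, 3 of them at only one library, so 62.5% of works are held by more than one library; the article's real analysis asks the same questions against WorldCat holdings.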
    Date
    26.12.2011 14:08:22
    Object
    Google book search
  3. Birmingham, J.: Internet search engines (1996) 0.08
    0.0815798 = product of:
      0.24473938 = sum of:
        0.24473938 = sum of:
          0.16234216 = weight(_text_:search in 5664) [ClassicSimilarity], result of:
            0.16234216 = score(doc=5664,freq=8.0), product of:
              0.17614716 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.050679956 = queryNorm
              0.921628 = fieldWeight in 5664, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.09375 = fieldNorm(doc=5664)
          0.082397215 = weight(_text_:22 in 5664) [ClassicSimilarity], result of:
            0.082397215 = score(doc=5664,freq=2.0), product of:
              0.17747258 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050679956 = queryNorm
              0.46428138 = fieldWeight in 5664, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.09375 = fieldNorm(doc=5664)
      0.33333334 = coord(1/3)
    
    Abstract
    Basically a good listing, in table format, of the features of the major search engines.
    Content
    An overview of various Internet search engines.
    Date
    10.11.1996 16:36:22
    Source
    http://www.stark.k12.oh.us/Docs/search/
  4. Somers, J.: Torching the modern-day library of Alexandria : somewhere at Google there is a database containing 25 million books and nobody is allowed to read them. (2017) 0.08
    0.077493265 = product of:
      0.11623989 = sum of:
        0.061717123 = weight(_text_:book in 3608) [ClassicSimilarity], result of:
          0.061717123 = score(doc=3608,freq=4.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.27588287 = fieldWeight in 3608, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.03125 = fieldNorm(doc=3608)
        0.054522768 = sum of:
          0.027057027 = weight(_text_:search in 3608) [ClassicSimilarity], result of:
            0.027057027 = score(doc=3608,freq=2.0), product of:
              0.17614716 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.050679956 = queryNorm
              0.15360467 = fieldWeight in 3608, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.03125 = fieldNorm(doc=3608)
          0.027465738 = weight(_text_:22 in 3608) [ClassicSimilarity], result of:
            0.027465738 = score(doc=3608,freq=2.0), product of:
              0.17747258 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050679956 = queryNorm
              0.15476047 = fieldWeight in 3608, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=3608)
      0.6666667 = coord(2/3)
    
    Abstract
    You were going to get one-click access to the full text of nearly every book that's ever been published. Books still in print you'd have to pay for, but everything else (a collection slated to grow larger than the holdings at the Library of Congress, Harvard, the University of Michigan, or any of the great national libraries of Europe) would have been available for free at terminals that were going to be placed in every local library that wanted one. At the terminal you were going to be able to search tens of millions of books and read every page of any book you found. You'd be able to highlight passages and make annotations and share them; for the first time, you'd be able to pinpoint an idea somewhere inside the vastness of the printed record, and send somebody straight to it with a link. Books would become as instantly available, searchable, copy-pasteable (as alive in the digital world) as web pages. It was to be the realization of a long-held dream. "The universal library has been talked about for millennia," Richard Ovenden, the head of Oxford's Bodleian Libraries, has said. "It was possible to think in the Renaissance that you might be able to amass the whole of published knowledge in a single room or a single institution." In the spring of 2011, it seemed we'd amassed it in a terminal small enough to fit on a desk. "This is a watershed event and can serve as a catalyst for the reinvention of education, research, and intellectual life," one eager observer wrote at the time. On March 22 of that year, however, the legal agreement that would have unlocked a century's worth of books and peppered the country with access terminals to a universal library was rejected under Rule 23(e)(2) of the Federal Rules of Civil Procedure by the U.S. District Court for the Southern District of New York. When the library at Alexandria burned it was said to be an "international catastrophe."
When the most significant humanities project of our time was dismantled in court, the scholars, archivists, and librarians who'd had a hand in its undoing breathed a sigh of relief, for they believed, at the time, that they had narrowly averted disaster.
  5. Weinberger, D.: Everything is miscellaneous (2007) 0.08
    0.07524564 = product of:
      0.11286846 = sum of:
        0.09257569 = weight(_text_:book in 1269) [ClassicSimilarity], result of:
          0.09257569 = score(doc=1269,freq=4.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.41382432 = fieldWeight in 1269, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.046875 = fieldNorm(doc=1269)
        0.02029277 = product of:
          0.04058554 = sum of:
            0.04058554 = weight(_text_:search in 1269) [ClassicSimilarity], result of:
              0.04058554 = score(doc=1269,freq=2.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.230407 = fieldWeight in 1269, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1269)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    David Weinberger's new book covers the breakdown of the established order of ordering. He explains how methods of categorization designed for physical objects fail when we can instead put things in multiple categories at once, and search for them in many ways. This is no dry book on taxonomy; it has the insight and wit you'd expect from the author of The Cluetrain Manifesto, Small Pieces Loosely Joined, and a former writer for Woody Allen. David Weinberger is the co-author of the international bestseller "The Cluetrain Manifesto" and the author of "Small Pieces Loosely Joined". A fellow at Harvard Law School's Berkman Center for the Internet and Society, Weinberger writes for such publications as Wired, The New York Times, Smithsonian, and the Harvard Business Review and is a frequent commentator for NPR's All Things Considered. This event took place May 10, 2007 at Google Headquarters in Mountain View, CA.
  6. Hughes, T.; Acharya, A.: ¬An interview with Anurag Acharya, Google Scholar lead engineer 0.07
    0.0666973 = product of:
      0.10004594 = sum of:
        0.076371044 = weight(_text_:book in 94) [ClassicSimilarity], result of:
          0.076371044 = score(doc=94,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.34138763 = fieldWeight in 94, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0546875 = fieldNorm(doc=94)
        0.0236749 = product of:
          0.0473498 = sum of:
            0.0473498 = weight(_text_:search in 94) [ClassicSimilarity], result of:
              0.0473498 = score(doc=94,freq=2.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.2688082 = fieldWeight in 94, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=94)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    When I interned at Google last summer after getting my MSI degree, I worked on projects for the Book Search and Google Scholar teams. I didn't know it at the time, but in completing my research over the course of the summer, I would become the resident expert on how universities were approaching Google Scholar as a research tool and how they were implementing Scholar on their library websites. Now working at an academic library, I seized a recent opportunity to sit down with Anurag Acharya, Google Scholar's founding engineer, to delve a little deeper into how Scholar features are developed and prioritized, what Scholar's scope and aims are, and where the product is headed. -Tracey Hughes, GIS Coordinator, Social Sciences & Humanities Library, University of California San Diego
  7. Visual thesaurus (2005) 0.07
    0.06500681 = product of:
      0.0975102 = sum of:
        0.061717123 = weight(_text_:book in 1292) [ClassicSimilarity], result of:
          0.061717123 = score(doc=1292,freq=4.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.27588287 = fieldWeight in 1292, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.03125 = fieldNorm(doc=1292)
        0.035793085 = product of:
          0.07158617 = sum of:
            0.07158617 = weight(_text_:search in 1292) [ClassicSimilarity], result of:
              0.07158617 = score(doc=1292,freq=14.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.4063998 = fieldWeight in 1292, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1292)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    A visual thesaurus system and method for displaying a selected term in association with its one or more meanings, other words to which it is related, and further relationship information. The results of a search are presented in a directed graph that provides more information than an ordered list. When a user selects one of the results, the display reorganizes around the user's search allowing for further searches, without the interruption of going to additional pages.
    Content
    Traditional print reference guides often have two methods of finding information: an order (alphabetical for dictionaries and encyclopedias, by subject hierarchy in the case of thesauri) and indices (ordered lists, with a more complete listing of words and concepts, which refer back to original content from the main body of the book). A user of such traditional print reference guides who is looking for information will either browse through the ordered information in the main body of the reference book, or scan through the indices to find what is necessary. The advent of the computer allows for much more rapid electronic searches of the same information, and for multiple layers of indices. Users can either search through information by entering a keyword, or browse through the information via an outline index, which represents the information contained in the main body of the data. There are two traditional user interfaces for such applications. First, the user may type text into a search field; in response, a list of results is returned. The user then selects a returned entry and may page through the resulting information. Alternatively, the user may choose from a list of words from an index. For example, software thesaurus applications, in which a user attempts to find synonyms, antonyms, homonyms, etc. for a selected word, are usually implemented using the conventional search and presentation techniques discussed above. The presentation of results only allows for a one-dimensional order of data at any one time. In addition, only a limited number of results can be shown at once, and selecting a result inevitably leads to another page; if the result is not satisfactory, the user must search again. Finally, it is difficult to present information about the manner in which the search results are related, or to present quantitative information about the results without causing confusion.
Therefore, there exists a need for a multidimensional graphical display of information, in particular with respect to information relating to the meaning of words and their relationships to other words. There further exists a need to present large amounts of information in a way that can be manipulated by the user, without the user losing his place. And there exists a need for more fluid, intuitive and powerful thesaurus functionality that invites the exploration of language.
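The directed-graph presentation the patent describes can be modeled as an adjacency map whose edges carry relationship labels; "reorganizing around the user's selection" is then just re-rooting the display on the selected node's neighbors. A minimal sketch (all terms and relations below are invented for illustration):

```python
# Directed graph of term relationships; edge values are relationship types.
graph = {
    "bright":    {"brilliant": "synonym", "dull": "antonym", "light": "related"},
    "brilliant": {"bright": "synonym", "genius": "related"},
}

def neighbors(term: str) -> dict:
    """Related terms one hop from `term`, with their relationship labels."""
    return graph.get(term, {})

# Selecting a result re-roots the view: the same lookup on the new node.
selected = neighbors("brilliant")
```

Unlike an ordered list, the graph preserves *how* each result relates to the query term, which is the information the patent says a flat results page loses.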
  8. Dunning, A.: Do we still need search engines? (1999) 0.06
    0.0636099 = product of:
      0.19082968 = sum of:
        0.19082968 = sum of:
          0.0946996 = weight(_text_:search in 6021) [ClassicSimilarity], result of:
            0.0946996 = score(doc=6021,freq=2.0), product of:
              0.17614716 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.050679956 = queryNorm
              0.5376164 = fieldWeight in 6021, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.109375 = fieldNorm(doc=6021)
          0.09613008 = weight(_text_:22 in 6021) [ClassicSimilarity], result of:
            0.09613008 = score(doc=6021,freq=2.0), product of:
              0.17747258 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050679956 = queryNorm
              0.5416616 = fieldWeight in 6021, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=6021)
      0.33333334 = coord(1/3)
    
    Source
    Ariadne. 1999, no.22
  9. Allo, P.; Baumgaertner, B.; D'Alfonso, S.; Fresco, N.; Gobbo, F.; Grubaugh, C.; Iliadis, A.; Illari, P.; Kerr, E.; Primiero, G.; Russo, F.; Schulz, C.; Taddeo, M.; Turilli, M.; Vakarelov, O.; Zenil, H.: ¬The philosophy of information : an introduction (2013) 0.06
    0.06021285 = product of:
      0.090319276 = sum of:
        0.08017289 = weight(_text_:book in 3380) [ClassicSimilarity], result of:
          0.08017289 = score(doc=3380,freq=12.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.35838234 = fieldWeight in 3380, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3380)
        0.010146385 = product of:
          0.02029277 = sum of:
            0.02029277 = weight(_text_:search in 3380) [ClassicSimilarity], result of:
              0.02029277 = score(doc=3380,freq=2.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.1152035 = fieldWeight in 3380, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3380)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In April 2010, Bill Gates gave a talk at MIT in which he asked: 'are the brightest minds working on the most important problems?' Gates meant improving the lives of the poorest; improving education, health, and nutrition. We could easily add improving peaceful interactions, human rights, environmental conditions, living standards and so on. Philosophy of Information (PI) proponents think that Gates has a point - but this doesn't mean we should all give up philosophy. Philosophy can be part of this project, because philosophy understood as conceptual design forges and refines the new ideas, theories, and perspectives that we need to understand and address these important problems that press us so urgently. Of course, this naturally invites us to wonder which ideas, theories, and perspectives philosophers should be designing now. In our global information society, many crucial challenges are linked to information and communication technologies: the constant search for novel solutions and improvements demands, in turn, changing conceptual resources to understand and cope with them. Rapid technological development now pervades communication, education, work, entertainment, industrial production and business, healthcare, social relations and armed conflicts. There is a rich mine of philosophical work to do on the new concepts created right here, right now.
    Content
    See also http://www.socphilinfo.org/teaching/book-pi-intro: "This book serves as the main reference for an undergraduate course on Philosophy of Information. The book is written to be accessible to the typical undergraduate student of Philosophy and does not require propaedeutic courses in Logic, Epistemology or Ethics. Each chapter includes a rich collection of references for the student interested in furthering her understanding of the topics reviewed in the book. The book covers all the main topics of the Philosophy of Information and it should be considered an overview and not a comprehensive, in-depth analysis of a philosophical area. As a consequence, 'The Philosophy of Information: a Simple Introduction' does not contain research material as it is not aimed at graduate students or researchers. The book is available for free in multiple formats and it is updated every twelve months by the team of the π Research Network: Patrick Allo, Bert Baumgaertner, Anthony Beavers, Simon D'Alfonso, Penny Driscoll, Luciano Floridi, Nir Fresco, Carson Grubaugh, Phyllis Illari, Eric Kerr, Giuseppe Primiero, Federica Russo, Christoph Schulz, Mariarosaria Taddeo, Matteo Turilli, Orlin Vakarelov. (*) The version for 2013 is now available as a pdf. The content of this version will soon be integrated in the redesign of the teaching-section. The beta-version from last year will provisionally remain accessible through the Table of Content on this page."
  10. Dushay, N.: Visualizing bibliographic metadata : a virtual (book) spine viewer (2004) 0.06
    0.055555932 = product of:
      0.083333895 = sum of:
        0.07318751 = weight(_text_:book in 1197) [ClassicSimilarity], result of:
          0.07318751 = score(doc=1197,freq=10.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.32715684 = fieldWeight in 1197, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1197)
        0.010146385 = product of:
          0.02029277 = sum of:
            0.02029277 = weight(_text_:search in 1197) [ClassicSimilarity], result of:
              0.02029277 = score(doc=1197,freq=2.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.1152035 = fieldWeight in 1197, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1197)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    User interfaces for digital information discovery often require users to click around and read a lot of text in order to find the text they want to read, a process that is often frustrating and tedious. This is exacerbated because of the limited amount of text that can be displayed on a computer screen. To improve the user experience of computer mediated information discovery, information visualization techniques are applied to the digital library context, while retaining traditional information organization concepts. In this article, the "virtual (book) spine" and the virtual spine viewer are introduced. The virtual spine viewer is an application which allows users to visually explore large information spaces or collections while also allowing users to home in on individual resources of interest. The virtual spine viewer introduced here is an alpha prototype, presented to promote discussion and further work. Information discovery changed radically with the introduction of computerized library access catalogs, the World Wide Web and its search engines, and online bookstores. Yet few instances of these technologies provide a user experience analogous to walking among well-organized, well-stocked bookshelves, which many people find useful as well as pleasurable. To put it another way, many of us have heard or voiced complaints about the paucity of "online browsing"; but what does this really mean? In traditional information spaces such as libraries, often we can move freely among the books and other resources. When we walk among organized, labeled bookshelves, we get a sense of the information space: we take in clues, perhaps unconsciously, as to the scope of the collection, the currency of resources, the frequency of their use, etc.
We also enjoy unexpected discoveries such as finding an interesting resource because library staff deliberately located it near similar resources, or because it was miss-shelved, or because we saw it on a bookshelf on the way to the water fountain.
    When our experience of information discovery is mediated by a computer, we move neither ourselves nor the monitor. We have only the computer's monitor to view, and the keyboard and/or mouse to manipulate what is displayed there. Computer interfaces often reduce our ability to get a sense of the contents of a library: we don't perceive the scope of the library - its breadth (the quantity of materials/information), its density (how full the shelves are, how thorough the collection is for individual topics), or the general audience for the materials (e.g., whether the materials are appropriate for middle school students, college professors, etc.). Additionally, many computer interfaces for information discovery require users to scroll through long lists, to click numerous navigational links and to read a lot of text to find the exact text they want to read. Text features of resources are almost always presented alphabetically, and these alphabetical lists can sometimes be very long. Alphabetical ordering is certainly an improvement over no ordering, but it generally has no bearing on features with an inherent non-alphabetical ordering (e.g., dates of historical events), nor does it necessarily group similar items together. Alphabetical ordering of resources is analogous to one of the most familiar complaints about dictionaries: sometimes you need to know how to spell a word in order to look up its correct spelling in the dictionary. Some have used technology to replicate the appearance of physical libraries, presenting rooms of bookcases and shelves of book spines in virtual 3D environments. This approach presents a problem, as few book spines can be displayed legibly on a monitor screen. This article examines the role of book spines, call numbers, and other traditional organizational and information discovery concepts, and integrates this knowledge with information visualization techniques to show how computers and monitors can meet or exceed similar information discovery methods. The goal is to tap the unique potential of current information visualization approaches in order to improve information discovery, offer new services and, most important of all, improve user satisfaction. We need to capitalize on what computers do well while bearing in mind their limitations. The intent is to design GUIs that optimize utility and provide a positive experience for the user.
  11. Weinberger, D.: Order is in the eye of the tagger (2007) 0.05
    0.05091403 = product of:
      0.15274209 = sum of:
        0.15274209 = weight(_text_:book in 6252) [ClassicSimilarity], result of:
          0.15274209 = score(doc=6252,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.68277526 = fieldWeight in 6252, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.109375 = fieldNorm(doc=6252)
      0.33333334 = coord(1/3)
    
    Abstract
    David Weinberger's introduction is an excerpt from his recently published book Everything Is Miscellaneous.
  12. Díaz, P.: Usability of hypermedia educational e-books (2003) 0.05
    0.050163757 = product of:
      0.07524563 = sum of:
        0.061717123 = weight(_text_:book in 1198) [ClassicSimilarity], result of:
          0.061717123 = score(doc=1198,freq=4.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.27588287 = fieldWeight in 1198, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.03125 = fieldNorm(doc=1198)
        0.013528514 = product of:
          0.027057027 = sum of:
            0.027057027 = weight(_text_:search in 1198) [ClassicSimilarity], result of:
              0.027057027 = score(doc=1198,freq=2.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.15360467 = fieldWeight in 1198, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1198)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    To arrive at relevant and reliable conclusions concerning the usability of a hypermedia educational e-book, developers have to apply a well-defined evaluation procedure as well as a set of clear, concrete and measurable quality criteria. Evaluating an educational tool involves not only testing the user interface but also the didactic method, the instructional materials and the interaction mechanisms, to prove whether or not they help users reach their goals for learning. This article presents a number of evaluation criteria for hypermedia educational e-books and describes how they are embedded into an evaluation procedure. This work is chiefly aimed at helping education developers evaluate their systems, as well as at providing them with guidance for addressing educational requirements during the design process. In recent years, more and more educational e-books have been created, whether by academics trying to keep pace with the advanced requirements of the virtual university or by publishers seeking to meet the increasing demand for educational resources that can be accessed anywhere and anytime, and that include multimedia information, hypertext links and powerful search and annotating mechanisms. To develop a useful educational e-book many things have to be considered, such as the reading patterns of users, accessibility for different types of users and computer platforms, copyright and legal issues, the development of new business models and so on. Addressing usability is very important, since e-books are interactive systems and consequently have to be designed with the needs of their users in mind. Evaluating usability involves analyzing whether systems are effective, efficient and secure to use; easy to learn and remember; and offer good utility. Any interactive system, as e-books are, has to be assessed to determine whether it is really usable as well as useful. Such an evaluation is not only concerned with assessing the user interface but is also aimed at analyzing whether the system can be used in an efficient way to meet the needs of its users - who in the case of educational e-books are learners and teachers. Evaluation provides the opportunity to gather valuable information about design decisions. However, to be successful, the evaluation has to be carefully planned and prepared so developers collect appropriate and reliable data from which to draw relevant conclusions.
  13. Clark, J.A.; Young, S.W.H.: Building a better book in the browser : using Semantic Web technologies and HTML5 (2015) 0.05
    0.048791673 = product of:
      0.14637502 = sum of:
        0.14637502 = weight(_text_:book in 2116) [ClassicSimilarity], result of:
          0.14637502 = score(doc=2116,freq=10.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.6543137 = fieldWeight in 2116, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.046875 = fieldNorm(doc=2116)
      0.33333334 = coord(1/3)
    
    Abstract
    The library as place and service continues to be shaped by the legacy of the book. The book itself has evolved in recent years, with various technologies vying to become the next dominant book form. In this article, we discuss the design and development of our prototype software from Montana State University (MSU) Library for presenting books inside of web browsers. The article outlines the contextual background and technological potential for publishing traditional book content through the web using open standards. Our prototype demonstrates the application of HTML5, structured data with RDFa and Schema.org markup, linked data components using JSON-LD, and an API-driven data model. We examine how this open web model impacts discovery, reading analytics, eBook production, and machine-readability for libraries considering how to unite software development and publishing.
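The combination the prototype describes - Schema.org vocabulary serialized as JSON-LD - can be sketched for a book record in a few lines. This is a minimal illustration, not code from the article; the title and author are invented placeholders:

```python
import json

# Hypothetical book record expressed as JSON-LD using Schema.org terms,
# the kind of machine-readable markup the MSU prototype embeds in HTML5.
book = {
    "@context": "https://schema.org",
    "@type": "Book",
    "name": "Example Title",  # placeholder, not from the article
    "author": {"@type": "Person", "name": "Jane Doe"},
    "inLanguage": "en",
    "numberOfPages": 120,
}

# Serialize for embedding in a <script type="application/ld+json"> element.
json_ld = json.dumps(book, indent=2)
print(json_ld)
```

A crawler or library discovery layer can parse this block without scraping the page's visible text, which is the discovery benefit the article points to.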
  14. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.04
    0.04471845 = product of:
      0.13415535 = sum of:
        0.13415535 = product of:
          0.40246603 = sum of:
            0.40246603 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.40246603 = score(doc=1826,freq=2.0), product of:
                0.42966524 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050679956 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
    http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja
  15. Bensman, S.J.: Eugene Garfield, Francis Narin, and PageRank : the theoretical bases of the Google search engine (2013) 0.04
    0.043820105 = product of:
      0.13146031 = sum of:
        0.13146031 = sum of:
          0.07652883 = weight(_text_:search in 1149) [ClassicSimilarity], result of:
            0.07652883 = score(doc=1149,freq=4.0), product of:
              0.17614716 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.050679956 = queryNorm
              0.43445963 = fieldWeight in 1149, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.0625 = fieldNorm(doc=1149)
          0.054931477 = weight(_text_:22 in 1149) [ClassicSimilarity], result of:
            0.054931477 = score(doc=1149,freq=2.0), product of:
              0.17747258 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050679956 = queryNorm
              0.30952093 = fieldWeight in 1149, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1149)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper presents a test of the validity of using Google Scholar to evaluate the publications of researchers by comparing the premises on which its search engine, PageRank, is based, to those of Garfield's theory of citation indexing. It finds that the premises are identical and that PageRank and Garfield's theory of citation indexing validate each other.
    Date
    17.12.2013 11:02:22
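The score breakdowns attached to each entry follow Lucene's classic TF-IDF similarity. A minimal sketch, assuming ClassicSimilarity's formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))), reproduces the "search" term weight from entry 15 above; the listed values agree to roughly four decimal places, since Lucene's exact idf uses internal document counts:

```python
import math

def tf(freq):
    # ClassicSimilarity term frequency: square root of the raw count
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    # ClassicSimilarity inverse document frequency
    return 1.0 + math.log(max_docs / (doc_freq + 1))

# Figures taken from entry 15's explain tree for the term "search"
query_norm = 0.050679956
i = idf(doc_freq=3718, max_docs=44218)  # ~ 3.4757
query_weight = i * query_norm           # ~ 0.17615
field_weight = tf(4.0) * i * 0.0625     # tf * idf * fieldNorm ~ 0.43446
score = query_weight * field_weight     # ~ 0.076529

print(score)
```

The per-document score in the listing is this product further multiplied by the coord factor (the fraction of query terms the document matches), which appears as the final line of each tree.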
  16. CD-ROMs in print : an international guide to CD-ROMs, CD-I, CDTV & electronic book products (1994) 0.04
    0.0436406 = product of:
      0.1309218 = sum of:
        0.1309218 = weight(_text_:book in 5013) [ClassicSimilarity], result of:
          0.1309218 = score(doc=5013,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.58523595 = fieldWeight in 5013, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.09375 = fieldNorm(doc=5013)
      0.33333334 = coord(1/3)
    
  17. Bertelsmann-Universal-Lexikon [CD-ROM-Ausgabe] : das Wissen unserer Zeit von A-Z; mit Graphik und Ton (1993) 0.04
    0.0436406 = product of:
      0.1309218 = sum of:
        0.1309218 = weight(_text_:book in 8986) [ClassicSimilarity], result of:
          0.1309218 = score(doc=8986,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.58523595 = fieldWeight in 8986, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.09375 = fieldNorm(doc=8986)
      0.33333334 = coord(1/3)
    
    Series
    BEE-Book
  18. Bertelsmann Universal Lexikon : das Wissen unserer Zeit von A - Z (1993) 0.04
    0.0436406 = product of:
      0.1309218 = sum of:
        0.1309218 = weight(_text_:book in 2540) [ClassicSimilarity], result of:
          0.1309218 = score(doc=2540,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.58523595 = fieldWeight in 2540, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.09375 = fieldNorm(doc=2540)
      0.33333334 = coord(1/3)
    
    Footnote
    Erste Ausgabe (auch als BEE Book gekennzeichnet)
  19. METS: an overview & tutorial : Metadata Encoding & Transmission Standard (METS) (2001) 0.04
    0.0436406 = product of:
      0.1309218 = sum of:
        0.1309218 = weight(_text_:book in 1323) [ClassicSimilarity], result of:
          0.1309218 = score(doc=1323,freq=8.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.58523595 = fieldWeight in 1323, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.046875 = fieldNorm(doc=1323)
      0.33333334 = coord(1/3)
    
    Abstract
    Maintaining a library of digital objects of necessity requires maintaining metadata about those objects. The metadata necessary for successful management and use of digital objects is both more extensive than and different from the metadata used for managing collections of printed works and other physical materials. While a library may record descriptive metadata regarding a book in its collection, the book will not dissolve into a series of unconnected pages if the library fails to record structural metadata regarding the book's organization, nor will scholars be unable to evaluate the book's worth if the library fails to note that the book was produced using a Ryobi offset press. The same cannot be said for a digital version of the same book. Without structural metadata, the page image or text files comprising the digital work are of little use, and without technical metadata regarding the digitization process, scholars may be unsure of how accurate a reflection of the original the digital version provides. For internal management purposes, a library must have access to appropriate technical metadata in order to periodically refresh and migrate the data, ensuring the durability of valuable resources.
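The structural metadata the abstract describes is what METS records in its structMap section, which binds otherwise unconnected page files into an ordered whole. A minimal sketch built with Python's xml.etree (the labels and page count are invented; a real METS file also carries fileSec, dmdSec and amdSec sections):

```python
import xml.etree.ElementTree as ET

# Official METS namespace from the Library of Congress schema
METS_NS = "http://www.loc.gov/METS/"
ET.register_namespace("mets", METS_NS)

def q(tag):
    # Qualify a tag name with the METS namespace
    return f"{{{METS_NS}}}{tag}"

# Minimal physical structMap: a book div containing ordered page divs.
mets = ET.Element(q("mets"))
struct_map = ET.SubElement(mets, q("structMap"), TYPE="physical")
book = ET.SubElement(struct_map, q("div"), TYPE="book", LABEL="Sample digitized book")
for n in (1, 2):
    ET.SubElement(book, q("div"), TYPE="page", ORDER=str(n), LABEL=f"Page {n}")

xml_out = ET.tostring(mets, encoding="unicode")
print(xml_out)
```

Without this ordering information, a viewer has no way to reassemble the page images into the book the abstract warns would otherwise "dissolve".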
  20. Singh, A.; Sinha, U.; Sharma, D.k.: Semantic Web and data visualization (2020) 0.04
    0.041848537 = product of:
      0.0627728 = sum of:
        0.043640595 = weight(_text_:book in 79) [ClassicSimilarity], result of:
          0.043640595 = score(doc=79,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.19507864 = fieldWeight in 79, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.03125 = fieldNorm(doc=79)
        0.019132208 = product of:
          0.038264416 = sum of:
            0.038264416 = weight(_text_:search in 79) [ClassicSimilarity], result of:
              0.038264416 = score(doc=79,freq=4.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.21722981 = fieldWeight in 79, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.03125 = fieldNorm(doc=79)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    With the terrific growth of data volume and data being produced every second on millions of devices across the globe, there is a desperate need to manage the unstructured data available on web pages efficiently. The Semantic Web, also known as the Web of Trust, structures the scattered data on the Internet according to the needs of the user. It is an extension of the World Wide Web (WWW) which focuses on manipulating web data on behalf of humans. Because the Semantic Web can integrate data from disparate sources, and is hence more user-friendly, it is an emerging trend. Tim Berners-Lee first introduced the term Semantic Web, and since then it has come a long way to become a more intelligent and intuitive web. Data visualization plays an essential role in explaining complex concepts in a universal manner through pictorial representation, and the Semantic Web helps in broadening the potential of data visualization, making the two an appropriate combination. The objective of this chapter is to provide fundamental insights concerning semantic web technologies and, in addition, to elucidate the issues as well as the solutions regarding the semantic web. The chapter highlights the semantic web architecture in detail while also comparing it with the traditional search system. It classifies the semantic web architecture into three major pillars, i.e. RDF, Ontology, and XML. Moreover, it describes different semantic web tools used in the framework and technology, and illustrates different approaches of semantic web search engines. Besides stating numerous challenges faced by the semantic web, it also illustrates the solutions.
    Series
    Lecture notes on data engineering and communications technologies book series; vol.32
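RDF, the first of the three pillars the chapter names, models data as subject-predicate-object triples that can be queried by pattern. A minimal in-memory sketch of triple-pattern matching (the store and wildcard convention are illustrative, loosely mirroring a SPARQL basic graph pattern; the sample facts come from the abstract itself):

```python
# Tiny in-memory RDF-style triple store.
# None acts as a wildcard, like a variable in a SPARQL pattern.
triples = {
    ("TimBernersLee", "invented", "WorldWideWeb"),
    ("TimBernersLee", "proposed", "SemanticWeb"),
    ("SemanticWeb", "extends", "WorldWideWeb"),
}

def match(subject=None, predicate=None, obj=None):
    # Return all triples consistent with the given pattern, sorted for stable output.
    return sorted(
        t for t in triples
        if (subject is None or t[0] == subject)
        and (predicate is None or t[1] == predicate)
        and (obj is None or t[2] == obj)
    )

# Everything stated about Tim Berners-Lee:
print(match(subject="TimBernersLee"))
```

Integrating disparate sources then amounts to unioning their triple sets and querying the merged graph with the same patterns, which is the integration ability the abstract highlights.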

Years

Languages

  • e 320
  • d 108
  • el 4
  • a 3
  • es 1
  • nl 1

Types

  • a 206
  • i 20
  • m 7
  • x 5
  • r 4
  • s 4
  • b 3
  • n 3
  • p 3

Themes