Search (210 results, page 1 of 11)

  • Filter: type_ss:"el"
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.08
    0.08135515 = product of:
      0.4067757 = sum of:
        0.4067757 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
          0.4067757 = score(doc=1826,freq=2.0), product of:
            0.43426615 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.051222645 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.2 = coord(1/5)
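    Note: the indented lines above are Lucene "explain" output for ClassicSimilarity, the classic TF-IDF scoring model. Each tree composes tf = sqrt(freq), idf = ln(maxDocs / (docFreq + 1)) + 1, a query-wide normalizer (queryNorm), a per-field length norm (fieldNorm), and a coordination factor (coord = matching clauses / total clauses). A minimal Python sketch, assuming Lucene's documented ClassicSimilarity formulas (the function name is hypothetical), reproduces the score of this first result:

        import math

        def explain_score(freq, doc_freq, max_docs, query_norm, field_norm,
                          matched_clauses, total_clauses):
            # Recompute a single-clause ClassicSimilarity explain tree.
            tf = math.sqrt(freq)                             # 1.4142135 = tf(freq=2.0)
            idf = math.log(max_docs / (doc_freq + 1)) + 1.0  # 8.478011 = idf(docFreq=24, maxDocs=44218)
            query_weight = idf * query_norm                  # 0.43426615 = queryWeight
            field_weight = tf * idf * field_norm             # 0.93669677 = fieldWeight
            coord = matched_clauses / total_clauses          # 0.2 = coord(1/5)
            return query_weight * field_weight * coord

        # Result 1, term "3a" in doc 1826:
        print(explain_score(2.0, 24, 44218, 0.051222645, 0.078125, 1, 5))  # ~0.08135515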
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  2. Somers, J.: Torching the modern-day library of Alexandria : somewhere at Google there is a database containing 25 million books and nobody is allowed to read them. (2017) 0.07
    0.067064844 = product of:
      0.1676621 = sum of:
        0.13990225 = weight(_text_:books in 3608) [ClassicSimilarity], result of:
          0.13990225 = score(doc=3608,freq=14.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.565117 = fieldWeight in 3608, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.03125 = fieldNorm(doc=3608)
        0.027759846 = weight(_text_:22 in 3608) [ClassicSimilarity], result of:
          0.027759846 = score(doc=3608,freq=2.0), product of:
            0.17937298 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051222645 = queryNorm
            0.15476047 = fieldWeight in 3608, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03125 = fieldNorm(doc=3608)
      0.4 = coord(2/5)
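    When two query clauses match the same document, as here ("books" and "22" in doc 3608), the per-clause weights are summed before the coordination factor coord(2/5) is applied. A sketch under the same assumptions as above (helper name hypothetical):

        import math

        def clause_weight(freq, doc_freq, max_docs, query_norm, field_norm):
            # queryWeight * fieldWeight for one matching clause.
            tf = math.sqrt(freq)
            idf = math.log(max_docs / (doc_freq + 1)) + 1.0
            return (idf * query_norm) * (tf * idf * field_norm)

        # Result 2: sum the two clause weights, then apply coord(2/5).
        books = clause_weight(14.0, 956, 44218, 0.051222645, 0.03125)   # 0.13990225
        term22 = clause_weight(2.0, 3622, 44218, 0.051222645, 0.03125)  # 0.027759846
        print((books + term22) * (2 / 5))                               # ~0.067064844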
    
    Abstract
    You were going to get one-click access to the full text of nearly every book that's ever been published. Books still in print you'd have to pay for, but everything else - a collection slated to grow larger than the holdings at the Library of Congress, Harvard, the University of Michigan, at any of the great national libraries of Europe - would have been available for free at terminals that were going to be placed in every local library that wanted one. At the terminal you were going to be able to search tens of millions of books and read every page of any book you found. You'd be able to highlight passages and make annotations and share them; for the first time, you'd be able to pinpoint an idea somewhere inside the vastness of the printed record, and send somebody straight to it with a link. Books would become as instantly available, searchable, copy-pasteable - as alive in the digital world - as web pages. It was to be the realization of a long-held dream. "The universal library has been talked about for millennia," Richard Ovenden, the head of Oxford's Bodleian Libraries, has said. "It was possible to think in the Renaissance that you might be able to amass the whole of published knowledge in a single room or a single institution." In the spring of 2011, it seemed we'd amassed it in a terminal small enough to fit on a desk. "This is a watershed event and can serve as a catalyst for the reinvention of education, research, and intellectual life," one eager observer wrote at the time. On March 22 of that year, however, the legal agreement that would have unlocked a century's worth of books and peppered the country with access terminals to a universal library was rejected under Rule 23(e)(2) of the Federal Rules of Civil Procedure by the U.S. District Court for the Southern District of New York. When the library at Alexandria burned it was said to be an "international catastrophe." When the most significant humanities project of our time was dismantled in court, the scholars, archivists, and librarians who'd had a hand in its undoing breathed a sigh of relief, for they believed, at the time, that they had narrowly averted disaster.
    Object
    Google books
    Source
    https://www.theatlantic.com/technology/archive/2017/04/the-tragedy-of-google-books/523320/
  3. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.07
    0.065084115 = product of:
      0.32542056 = sum of:
        0.32542056 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
          0.32542056 = score(doc=230,freq=2.0), product of:
            0.43426615 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.051222645 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
      0.2 = coord(1/5)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  4. Books in print plus with Book reviews plus : BIP + REV (1993) 0.05
    0.04579376 = product of:
      0.2289688 = sum of:
        0.2289688 = weight(_text_:books in 1302) [ClassicSimilarity], result of:
          0.2289688 = score(doc=1302,freq=6.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.9248898 = fieldWeight in 1302, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.078125 = fieldNorm(doc=1302)
      0.2 = coord(1/5)
    
    Content
    Published monthly; the CD-ROM edition contains Books in print, Paperbound Books in print, Forthcoming BIP, and Children's BIP, as well as 200,000 full-text book reviews.
  5. Snowhill, L.: E-books and their future in academic libraries (2001) 0.05
    0.045333505 = product of:
      0.22666752 = sum of:
        0.22666752 = weight(_text_:books in 1218) [ClassicSimilarity], result of:
          0.22666752 = score(doc=1218,freq=12.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.9155941 = fieldWeight in 1218, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1218)
      0.2 = coord(1/5)
    
    Abstract
    The University of California's California Digital Library (CDL) formed an Ebook Task Force in August 2000 to evaluate academic libraries' experiences with electronic books (e-books), investigate the e-book market, and develop operating guidelines, principles and potential strategies for further exploration of the use of e-books at the University of California (UC). This article, based on the findings and recommendations of the Task Force Report, briefly summarizes the task force's findings and outlines issues and recommendations for making e-books viable over the long term in the academic environment, in light of the long-term goals of building strong research collections and providing high-level services and collections to users.
    Object
    E-books
  6. Shala, E.: ¬Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.04
    0.040677574 = product of:
      0.20338786 = sum of:
        0.20338786 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
          0.20338786 = score(doc=4388,freq=2.0), product of:
            0.43426615 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.051222645 = queryNorm
            0.46834838 = fieldWeight in 4388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4388)
      0.2 = coord(1/5)
    
    Footnote
    See: https://www.researchgate.net/publication/271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls.
  7. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.04
    0.040677574 = product of:
      0.20338786 = sum of:
        0.20338786 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
          0.20338786 = score(doc=5669,freq=2.0), product of:
            0.43426615 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.051222645 = queryNorm
            0.46834838 = fieldWeight in 5669, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5669)
      0.2 = coord(1/5)
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  8. Danowski, P.: Authority files and Web 2.0 : Wikipedia and the PND. An Example (2007) 0.04
    0.040318966 = product of:
      0.100797415 = sum of:
        0.0660976 = weight(_text_:books in 1291) [ClassicSimilarity], result of:
          0.0660976 = score(doc=1291,freq=2.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.2669927 = fieldWeight in 1291, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1291)
        0.03469981 = weight(_text_:22 in 1291) [ClassicSimilarity], result of:
          0.03469981 = score(doc=1291,freq=2.0), product of:
            0.17937298 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051222645 = queryNorm
            0.19345059 = fieldWeight in 1291, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1291)
      0.4 = coord(2/5)
    
    Abstract
    In Web 2.0, more and more users index everything themselves: there are services for links, videos, pictures, books, encyclopaedic articles and scientific articles. All of these services operate independently of libraries. But must that really be the case? Can't libraries, with their experience and tools, help make user indexing better? Drawing on the experience of a project that connected the German-language Wikipedia with the German person authority file (Personennamendatei, PND) maintained by the German National Library (Deutsche Nationalbibliothek), I would like to show what is possible: how users can and will use authority files, if we let them. We will look at how the project worked and what we can learn for future projects. Conclusions: authority files can have a role in Web 2.0; there must be an open interface/service for retrieval; everything on the net that is indexed with authority files can be easily integrated into a federated search; and, following O'Reilly, you have to find ways for your data to become more important the more it is used.
    Content
    Lecture given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  9. Lavoie, B.; Connaway, L.S.; Dempsey, L.: Anatomy of aggregate collections : the example of Google print for libraries (2005) 0.04
    0.035804212 = product of:
      0.08951053 = sum of:
        0.06869064 = weight(_text_:books in 1184) [ClassicSimilarity], result of:
          0.06869064 = score(doc=1184,freq=6.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.27746695 = fieldWeight in 1184, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1184)
        0.020819884 = weight(_text_:22 in 1184) [ClassicSimilarity], result of:
          0.020819884 = score(doc=1184,freq=2.0), product of:
            0.17937298 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051222645 = queryNorm
            0.116070345 = fieldWeight in 1184, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1184)
      0.4 = coord(2/5)
    
    Abstract
    Google's December 2004 announcement of its intention to collaborate with five major research libraries - Harvard University, the University of Michigan, Stanford University, the University of Oxford, and the New York Public Library - to digitize and surface their print book collections in the Google searching universe has, predictably, stirred conflicting opinion, with some viewing the project as a welcome opportunity to enhance the visibility of library collections in new environments, and others wary of Google's prospective role as gateway to these collections. The project has been vigorously debated on discussion lists and blogs, with the participating libraries commonly referred to as "the Google 5". One point most observers seem to concede is that the questions raised by this initiative are both timely and significant. The Google Print Library Project (GPLP) has galvanized a long overdue, multi-faceted discussion about library print book collections. The print book is core to library identity and practice, but in an era of zero-sum budgeting, it is almost inevitable that print book budgets will decline as budgets for serials, digital resources, and other materials expand. As libraries re-allocate resources to accommodate changing patterns of user needs, print book budgets may be adversely impacted. Of course, the degree of impact will depend on a library's perceived mission. A public library may expect books to justify their shelf-space, with de-accession the consequence of minimal use. A national library, on the other hand, has a responsibility to the scholarly and cultural record and may seek to collect comprehensively within particular areas, with the attendant obligation to secure the long-term retention of its print book collections. The combination of limited budgets, changing user needs, and differences in library collection strategies underscores the need to think about a collective, or system-wide, print book collection - in particular, how can an inter-institutional system be organized to achieve goals that would be difficult, and/or prohibitively expensive, for any one library to undertake individually [4]? Mass digitization programs like GPLP cast new light on these and other issues surrounding the future of library print book collections, but at this early stage, it is light that illuminates only dimly. It will be some time before GPLP's implications for libraries and library print book collections can be fully appreciated and evaluated. But the strong interest and lively debate generated by this initiative suggest that some preliminary analysis - premature though it may be - would be useful, if only to undertake a rough mapping of the terrain over which GPLP potentially will extend. At the least, some early perspective helps shape interesting questions for the future, when the boundaries of GPLP become settled, workflows for producing and managing the digitized materials become systematized, and usage patterns within the GPLP framework begin to emerge.
    This article offers some perspectives on GPLP in light of what is known about library print book collections in general, and those of the Google 5 in particular, from information in OCLC's WorldCat bibliographic database and holdings file. Questions addressed include:
    * Coverage: What proportion of the system-wide print book collection will GPLP potentially cover? What is the degree of holdings overlap across the print book collections of the five participating libraries?
    * Language: What is the distribution of languages associated with the print books held by the GPLP libraries? Which languages are predominant?
    * Copyright: What proportion of the GPLP libraries' print book holdings are out of copyright?
    * Works: How many distinct works are represented in the holdings of the GPLP libraries? How does a focus on works impact coverage and holdings overlap?
    * Convergence: What are the effects on coverage of using a different set of five libraries? What are the effects of adding the holdings of additional libraries to those of the GPLP libraries, and how do these effects vary by library type?
    These questions certainly do not exhaust the analytical possibilities presented by GPLP. More in-depth analysis might look at Google 5 coverage in particular subject areas; it also would be interesting to see how many books covered by the GPLP have already been digitized in other contexts. However, these questions are left to future studies. The purpose here is to explore a few basic questions raised by GPLP, and in doing so, provide an empirical context for the debate that is sure to continue for some time to come. A secondary objective is to lay some groundwork for a general set of questions that could be used to explore the implications of any mass digitization initiative. A suggested list of questions is provided in the conclusion of the article.
    Date
    26.12.2011 14:08:22
  10. "Google Books" darf weitermachen wie bisher : Entscheidung des Supreme Court in den USA (2016) 0.03
    0.032381076 = product of:
      0.16190538 = sum of:
        0.16190538 = weight(_text_:books in 2923) [ClassicSimilarity], result of:
          0.16190538 = score(doc=2923,freq=12.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.6539958 = fieldWeight in 2923, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2923)
      0.2 = coord(1/5)
    
    Abstract
    The internet giant may continue its "Google Books" project as before. The Supreme Court of the United States rejected the review sought by an authors' association. Google is testing the limits of fairness with its project, the judges said, but is acting lawfully.
    Content
    " Im Streit mit Google um Urheberrechte ist eine Gruppe von Buchautoren am Obersten US-Gericht gescheitert. Der Supreme Court lehnte es ab, die google-freundliche Entscheidung eines niederen Gerichtes zur Revision zuzulassen. In dem Fall geht es um die Online-Bibliothek "Google Books", für die der kalifornische Konzern Gerichtsunterlagen zufolge mehr als 20 Millionen Bücher digitalisiert hat. Durch das Projekt können Internet-Nutzer innerhalb der Bücher nach Stichworten suchen und die entsprechenden Textstellen lesen. Die drei zuständigen Richter entschieden einstimmig, dass in dem Fall zwar die Grenzen der Fairness ausgetestet würden, aber das Vorgehen von Google letztlich rechtens sei. Entschädigungen in Milliardenhöhe gefürchtet Die von dem Interessensverband Authors Guild angeführten Kläger sahen ihre Urheberrechte durch "Google Books" verletzt. Dazu gehörten auch prominente Künstler wie die Schriftstellerin und Dichterin Margaret Atwood. Google führte dagegen an, die Internet-Bibliothek kurbele den Bücherverkauf an, weil Leser dadurch zusätzlich auf interessante Werke aufmerksam gemacht würden. Google reagierte "dankbar" auf die Entscheidung des Supreme Court. Der Konzern hatte befürchtet, bei einer juristischen Niederlage Entschädigungen in Milliardenhöhe zahlen zu müssen."
    Object
    Google books
    Source
    https://www.tagesschau.de/wirtschaft/google-books-entscheidung-101.html
  11. Hook, P.A.; Gantchev, A.: Using combined metadata sources to visualize a small library (OBL's English Language Books) (2017) 0.03
    0.032381076 = product of:
      0.16190538 = sum of:
        0.16190538 = weight(_text_:books in 3870) [ClassicSimilarity], result of:
          0.16190538 = score(doc=3870,freq=12.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.6539958 = fieldWeight in 3870, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3870)
      0.2 = coord(1/5)
    
    Abstract
    Data from multiple knowledge organization systems are combined to provide a global overview of the content holdings of a small personal library. Subject headings and classification data are used to effectively map the combined book and topic space of the library. While harvested and manipulated by hand, the work reveals issues and potential solutions when using automated techniques to produce topic maps of much larger libraries. The small library visualized consists of the thirty-nine digital English-language books found in the Osama Bin Laden (OBL) compound in Abbottabad, Pakistan upon his death. As this list of books has garnered considerable media attention, it is worth providing a visual overview of the subject content of these books - some of which is not readily apparent from the titles. Metadata from subject headings and classification numbers was combined to create book-subject maps. Tree maps of the classification data were also produced. The books contain 328 subject headings. In order to enhance the base map with a meaningful thematic overlay, library holdings-count data was also harvested (and aggregated from duplicates). This additional data revealed the relative scarcity or popularity of individual books.
  12. German Business CD-ROM for professional contacts : Die Datenbank der deutschen Wirtschaft (1996) 0.03
    0.03172685 = product of:
      0.15863423 = sum of:
        0.15863423 = weight(_text_:books in 4390) [ClassicSimilarity], result of:
          0.15863423 = score(doc=4390,freq=2.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.6407824 = fieldWeight in 4390, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.09375 = fieldNorm(doc=4390)
      0.2 = coord(1/5)
    
    Issue
    Electronic books for WINDOWS. Issue I/1996.
  13. International books in print plus : [computer file] (1996) 0.03
    0.03172685 = product of:
      0.15863423 = sum of:
        0.15863423 = weight(_text_:books in 6469) [ClassicSimilarity], result of:
          0.15863423 = score(doc=6469,freq=2.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.6407824 = fieldWeight in 6469, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.09375 = fieldNorm(doc=6469)
      0.2 = coord(1/5)
    
  14. Celli, J.: ¬The New Books Project : a prototype for re-inventing the Cataloguing-in-Publication program to meet the needs of publishers, libraries and readers in the 21st century (2001) 0.03
    0.03172685 = product of:
      0.15863423 = sum of:
        0.15863423 = weight(_text_:books in 6897) [ClassicSimilarity], result of:
          0.15863423 = score(doc=6897,freq=2.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.6407824 = fieldWeight in 6897, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.09375 = fieldNorm(doc=6897)
      0.2 = coord(1/5)
    
  15. Koch, C.: Can a photodiode be conscious? (2013) 0.03
    0.029912358 = product of:
      0.1495618 = sum of:
        0.1495618 = weight(_text_:books in 4560) [ClassicSimilarity], result of:
          0.1495618 = score(doc=4560,freq=4.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.60413545 = fieldWeight in 4560, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0625 = fieldNorm(doc=4560)
      0.2 = coord(1/5)
    
    Content
    Reply to John Searle's review of: Koch, C.: Consciousness: confessions of a romantic reductionist. Cambridge, Massachusetts: MIT Press 2012, in: The New York Review of Books, 10.01.2013 [https://www.nybooks.com/articles/2013/03/07/can-photodiode-be-conscious/?pagination=false&printpage=true]
    Source
    New York Review of Books, [https://www.nybooks.com/articles/2013/01/10/can-information-theory-explain-consciousness/]. 2013
  16. Tozer, J.: How long is the perfect book? : Bigger really is better. What the numbers say (2019) 0.03
    0.029912358 = product of:
      0.1495618 = sum of:
        0.1495618 = weight(_text_:books in 4686) [ClassicSimilarity], result of:
          0.1495618 = score(doc=4686,freq=4.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.60413545 = fieldWeight in 4686, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.0625 = fieldNorm(doc=4686)
      0.2 = coord(1/5)
    
    Abstract
    British novelist E.M. Forster once complained that long books "are usually overpraised" because "the reader wishes to convince others and himself that he has not wasted his time." To test his theory we collected reader ratings for 737 books tagged as "classic literature" on Goodreads.com, a review aggregator with 80m members. The bias towards chunky tomes was substantial. Slim volumes of 100 to 200 pages scored only 3.87 out of 5, whereas those over 1,000 pages scored 4.19. Longer is better, say the readers.
  17. Global books in print plus : complete English-language bibliographic information from the United States, United Kingdom, continental Europe, Australia, New Zealand, Africa, Asia, Latin America, Canada, and the oceanic states (1994) 0.03
    0.02643904 = product of:
      0.1321952 = sum of:
        0.1321952 = weight(_text_:books in 7837) [ClassicSimilarity], result of:
          0.1321952 = score(doc=7837,freq=2.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.5339854 = fieldWeight in 7837, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.078125 = fieldNorm(doc=7837)
      0.2 = coord(1/5)
    
  18. Mann, T.: Is precoordination unnecessary in LCSH? : Are Web sites more important to catalog than books?: a reference librarian's thought on the future of bibliographic control (2000) 0.03
    0.02643904 = product of:
      0.1321952 = sum of:
        0.1321952 = weight(_text_:books in 6135) [ClassicSimilarity], result of:
          0.1321952 = score(doc=6135,freq=2.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.5339854 = fieldWeight in 6135, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.078125 = fieldNorm(doc=6135)
      0.2 = coord(1/5)
    
  19. Graf, K.: Großer Suchmaschinentest 2021 : Alternativen zu Google? (2021) 0.03
    0.02643904 = product of:
      0.1321952 = sum of:
        0.1321952 = weight(_text_:books in 2443) [ClassicSimilarity], result of:
          0.1321952 = score(doc=2443,freq=2.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.5339854 = fieldWeight in 2443, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.078125 = fieldNorm(doc=2443)
      0.2 = coord(1/5)
    
    Abstract
    "Away from Google: the best search-engine alternatives" is the title of an article in Der Standard. And again my answer is that it would be dumb and foolish to forgo a direct Google query for scholarly purposes. In particular, the integration of results from Google Books, to which one can switch directly, and from Google Scholar makes Google web search indispensable.
  20. Díaz, P.: Usability of hypermedia educational e-books (2003) 0.03
    0.025904862 = product of:
      0.1295243 = sum of:
        0.1295243 = weight(_text_:books in 1198) [ClassicSimilarity], result of:
          0.1295243 = score(doc=1198,freq=12.0), product of:
            0.24756333 = queryWeight, product of:
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.051222645 = queryNorm
            0.52319664 = fieldWeight in 1198, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.8330836 = idf(docFreq=956, maxDocs=44218)
              0.03125 = fieldNorm(doc=1198)
      0.2 = coord(1/5)
    
    Abstract
    To arrive at relevant and reliable conclusions concerning the usability of a hypermedia educational e-book, developers have to apply a well-defined evaluation procedure as well as a set of clear, concrete and measurable quality criteria. Evaluating an educational tool involves testing not only the user interface but also the didactic method, the instructional materials and the interaction mechanisms, to prove whether or not they help users reach their learning goals. This article presents a number of evaluation criteria for hypermedia educational e-books and describes how they are embedded into an evaluation procedure. The work is chiefly aimed at helping education developers evaluate their systems and at providing them with guidance for addressing educational requirements during the design process. In recent years, more and more educational e-books have been created, whether by academics trying to keep pace with the advanced requirements of the virtual university or by publishers seeking to meet the increasing demand for educational resources that can be accessed anywhere and anytime, and that include multimedia information, hypertext links and powerful search and annotation mechanisms. To develop a useful educational e-book, many things have to be considered, such as the reading patterns of users, accessibility for different types of users and computer platforms, copyright and legal issues, the development of new business models, and so on. Addressing usability is very important, since e-books are interactive systems and consequently have to be designed with the needs of their users in mind. Evaluating usability involves analyzing whether systems are effective, efficient and secure to use; easy to learn and remember; and genuinely useful. Any interactive system, and e-books are such systems, has to be assessed to determine whether it is really usable as well as useful. Such an evaluation is concerned not only with assessing the user interface but also with analyzing whether the system can be used efficiently to meet the needs of its users - who, in the case of educational e-books, are learners and teachers. Evaluation provides the opportunity to gather valuable information about design decisions. However, to be successful, the evaluation has to be carefully planned and prepared so that developers collect appropriate and reliable data from which to draw relevant conclusions.

Languages

  • e 111
  • d 91
  • el 2
  • a 1
  • nl 1

Types

  • a 99
  • i 11
  • b 6
  • m 5
  • r 5
  • s 2
  • n 1
  • x 1