Search (792 results, page 40 of 40)

  • type_ss:"m"
  1. Rösch, H.: Academic libraries und cyberinfrastructure in den USA : das System wissenschaftlicher Kommunikation zu Beginn des 21. Jahrhunderts (2008) 0.00
    0.0038316066 = product of:
      0.007663213 = sum of:
        0.007663213 = product of:
          0.015326426 = sum of:
            0.015326426 = weight(_text_:web in 3074) [ClassicSimilarity], result of:
              0.015326426 = score(doc=3074,freq=2.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.09014259 = fieldWeight in 3074, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3074)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
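The explain tree above can be reproduced numerically. Below is a minimal sketch of Lucene's ClassicSimilarity TF-IDF formula, with every constant taken from the first result's breakdown; the idf reconstruction `1 + ln(maxDocs/(docFreq+1))` is the standard ClassicSimilarity definition, which the tree reports only as a final value:

```python
import math

# Reproducing the ClassicSimilarity explain tree shown above.
# All constants (docFreq, maxDocs, queryNorm, fieldNorm, coord) come
# directly from the first result's scoring breakdown.
def classic_similarity(freq, doc_freq, max_docs, query_norm, field_norm, coords):
    tf = math.sqrt(freq)                           # 1.4142135 for freq=2.0
    idf = 1 + math.log(max_docs / (doc_freq + 1))  # 3.2635105 for docFreq=4597
    query_weight = idf * query_norm                # 0.17002425
    field_weight = tf * idf * field_norm           # 0.09014259
    score = query_weight * field_weight            # 0.015326426
    for c in coords:                               # two coord(1/2) factors
        score *= c
    return score

score = classic_similarity(freq=2.0, doc_freq=4597, max_docs=44218,
                           query_norm=0.052098576, field_norm=0.01953125,
                           coords=[0.5, 0.5])
# score reproduces the final value 0.0038316066 of the tree above
```

Multiplying out the tree bottom-up (fieldWeight times queryWeight, then the two coord factors) yields exactly the 0.0038316066 reported for this hit.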
    
    Footnote
    But the Library of Congress too, Rösch notes, stated in 2000 that today no single library is capable of supplying the scholarly system with all the information it needs; only a functionally differentiated system can do so. Rösch sees the beginnings of such a system above all in forms of cooperative division of labor among libraries such as the Global Resources Network, JSTOR, Portico, CLOCKSS, or SPARC, and credits the American associations with promoting the shift toward a functionally differentiated system with energy and success. The author describes in detail the demands that scholarship places on its information logistics, treating thoroughly the 2003 "Atkins Report" and the subsequent cyberinfrastructure studies such as the "Cultural Commonwealth Report". It is remarkable with what clarity, for all its brevity, the book characterizes and analytically situates these developments. Rösch identifies the following features of this upheaval in scholarly communication: - primacy of online sources and acceleration - interdisciplinarity, collaboration, internationalization - growth in volume and information overload - the growing importance of informal forms of communication and their decreasing distinguishability from institutionally formalized forms (a loss of structure in communication) - the use of database and retrieval technology - data mining, and the upgrading and exponential growth of primary research data - self-archiving and open access - multimediality. Valuable in this context is Rösch's discussion of the fact that open-access publications are sometimes simply ignored in professorial appointment decisions, and of how, by means of informetric approaches and Web 2.0 functionality, they could be incorporated into formalized evaluations.
  2. Stöcklin, N.: Wikipedia clever nutzen : in Schule und Beruf (2010) 0.00
    
    Footnote
    By now at the latest, the skeptics, including those among scholars and librarians, should pause and re-examine their reservations about Wikipedia. The book by Nando Stöcklin, a staff member at the Pädagogische Hochschule Bern, can be very helpful here: his arguments convinced me. But this book is recommended reading not only for doubters. It gives a good overview of Wikipedia; it is theoretically well grounded, practice-oriented, easy to understand, and at the same time engaging and pleasant to read - partly because each chapter opens with a fictional dialogue that leads into the topic from a concrete situation. The first chapter deals with the advantages of Wikipedia in historical perspective. Making knowledge available in organized form was the goal of many different endeavors in antiquity and in medieval monasteries, in libraries and, since the Enlightenment, through encyclopedias; compared with all of these, Wikipedia is accessible to everyone, at any time, in any place. The second chapter identifies further strengths of Wikipedia: the sheer quantity of available information, its currency, and the democratic process of its creation and editing. One precondition for Wikipedia's success is certainly the underlying wiki software (MediaWiki), which allows users not only to read content but also to change it themselves, with earlier versions archived and restorable at any time. Wikipedia shares these Web 2.0 principles with many other wiki projects, however, which have never attained comparable fame - leaving aside WikiLeaks, which dominated the news coverage a few weeks ago. The real secret of Wikipedia's success lies rather in its innovative organization, which rests on the principles of democracy and self-organization. 
The guidelines set by Wikipedia's founders - Jimmy Wales, a stockbroker, and Larry Sanger, a philosophy lecturer - were minimalist: contributions should be neutral, objective, pluralistic where necessary, undogmatic, and above all verifiable - quality criteria, in other words, like those that apply to scholarly knowledge. Unlike scholarly publishing, where copyright and exploitation rights are notoriously restrictive, Wikipedia takes a different path. All contributions are licensed under Creative Commons BY-SA, i.e. anyone may copy and reuse content, including commercially, provided they credit the authors ("by") and, if they modify the content, release it under the same license ("sa" = "share alike").
  3. Ingwersen, P.; Järvelin, K.: ¬The turn : integration of information seeking and retrieval in context (2005) 0.00
    
    Footnote
    - Chapter five provides a corresponding overview of the cognitive and user-oriented IR tradition. It shows which IR studies other than the laboratory-oriented ones can be carried out, ranging from early models (e.g. Taylor) through Belkin's ASK concept to Ingwersen's model of polyrepresentation, and from Bates's berrypicking approach to Vakkari's task-based IR model. Web IR, OKAPI, and debates about the concept of relevance are also addressed here. - In the following chapter the authors propose an integrated IS&R research model that takes into account the manifold relationships between information seekers, system developers, interfaces, and the other aspects involved. Their approach unites traditional laboratory research with various user-oriented traditions from IS&R, in particular the empirical approaches to IS and to interactive IR, in a holistic cognitive model. - Chapter seven examines the implications of this model for IS&R; what stands out is how complex information seekers' requests are compared with the relative simplicity of the algorithms for finding relevant documents. Mapping the widely varying cognitive states of those posing queries within system development is certainly no trivial task; how the central problem of incorporating meaning might be solved there remains an open question. - The eighth chapter attempts to turn the points discussed so far into an IS&R research program (processes - behavior - system functionality - performance), adding some critical remarks on research practice to date. - The concluding ninth chapter briefly summarizes the book and can therefore also serve as an entry point into the topic. 
This is followed by a very useful glossary of all the important terms used in the book, a bibliography, and a subject index. Ingwersen and Järvelin have produced a very ambitious and yet readable book. The survey chapters and discussions it offers are not an introduction to information science, but they cover a large part of the subfields that are current in the discipline today and touched by ongoing research and publication. One might even put it - perhaps a little pointedly - like this: what is treated here is, in effect, modern information science itself. The attempt to unite the two research traditions will surely secure this work a place in the history of the discipline. The book's title is less felicitous. "The Turn" is meant to denote a turn, namely toward an integrated view of IS and IR. This probably comes across better in the subtitle, which the authors presumably found too dry. A pity, because "The Turn" already exists, for instance, in our union catalogue - albeit with the addition "from the Cold War to a new era; the United States and the Soviet Union 1983-1990". The publisher, which has otherwise produced a handsome (if not exactly inexpensive) product, would have done better to prevent such ambiguous duplication. That notwithstanding, I recommend this important book for acquisition without reservation; no larger library should be without it."
  4. Bade, D.: ¬The creation and persistence of misinformation in shared library catalogs : language and subject knowledge in a technological era (2002) 0.00
    
    Date
    22. 9.1997 19:16:05
  5. Information visualization in data mining and knowledge discovery (2002) 0.00
    
    Date
    23. 3.2008 19:10:22
  6. Ratzan, L.: Understanding information systems : what they do and why we need them (2004) 0.00
    
    Footnote
    In "Organizing Information" various fundamental organizational schemes are compared. These include hierarchical, relational, hypertext, and random access models. Each is described initially and then expanded on by listing advantages and disadvantages. This comparative format-not found elsewhere in the book-improves access to the subject and overall understanding. The author then affords considerable space to Boolean searching in the chapter "Retrieving Information." Throughout this chapter, the intricacies and problems of pattern matching and relevance are highlighted. The author elucidates the fact that document retrieval by simple pattern matching is not the same as problem solving. Therefore, "always know the nature of the problem you are trying to solve" (p. 56). This chapter is one of the more important ones in the book, covering a large topic swiftly and concisely. Chapters 5 through 11 then delve deeper into various specific issues of information systems. The chapters on securing and concealing information are exceptionally good. Without mentioning specific technologies, Mr. Ratzan is able to clearly present fundamental aspects of information security. Principles of backup security, password management, and encryption are also discussed in some detail. The latter is illustrated with some fascinating examples, from the Navajo Code Talkers to invisible ink and others. The chapters on measuring, counting, and numbering information complement each other well. Some of the more math-centric discussions and examples are found here. "Measuring Information" begins with a brief overview of bibliometrics and then moves quickly through Lotka's law, Zipf's law, and Bradford's law. For an LIS student, exposure to these topics is invaluable. Baseball statistics and web metrics are used for illustration purposes towards the end. In "Counting Information," counting devices and methods are first presented, followed by discussion of the Fibonacci sequence and golden ratio. 
This relatively long chapter ends with examples of the tower of Hanoi, the chances of winning the lottery, and poker odds. The bulk of "Numbering Information" centers on prime numbers and pi. This chapter reads more like something out of an arithmetic book and seems somewhat extraneous here. Three specific types of information systems are presented in the second half of the book, each afforded its own chapter. These examples are universal enough not to become dated or irrelevant over time. "The Computer as an Information System" is relatively short and focuses on bits, bytes, and data compression. Considering the Internet as an information system-chapter 13-is an interesting illustration. It brings up issues of IP addressing and the "privilege-vs.-right" access question. We are reminded that the distinction between information rights and privileges is often unclear. A highlight of this chapter is the discussion of metaphors people use to describe the Internet, derived from the author's own research. He has found that people have varying mental models of the Internet, potentially affecting its perception and subsequent use.
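The informetric laws the reviewer singles out (Lotka, Zipf, Bradford) lend themselves to a quick illustration. The sketch below shows Zipf's law only; the word list is invented for the example and is not taken from Ratzan's book:

```python
from collections import Counter

# Zipf's law, as invoked in the review: the frequency of the r-th most
# frequent term is roughly top_frequency / r. Sample text is illustrative.
def zipf_predictions(words):
    ranked = Counter(words).most_common()
    top = ranked[0][1]
    return [(rank, word, freq, top / rank)
            for rank, (word, freq) in enumerate(ranked, start=1)]

preds = zipf_predictions("the cat sat on the mat the cat ran to the mat".split())
# each tuple: (rank, word, observed frequency, Zipf-predicted frequency)
```

On real corpora the observed and predicted columns track each other far more closely than this toy text can show; the point is only the 1/rank shape of the prediction.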
  7. Mossberger, K.; Tolbert, C.J.; Stansbury, M.: Virtual inequality : beyond the digital divide (2003) 0.00
    
    Footnote
    The economic opportunity divide is predicated on the hypothesis that there has, indeed, been a major shift in opportunities driven by changes in the information environment. The authors document this paradigm shift well with arguments from the political and economic right and left. This chapter might be described as an "attitudinal" chapter. The authors are concerned here with how respondents connect their perceived information skills and skill levels with their economic outlook and opportunities. Technological skills and economic opportunities are correlated, one finds, in the minds of all, across all ages, genders, races, ethnicities, and income levels. African Americans in particular are "... attuned to the use of technology for economic opportunity" (p. 80). The fourth divide is the democratic divide. The Internet may increase political participation, the authors posit, but only among groups predisposed to participate and perhaps among those with the skills necessary to take advantage of the electronic environment (p. 86). Certainly the Web has played an important role in disseminating and distributing political messages and in some cases in political fund raising. But by the analysis here, we must conclude that the message does not reach everyone equally. Thus, the Internet may widen the political participation gap rather than narrow it. The book has one major, perhaps fatal, flaw: its methodology and statistical application. The book draws upon a survey performed for the authors in June and July 2001 by Kent State University's Computer Assisted Telephone Interviewing (CATI) lab (pp. 7-9). CATI employed a survey protocol provided to the reader as Appendix 2. An examination of the questionnaire reveals that all questions yield either nominal or ordinal responses, including the income variable (pp. 9-10). 
Nevertheless, Mossberger, Tolbert, and Stansbury performed a series of multiple regression analyses (reported in a series of tables in Appendix 1) utilizing these data. Regression analysis requires interval/ratio data in order to be valid, although nominal and ordinal data can be incorporated by building dichotomous dummy variables. Perhaps Mossberger, Tolbert, and Stansbury utilized dummy variables; but I do not find that discussed. Moreover, I would question a multiple regression made up completely of dichotomous dummy variables. I come away from Virtual Inequality with mixed feelings. It is useful to think of the digital divide as more than one phenomenon. The four divides that Mossberger, Tolbert, and Stansbury offer - access, skills, economic opportunity, and democratic - are useful as a point of departure and debate. No doubt, other divides will be identified and documented. This book will lead the way. Second, without question, Mossberger, Tolbert, and Stansbury provide us with an extremely well-documented, -written, and -argued work. Third, the authors are to be commended for the multidisciplinarity of their work. Would that we could see more like it. My reservations about their methodological approach, however, hang over this review like a shroud."
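The dummy-variable coding the reviewer wishes the authors had discussed can be sketched in a few lines; the income categories below are invented for illustration and are not taken from the CATI survey:

```python
# Dichotomous dummy coding for a nominal/ordinal variable, as discussed in
# the review: every category except a baseline becomes its own 0/1 column,
# so the variable can legitimately enter a regression.
def dummy_code(values, baseline=None):
    categories = sorted(set(values))
    baseline = baseline or categories[0]       # dropped to avoid collinearity
    cols = [c for c in categories if c != baseline]
    return [{f"is_{c}": int(v == c) for c in cols} for v in values]

rows = dummy_code(["low", "middle", "high", "low"], baseline="low")
# each row now carries is_high / is_middle indicators; "low" is the baseline
```

The baseline category is omitted deliberately: including an indicator for every category would make the columns sum to one and render the regression design matrix singular.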
  8. Lambe, P.: Organising knowledge : taxonomies, knowledge and organisational effectiveness (2007) 0.00
    
    Footnote
    While each single paragraph of the book is packed with valuable advice and real-life experience, I consider the last chapter to be the most intriguing and ground-breaking one. It's only here that taxonomists meet folksonomists and ontologists in a fundamental attempt to write a new page on the relative position between old and emerging classification techniques. In a well-balanced and sober analysis that foregoes excessive enthusiasm in favor of more appropriate considerations about content scale, domain maturity, precision and cost, knowledge infrastructure tools are all arrayed from inexpensive and expressive folksonomies on one side, to the smart, formal, machine-readable but expensive world of ontologies on the other. In light of so many different tools, information infrastructure clearly appears more as a complex dynamic ecosystem than a static overly designed environment. Such a variety of tasks, perspectives, work activities and paradigms calls for a resilient, adaptive and flexible knowledge environment with a minimum of standardization and uniformity. The right mix of tools and approaches can only be determined case by case, by carefully considering the particular objectives and requirements of the organization while aiming to maximize its overall performance and effectiveness. Starting from the history of taxonomy-building and ending with the emerging trends in Web technologies, artificial intelligence and social computing, Organising Knowledge is thus both a guiding tool and inspirational reading, not only about taxonomies, but also about effectiveness, collaboration and finding middle ground: exactly the right principles to make your intranet, portal or document management tool a rich, evolving and long-lasting ecosystem."
  9. IFLA Cataloguing Principles : steps towards an International Cataloguing Code. Report from the 1st Meeting of Experts on an International Cataloguing Code, Frankfurt 2003 (2004) 0.00
    
    Footnote
    Rez. in: KO 31(2004) no.4, S.255-257 (P. Riva): "Cataloguing standardization at the international level can be viewed as proceeding in a series of milestone conferences. This meeting, the first in a series which will cover different regions of the world, will take its place in that progression. The first IFLA Meeting of Experts on an International Cataloguing Code (IME ICC), held July 28-30, 2003 at Die Deutsche Bibliothek in Frankfurt, gathered representatives of almost all European countries as well as three of the four AACR author countries. As explained in the introduction by Barbara Tillett, chair of the IME ICC planning committee, the plan is for five meetings in total. Subsequent meetings are to take place in Buenos Aires, Argentina (held August 17-18, 2004) for Latin America and the Caribbean, to be followed by Alexandria, Egypt (2005) for the Middle East, Seoul, South Korea (2006) for Asia, and Durban, South Africa (2007) for Africa. The impetus for planning these meetings was the 40th anniversary of the Paris Principles, approved at the International Conference on Cataloguing Principles held in 1961. Many will welcome the timely publication of the reports and papers from this important conference in book form. The original conference website (details given on p. 176), which includes most of the same material, is still extant, but the reports and papers gathered into this volume will be referred to by cataloguing rule makers long after the web as we know it has transformed itself into a new (and quite possibly not backwards compatible) environment.
  10. Broughton, V.: Essential thesaurus construction (2006) 0.00
    
    Footnote
    Rez. in: Mitt. VÖB 60(2007) H.1, S.98-101 (O. Oberhauser): "The author of Essential thesaurus construction (and essential taxonomy construction, as its implicit subtitle runs, cf. p. 1) is well qualified in this field through her teaching at the well-known School of Library, Archive and Information Studies at University College London and through her previous publications on (faceted) classification and thesauri. After Essential classification she has now produced her thesaurus textbook: with around 200 pages of text and almost 100 pages of appendices a handy volume that, as the short introductory chapter explains, owes its genesis largely to her teaching. The book stands in the tradition of Jean Aitchison et al. and addresses "the indexer" in the broadest sense, i.e. anyone who wants or needs to build a structured, controlled subject vocabulary for the purposes of subject indexing and searching. It aims to give this audience the methodological equipment for such a task, which it does, including the introduction and concluding remarks, in twenty chapters - an appealing structure that makes it possible to work through the material in well-measured doses. The exercises the author poses throughout (with solutions at the end of each chapter) contribute to this as well. The presentation begins by distinguishing the "information retrieval thesaurus" from the "reference thesaurus" far more commonly associated with the term (at least in the English-speaking world): a dictionary of synonyms arranged by conceptual similarity, popular as an aid to stylistic improvement when writing (scholarly) texts. 
Without yet going into detail, the book introduces the visual appearance and fields of application of thesauri, explains the thesaurus as a post-coordinate indexing language, and notes its closeness to faceted classification systems. Broughton then contrasts systematically organized systems (classification/taxonomy, concept and topic maps, ontologies) with alphabetically arranged, word-based ones (subject heading lists, thesaurus-like subject heading systems, and thesauri proper), which gives the reader further help with orientation. The uses of thesauri in indexing (including as a source of metadata for electronic and web documents) and in retrieval (query formulation, query expansion, browsing, and navigation) are discussed, as are the problems that arise when natural-language indexing systems are used. Examples explicitly point out the more or less strong subject specialization of most of these vocabularies, and information sources about thesauri (e.g. www.taxonomywarehouse.com) as well as thesauri for non-textual resources are briefly touched on.
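The query-expansion use of thesauri that the review describes can be illustrated with a toy example; all terms and relationships below are invented, not taken from Broughton's book:

```python
# A miniature information-retrieval thesaurus of the kind the book teaches
# how to build. "USE" points from a non-preferred to the preferred term;
# "NT" lists narrower terms used here for query expansion. All entries
# are invented for illustration.
THESAURUS = {
    "cars": {"USE": "automobiles"},
    "automobiles": {"NT": ["electric cars", "trucks"], "RT": ["road transport"]},
}

def expand_query(term):
    entry = THESAURUS.get(term, {})
    if "USE" in entry:                       # swap non-preferred for preferred
        return expand_query(entry["USE"])
    return [term] + entry.get("NT", [])      # add narrower terms for recall

terms = expand_query("cars")
# "cars" is redirected to "automobiles" and expanded with its narrower terms
```

Related terms ("RT") are deliberately left out of the expansion here; whether to include them is a recall/precision trade-off that a real retrieval system would make configurable.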
  11. Broughton, V.: Essential classification (2004) 0.00
    
    Footnote
    Essential Classification is also an exercise book. Indeed, it contains a number of practical exercises and activities in every chapter, along with suggested answers. Unfortunately, the answers are too often provided without the justifications and explanations that students would no doubt demand. The author has taken great care to explain all technical terms in her text, but formal definitions are also gathered in an extensive 172-term Glossary; appropriately, these terms appear in bold type the first time they are used in the text. A short, very short, annotated bibliography of standard classification textbooks and of manuals for the use of major classification schemes is provided. A detailed 11-page index completes the set of learning aids, which will be useful to an audience of students in their effort to grasp the basic concepts of the theory and the practice of document classification in a traditional environment. Essential Classification is a fine textbook. However, this reviewer deplores the fact that it presents only a very "traditional" view of classification, without much reference to newer environments such as the Internet, where classification also manifests itself in various forms. In Essential Classification, books are always used as examples, and we have to take the author's word that traditional classification practices and tools can also be applied to other types of documents and elsewhere than in the traditional library. Vanda Broughton writes, for example, that "Subject headings can't be used for physical arrangement" (p. 101), but this is not entirely true. Subject headings can be used for the physical arrangement of vertical files, for example, with each folder bearing a simple or complex heading which is then used for internal organization. And if it is true that subject headings cannot be reproduced on the spine of [physical] books (p. 
93), the situation is certainly different on the World Wide Web, where subject headings as metadata can be most useful in ordering a collection of hot links. The emphasis is also on the traditional paper-based, rather than on the electronic, version of classification schemes, with excellent justifications of course. The reality is, however, that supporting organizations (LC, OCLC, etc.) are now providing great quality services online, and that updates are now available only in electronic format and no longer on paper. E-based versions of classification schemes could be safely ignored in a theoretical text, but they have to be described and explained in a textbook published in 2005. One last comment: Professor Broughton tends to use the same term, "classification," to represent both the process (as in classification is grouping) and the tool (as in constructing a classification, using a classification, etc.). Even in the Glossary, where classification is first well defined as a process, and classification scheme as "a set of classes ...", the definition of classification scheme continues: "the classification consists of a vocabulary (...) and syntax ..." (p. 296-297). Such an ambiguous use of the term classification seems unfortunate and unnecessarily confusing in an otherwise very good basic textbook on the categorization of concepts and subjects, document organization, and subject representation."
  12. Chowdhury, G.G.; Chowdhury, S.: Introduction to digital libraries (2003) 0.00
    0.0026821247 = product of:
      0.0053642495 = sum of:
        0.0053642495 = product of:
          0.010728499 = sum of:
            0.010728499 = weight(_text_:web in 6119) [ClassicSimilarity], result of:
              0.010728499 = score(doc=6119,freq=2.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.06309982 = fieldWeight in 6119, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.013671875 = fieldNorm(doc=6119)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
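The `explain()` tree above walks through Lucene's ClassicSimilarity (TF-IDF) arithmetic for the term "web" in doc 6119. A minimal sketch in Python reproducing those numbers; the constants are copied from the tree, and the variable names are my own:

```python
import math

# ClassicSimilarity score breakdown for term "web" in doc 6119.
tf = math.sqrt(2.0)          # tf(freq=2.0) = sqrt(freq) = 1.4142135
idf = 3.2635105              # idf(docFreq=4597, maxDocs=44218)
                             #   = 1 + ln(44218 / (4597 + 1))
query_norm = 0.052098576     # makes scores comparable across queries
field_norm = 0.013671875     # per-document field length/boost norm

query_weight = idf * query_norm        # ≈ 0.17002425
field_weight = tf * idf * field_norm   # ≈ 0.06309982
term_score = query_weight * field_weight  # ≈ 0.010728499

# Two coord(1/2) factors: at each level, only 1 of 2 query clauses matched.
score = term_score * 0.5 * 0.5         # ≈ 0.0026821247
```

Note that idf enters twice (once in the query weight, once in the field weight), which is why these explain trees are dominated by the idf of the rarest matching term.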
    
    Footnote
    Chapter 13 on DL evaluation merges criteria from traditional library evaluation with criteria from user interface design and information retrieval. Quantitative, macro-evaluation techniques are emphasized, and again, some DL evaluation projects and reports are illustrated. A very brief chapter on the role of librarians in the DL follows, emphasizing that traditional reference skills are paramount to the success of the digital librarian, but that he or she should also be savvy in Web page and user interface design. A final chapter on research trends in digital libraries seems a bit incoherent. It mentions many of the previous chapters' topics and would possibly be better organized if written as summary sections and distributed among the other chapters. The book's breadth is quite expansive, touching on both fundamental and advanced topics necessary to a well-rounded DL education. As the book is thoroughly referenced to DL and DL-related research projects, it serves as a useful starting point for those interested in more in-depth learning. However, this breadth is also a weakness. In my opinion, the sheer number of research projects and papers surveyed leaves the authors little space to critique and summarize key issues. Many of the case studies are presented as itemized lists and are not used to exemplify specific points. I feel that an introductory text should exercise some editorial and evaluative rights to create structure and organization for the uninitiated. Case studies should be carefully chosen to exemplify the specific issues, differences, and strengths highlighted. It is lamentable that in many of the descriptions of research projects, the authors tend to give more historical and funding background than is necessary and miss out on giving a synthesis of the pertinent details.

Languages

  • e 429
  • d 344
  • m 9
  • es 2
  • de 1
  • f 1
  • pl 1

Types

  • s 196
  • i 23
  • el 8
  • b 2
  • d 1
  • n 1
  • r 1
  • u 1
  • x 1
