Search (9613 results, page 481 of 481)

  1. Theories of information behavior (2005) 0.00
    0.0024578942 = product of:
      0.0049157883 = sum of:
        0.0049157883 = product of:
          0.009831577 = sum of:
            0.009831577 = weight(_text_:5 in 68) [ClassicSimilarity], result of:
              0.009831577 = score(doc=68,freq=2.0), product of:
                0.15247129 = queryWeight, product of:
                  2.9180994 = idf(docFreq=6494, maxDocs=44218)
                  0.052250203 = queryNorm
                0.0644815 = fieldWeight in 68, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.9180994 = idf(docFreq=6494, maxDocs=44218)
                  0.015625 = fieldNorm(doc=68)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
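The explain tree above is Lucene ClassicSimilarity (TF-IDF) output. As a sanity check, its numbers can be reproduced from the formula score = coord · queryWeight · fieldWeight, with queryWeight = idf · queryNorm and fieldWeight = tf · idf · fieldNorm. A minimal sketch (queryNorm and fieldNorm are taken directly from the tree, since they depend on the full query and index):

```python
import math

# Values printed in the explain tree for term "5" in doc 68
doc_freq, max_docs = 6494, 44218
query_norm = 0.052250203   # taken as given from the tree
field_norm = 0.015625      # length normalization (here 1/64)
freq = 2.0                 # term frequency in the field

tf = math.sqrt(freq)                              # 1.4142135
idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # 2.9180994
query_weight = idf * query_norm                   # 0.15247129
field_weight = tf * idf * field_norm              # 0.0644815
term_score = query_weight * field_weight          # 0.009831577
final_score = term_score * 0.5 * 0.5              # two coord(1/2) factors

print(final_score)                                # ~0.0024578942
```

The same arithmetic applies to every entry on this page; only the doc id changes, which is why all thirteen hits share the identical score.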
    
    Footnote
    1. historical (understanding the present from the past)
    2. constructivist (individuals construct their understanding of their worlds under the influence of their social context)
    3. discourse-analytic (language constitutes the construction of identity and the formation of meanings)
    4. philosophical-analytic (rigorous analysis of concepts and theses)
    5. critical theory (analysis of hidden patterns of power and domination)
    6. ethnographic (understanding people by immersing oneself in their cultures)
    7. socio-cognitive (both the individual's thinking and its social or disciplinary environment influence information use)
    8. cognitive (focus on individuals' thinking in connection with searching, finding, and using information)
    9. bibliometric (statistical properties of information)
    10. physical (signal transmission, information theory)
    11. technological (meeting information needs through ever better systems and services)
    12. user-centered design ("usability", human-computer interaction)
    13. evolutionary (applying findings from biology and evolutionary psychology to information-related phenomena)
    Bates's contribution is, as always, well thought out, didactically well prepared, and written in clear language, so that one reads it with both pleasure and profit. The extensive list of references, which makes it easy to pursue the 13 metatheories named above, adds to the latter. . . .
  2. Design and usability of digital libraries : case studies in the Asia-Pacific (2005) 0.00
    0.0024578942 = product of:
      0.0049157883 = sum of:
        0.0049157883 = product of:
          0.009831577 = sum of:
            0.009831577 = weight(_text_:5 in 93) [ClassicSimilarity], result of:
              0.009831577 = score(doc=93,freq=2.0), product of:
                0.15247129 = queryWeight, product of:
                  2.9180994 = idf(docFreq=6494, maxDocs=44218)
                  0.052250203 = queryNorm
                0.0644815 = fieldWeight in 93, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.9180994 = idf(docFreq=6494, maxDocs=44218)
                  0.015625 = fieldNorm(doc=93)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Ultimately, the book emphasizes that universal access to a worldwide digital library is the common goal among all digital library designers. Being able to view the same information, no matter what format the material is in, is one of the next steps toward reaching this goal. The book also addresses various additional problems with designing and using digital libraries, such as pricing and costs, and the range of media types that currently exist. The writing styles differ from chapter to chapter because each is written by a different set of authors. In addition, the material in the chapters is presented quite diversely. For example, in chapter 5 the methodology section of the case study is explained in the form of mathematical equations, algorithms, and charts, and chapter 13 contains complex figures and diagrams, whereas chapter 16 is written entirely in text. Although the different ways the case studies are presented could be confusing to some, the entire book remains consistent and truly comes together as a whole because the chapters are organized so sensibly. Many figures, graphs, and tables are also provided throughout the chapters to guide readers visually. Particularly helpful are the sample screen shots of digital libraries in chapter 11. Here, readers can see exactly what they would see when viewing a digital library catalog. In general, the language and style of the book are easy to understand, and any uncommon words and phrases are clearly defined and explained. The authors mention that the book is primarily written for academics, college students, and practitioners who may want to learn more about the design and development of digital libraries. The authors do seem to reach this audience, because the language and writing style are geared toward members of academia, although those readers may represent a wide variety of disciplines. 
Likewise, computer scientists and software developers who are interested in and have been researching digital libraries will find this book useful and applicable to their current research. In conclusion, this book provides a wide variety of case studies that prove informative to researchers interested in the development and future progress of digital libraries. In today's information world, digital libraries are becoming more and more prominent, and the case studies presented demonstrate that the vision for the future of digital libraries is to include all types of materials, cultures, and languages within a single system. All in all, this book offers value to society, and all members of the academic world can learn from it.
  3. Bertram, J.: Einführung in die inhaltliche Erschließung : Grundlagen - Methoden - Instrumente (2005) 0.00
    0.0024578942 = product of:
      0.0049157883 = sum of:
        0.0049157883 = product of:
          0.009831577 = sum of:
            0.009831577 = weight(_text_:5 in 210) [ClassicSimilarity], result of:
              0.009831577 = score(doc=210,freq=2.0), product of:
                0.15247129 = queryWeight, product of:
                  2.9180994 = idf(docFreq=6494, maxDocs=44218)
                  0.052250203 = queryNorm
                0.0644815 = fieldWeight in 210, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.9180994 = idf(docFreq=6494, maxDocs=44218)
                  0.015625 = fieldNorm(doc=210)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The book begins with an overview of the central topics of subject indexing (ch. 1). It then introduces the core problem, which is tied to the duality between concepts and their designations (ch. 2). The methods of subject indexing take center stage next: abstracting (ch. 3), indexing (ch. 4), and automatic indexing methods (ch. 5). For each of these methods, work steps and quality criteria are identified and typological distinctions are drawn. A further chapter is devoted to a frequently neglected product of indexing activity: the index (ch. 6). Documentation languages then come into play as important indexing instruments. After an overview (ch. 7), their two main forms are treated, namely classifications (ch. 8-10) and thesauri (ch. 11-12), above all from the perspective of their construction; quality criteria and typological variants are also addressed. After a summarizing comparison of documentation languages (ch. 13), the Internet is presented as an exemplary field of application: first the indexing of Internet sources in general (ch. 14), then that of specialized information sources in particular (ch. 15). Each chapter opens with an overview of its main contents and the underlying literature and closes with selected bibliographic references, supplemented where applicable by pointers to reviews. All cited literature is collected in the concluding bibliography. Fundamental terms are set in spaced italics, examples and proper names in plain italics; arrows (->) always indicate references to figures or tables. The Internet sources cited were last checked for validity on 11-2-2005. 
The four theoretical core modules (abstracting, indexing, classifications, thesauri) are flanked by exercises of the kind I have used in teaching at the IID, each accompanied by sample solutions. It goes without saying that these are only a few of many possible solutions. My thanks for the realization of this publication go first to the participants in my courses at the IID. Through their encouraging feedback, their lively participation in class, and their critical questions they motivated me again and again to question and sharpen the course content. Jutta Lindenthal contributed valuable suggestions to this book. I also thank her for the immense care, time, and patience she devoted to proofreading the manuscript, and above all for her contagious enthusiasm. A warm thank-you goes to my father for his meticulous hunt for formal errors. My thanks for further corrections go to Sabine Walz and Jan Dürrschnabel. Finally, a personal remark: I took over the subject-indexing courses with a large dose of skepticism, expecting to be dealing with an endlessly dry subject. The more intensively I engaged with the material, the greater my enthusiasm became. If I can convey even a spark of it to my readers, that would be a great success for me.
  4. Nuovo soggettario : guida al sistema italiano di indicizzazione per soggetto, prototipo del thesaurus (2007) 0.00
    0.0024578942 = product of:
      0.0049157883 = sum of:
        0.0049157883 = product of:
          0.009831577 = sum of:
            0.009831577 = weight(_text_:5 in 664) [ClassicSimilarity], result of:
              0.009831577 = score(doc=664,freq=2.0), product of:
                0.15247129 = queryWeight, product of:
                  2.9180994 = idf(docFreq=6494, maxDocs=44218)
                  0.052250203 = queryNorm
                0.0644815 = fieldWeight in 664, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.9180994 = idf(docFreq=6494, maxDocs=44218)
                  0.015625 = fieldNorm(doc=664)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    An entry is structured so as to present all the essential elements of the indexing system. For each term are given: category, facet, related terms, Dewey interdisciplinary class number and, if necessary, definition or scope notes. Sources used are referenced (an appendix in the book lists those used in the current work). Historical notes indicate whenever a change of term has occurred, thus smoothing the transition from the old lists. In chapter 5, the longest one, detailed instructions with practical examples show how to create entries and how to relate terms; upper relationships must always be complete, right up to the top term, whereas hierarchies of related terms not yet fully developed may remain unfinished. Subject string construction consists of a double operation: analysis and synthesis. The former is the analysis of the logical functions performed by single concepts in the definition of the subject (e.g., transitive actions, object, agent, etc.) or in syntactic relationships (transitive relationships and the belonging relationship), so that each term for those concepts is assigned its role (e.g., key concept, transitive element, agent, instrument, etc.) in the subject string, where the core is distinct from the complementary roles (e.g., place, time, form, etc.). Synthesis is based on a scheme of nuclear and complementary roles, and citation order follows agreed-upon principles of one-to-one relationships and logical dependence. There is no standard citation order based on facets, in a categorial logic, but a flexible, though thorough, one. For example, it is possible for a time term (subdivision) to precede an action term, when the former is related to the latter as the object of the action: "Arazzi - Sec. 16.-17. - Restauro" [Tapestry - 16th-17th century - Restoration] (p. 126). 
So, even with more complex subjects, it is possible to produce perfectly readable strings covering the whole of the subject matter without splitting it into two incomplete and complementary headings. To this end, some unusual connectives are adopted, giving the strings a more discursive style.
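The analysis/synthesis procedure described in this review can be illustrated with a small sketch. The role names, the citation order, and the helper function below are hypothetical, chosen only for illustration, and are not part of the Nuovo soggettario itself; the sketch merely shows the idea of assembling a subject string from role-tagged terms according to an agreed citation order:

```python
# Hypothetical illustration: each concept is analysed into a (term, role)
# pair, then the string is synthesised by sorting the pairs into an agreed
# citation order and joining them with " - ".

# Assumed citation order for this example; here the time term precedes the
# action term because the period is tied to the object of the action
# (restoration *of* 16th-17th-century tapestry).
CITATION_ORDER = ["key concept", "time", "transitive element"]

def build_subject_string(concepts):
    """Assemble a subject string from role-tagged terms."""
    ranked = sorted(concepts, key=lambda c: CITATION_ORDER.index(c[1]))
    return " - ".join(term for term, _role in ranked)

entry = [
    ("Restauro", "transitive element"),   # the action performed
    ("Arazzi", "key concept"),            # the object of the action (core)
    ("Sec. 16.-17.", "time"),             # period, tied to the object
]
print(build_subject_string(entry))        # Arazzi - Sec. 16.-17. - Restauro
```

The flexible citation order the review describes would, in a real system, come from the analysis step rather than from a fixed list as assumed here.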
  5. Hofstadter, D.R.: I am a strange loop (2007) 0.00
    0.0024578942 = product of:
      0.0049157883 = sum of:
        0.0049157883 = product of:
          0.009831577 = sum of:
            0.009831577 = weight(_text_:5 in 666) [ClassicSimilarity], result of:
              0.009831577 = score(doc=666,freq=2.0), product of:
                0.15247129 = queryWeight, product of:
                  2.9180994 = idf(docFreq=6494, maxDocs=44218)
                  0.052250203 = queryNorm
                0.0644815 = fieldWeight in 666, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.9180994 = idf(docFreq=6494, maxDocs=44218)
                  0.015625 = fieldNorm(doc=666)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Isbn
    0-465-03078-5
  6. Schwartz, D.: Einsatz und Leitbilder der Datenverarbeitung in Bibliotheken : dargestellt an ausgewählten Projekten der Deutschen Forschungsgemeinschaft (2004) 0.00
    0.0024578942 = product of:
      0.0049157883 = sum of:
        0.0049157883 = product of:
          0.009831577 = sum of:
            0.009831577 = weight(_text_:5 in 1447) [ClassicSimilarity], result of:
              0.009831577 = score(doc=1447,freq=2.0), product of:
                0.15247129 = queryWeight, product of:
                  2.9180994 = idf(docFreq=6494, maxDocs=44218)
                  0.052250203 = queryNorm
                0.0644815 = fieldWeight in 1447, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.9180994 = idf(docFreq=6494, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1447)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Methodologically, as has already become clear, the study is organized around terms such as "Leitbilder" (guiding visions), "modernization", "rationalization", and "innovation", and aims to contrast these with concrete projects in a "practice-oriented perspective" (thus the blurb). From this angle the study offers a thoroughly interesting approach to a historical appraisal of data processing and its role in the development of libraries. Schwartz wants to examine the "projects in their entirety, with overarching descriptive features and development phases, in particular the goals and guiding visions of the library projects" (p. 4). What is problematic, however, is that the methodological approach is not pursued in sufficient depth, is never even summarily reflected upon, and that the selection of "sources", and the sources themselves, are handled entirely uncritically. It is neither clearly defined what is actually meant by a "Leitbild", nor what these guiding visions consist of or are. The same applies to the underlying material: are the DFG-funded projects taken as representative of the overall development, and if so, why is this not stated explicitly and supported by evidence? An unsubstantiated affirmative statement at the end of the study ("Owing to their thematic breadth, the DFG projects described mirror the technical development of the use of data processing in libraries", p. 162) is not sufficient. Why is primarily one funding program (plus the partly parallel program "Informationsvermittlung in Bibliotheken") examined? How do these activities fit into the DFG's other funding activities, and above all: how did this funding line relate to the DFG's framework plans and position papers? 
Worse still: on the one hand the author expressly refrains from offering his own assessments of the projects or of the funding activities discussed (p. 4 f.); instead, "comments and assessments of the projects are drawn from existing sources" (p. 5). On the other hand, these (self-)statements from the DFG context itself, and, in thin selection, from the projects' own contemporary progress reports, are adopted entirely uncritically (p. 111 f., 141 ff.). Success parameters for projects are discussed at length, yet any evaluation or success monitoring is declared off limits (p. 79 ff.). Quality criteria are named but treated extremely briefly: the degree of acceptance of a project, for instance, is given just twelve lines (p. 146) and plays no further role later on.
  7. Mossberger, K.; Tolbert, C.J.; Stansbury, M.: Virtual inequality : beyond the digital divide (2003) 0.00
    0.0024578942 = product of:
      0.0049157883 = sum of:
        0.0049157883 = product of:
          0.009831577 = sum of:
            0.009831577 = weight(_text_:5 in 1795) [ClassicSimilarity], result of:
              0.009831577 = score(doc=1795,freq=2.0), product of:
                0.15247129 = queryWeight, product of:
                  2.9180994 = idf(docFreq=6494, maxDocs=44218)
                  0.052250203 = queryNorm
                0.0644815 = fieldWeight in 1795, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.9180994 = idf(docFreq=6494, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1795)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Rez. in: JASIST 55(2004) no.5, S.467-468 (W. Koehler): "Virtual Inequality is an important contribution to the digital divide debate. That debate takes two basic forms. One centers on the divide between the "information rich" developed countries and the "information poor" developing countries. The second is concerned with the rift between information "haves" and "have-nots" within countries. This book addresses the latter domain and is concerned with the digital divide in the United States. This book is the product of a cross-disciplinary collaboration. Mossberger and Tolbert are both members of the Kent State University political science department, while Stansbury is on the Library and Information Science faculty. The book is extremely well documented. Perhaps the chapter on the democracy divide and e-government is the best done, reflecting the political science bent of two of the authors. E-government is very well covered. Unfortunately, e-commerce and e-education go virtually unmentioned. If e-government is important to defining the digital divide, then certainly e-commerce and e-education are as well. Mossberger, Tolbert, and Stansbury argue that the digital divide should be described as four different divides: the access divide, the skills divide, the economic opportunity divide, and the democratic divide. Each of these divides is developed in its own chapter. Each chapter draws well on the existing literature. The book is valuable if for no other reason than that it provides an excellent critique of the current state of the understanding of the digital divide in the United States. It is particularly good in its contrast of the approaches taken by the Clinton and George W. Bush administrations. Perhaps this is a function of the multidisciplinary strength of the book's authorship, for indeed it shows here. The access divide is defined along "connectivity" lines: who has access to digital technologies. 
The authors confirm the conventional wisdom that age and education are important predictors of in-home access, but they also argue that race and ethnicity are factors (pp. 32-33): Asian Americans have the greatest access, followed by whites, Latinos, and African Americans, in that order. Most access the Internet from home or work, followed by friends' computers, libraries, and other access points. The skills divide is defined as technical competence and information literacy (p. 38). Variation was found in technical competence by age, education, affluence, race, and ethnicity, but not gender (p. 47). The authors conclude that for the most part the skills divide mirrors the access divide (p. 55). While they found no gender difference, they did find a gender preference in skills acquisition: males prefer a more impersonal delivery ("online help and tutorials") while females prefer more personal instruction (p. 56).
  8. Net effects : how librarians can manage the unintended consequences of the Internet (2003) 0.00
    0.0024578942 = product of:
      0.0049157883 = sum of:
        0.0049157883 = product of:
          0.009831577 = sum of:
            0.009831577 = weight(_text_:5 in 1796) [ClassicSimilarity], result of:
              0.009831577 = score(doc=1796,freq=2.0), product of:
                0.15247129 = queryWeight, product of:
                  2.9180994 = idf(docFreq=6494, maxDocs=44218)
                  0.052250203 = queryNorm
                0.0644815 = fieldWeight in 1796, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.9180994 = idf(docFreq=6494, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1796)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Rez. in: JASIST 55(2004) no.11, S.1025-1026 (D.E. Agosto): ""Did you ever feel as though the Internet has caused you to lose control of your library?" So begins the introduction to this volume of over 50 articles, essays, library policies, and other documents from a variety of sources, most of which are library journals aimed at practitioners. Volume editor Block has a long history of library service as well as an active career as an online journalist. From 1977 to 1999 she was the Associate Director of Public Services at the St. Ambrose University library in Davenport, Iowa. She was also a Fox News Online weekly columnist from 1998 to 2000. She currently writes for and publishes the weekly ezine Exlibris, which focuses on the use of computers, the Internet, and digital databases to improve library services. Despite the promising premise of this book, the final product is largely a disappointment because of the superficial coverage of its issues. A listing of the most frequently represented sources serves to express the general level and style of the entries: nine articles are reprinted from Computers in Libraries, five from Library Journal, four from Library Journal NetConnect, four from ExLibris, four from American Libraries, three from College & Research Libraries News, two from Online, and two from The Chronicle of Higher Education. Most of the authors included contributed only one item, although Roy Tennant (manager of the California Digital Library) authored three of the pieces, and Janet L. Balas (library information systems specialist at the Monroeville Public Library in Pennsylvania) and Karen G. Schneider (coordinator of lii.org, the Librarians' Index to the Internet) each wrote two. Volume editor Block herself wrote six of the entries, most of which have been reprinted from ExLibris. Reading the volume is much like reading an issue of one of these journals - a pleasant experience that discusses issues in the field without presenting much research. 
Net Effects doesn't offer much in the way of theory or research, but then again it doesn't claim to. Instead, it claims to be an "idea book" (p. 5) with practical solutions to Internet-generated library problems. While the idea is a good one, little of the material is revolutionary or surprising (or even very creative), and most of the solutions offered will already be familiar to most of the book's intended audience.
  9. Lewandowski, D.: Web Information Retrieval : Technologien zur Informationssuche im Internet (2005) 0.00
    0.0024578942 = product of:
      0.0049157883 = sum of:
        0.0049157883 = product of:
          0.009831577 = sum of:
            0.009831577 = weight(_text_:5 in 3635) [ClassicSimilarity], result of:
              0.009831577 = score(doc=3635,freq=2.0), product of:
                0.15247129 = queryWeight, product of:
                  2.9180994 = idf(docFreq=6494, maxDocs=44218)
                  0.052250203 = queryNorm
                0.0644815 = fieldWeight in 3635, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.9180994 = idf(docFreq=6494, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3635)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Contents:
    1 Introduction
    2 Research context: 2.1 The search engine market 2.2 Forms of Web search 2.3 Architecture of algorithmic search engines 2.4 Query languages 2.5 Types of search queries 2.6 User studies 2.7 Research areas
    3 The size of the Web and its coverage by search engines: 3.1 The size of the indexable Web 3.2 The structure of the Web 3.3 Crawling 3.4 Freshness of the search engines 3.5 The Invisible Web
    4 Structural information: 4.1 Degree of document structure 4.2 Structural information in common Web document formats 4.3 Separation of navigation, layout, and content 4.4 Representation of the documents in the search engines' databases
    5 Classical information retrieval methods and their application in Web search engines: 5.1 Differences between classical information retrieval and Web information retrieval 5.2 Controlled vocabulary 5.3 Criteria for inclusion in the index 5.4 Information retrieval models
    6 Ranking: 6.1 Ranking factors 6.2 Measurability of relevance 6.3 Fundamental problems of relevance ranking in search engines
  10. Morville, P.: Ambient findability : what we find changes who we become (2005) 0.00
    0.0024578942 = product of:
      0.0049157883 = sum of:
        0.0049157883 = product of:
          0.009831577 = sum of:
            0.009831577 = weight(_text_:5 in 312) [ClassicSimilarity], result of:
              0.009831577 = score(doc=312,freq=2.0), product of:
                0.15247129 = queryWeight, product of:
                  2.9180994 = idf(docFreq=6494, maxDocs=44218)
                  0.052250203 = queryNorm
                0.0644815 = fieldWeight in 312, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.9180994 = idf(docFreq=6494, maxDocs=44218)
                  0.015625 = fieldNorm(doc=312)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Isbn
    0-596-00765-5
  11. Andelfinger, U.: Zur Aktualität des kritischen Diskurses über Wissens- und Informationssysteme : Versuch einer Bestandsaufnahme zum 50. Ernst-Schröder-Kolloquium im Mai 2006 (2006) 0.00
    0.0024578942 = product of:
      0.0049157883 = sum of:
        0.0049157883 = product of:
          0.009831577 = sum of:
            0.009831577 = weight(_text_:5 in 353) [ClassicSimilarity], result of:
              0.009831577 = score(doc=353,freq=2.0), product of:
                0.15247129 = queryWeight, product of:
                  2.9180994 = idf(docFreq=6494, maxDocs=44218)
                  0.052250203 = queryNorm
                0.0644815 = fieldWeight in 353, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.9180994 = idf(docFreq=6494, maxDocs=44218)
                  0.015625 = fieldNorm(doc=353)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Mitgliederbrief. Ernst-Schröder-Zentrum. 2006, Nr.32 vom 4.3.2006, S.5-7
  12. Batley, S.: Classification in theory and practice (2005) 0.00
    0.0024578942 = product of:
      0.0049157883 = sum of:
        0.0049157883 = product of:
          0.009831577 = sum of:
            0.009831577 = weight(_text_:5 in 1170) [ClassicSimilarity], result of:
              0.009831577 = score(doc=1170,freq=2.0), product of:
                0.15247129 = queryWeight, product of:
                  2.9180994 = idf(docFreq=6494, maxDocs=44218)
                  0.052250203 = queryNorm
                0.0644815 = fieldWeight in 1170, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.9180994 = idf(docFreq=6494, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1170)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This book examines a core topic in traditional librarianship: classification. Classification has often been treated as a sub-set of cataloguing and indexing, with relatively few basic textbooks concentrating solely on the theory and practice of classifying resources. This book attempts to redress the balance somewhat. The aim is to demystify a complex subject, by providing a sound theoretical underpinning, together with practical advice and promotion of practical skills. The text is arranged into five chapters: Chapter 1: Classification in theory and practice. This chapter explores theories of classification in broad terms and then focuses on the basic principles of library classification, introducing readers to technical terminology and different types of classification scheme. The next two chapters examine individual classification schemes in depth. Each scheme is explained using frequent examples to illustrate basic features. Working through the exercises provided should be enjoyable and will enable readers to gain practical skills in using the three most widely used general library classification schemes: Dewey Decimal Classification, Library of Congress Classification and Universal Decimal Classification. Chapter 2: Classification schemes for general collections. Dewey Decimal and Library of Congress classifications are the most useful and popular schemes for use in general libraries. The background, coverage and structure of each scheme are examined in detail in this chapter. Features of the schemes and their application are illustrated with examples. Chapter 3: Classification schemes for specialist collections. Dewey Decimal and Library of Congress may not provide sufficient depth of classification for specialist collections. In this chapter, classification schemes that cater to specialist needs are examined. 
Universal Decimal Classification is superficially very much like Dewey Decimal, but possesses features that make it a good choice for specialist libraries or special collections within general libraries. It is recognised that general schemes, no matter how deep their coverage, may not meet the classification needs of some collections. An answer may be to create a special classification scheme, and this process is examined in detail here. Chapter 4: Classifying electronic resources. Classification has been reborn in recent years with an increasing need to organise digital information resources. A lot of work in this area has been conducted within the computer science discipline, but uses basic principles of classification and thesaurus construction. This chapter takes a broad view of theoretical and practical issues involved in creating classifications for digital resources by examining subject trees, taxonomies and ontologies. Chapter 5: Summary. This chapter provides a brief overview of concepts explored in depth in previous chapters. Development of practical skills is emphasised throughout the text. It is only through using classification schemes that a deep understanding of their structure and unique features can be gained. Although all the major schemes covered in the text are available on the Web, it is recommended that hard-copy versions are used by those wishing to become acquainted with their overall structure. Recommended readings are supplied at the end of each chapter and provide useful sources of additional information and detail. Classification demands precision and the application of analytical skills; working carefully through the examples and the practical exercises should help readers to improve these faculties. Anyone who enjoys cryptic crosswords should recognise a parallel: classification often involves taking the meaning of something apart and then reassembling it in a different way.
  13. Broughton, V.: Essential classification (2004) 0.00
    0.0024578942 = product of:
      0.0049157883 = sum of:
        0.0049157883 = product of:
          0.009831577 = sum of:
            0.009831577 = weight(_text_:5 in 2824) [ClassicSimilarity], result of:
              0.009831577 = score(doc=2824,freq=2.0), product of:
                0.15247129 = queryWeight, product of:
                  2.9180994 = idf(docFreq=6494, maxDocs=44218)
                  0.052250203 = queryNorm
                0.0644815 = fieldWeight in 2824, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.9180994 = idf(docFreq=6494, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2824)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Isbn
    1-85604-514-5
