Search (22 results, page 1 of 2)

  • Filter: classification_ss:"06.74 / Informationssysteme"
  1. Semantic digital libraries (2009) 0.02
    Score: 0.021683814 = coord(2/7) × [networks (tf=2, idf=4.72992, docFreq=1060) + standards (tf=2, idf=4.4569545, docFreq=1393)]; fieldNorm=0.03125, queryNorm=0.04065836, maxDocs=44218
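The explain tree above can be reproduced from Lucene's ClassicSimilarity formulas: tf = √freq, idf = 1 + ln(maxDocs/(docFreq+1)), and each term's weight = queryWeight × fieldWeight, with the partial sum scaled by the coordination factor. A minimal Python sketch, assuming those standard formulas and taking queryNorm, docFreq, freq, and fieldNorm verbatim from the explain output (function and variable names are illustrative):

```python
import math

# Lucene ClassicSimilarity components, reconstructed for hit 1
# ("Semantic digital libraries", doc 3371).

def tf(freq):                      # term-frequency factor: sqrt(freq)
    return math.sqrt(freq)

def idf(doc_freq, max_docs):       # inverse document frequency
    return 1.0 + math.log(max_docs / (doc_freq + 1))

query_norm = 0.04065836            # taken verbatim from the explain tree

def term_score(freq, doc_freq, field_norm, max_docs=44218):
    query_weight = idf(doc_freq, max_docs) * query_norm
    field_weight = tf(freq) * idf(doc_freq, max_docs) * field_norm
    return query_weight * field_weight

networks  = term_score(freq=2.0, doc_freq=1060, field_norm=0.03125)
standards = term_score(freq=2.0, doc_freq=1393, field_norm=0.03125)

# coord(2/7): 2 of the 7 query terms matched this document
score = (networks + standards) * (2 / 7)
print(round(score, 9))             # ~0.021683814, the displayed score
```

Running this reproduces the per-term weights (0.0402 for "networks", 0.0357 for "standards") and the headline 0.02 score, which confirms that idf(docFreq=1060, maxDocs=44218) = 4.72992 as printed in the tree.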
    
    Abstract
    Libraries have always been an inspiration for the standards and technologies developed by semantic web activities. However, except for the Dublin Core specification, semantic web and social networking technologies have not been widely adopted and further developed by major digital library initiatives and projects. Yet semantic technologies offer a new level of flexibility, interoperability, and relationships for digital repositories. Kruk and McDaniel present semantic web-related aspects of current digital library activities, and introduce their functionality; they show examples ranging from general architectural descriptions to detailed usages of specific ontologies, and thus stimulate the awareness of researchers, engineers, and potential users of those technologies. Their presentation is completed by chapters on existing prototype systems such as JeromeDL, BRICKS, and Greenstone, as well as a look into the possible future of semantic digital libraries. This book is aimed at researchers and graduate students in areas like digital libraries, the semantic web, social networks, and information retrieval. This audience will benefit from detailed descriptions of both today's possibilities and also the shortcomings of applying semantic web technologies to large digital repositories of often unstructured data.
  2. Garlock, K.L.; Piontek, S.: Designing Web interfaces to library services and resources (1999) 0.02
    Score: 0.020099834 = coord(1/7) × networks (tf=8, idf=4.72992, fieldNorm=0.0546875)
    
    LCSH
    Library information networks
    Library information networks / United States
    Subject
    Library information networks
    Library information networks / United States
  3. Multimedia content and the Semantic Web : methods, standards, and tools (2005) 0.02
    Score: 0.016155332 = coord(2/7) × [standards (tf=8, idf=4.4569545) + coord(1/2) × "22" (tf=6, idf=3.5018296)]; fieldNorm=0.01953125, queryNorm=0.04065836
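Hit 3's score tree nests a second coordination factor: the term "22" (matching the DDC number and the date field) sits inside a sub-query where only 1 of 2 clauses matched, so its weight is scaled by coord(1/2) before the outer coord(2/7) applies. A short Python sketch of that aggregation, taking idf, freq, and norm values verbatim from the tree (helper names are illustrative):

```python
import math

query_norm = 0.04065836  # from the explain tree for doc 150

def term_score(freq, idf, field_norm):
    # ClassicSimilarity: queryWeight (idf * queryNorm) times
    # fieldWeight (sqrt(freq) * idf * fieldNorm)
    return (idf * query_norm) * (math.sqrt(freq) * idf * field_norm)

standards  = term_score(freq=8.0, idf=4.4569545, field_norm=0.01953125)
twenty_two = term_score(freq=6.0, idf=3.5018296, field_norm=0.01953125)

inner = twenty_two * 0.5                 # inner coord(1/2): 1 of 2 clauses matched
score = (standards + inner) * (2 / 7)    # outer coord(2/7): 2 of 7 query terms
print(round(score, 9))                   # ~0.016155332
```

The nesting shows why this hit ranks below hit 2 despite matching two terms: both the inner and outer coord factors damp the contribution of the low-idf "22" term.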
    
    Classification
    006.7 22
    Date
    7. 3.2007 19:30:22
    DDC
    006.7 22
    Footnote
    Rez. in: JASIST 58(2007) no.3, S.457-458 (A.M.A. Ahmad): "The concept of the semantic web has emerged because search engines and text-based searching are no longer adequate, as these approaches involve an extensive information retrieval process. The deployed searching and retrieving descriptors are naturally subjective and their deployment is often restricted to the specific application domain for which the descriptors were configured. The new era of information technology imposes different kinds of requirements and challenges. Automatically extracted audiovisual features are required, as these features are more objective, domain-independent, and more native to audiovisual content. This book is a useful guide for researchers, experts, students, and practitioners; it is a very valuable reference and can lead them through their exploration and research in multimedia content and the semantic web. The book is well organized, and introduces the concept of the semantic web and multimedia content analysis to the reader through a logical sequence from standards and hypotheses through system examples, presenting relevant tools and methods. But in some chapters readers will need a good technical background to understand some of the details. Readers may attain sufficient knowledge here to start projects or research related to the book's theme; recent results and articles related to the active research area of integrating multimedia with semantic web technologies are included. This book includes full descriptions of approaches to specific problem domains such as content search, indexing, and retrieval. This book will be very useful to researchers in the multimedia content analysis field who wish to explore the benefits of emerging semantic web technologies in applying multimedia content approaches. The first part of the book covers the definition of the two basic terms multimedia content and semantic web. 
The Moving Picture Experts Group standards MPEG7 and MPEG21 are quoted extensively. In addition, the means of multimedia content description are elaborated upon and schematically drawn. This extensive description is introduced by authors who are actively involved in those standards and have been participating in the work of the International Organization for Standardization (ISO)/MPEG for many years. On the other hand, this results in bias against the ad hoc or nonstandard tools for multimedia description in favor of the standard approaches. This is a general book for multimedia content; more emphasis on the general multimedia description and extraction could be provided.
  4. Sherman, C.: Google power : Unleash the full potential of Google (2005) 0.01
    Score: 0.012479308 = coord(1/7) × government (tf=2, idf=5.6930003, fieldNorm=0.046875)
    
    Abstract
    With this title, readers learn to push the search engine to its limits and extract the best content from Google, without having to learn complicated code. "Google Power" takes Google users under the hood, and teaches them a wide range of advanced web search techniques, through practical examples. Its content is organised by topic, so readers learn how to conduct in-depth searches on the most popular search topics, from health to government listings to people.
  5. Koch, J.H.: Unterstützung der Formierung und Analyse von virtuellen Communities (2003) 0.01
    Score: 0.012182339 = coord(1/7) × networks (tf=4, idf=4.72992, fieldNorm=0.046875)
    
    RSWK
    Electronic villages (Computer networks) / Virtuelle Gemeinschaft / Benutzermodell / Formale Beschreibungstechnik / Komponente <Software> / Unterstützungssystem <Informatik>
    Subject
    Electronic villages (Computer networks) / Virtuelle Gemeinschaft / Benutzermodell / Formale Beschreibungstechnik / Komponente <Software> / Unterstützungssystem <Informatik>
  6. Farkas, M.G.: Social software in libraries : building collaboration, communication, and community online (2007) 0.01
    Score: 0.012182339 = coord(1/7) × networks (tf=4, idf=4.72992, fieldNorm=0.046875)
    
    LCSH
    Online social networks
    Subject
    Online social networks
  7. Hars, A.: From publishing to knowledge networks : reinventing online knowledge infrastructures (2003) 0.01
    Score: 0.010151949 = coord(1/7) × networks (tf=4, idf=4.72992, fieldNorm=0.0390625)
    
    Abstract
    Today's publishing infrastructure is rapidly changing. As electronic journals, digital libraries, collaboratories, logic servers, and other knowledge infrastructures emerge on the internet, the key aspects of this transformation need to be identified. Knowledge is becoming increasingly dynamic and integrated. Instead of writing self-contained articles, authors are turning to the new practice of embedding their findings into dynamic networks of knowledge. Here, the author details the implications that this transformation is having on the creation, dissemination and organization of academic knowledge. The author shows that many established publishing principles need to be given up in order to facilitate this transformation. The text provides valuable insights for knowledge managers, designers of internet-based knowledge infrastructures, and professionals in the publishing industry. Researchers will find the scenarios and implications for research processes stimulating and thought-provoking.
  8. Information visualization in data mining and knowledge discovery (2002) 0.01
    Score: 0.009893437 = coord(2/7) × [government (tf=2, idf=5.6930003) + coord(1/2) × "22" (tf=2, idf=3.5018296)]; fieldNorm=0.015625
    
    Date
    23. 3.2008 19:10:22
    Footnote
    Rez. in: JASIST 54(2003) no.9, S.905-906 (C.A. Badurek): "Visual approaches for knowledge discovery in very large databases are a prime research need for information scientists focused on extracting meaningful information from the ever-growing stores of data from a variety of domains, including business, the geosciences, and satellite and medical imagery. This work presents a summary of research efforts in the fields of data mining, knowledge discovery, and data visualization with the goal of aiding the integration of research approaches and techniques from these major fields. The editors, leading computer scientists from academia and industry, present a collection of 32 papers from contributors who are incorporating visualization and data mining techniques through academic research as well as application development in industry and government agencies. Information Visualization focuses upon techniques to enhance the natural abilities of humans to visually understand data, in particular, large-scale data sets. It is primarily concerned with developing interactive graphical representations to enable users to more intuitively make sense of multidimensional data as part of the data exploration process. It includes research from computer science, psychology, human-computer interaction, statistics, and information science. Knowledge Discovery in Databases (KDD) most often refers to the process of mining databases for previously unknown patterns and trends in data. Data mining refers to the particular computational methods or algorithms used in this process. The data mining research field is most related to computational advances in database theory, artificial intelligence and machine learning. This work compiles research summaries from these main research areas in order to provide "a reference work containing the collection of thoughts and ideas of noted researchers from the fields of data mining and data visualization" (p. 8). 
It addresses these areas in three main sections: the first on data visualization, the second on KDD and model visualization, and the last on using visualization in the knowledge discovery process. The seven chapters of Part One focus upon methodologies and successful techniques from the field of Data Visualization. Hoffman and Grinstein (Chapter 2) give a particularly good overview of the field of data visualization and its potential application to data mining. An introduction to the terminology of data visualization, relation to perceptual and cognitive science, and discussion of the major visualization display techniques are presented. Discussion and illustration explain the usefulness and proper context of such data visualization techniques as scatter plots, 2D and 3D isosurfaces, glyphs, parallel coordinates, and radial coordinate visualizations. Remaining chapters present the need for standardization of visualization methods, discussion of user requirements in the development of tools, and examples of using information visualization in addressing research problems.
  9. Levy, S.: In the plex : how Google thinks, works, and shapes our lives (2011) 0.01
    Score: 0.0072795963 = coord(1/7) × government (tf=2, idf=5.6930003, fieldNorm=0.02734375)
    
    Content
    The world according to Google: biography of a search engine -- Googlenomics: cracking the code on internet profits -- Don't be evil: how Google built its culture -- Google's cloud: how Google built data centers and killed the hard drive -- Outside the box: the Google phone company and the Google t.v. company -- Guge: Google's moral dilemma in China -- Google.gov: is what's good for Google, good for government or the public? -- Epilogue: chasing tail lights: trying to crack the social code.
  10. Widhalm, R.; Mück, T.: Topic maps : Semantische Suche im Internet (2002) 0.01
    Score: 0.005099097 = coord(1/7) × standards (tf=2, idf=4.4569545, fieldNorm=0.03125)
    
    Content
    Topic Maps - Einführung in den ISO Standard (Topics, Associations, Scopes, Facets, Topic Maps).- Grundlagen von XML (Aufbau, Bestandteile, Element- und Attributdefinitionen, DTD, XLink, XPointer).- Wie entsteht ein Heringsschmaus? Konkretes Beispiel einer Topic Map.Topic Maps - Meta DTD. Die formale Beschreibung des Standards.- HyTime als zugrunde liegender Formalismus (Bounded Object Sets, Location Addressing, Hyperlinks in HyTime).- Prototyp eines Topic Map Repositories (Entwicklungsprozess für Topic Maps, Prototyp Spezifikation, technische Realisierung des Prototyps).- Semantisches Datenmodell zur Speicherung von Topic Maps.- Prototypische Abfragesprache für Topic Maps.- Erweiterungsvorschläge für den ISO Standard.
  11. TREC: experiment and evaluation in information retrieval (2005) 0.00
    Score: 0.0045070075 = coord(1/7) × standards (tf=4, idf=4.4569545, fieldNorm=0.01953125)
    
    Abstract
    The Text REtrieval Conference (TREC), a yearly workshop hosted by the US government's National Institute of Standards and Technology, provides the infrastructure necessary for large-scale evaluation of text retrieval methodologies. With the goal of accelerating research in this area, TREC created the first large test collections of full-text documents and standardized retrieval evaluation. The impact has been significant; since TREC's beginning in 1992, retrieval effectiveness has approximately doubled. TREC has built a variety of large test collections, including collections for such specialized retrieval tasks as cross-language retrieval and retrieval of speech. Moreover, TREC has accelerated the transfer of research ideas into commercial systems, as demonstrated in the number of retrieval techniques developed in TREC that are now used in Web search engines. This book provides a comprehensive review of TREC research, summarizing the variety of TREC results, documenting the best practices in experimental information retrieval, and suggesting areas for further research. The first part of the book describes TREC's history, test collections, and retrieval methodology. Next, the book provides "track" reports -- describing the evaluations of specific tasks, including routing and filtering, interactive retrieval, and retrieving noisy text. The final part of the book offers perspectives on TREC from such participants as Microsoft Research, University of Massachusetts, Cornell University, University of Waterloo, City University of New York, and IBM. The book will be of interest to researchers in information retrieval and related technologies, including natural language processing.
    Footnote
    Rez. in: JASIST 58(2007) no.6, S.910-911 (J.L. Vicedo u. J. Gomez): "The Text REtrieval Conference (TREC) is a yearly workshop hosted by the U.S. government's National Institute of Standards and Technology (NIST) that fosters and supports research in information retrieval as well as speeding the transfer of technology between research labs and industry. Since 1992, TREC has provided the infrastructure necessary for large-scale evaluations of different text retrieval methodologies. TREC impact has been very important and its success has been mainly supported by its continuous adaptation to the emerging information retrieval needs. Indeed, TREC has built evaluation benchmarks for more than 20 different retrieval problems such as Web retrieval, speech retrieval, or question-answering. The large and intense trajectory of annual TREC conferences has resulted in an immense bulk of documents reflecting the different evaluation and research efforts developed. This situation makes it difficult sometimes to observe clearly how research in information retrieval (IR) has evolved over the course of TREC. TREC: Experiment and Evaluation in Information Retrieval succeeds in organizing and condensing all this research into a manageable volume that describes TREC history and summarizes the main lessons learned. The book is organized into three parts. The first part is devoted to the description of TREC's origin and history, the test collections, and the evaluation methodology developed. The second part describes a selection of the major evaluation exercises (tracks), and the third part contains contributions from research groups that had a large and remarkable participation in TREC. Finally, Karen Spärck Jones, one of the main promoters of research in IR, closes the book with an epilogue that analyzes the impact of TREC on this research field.
  12. Information und Wissen : global, sozial und frei? Proceedings des 12. Internationalen Symposiums für Informationswissenschaft (ISI 2011) ; Hildesheim, 9. - 11. März 2011 (2010) 0.00
    Score: 0.003589256 = coord(1/7) × networks (tf=2, idf=4.72992, fieldNorm=0.01953125)
    
    Content
    Inhalt: - Keynotes Kalervo Järvelin: Information Retrieval: Technology, Evaluation and Beyond Michael Schroeder: Semantic search for the life sciences - Evaluation Pavel Sirotkin: Predicting user preferences Hanmin Jung, Mikyoung Lee, Won-Kyung Sung, Do Wan Kim: Usefulness Evaluation on Visualization of Researcher Networks Jens Kürsten, Thomas Wilhelm, Maximilian Eibl: Vergleich von IR-Systemkonfigurationen auf Komponentenebene - Informationsinfrastruktur Reinhild Barkey, Erhard Hinrichs, Christina Hoppermann, Thorsten Trippel, Claus Zinn: Komponenten-basierte Metadatenschemata und Facetten-basierte Suche Ina Dehnhard, Peter Weiland: Toolbasierte Datendokumentation in der Psychologie Gertrud Faaß, Ulrich Heid: Nachhaltige Dokumentation virtueller Forschungsumgebungen - Soziale Software Evelyn Dröge, Parinaz Maghferat, Cornelius Puschmann, Julia Verbina, Katrin Weller: Konferenz-Tweets Richard Heinen, Ingo Blees: Social Bookmarking als Werkzeug für die Kooperation von Lehrkräften Jens Terliesner, Isabella Peters: Der T-Index als Stabilitätsindikator für dokument-spezifische Tag-Verteilungen
  13. Jeanneney, J.-N.: Googles Herausforderung : Für eine europäische Bibliothek (2006) 0.00
    Score: 0.0025495484 = coord(1/7) × standards (tf=2, idf=4.4569545, fieldNorm=0.015625)
    
    Footnote
    It is advisable to approach Google's plans with a healthy dose of open-mindedness, to expect no miracles from a project that is still in its infancy, and yet to acknowledge its undeniable achievements for what they are. ... Europe, still drowsy if not outright asleep in matters of digitization, has without doubt first been woken by Google and subsequently been alarmed by Jeanneney. Jeanneney has turned what at first seemed a harmless private-sector venture into a political issue; that he repeatedly overshoots his lofty goal of a European counter-offensive in the process can only invigorate the debate. He opposes the neoliberal belief that the forces of the free capitalist market are capable of doing justice to all sides, and calls for a dominant role for the public sector, which must at least act in a complementary capacity to counteract and curb Google's excesses. Where Jeanneney leaves the anti-American polemic behind and sketches the European response, his strengths show. Google does cooperate with libraries, but whether that cooperation is intensive enough to realize established library standards is questionable at best. The search mask permits no specific queries; the formal description of the digitized works is wholly inadequate; subject indexing does not exist. Here the European librarians could indeed bring their specific expertise to bear and, instead of the delivery of "disconnected fragments of knowledge" (p. 14) that Jeanneney criticizes, offer digitized texts enriched with metadata that filter the sea of data. 
Anyone, however, who wants the digitized books to be exactly catalogued and integrated into library catalogues (surely uncontroversial in the library world), so that the books are accessible not only via Google but also via portals and union catalogues, should engage with Google rather than provoke it.
  14. Net effects : how librarians can manage the unintended consequenees of the Internet (2003) 0.00
    Score: 0.0025495484 = coord(1/7) × standards (tf=2, idf=4.4569545, fieldNorm=0.015625)
    
    Footnote
    Unlike much of the professional library literature, Net Effects is not an open-armed embrace of technology. Block even suggests that it is helpful to have a Luddite or two on each library staff to identify the setbacks associated with technological advances in the library. Each of the book's 10 chapters deals with one Internet-related problem, such as "Chapter 4-The Shifted Librarian: Adapting to the Changing Expectations of Our Wired (and Wireless) Users," or "Chapter 8-Up to Our Ears in Lawyers: Legal Issues Posed by the Net." For each of these 10 problems, multiple solutions are offered. For example, for "Chapter 9-Disappearing Data," four solutions are offered. These include "Link-checking," "Have a technological disaster plan," "Advise legislators on the impact proposed laws will have," and "Standards for preservation of digital information." One article is given to explicate each of these four solutions. A short bibliography of recommended further reading is also included for each chapter. Block provides a short introduction to each chapter, and she comments on many of the entries. Some of these comments seem to be intended to provide a research basis for the proposed solutions, but they tend to be vague generalizations without citations, such as, "We know from research that students would rather ask each other for help than go to adults. We can use that (p. 91)." The original publication dates of the entries range from 1997 to 2002, with the bulk falling into the 2000-2002 range. At up to 6 years old, some of the articles seem outdated, such as a 2000 news brief announcing the creation of the first "customizable" public library Web site (www.brarydog.net). These critiques are not intended to dismiss the volume entirely. Some of the entries are likely to find receptive audiences, such as a nuts-and-bolts instructive article for making Web sites accessible to people with disabilities. "Providing Equitable Access," by Cheryl H. 
Kirkpatrick and Catherine Buck Morgan, offers very specific instructions, such as how to renovate OPAC workstations to suit users with "a wide range of functional impairments." It also includes a useful list of 15 things to do to make a Web site readable to most people with disabilities, such as, "You can use empty (alt) tags (alt="") for images that serve a purely decorative function. Screen readers will skip empty (alt) tags" (p. 157). Information at this level of specificity can be helpful to those who are faced with creating a technological solution for which they lack sufficient technical knowledge or training.
  15. Broughton, V.: Essential thesaurus construction (2006) 0.00
    0.0025495484 = product of:
      0.017846838 = sum of:
        0.017846838 = weight(_text_:standards in 2924) [ClassicSimilarity], result of:
          0.017846838 = score(doc=2924,freq=2.0), product of:
            0.18121246 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.04065836 = queryNorm
            0.09848571 = fieldWeight in 2924, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.015625 = fieldNorm(doc=2924)
      0.14285715 = coord(1/7)
    
    Abstract
    Many information professionals working in small units today fail to find the published tools for subject-based organization that are appropriate to their local needs, whether they are archivists, special librarians, information officers, or knowledge or content managers. Large established standards for document description and organization are too unwieldy, unnecessarily detailed, or too expensive to install and maintain. In other cases the available systems are insufficient for a specialist environment, or don't bring things together in a helpful way. A purpose-built, in-house system would seem to be the answer, but too often the skills necessary to create one are lacking. This practical text examines the criteria relevant to the selection of a subject-management system, describes the characteristics of some common types of subject tool, and takes the novice step by step through the process of creating a system for a specialist environment. The methodology employed is a standard technique for the building of a thesaurus that incidentally creates a compatible classification or taxonomy, both of which may be used in a variety of ways for document or information management. Key areas covered are: What is a thesaurus? Tools for subject access and retrieval; what a thesaurus is used for; why use a thesaurus? Examples of thesauri; the structure of a thesaurus; thesaural relationships; practical thesaurus construction; the vocabulary of the thesaurus; building the systematic structure; conversion to alphabetic format; forms of entry in the thesaurus; maintaining the thesaurus; thesaurus software; and the wider environment. Essential for the practising information professional, this guide is also valuable for students of library and information science.
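    The relevance figures above are Lucene ClassicSimilarity (TF-IDF) explain trees. As a minimal sketch, assuming a single matching query term and no index- or query-time boosts, the arithmetic behind entry 15's breakdown (term "standards" in doc 2924) can be reproduced from the reported constants:

    ```python
    import math

    def classic_similarity_score(freq, idf, query_norm, field_norm, coord):
        """Recompute one single-term ClassicSimilarity explain tree."""
        tf = math.sqrt(freq)                   # tf(freq) = sqrt(termFreq)
        query_weight = idf * query_norm        # queryWeight = idf * queryNorm
        field_weight = tf * idf * field_norm   # fieldWeight = tf * idf * fieldNorm
        return query_weight * field_weight * coord

    # Constants as reported in entry 15's explain tree:
    score = classic_similarity_score(freq=2.0, idf=4.4569545,
                                     query_norm=0.04065836,
                                     field_norm=0.015625, coord=1 / 7)
    ```

    The outer coord(1/7) factor scales the sum of matching-term weights by the fraction of query terms found in the document, which is why single-term matches like this one score so low.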
  16. Thissen, F.: Screen-Design-Handbuch : Effektiv informieren und kommunizieren mit Multimedia (2001) 0.00
    0.0023608485 = product of:
      0.01652594 = sum of:
        0.01652594 = product of:
          0.03305188 = sum of:
            0.03305188 = weight(_text_:22 in 1781) [ClassicSimilarity], result of:
              0.03305188 = score(doc=1781,freq=2.0), product of:
                0.14237864 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04065836 = queryNorm
                0.23214069 = fieldWeight in 1781, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1781)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 3.2008 14:35:21
  17. Thissen, F.: Screen-Design-Manual : Communicating Effectively Through Multimedia (2003) 0.00
    0.0019673738 = product of:
      0.013771616 = sum of:
        0.013771616 = product of:
          0.027543232 = sum of:
            0.027543232 = weight(_text_:22 in 1397) [ClassicSimilarity], result of:
              0.027543232 = score(doc=1397,freq=2.0), product of:
                0.14237864 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04065836 = queryNorm
                0.19345059 = fieldWeight in 1397, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1397)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 3.2008 14:29:25
  18. Bleuel, J.: Online Publizieren im Internet : elektronische Zeitschriften und Bücher (1995) 0.00
    0.0019673738 = product of:
      0.013771616 = sum of:
        0.013771616 = product of:
          0.027543232 = sum of:
            0.027543232 = weight(_text_:22 in 1708) [ClassicSimilarity], result of:
              0.027543232 = score(doc=1708,freq=2.0), product of:
                0.14237864 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04065836 = queryNorm
                0.19345059 = fieldWeight in 1708, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1708)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 3.2008 16:15:37
  19. Proceedings of the Second ACM/IEEE-CS Joint Conference on Digital Libraries : July 14 - 18, 2002, Portland, Oregon, USA. (2002) 0.00
    0.001844945 = product of:
      0.012914615 = sum of:
        0.012914615 = product of:
          0.02582923 = sum of:
            0.02582923 = weight(_text_:policy in 172) [ClassicSimilarity], result of:
              0.02582923 = score(doc=172,freq=2.0), product of:
                0.21800333 = queryWeight, product of:
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.04065836 = queryNorm
                0.11848089 = fieldWeight in 172, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.015625 = fieldNorm(doc=172)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Content
    SESSION: Digital libraries for spatial data The ADEPT digital library architecture (Greg Janée, James Frew) - G-Portal: a map-based digital library for distributed geospatial and georeferenced resources (Ee-Peng Lim, Dion Hoe-Lian Goh, Zehua Liu, Wee-Keong Ng, Christopher Soo-Guan Khoo, Susan Ellen Higgins) PANEL SESSION: Panels You mean I have to do what with whom: statewide museum/library DIGI collaborative digitization projects---the experiences of California, Colorado & North Carolina (Nancy Allen, Liz Bishoff, Robin Chandler, Kevin Cherry) - Overcoming impediments to effective health and biomedical digital libraries (William Hersh, Jan Velterop, Alexa McCray, Gunther Eynsenbach, Mark Boguski) - The challenges of statistical digital libraries (Cathryn Dippo, Patricia Cruse, Ann Green, Carol Hert) - Biodiversity and biocomplexity informatics: policy and implementation science versus citizen science (P. Bryan Heidorn) - Panel on digital preservation (Joyce Ray, Robin Dale, Reagan Moore, Vicky Reich, William Underwood, Alexa T. McCray) - NSDL: from prototype to production to transformational national resource (William Y. Arms, Edward Fox, Jeanne Narum, Ellen Hoffman) - How important is metadata? (Hector Garcia-Molina, Diane Hillmann, Carl Lagoze, Elizabeth Liddy, Stuart Weibel) - Planning for future digital libraries programs (Stephen M. Griffin) DEMONSTRATION SESSION: Demonstrations, including: FACET: thesaurus retrieval with semantic term expansion (Douglas Tudhope, Ceri Binding, Dorothee Blocks, Daniel Cunliffe) - MedTextus: an intelligent web-based medical meta-search system (Bin Zhu, Gondy Leroy, Hsinchun Chen, Yongchi Chen) POSTER SESSION: Posters TUTORIAL SESSION: Tutorials, including: Thesauri and ontologies in digital libraries: 1. structure and use in knowledge-based assistance to users (Dagobert Soergel) - How to build a digital library using open-source software (Ian H. Witten) - Thesauri and ontologies in digital libraries: 2. design, evaluation, and development (Dagobert Soergel) WORKSHOP SESSION: Workshops Document search interface design for large-scale collections and intelligent access (Javed Mostafa) - Visual interfaces to digital libraries (Katy Börner, Chaomei Chen) - Text retrieval conference (TREC) genomics pre-track workshop (William Hersh)
  20. Medienkompetenz : wie lehrt und lernt man Medienkompetenz? (2003) 0.00
    0.0015738991 = product of:
      0.011017293 = sum of:
        0.011017293 = product of:
          0.022034585 = sum of:
            0.022034585 = weight(_text_:22 in 2249) [ClassicSimilarity], result of:
              0.022034585 = score(doc=2249,freq=2.0), product of:
                0.14237864 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04065836 = queryNorm
                0.15476047 = fieldWeight in 2249, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2249)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 3.2008 18:05:16

Languages

  • e 14
  • d 7

Types

  • m 22
  • s 9
