Search (6700 results, page 335 of 335)

  1. Gumbrecht, C.: Workshop zur CJK-Katalogisierung am 18. Juni 2009 an der Staatsbibliothek zu Berlin : ein Bericht (2009) 0.00
    0.0036468431 = product of:
      0.0072936863 = sum of:
        0.0072936863 = product of:
          0.014587373 = sum of:
            0.014587373 = weight(_text_:22 in 3042) [ClassicSimilarity], result of:
              0.014587373 = score(doc=3042,freq=2.0), product of:
                0.15081239 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04306674 = queryNorm
                0.09672529 = fieldWeight in 3042, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3042)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
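    The explain tree above (repeated in the same form for every hit on this page) is standard Lucene ClassicSimilarity output, and the final score is simply the product of the listed factors. As a check, here is a minimal sketch that re-derives the numbers for this first record from the values printed in the tree; the variable names are ours, not Lucene API calls:

      # Recomputing the ClassicSimilarity explain values shown above (term "22", doc 3042).
      # All inputs are copied from the tree; only the arithmetic is reproduced here.
      from math import sqrt

      idf        = 3.5018296    # idf(docFreq=3622, maxDocs=44218)
      query_norm = 0.04306674   # queryNorm
      tf         = sqrt(2.0)    # 1.4142135, from termFreq = 2.0
      field_norm = 0.01953125   # fieldNorm(doc=3042)

      query_weight = idf * query_norm             # ~ 0.15081239 (queryWeight)
      field_weight = tf * idf * field_norm        # ~ 0.09672529 (fieldWeight)
      term_score   = query_weight * field_weight  # ~ 0.014587373

      # Two coord(1/2) factors halve the term score twice on the way up the tree.
      final_score = term_score * 0.5 * 0.5        # ~ 0.0036468431, the value shown for this hit
      print(f"{final_score:.10f}")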
    
    Date
    22. 8.2009 10:44:16
  2. Schreiber, A.: Ars combinatoria (2010) 0.00
    0.0036468431 = product of:
      0.0072936863 = sum of:
        0.0072936863 = product of:
          0.014587373 = sum of:
            0.014587373 = weight(_text_:22 in 3976) [ClassicSimilarity], result of:
              0.014587373 = score(doc=3976,freq=2.0), product of:
                0.15081239 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04306674 = queryNorm
                0.09672529 = fieldWeight in 3976, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3976)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    "Kürzlich bat mich ein Anhänger der Numerologie, ihm mein Geburtsdatum zu nennen. Wiederholte Quersummenbildung ergab 4, meine ,Geburtszahl`. Auf dieselbe Weise addierte er auch die Alphabet-Positionen der Vokale in meinem Namen zu 8, meiner ,Herzzahl`. Das nennt sich Gematrie. Einer Tabelle waren dann Charakter und Schicksal zu entnehmen, soweit sie mir aus kosmischen Einflüssen vorbestimmt sind. Kein Zweifel, Okkultes braucht den großen Rahmen. Der Kosmos darf es da schon sein - oder die Pythagoräer, auf die man sich gerne beruft, weil sie Zahlen und Dinge geradezu identifiziert haben. Ich ließ meinen Gesprächspartner wissen, dass ich diesen Umgang mit Zahlen und Zeichen für spekulatives, ja abergläubisches Wunschdenken halte. "Aber Sie sind doch Mathematiker", gab er triumphierend zurück, "dann beweisen Sie mir erst einmal, dass die Numerologie nicht funktioniert!". Das, natürlich, konnte ich nicht. Als weitere Quelle geheimer Gewissheiten diente ihm die jüdische Kabbalah. Gematrische Verfahren hat sie durch kombinatorische Zeichenmanipulationen erweitert wie Zeruph (Permutation) oder Temurah (zyklisches Vertauschen). Die Welt wird als Buch vorgestellt, vom Schöpfer geschrieben mit den 22 Buchstaben des hebräischen Alphabets und den 10 dekadischen Ziffern (den "Sephiroth" eines urbildlichen Lebensbaums, mit denen Umberto Eco in seinem Roman Das Foucaultsche Pendel noch ein postmodernes Spiel treibt). Einer magischen Richtung zufolge wirken Um- und Zusammenstellungen von Buchstaben und Ziffern auf die Dinge selbst ein. So kann der Bestand verborgener Beziehungen ungehemmt wachsen. Doch "nur solche Beziehungen und Feststellungen haben objektive Bedeutung, die nicht durch irgend einen Wechsel in der Wahl der Etiketten ... beeinflußt werden". Dieses "Relativitätsprinzip" formulierte Hermann Weyl - ohne auf die Kabbalah anzuspielen - in dem Anhang Ars combinatoria zur 3. Auflage seiner Philosophie der Mathematik und Naturwissenschaft. Ihren Operationen verlieh die Kabbalah denn auch keine objektive, vielmehr eine mystische, in reiner Innenschau gewonnene Bedeutung.
  3. Haubner, S.: "Als einfacher Benutzer ist man rechtlos" : Unter den freiwilligen Wikipedia-Mitarbeitern regt sich Unmut über die Administratoren (2011) 0.00
    0.0036468431 = product of:
      0.0072936863 = sum of:
        0.0072936863 = product of:
          0.014587373 = sum of:
            0.014587373 = weight(_text_:22 in 4567) [ClassicSimilarity], result of:
              0.014587373 = score(doc=4567,freq=2.0), product of:
                0.15081239 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04306674 = queryNorm
                0.09672529 = fieldWeight in 4567, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=4567)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    3. 5.1997 8:44:22
  4. Laaff, M.: Googles genialer Urahn (2011) 0.00
    0.0036468431 = product of:
      0.0072936863 = sum of:
        0.0072936863 = product of:
          0.014587373 = sum of:
            0.014587373 = weight(_text_:22 in 4610) [ClassicSimilarity], result of:
              0.014587373 = score(doc=4610,freq=2.0), product of:
                0.15081239 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04306674 = queryNorm
                0.09672529 = fieldWeight in 4610, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=4610)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    24.10.2008 14:19:22
  5. Metoyer, C.A.; Doyle, A.M.: Introduction to a special issue on "Indigenous Knowledge Organization" (2015) 0.00
    0.0036468431 = product of:
      0.0072936863 = sum of:
        0.0072936863 = product of:
          0.014587373 = sum of:
            0.014587373 = weight(_text_:22 in 2186) [ClassicSimilarity], result of:
              0.014587373 = score(doc=2186,freq=2.0), product of:
                0.15081239 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04306674 = queryNorm
                0.09672529 = fieldWeight in 2186, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=2186)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    26. 8.2015 19:22:31
  6. Exploring artificial intelligence in the new millennium (2003) 0.00
    0.0030756625 = product of:
      0.006151325 = sum of:
        0.006151325 = product of:
          0.01230265 = sum of:
            0.01230265 = weight(_text_:p in 2099) [ClassicSimilarity], result of:
              0.01230265 = score(doc=2099,freq=2.0), product of:
                0.15484701 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.04306674 = queryNorm
                0.079450354 = fieldWeight in 2099, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2099)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
     The book does achieve its aim of being a starting point for someone interested in the state of some areas of AI research at the beginning of the new millennium. The book's most irritating feature is the different writing styles of the authors. The book is organized as a collection of papers similar to a typical graduate survey course packet, and as a result the book does not possess a narrative flow. Also the book contains a number of other major weaknesses such as a lack of an introductory or concluding chapter. The book could greatly benefit from an introductory chapter that would introduce readers to the areas of AI, explain why such a book is needed, and explain why each author's research is important. The manner in which the book currently handles these issues is a preface that talks about some of the above issues in a superficial manner. Also such an introductory chapter could be used to expound on what level of AI mathematical and statistical knowledge is expected from readers in order to gain maximum benefit from this book. A concluding chapter would be useful to readers interested in the other areas of AI not covered by the book, as well as open issues common to all of the research presented. In addition, most of the contributors come exclusively from the computer science field, which heavily slants the work toward the computer science community. A great deal of the research presented is being used by a number of research communities outside of computer science, such as biotechnology and information technology. A wider audience for this book could have been achieved by including a more diverse range of authors showing the interdisciplinary nature of many of these fields. Also the book's editors state, "The reader is expected to have basic knowledge of AI at the level of an introductory course to the field" (p. vii), which is not the case for this book. Readers need at least a strong familiarity with many of the core concepts within AI, because a number of the chapters are shallow and terse in their historical overviews. Overall, this book would be a useful tool for a professor putting together a survey course on AI research. Most importantly, the book would be useful for eager graduate students in need of a starting point for their thesis research. This book is best suited as a reference guide to be used by individuals with a strong familiarity with AI."
  7. Gehring, P.: Vergesst den freien Willen : Über den eigentümlichen Reiz deterministischer Thesen (2005) 0.00
    0.0030756625 = product of:
      0.006151325 = sum of:
        0.006151325 = product of:
          0.01230265 = sum of:
            0.01230265 = weight(_text_:p in 3400) [ClassicSimilarity], result of:
              0.01230265 = score(doc=3400,freq=2.0), product of:
                0.15484701 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.04306674 = queryNorm
                0.079450354 = fieldWeight in 3400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3400)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  8. Janes, J.: Introduction to reference work in the digital age. (2003) 0.00
    0.0030756625 = product of:
      0.006151325 = sum of:
        0.006151325 = product of:
          0.01230265 = sum of:
            0.01230265 = weight(_text_:p in 3993) [ClassicSimilarity], result of:
              0.01230265 = score(doc=3993,freq=2.0), product of:
                0.15484701 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.04306674 = queryNorm
                0.079450354 = fieldWeight in 3993, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3993)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
     Rez. in: JASIST 56(2005) no.11, S.1237-1238 (E. Yakel): "This book provides the profession with a cogent, thorough, and thoughtful introduction to digital reference. Janes not only provides the breadth of coverage expected in an introduction, but also depth into this important topic. Janes' approach is managerial or administrative, providing guidelines for reference work that can be applied in different settings. Janes creates a decision-making framework to help reference librarians make decisions concerning how, to what extent, and in what cases digital reference services will be delivered. In this way, Janes avoids dictating a "one-size-fits-all" model. This approach is the major strength of the book. Library administrators and heads of reference services will welcome the administrative approach, which helps them think through which digital reference policies and methods will best target core constituencies and their institutional environments. However, the book deserves a broader audience, as professors will find that the book fits nicely in a general reference course. For all readers, the book is readable and engaging and also challenging and questioning. The book begins with a history of reference work, nicely positioning digital reference in this tradition and noting the changes wrought by the digital age. By doing this, the author establishes both continuity and change in reference work as well as the values surrounding this activity. These values are largely those from the library community and support people's access to information as well as activities that support the use of information. Janes closes this chapter by noting that the continuing changes in demographics, technology, and connectivity will impact reference work in ways that are not yet imaginable. This introduction sets the tone for the rest of the book. Janes defines digital reference service as "the use of digital technologies and resources to provide direct, professional assistance to people who are seeking information, wherever and whenever they need it" (p. 29). This definition covers a lot of ground. Examples include everything from a public library answering email queries to commercial ask-an-expert services. While the primary audience is librarians, Janes continually reminds readers that many others perform reference activities on the World Wide Web. Furthermore, he cautions readers that there are larger forces shaping this activity in the world that need to be acknowledged. In building a framework for decision-making, Janes outlines the types of digital reference service. This discussion covers the communication modes, such as e-mail, chat, Web forms, etc. It also analyzes the modalities by which reference service is delivered: synchronous/asynchronous. Using these two dimensions (communication method and synchronous/asynchronous), Janes presents the variety of contexts in which digital reference can take place and then outlines the strengths and weaknesses of each of these. This translates into a decision-making framework by which readers analyze their particular setting and then select the modes and modalities that would be most effective. This is a powerful device and demonstrates the many options (and perhaps also the obstacles) for providing digital reference service.
  9. Mandl, T.: Tolerantes Information Retrieval : Neuronale Netze zur Erhöhung der Adaptivität und Flexibilität bei der Informationssuche (2001) 0.00
    0.0030756625 = product of:
      0.006151325 = sum of:
        0.006151325 = product of:
          0.01230265 = sum of:
            0.01230265 = weight(_text_:p in 5965) [ClassicSimilarity], result of:
              0.01230265 = score(doc=5965,freq=2.0), product of:
                0.15484701 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.04306674 = queryNorm
                0.079450354 = fieldWeight in 5965, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.015625 = fieldNorm(doc=5965)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
     ... now that, after almost 200 pages, the main part of the dissertation follows - the presentation and evaluation of the COSIMIR model already mentioned. The COSIMIR model "computes the similarity between the two adjacent input vectors" (p. 194). The output of the network is read off at a single node, at which a so-called relevance value settles once the computation of the weights of the internal nodes is complete. These weights depend on the input vectors that are applied, from which the weights of the first layer of nodes are derived, and on the edge weights predefined in the network. The weighting of edges is the core of the neural approach: in analogy to the biological original (a dendrite with synapses), the weight of an edge grows with each activation during a training phase. If, in this phase, two input vectors - e.g. a document vector and a query - are applied simultaneously with the relevance judgement as the value of the output node, the backpropagation process distributes the weights along the paths that exist between the nodes involved. Since all nodes are connected to one another, clearly different edge weights emerge after only a few training examples, because the actively involved edges accumulate the changes. A variation of the method uses the NN as a "transformation network", in which the two input vectors are filled with a document representation and an associated set of index terms (supplied by an expert). Besides the need for training already pointed out, neural networks exhibit a further intrinsic problem: the more outer nodes are required, the more internal edges (and, when intermediate layers are used, also nodes) have to be managed, and their number does not grow merely linearly. This algorithmic fact quickly sets limits to naive uses of NN models in practice, which makes it all the more commendable that the author can propose an innovative way of solving the problem with the means of IR. He uses Latent Semantic Indexing, which maps document representations from a high-dimensional vector space into a low-dimensional one, in order to reduce the number of nodes considerably. The result is a very elegant synthesis, which reveals and exploits the formal correspondences between vector-space models in IR and NNs hinted at at the outset.
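     For readers who want a concrete picture of what the review describes, the following is a minimal, self-contained sketch; it is not Mandl's COSIMIR implementation, and the toy data, layer sizes, and learning rate are invented for illustration. A small feed-forward network maps a document/query vector pair, first projected into a lower-dimensional LSI-style space, to a single relevance node and is trained by plain backpropagation:

      import numpy as np

      rng = np.random.default_rng(0)

      def sigmoid(x):
          return 1.0 / (1.0 + np.exp(-x))

      # Toy data: 6-dimensional term space, a few (document, query, relevance) triples.
      dim = 6
      docs      = rng.random((8, dim))
      queries   = rng.random((8, dim))
      relevance = (np.sum(docs * queries, axis=1) > 1.5).astype(float)  # invented judgements

      # LSI-style reduction: project term vectors onto the top-k right singular vectors
      # so the network needs far fewer input nodes (and hence far fewer edges).
      k = 3
      _, _, Vt = np.linalg.svd(docs, full_matrices=False)
      project = Vt[:k].T                                   # dim x k projection
      X = np.hstack([docs @ project, queries @ project])   # network input: 2k features

      # One hidden layer and a single output node at which the relevance value is read off.
      W1 = rng.normal(scale=0.5, size=(2 * k, 4))
      W2 = rng.normal(scale=0.5, size=(4, 1))

      lr = 0.5
      for _ in range(2000):
          h   = sigmoid(X @ W1)            # hidden activations
          out = sigmoid(h @ W2)[:, 0]      # predicted relevance in [0, 1]
          err = out - relevance
          # Backpropagation: edge weights change along the paths that were active.
          d_out = (err * out * (1 - out))[:, None]
          W2 -= lr * (h.T @ d_out) / len(X)
          d_h = (d_out @ W2.T) * h * (1 - h)
          W1 -= lr * (X.T @ d_h) / len(X)

      print("learned scores:   ", np.round(sigmoid(sigmoid(X @ W1) @ W2)[:, 0], 2))
      print("target judgements:", relevance)

     Swapping the query input for an expert-supplied indexing vector would correspond to the "transformation network" variant mentioned above.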
  10. Intner, S.S.; Lazinger, S.S.; Weihs, J.: Metadata and its impact on libraries (2005) 0.00
    0.0030756625 = product of:
      0.006151325 = sum of:
        0.006151325 = product of:
          0.01230265 = sum of:
            0.01230265 = weight(_text_:p in 339) [ClassicSimilarity], result of:
              0.01230265 = score(doc=339,freq=2.0), product of:
                0.15484701 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.04306674 = queryNorm
                0.079450354 = fieldWeight in 339, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.015625 = fieldNorm(doc=339)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
     Chapter 8 discusses issues of archiving and preserving digital materials. The chapter reiterates, "What is the point of all of this if the resources identified and catalogued are not preserved?" (Gorman, 2003, p. 16). Discussion about preservation and related issues is organized in five sections that successively ask why, what, who, how, and how much of the plethora of digital materials should be archived and preserved. These are not easy questions because of media instability and technological obsolescence. Stakeholders in communities with diverse interests compete in terms of which community or representative of a community has an authoritative say in what and how much gets archived and preserved. In discussing the above-mentioned questions, the authors once again provide valuable information and lessons from a number of initiatives in Europe and Australia, and from other global initiatives. The Draft Charter on the Preservation of the Digital Heritage and the Guidelines for the Preservation of Digital Heritage, both published by UNESCO, are discussed and some of the preservation principles from the Guidelines are listed. The existing diversity in administrative arrangements for these new projects and resources notwithstanding, the impact of digital projects and of metadata use on the content produced for online reserves, on the levels of reference services, and on the ensuing need for different models to train users and staff is undeniable. In terms of education and training, formal coursework, continuing education, and informal and on-the-job training are just some of the available options. The intensity of resources required for cataloguing digital materials, the questions over the quality of digital resources, and the threat of the new digital environment to the survival of the traditional library are all issues raised by critics and others who are concerned about a balance in the planning and resources allocated between traditional or print-based resources and newer digital resources. A number of questions are asked as part of the book's conclusions in Chapter 10. Of these questions, one that touches on all of the rest and upon much of the book's content is: What does the future hold for metadata in libraries? Metadata standards are alive and well in many communities of practice, as Chapters 2-6 have demonstrated. The usefulness of metadata continues to be high, and innovation in various elements should keep information professionals engaged for decades to come. There is no doubt that metadata have had a tremendous impact on how we organize information for access and in terms of who, how, when, and where contact is made with library services and collections online. Planning and commitment to a diversity of metadata to serve the plethora of needs in communities of practice are paramount for the continued success of many digital projects and for online preservation of our digital heritage."
  11. Rogers, R.: Information politics on the Web (2004) 0.00
    0.0030756625 = product of:
      0.006151325 = sum of:
        0.006151325 = product of:
          0.01230265 = sum of:
            0.01230265 = weight(_text_:p in 442) [ClassicSimilarity], result of:
              0.01230265 = score(doc=442,freq=2.0), product of:
                0.15484701 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.04306674 = queryNorm
                0.079450354 = fieldWeight in 442, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.015625 = fieldNorm(doc=442)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
     Rez. in: JASIST 58(2007) no.4, S.608-609 (K.D. Desouza): "Richard Rogers explores the distinctiveness of the World Wide Web as a politically contested space where information searchers may encounter multiple explanations of reality. Sources of information on the Web are in constant competition with each other for attention. The attention a source receives will determine its prominence, its ability to be a provider of leading information, and its inclusion in authoritative spaces. Rogers explores the politics behind evaluating sources that are collected and housed on authoritative spaces. Information politics on the Web can be looked at in terms of front-end or back-end politics. Front-end politics is concerned with whether sources on the Web pay attention to principles of inclusivity, fairness, and scope of representation in how information is presented, while back-end politics examines the logic behind how search engines or portals select and index information. Concerning front-end politics, Rogers questions the various versions of reality one can derive from examining information on the Web, especially when issues of information inclusivity and scope of representation are toyed with. In addition, Rogers is concerned with how back-end politics are being controlled by dominant forces of the market (i.e., the more an organization is willing to pay, the greater will be the site's visibility and prominence in authoritative spaces), regardless of whether the information presented on the site justifies such a placement. In the book, Rogers illustrates the issues involved in back-end and front-end politics (though heavily slanted toward front-end politics) using vivid cases, all of which are derived from his own research. The main thrust is the exploration of how various "information instruments," defined as "a digital and analytical means of recording (capturing) and subsequently reading indications of states of defined information streams (p. 19)," help capture the politics of the Web. Rogers employs four specific instruments (Lay Decision Support System, Issue Barometer, Web Issue Index of Civil Society, and Election Issue Tracker), which are covered in detail in the core chapters of the book (Chapter 2-Chapter 5). The book comprises six chapters, with Chapter 1 being the traditional introduction and Chapter 6 being a summary of the major concepts discussed.
  12. Lambe, P.: Organising knowledge : taxonomies, knowledge and organisational effectiveness (2007) 0.00
    0.0030756625 = product of:
      0.006151325 = sum of:
        0.006151325 = product of:
          0.01230265 = sum of:
            0.01230265 = weight(_text_:p in 1804) [ClassicSimilarity], result of:
              0.01230265 = score(doc=1804,freq=2.0), product of:
                0.15484701 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.04306674 = queryNorm
                0.079450354 = fieldWeight in 1804, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1804)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  13. Morville, P.: Ambient findability : what we find changes who we become (2005) 0.00
    0.0030756625 = product of:
      0.006151325 = sum of:
        0.006151325 = product of:
          0.01230265 = sum of:
            0.01230265 = weight(_text_:p in 312) [ClassicSimilarity], result of:
              0.01230265 = score(doc=312,freq=2.0), product of:
                0.15484701 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.04306674 = queryNorm
                0.079450354 = fieldWeight in 312, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.015625 = fieldNorm(doc=312)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  14. Hilberer, T.: Aufwand vs. Nutzen : Wie sollen deutsche wissenschaftliche Bibliotheken künftig katalogisieren? (2003) 0.00
    0.0029174744 = product of:
      0.0058349487 = sum of:
        0.0058349487 = product of:
          0.0116698975 = sum of:
            0.0116698975 = weight(_text_:22 in 1733) [ClassicSimilarity], result of:
              0.0116698975 = score(doc=1733,freq=2.0), product of:
                0.15081239 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04306674 = queryNorm
                0.07738023 = fieldWeight in 1733, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1733)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2003 12:13:13
  15. Gömpel, R.; Altenhöner, R.; Kunz, M.; Oehlschläger, S.; Werner, C.: Weltkongress Bibliothek und Information, 70. IFLA-Generalkonferenz in Buenos Aires : Aus den Veranstaltungen der Division IV Bibliographic Control, der Core Activities ICABS und UNIMARC sowie der Information Technology Section (2004) 0.00
    0.0029174744 = product of:
      0.0058349487 = sum of:
        0.0058349487 = product of:
          0.0116698975 = sum of:
            0.0116698975 = weight(_text_:22 in 2874) [ClassicSimilarity], result of:
              0.0116698975 = score(doc=2874,freq=2.0), product of:
                0.15081239 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04306674 = queryNorm
                0.07738023 = fieldWeight in 2874, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2874)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    "Libraries: Tools for Education and Development" war das Motto der 70. IFLA-Generalkonferenz, dem Weltkongress Bibliothek und Information, der vom 22.-27. August 2004 in Buenos Aires, Argentinien, und damit erstmals in Lateinamerika stattfand. Rund 3.000 Teilnehmerinnen und Teilnehmer, davon ein Drittel aus spanischsprachigen Ländern, allein 600 aus Argentinien, besuchten die von der IFLA und dem nationalen Organisationskomitee gut organisierte Tagung mit mehr als 200 Sitzungen und Veranstaltungen. Aus Deutschland waren laut Teilnehmerverzeichnis leider nur 45 Kolleginnen und Kollegen angereist, womit ihre Zahl wieder auf das Niveau von Boston gesunken ist. Erfreulicherweise gab es nunmehr bereits im dritten Jahr eine deutschsprachige Ausgabe des IFLA-Express. Auch in diesem Jahr soll hier über die Veranstaltungen der Division IV Bibliographic Control berichtet werden. Die Arbeit der Division mit ihren Sektionen Bibliography, Cataloguing, Classification and Indexing sowie der neuen Sektion Knowledge Management bildet einen der Schwerpunkte der IFLA-Arbeit, die dabei erzielten konkreten Ergebnisse und Empfehlungen haben maßgeblichen Einfluss auf die tägliche Arbeit der Bibliothekarinnen und Bibliothekare. Erstmals wird auch ausführlich über die Arbeit der Core Activities ICABS und UNIMARC und der Information Technology Section berichtet.
  16. Johannsen, J.: InetBib 2004 in Bonn : Tagungsbericht (2005) 0.00
    0.0029174744 = product of:
      0.0058349487 = sum of:
        0.0058349487 = product of:
          0.0116698975 = sum of:
            0.0116698975 = weight(_text_:22 in 3125) [ClassicSimilarity], result of:
              0.0116698975 = score(doc=3125,freq=2.0), product of:
                0.15081239 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04306674 = queryNorm
                0.07738023 = fieldWeight in 3125, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3125)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 1.2005 19:05:37
  17. Mostafa, J.: Bessere Suchmaschinen für das Web (2006) 0.00
    0.0029174744 = product of:
      0.0058349487 = sum of:
        0.0058349487 = product of:
          0.0116698975 = sum of:
            0.0116698975 = weight(_text_:22 in 4871) [ClassicSimilarity], result of:
              0.0116698975 = score(doc=4871,freq=2.0), product of:
                0.15081239 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04306674 = queryNorm
                0.07738023 = fieldWeight in 4871, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=4871)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 1.2006 18:34:49
  18. Ewbank, L.: Crisis in subject cataloging and retrieval (1996) 0.00
    0.0029174744 = product of:
      0.0058349487 = sum of:
        0.0058349487 = product of:
          0.0116698975 = sum of:
            0.0116698975 = weight(_text_:22 in 5580) [ClassicSimilarity], result of:
              0.0116698975 = score(doc=5580,freq=2.0), product of:
                0.15081239 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04306674 = queryNorm
                0.07738023 = fieldWeight in 5580, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=5580)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Cataloging and classification quarterly. 22(1996) no.2, S.90-97
  19. Jörn, F.: Wie Google für uns nach der ominösen Gluonenkraft stöbert : Software-Krabbler machen sich vor der Anfrage auf die Suche - Das Netz ist etwa fünfhundertmal größer als alles Durchforschte (2001) 0.00
    0.0029174744 = product of:
      0.0058349487 = sum of:
        0.0058349487 = product of:
          0.0116698975 = sum of:
            0.0116698975 = weight(_text_:22 in 3684) [ClassicSimilarity], result of:
              0.0116698975 = score(doc=3684,freq=2.0), product of:
                0.15081239 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04306674 = queryNorm
                0.07738023 = fieldWeight in 3684, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3684)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2005 9:52:00
  20. Reinartz, B.: Zwei Augen der Erkenntnis : Gehirnforscher behaupten, das bewusste Ich als Zentrum der Persönlichkeit sei nur eine raffinierte Täuschung (2002) 0.00
    0.0029174744 = product of:
      0.0058349487 = sum of:
        0.0058349487 = product of:
          0.0116698975 = sum of:
            0.0116698975 = weight(_text_:22 in 3917) [ClassicSimilarity], result of:
              0.0116698975 = score(doc=3917,freq=2.0), product of:
                0.15081239 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04306674 = queryNorm
                0.07738023 = fieldWeight in 3917, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3917)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    17. 7.1996 9:33:22
