Search (334 results, page 17 of 17)

  • language_ss:"e"
  • type_ss:"s"
  1. Facets: a fruitful notion in many domains : special issue on facet analysis (2008) 0.00
    0.0025895115 = product of:
      0.012947557 = sum of:
        0.012947557 = product of:
          0.025895113 = sum of:
            0.025895113 = weight(_text_:aspects in 3262) [ClassicSimilarity], result of:
              0.025895113 = score(doc=3262,freq=2.0), product of:
                0.20741826 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.04589033 = queryNorm
                0.1248449 = fieldWeight in 3262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3262)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
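    The score breakdown above is Lucene/Solr "explain" output for the ClassicSimilarity (tf-idf) ranking model. As a reading aid, the sketch below recomputes the displayed score for result 1 from the factors listed in the tree; the variable names are illustrative, and every constant is copied from the explain output above rather than being new data.

```python
from math import sqrt

# Factors copied from the explain tree for result 1 (doc 3262, term "aspects").
freq = 2.0                 # termFreq within the field
idf = 4.5198684            # idf(docFreq=1308, maxDocs=44218)
query_norm = 0.04589033    # queryNorm
field_norm = 0.01953125    # fieldNorm(doc=3262)

tf = sqrt(freq)                           # tf(freq=2.0) ≈ 1.4142135
query_weight = idf * query_norm           # ≈ 0.20741826 (queryWeight)
field_weight = tf * idf * field_norm      # ≈ 0.1248449  (fieldWeight)
term_score = query_weight * field_weight  # ≈ 0.025895113 (weight of _text_:aspects)

# coord(1/2) and coord(1/5): only 1 of 2 inner clauses and 1 of 5 query parts matched.
final_score = term_score * (1 / 2) * (1 / 5)
print(final_score)                        # ≈ 0.0025895115, the value shown for result 1
```

    The same combination of factors, with different idf, freq, and fieldNorm values, accounts for every other score on this page; only the matched term ("aspects", "technology", or "22") and its statistics change.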
    
    Footnote
    Review in: KO 36(2009) no.1, S.62-63 (K. La Barre): "This special issue of Axiomathes presents an ambitious dual agenda. It attempts to highlight aspects of facet analysis (as used in LIS) that are shared by cognate approaches in philosophy, psychology, linguistics and computer science. Secondarily, the issue aims to attract others to the study and use of facet analysis. The authors represent a blend of those with lifetime involvement with facet analysis, such as Vickery, Broughton, Beghtol, and Dahlberg; those with well-developed research agendas, such as Tudhope and Priss; and relative newcomers, such as Gnoli, Cheti and Paradisi, and Slavic. Omissions are inescapable, but a more balanced issue would have resulted from the inclusion of at least one researcher from the Indian school of facet theory. Another valuable addition might have been a reaction to the issue by one of the chief critics of facet analysis. Potentially useful, but absent, is a comprehensive bibliography, for those wishing to engage in further study, of the resources that now lie scattered throughout the issue. Several of the papers assume relative familiarity with facet-analytical concepts and definitions, some of which are contested even within LIS. Gnoli's introduction (p. 127-130) traces the trajectory, extensions and new developments of this analytico-synthetic approach to subject access, while providing a laundry list of cognate approaches that are similar to facet analysis. This brief essay and the article by Priss (p. 243-255) directly address the first part of Gnoli's agenda. Priss provides a detailed discussion of facet-like structures in computer science (p. 245-246) and outlines the similarity between Formal Concept Analysis and facets. This comparison is equally fruitful for researchers in computer science and in library and information science. By bridging into a discussion of visualization challenges for facet display, further research is also invited. Many of the remaining papers comprehensively detail the intellectual heritage of facet analysis (Beghtol; Broughton, p. 195-198; Dahlberg; Tudhope and Binding, p. 213-215; Vickery). Beghtol's examination (p. 131-144) of the origins of facet theory through the lens of the textbooks written by Ranganathan's mentor W.C.B. Sayers (1881-1960), Manual of Classification (1926, 1944, 1955), and a textbook written by Mills, A Modern Outline of Classification (1964), serves to reveal the deep intellectual heritage of the changes in classification theory over time, as well as Ranganathan's own influence on and debt to Sayers."
  2. XML in libraries (2002) 0.00
    0.0025442718 = product of:
      0.012721359 = sum of:
        0.012721359 = weight(_text_:technology in 3100) [ClassicSimilarity], result of:
          0.012721359 = score(doc=3100,freq=4.0), product of:
            0.13667917 = queryWeight, product of:
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.04589033 = queryNorm
            0.0930746 = fieldWeight in 3100, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.015625 = fieldNorm(doc=3100)
      0.2 = coord(1/5)
    
    Content
    Joint review of: (1) The ABCs of XML: The Librarian's Guide to the eXtensible Markup Language. Norman Desmarais. Houston, TX: New Technology Press, 2000. 206 pp. $28.00. (ISBN: 0-9675942-0-0) and (2) Learning XML. Erik T. Ray. Sebastopol, CA: O'Reilly & Associates, 2003. 400 pp. $34.95. (ISBN: 0-596-00420-6)
    Footnote
    Tennant's collection covers a variety of well- and lesser-known XML-based pilot and prototype projects undertaken by libraries around the world. Some of the projects included are: Stanford's XMLMARC conversion, Oregon State's use of XML in interlibrary loan, e-books (California Digital Library) and electronic scholarly publishing (University of Michigan), the Washington Research Library Consortium's XML-based Web Services, and using TEI Lite to support indexing (Halton Hills Public Library). Of the 13 projects presented, nine are sited in academe, three are state library endeavors, and one is an American public library initiative. The projects are gathered into sections grouped by seven library applications: the use of XML in library catalog records, interlibrary loan, cataloging and indexing, collection building, databases, data migration, and systems interoperability. Each project is introduced with a few paragraphs of background information. The project reports, averaging about 13 pages each, include project goals and justification, project description, challenges and lessons learned (successes and failures), future plans, implications of the work, contact information for the individual(s) responsible for the project, and relevant Web links and resources. The clear strengths of this collection are in the details and the consistency of presentation. The concise project write-ups flow well and encourage interested readers to follow up via personal contacts and URLs. The sole weakness is the price. XML in Libraries will excite and inspire institutions and organizations with technically adept staff resources and visionary leaders.
    Erik Ray has written a how-to book. Unlike most, Learning XML is not aimed at the professional programming community. The intended audience is readers familiar with structured markup (HTML, TeX, etc.) and Web concepts (hypertext links, data representation). In the first six chapters, Ray introduces XML's main concepts and tools for writing, viewing, testing, and transforming XML (chapter 1), describes basic syntax (chapter 2), discusses linking with XLink and XPointer (chapter 3), introduces Cascading Style Sheets for use with XML (chapter 4), explains document type definitions (DTDs) and schemas (chapter 5), and covers XSLT stylesheets and XPath (chapter 6). Chapter 7 introduces Unicode, internationalization and language support, including CSS and XSLT encoding. Chapter 8 is an overview of writing software for processing XML, and includes the Perl code for an XML syntax checker. This work is written very accessibly for nonprogrammers. Writers, designers, and students just starting to acquire Web technology skills will find Ray's style approachable. Concepts are introduced in a logical flow and explained clearly. Code samples (130+), illustrations and screen shots (50+), and numerous tables are distributed throughout the text. Ray uses a modified DocBook DTD and a checkbook example throughout, introducing concepts in early chapters and adding new concepts to them. Readers become familiar with the code and its evolution through repeated exposure. The code for converting the "barebones DocBook" DTD (10 pages of code) to HTML via an XSLT stylesheet occupies 19 pages. Both code examples allow the learner to see an accumulation of snippets incorporated into a sensible whole. While experienced programmers might not need this type of support, nonprogrammers certainly do. Using the checkbook example is an inspired choice: most of us are familiar with personal checking, even if few of us would build an XML application for it. Learning XML is an excellent textbook. I've used it for several years as a recommended text for adult continuing education courses and workshops."
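    The review above notes that chapter 8 of Learning XML includes Perl code for an XML syntax checker. Purely as an illustration of what such a checker does, here is a minimal well-formedness check; this is a hypothetical Python sketch, not the book's Perl code, and the function name is invented for the example.

```python
# Hypothetical sketch of a minimal XML well-formedness checker, analogous in
# spirit to the Perl checker the review attributes to chapter 8 of Learning XML.
import sys
from xml.parsers import expat

def is_well_formed(path: str) -> bool:
    """Return True if the file at `path` parses as well-formed XML."""
    parser = expat.ParserCreate()
    try:
        with open(path, "rb") as handle:
            parser.ParseFile(handle)   # raises ExpatError on the first syntax problem
        return True
    except expat.ExpatError as err:
        print(f"{path}: not well-formed ({err})", file=sys.stderr)
        return False

if __name__ == "__main__":
    sys.exit(0 if is_well_formed(sys.argv[1]) else 1)
```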
  3. Net effects : how librarians can manage the unintended consequences of the Internet (2003) 0.00
    0.0025442718 = product of:
      0.012721359 = sum of:
        0.012721359 = weight(_text_:technology in 1796) [ClassicSimilarity], result of:
          0.012721359 = score(doc=1796,freq=4.0), product of:
            0.13667917 = queryWeight, product of:
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.04589033 = queryNorm
            0.0930746 = fieldWeight in 1796, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.015625 = fieldNorm(doc=1796)
      0.2 = coord(1/5)
    
    Abstract
    In this collection of nearly 50 articles written by librarians, computer specialists, and other information professionals, the reader finds 10 chapters, each devoted to a problem or a side effect that has emerged since the introduction of the Internet: control over selection, survival of the book, training users, adapting to users' expectations, access issues, cost of technology, continuous retraining, legal issues, disappearing data, and how to avoid becoming blindsided. After stating a problem, each chapter offers solutions that are subsequently supported by articles. The editor's comments, which appear throughout the text, are an added bonus, as are the sections concluding the book, among them a listing of useful URLs, a works-cited section, and a comprehensive index. This book has much to recommend it, especially the articles, which are not only informative, thought-provoking, and interesting but highly readable and accessible as well. An indispensable tool for all librarians.
    Footnote
    Unlike much of the professional library literature, Net Effects is not an open-armed embrace of technology. Block even suggests that it is helpful to have a Luddite or two on each library staff to identify the setbacks associated with technological advances in the library. Each of the book's 10 chapters deals with one Internet-related problem, such as "Chapter 4-The Shifted Librarian: Adapting to the Changing Expectations of Our Wired (and Wireless) Users," or "Chapter 8-Up to Our Ears in Lawyers: Legal Issues Posed by the Net." For each of these 10 problems, multiple solutions are offered. For example, for "Chapter 9-Disappearing Data," four solutions are offered. These include "Link-checking," "Have a technological disaster plan," "Advise legislators on the impact proposed laws will have," and "Standards for preservation of digital information." One article is given to explicate each of these four solutions. A short bibliography of recommended further reading is also included for each chapter. Block provides a short introduction to each chapter, and she comments on many of the entries. Some of these comments seem to be intended to provide a research basis for the proposed solutions, but they tend to be vague generalizations without citations, such as, "We know from research that students would rather ask each other for help than go to adults. We can use that" (p. 91). The original publication dates of the entries range from 1997 to 2002, with the bulk falling into the 2000-2002 range. At up to six years old, some of the articles seem outdated, such as a 2000 news brief announcing the creation of the first "customizable" public library Web site (www.brarydog.net). These critiques are not intended to dismiss the volume entirely. Some of the entries are likely to find receptive audiences, such as a nuts-and-bolts instructive article on making Web sites accessible to people with disabilities. "Providing Equitable Access," by Cheryl H. Kirkpatrick and Catherine Buck Morgan, offers very specific instructions, such as how to renovate OPAL workstations to suit users with "a wide range of functional impairments." It also includes a useful list of 15 things to do to make a Web site readable to most people with disabilities, such as, "You can use empty (alt) tags (alt="") for images that serve a purely decorative function. Screen readers will skip empty (alt) tags" (p. 157). Information at this level of specificity can be helpful to those who are faced with creating a technological solution for which they lack sufficient technical knowledge or training.
  4. Cataloging heresy : challenging the standard bibliographic product. Proc. of the congress for librarians, Feb.18, 1991, St. John's University, Jamaica, NY with additional contributed papers (1992) 0.00
    0.0024870026 = product of:
      0.012435013 = sum of:
        0.012435013 = product of:
          0.024870027 = sum of:
            0.024870027 = weight(_text_:22 in 7286) [ClassicSimilarity], result of:
              0.024870027 = score(doc=7286,freq=2.0), product of:
                0.16070013 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04589033 = queryNorm
                0.15476047 = fieldWeight in 7286, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=7286)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Footnote
    Review in: Knowledge organization 20(1993) no.2, S.100-105 (J.M. Perreault); International cataloguing and bibliographic control 22(1993) no.2, S.35 (M. Norman)
  5. Wissensspeicher in digitalen Räumen : Nachhaltigkeit, Verfügbarkeit, semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008 (2010) 0.00
    0.0024870026 = product of:
      0.012435013 = sum of:
        0.012435013 = product of:
          0.024870027 = sum of:
            0.024870027 = weight(_text_:22 in 774) [ClassicSimilarity], result of:
              0.024870027 = score(doc=774,freq=2.0), product of:
                0.16070013 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04589033 = queryNorm
                0.15476047 = fieldWeight in 774, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=774)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
  6. Cross-language information retrieval (1998) 0.00
    0.0022488397 = product of:
      0.011244198 = sum of:
        0.011244198 = weight(_text_:technology in 6299) [ClassicSimilarity], result of:
          0.011244198 = score(doc=6299,freq=2.0), product of:
            0.13667917 = queryWeight, product of:
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.04589033 = queryNorm
            0.08226709 = fieldWeight in 6299, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.01953125 = fieldNorm(doc=6299)
      0.2 = coord(1/5)
    
    Content
    Contains the contributions: GREFENSTETTE, G.: The Problem of Cross-Language Information Retrieval; DAVIS, M.W.: On the Effective Use of Large Parallel Corpora in Cross-Language Text Retrieval; BALLESTEROS, L. and W.B. CROFT: Statistical Methods for Cross-Language Information Retrieval; Distributed Cross-Lingual Information Retrieval; Automatic Cross-Language Information Retrieval Using Latent Semantic Indexing; EVANS, D.A. et al.: Mapping Vocabularies Using Latent Semantics; PICCHI, E. and C. PETERS: Cross-Language Information Retrieval: A System for Comparable Corpus Querying; YAMABANA, K. et al.: A Language Conversion Front-End for Cross-Language Information Retrieval; GACHOT, D.A. et al.: The Systran NLP Browser: An Application of Machine Translation Technology in Cross-Language Information Retrieval; HULL, D.: A Weighted Boolean Model for Cross-Language Text Retrieval; SHERIDAN, P. et al.: Building a Large Multilingual Test Collection from Comparable News Documents; OARD, D.W. and B.J. DORR: Evaluating Cross-Language Text Filtering Effectiveness
  7. Saving the time of the library user through subject access innovation : Papers in honor of Pauline Atherton Cochrane (2000) 0.00
    0.0021761272 = product of:
      0.010880636 = sum of:
        0.010880636 = product of:
          0.021761272 = sum of:
            0.021761272 = weight(_text_:22 in 1429) [ClassicSimilarity], result of:
              0.021761272 = score(doc=1429,freq=2.0), product of:
                0.16070013 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04589033 = queryNorm
                0.1354154 = fieldWeight in 1429, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1429)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    22. 9.1997 19:16:05
  8. Managing cataloging and the organization of information : philosophies, practices and challenges at the onset of the 21st century (2000) 0.00
    0.0021761272 = product of:
      0.010880636 = sum of:
        0.010880636 = product of:
          0.021761272 = sum of:
            0.021761272 = weight(_text_:22 in 238) [ClassicSimilarity], result of:
              0.021761272 = score(doc=238,freq=2.0), product of:
                0.16070013 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04589033 = queryNorm
                0.1354154 = fieldWeight in 238, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=238)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Footnote
    Review in: ZfBB 51(2004) H.1, S.54-55 (G. Pflug): "Given the growing influence of information technology on the library field, cataloging occupies a key position. The present work is divided into two parts. The first section is headed "National Libraries," yet deals only with the Library of Congress and the National Library of Canada. It is followed by articles on "Libraries around the world." It is striking, however, that while these studies cover libraries in Great Britain, Australia, Central and South America, and even Africa (Botswana), none from continental Europe is included, despite relevant activities in, for example, the Netherlands, France, and the German-speaking countries. Only DOBIS/LIBIS is mentioned, and only because it briefly influenced developments in Canada. In the second part, cataloging specialists from four special and nine academic libraries, all of them in North America or Great Britain, have their say. The work thus presents, in 22 examples, reports on individual and regional solutions. The central question is what changes the new electronic techniques have brought about in cataloging and subject indexing practice. The English university libraries, for instance, are striving for a coordinated system, and with the British Library's move to MARC 21 the catalog system in Great Britain is being lastingly affected, to name just two obvious examples. Altogether three aspects are treated: automation technology; the cooperation it requires; and outsourcing, not only through the takeover of data from other libraries or through union systems, above all the Library of Congress, but also through book-trade firms such as the Blackwell North America Authority Control Service. On the question of subject indexing, the contributions deal with the classification systems customary in the American sphere, above all the Colon Classification, Dewey in both its forms, and the Library of Congress Classification. For the German discussion these aspects are of great interest chiefly because of Die Deutsche Bibliothek's move to DDC in its national bibliography (cf. Magda Heiner-Freiling: Die DDC in der Deutschen Nationalbibliografie. In: Dialog mit Bibliotheken. 15. 2003, Nr. 3, S. 8-13). But the differing approaches to descriptive cataloging, together with the associated databases, also make an interesting contribution to the current discussion in Germany, where cataloging under RAK and its replacement have prompted a lively debate for some years, as shown, among other things, by the summary contribution of Elisabeth Niggemann in: Dialog mit Bibliotheken (15. 2003, Nr. 2, S. 4-8). The Anglo-American libraries and those connected with them, for example in Mexico, South America, or Australia, also debate descriptive cataloging controversially, as the book makes clear. Alongside the dominant AACR rules and their further development, more than ten other cataloging systems and around 20 online databases are treated. In its basic tendency, as in the differing and sometimes contradictory views of the individual contributions, the book thus provides valuable stimuli for the discussion in Germany and for the decisions that lie ahead."
  9. XML data management : native XML and XML-enabled database systems (2003) 0.00
    0.002071609 = product of:
      0.010358045 = sum of:
        0.010358045 = product of:
          0.02071609 = sum of:
            0.02071609 = weight(_text_:aspects in 2073) [ClassicSimilarity], result of:
              0.02071609 = score(doc=2073,freq=2.0), product of:
                0.20741826 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.04589033 = queryNorm
                0.09987592 = fieldWeight in 2073, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2073)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Footnote
    Review in: JASIST 55(2004) no.1, S.90-91 (N. Rhodes): "The recent near-exponential increase in XML-based technologies has exposed a gap between these technologies and those that are concerned with more fundamental data management issues. This very comprehensive and well-organized book has quite neatly filled the gap, thus achieving most of its stated intentions. The target audiences are database and XML professionals wishing to combine XML with modern database technologies, and such is the breadth of scope of this book that few would not find it useful in some way. The editors have assembled a collection of chapters from a wide selection of industry heavyweights and, as with most books of this type, it exhibits many disparate styles, but thanks to careful editing it reads well as a cohesive whole. Certain sections have already appeared in print elsewhere and there is a deal of corporate flag-waving, but nowhere does it become over-intrusive. The preface provides only the very briefest of introductions to XML but instead sets the tone for the remainder of the book. The twin terms of data- and document-centric XML (Bourret, 2003) that have achieved so much recent currency are reiterated before XML data management issues are considered. It is here that the book's aims are stated, mostly concerned with the approaches and features of the various available XML data management solutions. Not surprisingly, in a specialized book such as this one, an introduction to XML consists of a single chapter. For issues such as syntax, DTDs and XML Schemas the reader is referred elsewhere; here, Chris Brandin provides a practical guide to achieving good grammar and style and argues convincingly for the use of XML as an information-modeling tool. Using a well-chosen and simple example, a practical guide to modeling information is developed, replete with examples of the pitfalls. This brief but illuminating chapter (incidentally available as a "taster" from the publisher's web site) notes that one of the most promising aspects of XML is that applications can be built to use a single mutable information model, obviating the need to change the application code, but that good XML design is the basis of such mutability.
  10. Wissensorganisation und Edutainment : Wissen im Spannungsfeld von Gesellschaft, Gestaltung und Industrie. Proceedings der 7. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Berlin, 21.-23.3.2001 (2004) 0.00
    0.0018652519 = product of:
      0.00932626 = sum of:
        0.00932626 = product of:
          0.01865252 = sum of:
            0.01865252 = weight(_text_:22 in 1442) [ClassicSimilarity], result of:
              0.01865252 = score(doc=1442,freq=2.0), product of:
                0.16070013 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04589033 = queryNorm
                0.116070345 = fieldWeight in 1442, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1442)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Content
    Contains the contributions: 1. Wissensgesellschaft Michael NIEHAUS: Durch ein Meer von Unwägbarkeiten - Metaphorik in der Wissensgesellschaft S.3 Karsten WEBER: Aufgaben für eine (globale) Wissensgesellschaft oder "Welcome to the new IT? S.9 Katy TEUBENER: Chronos & Kairos. Inhaltsorganisation und Zeitkultur im Internet S.22 Klaus KRAEMER: Wissen und Nachhaltigkeit. Wissensasymmetrien als Problem einer nachhaltigen Entwicklung S.30 2. Lehre und Lernen Gerhard BUDIN: Wissensorganisation als Gestaltungsprinzip virtuellen Lernens - epistemische, kommunikative und methodische Anforderungen S.39 Christian SWERTZ: Webdidaktik: Effiziente Inhaltsproduktion für netzbasierte Trainings S.49 Ingrid LOHMANN: Cognitive Mapping im Cyberpunk - Über Postmoderne und die Transformation eines für so gut wie tot erklärten Literaturgenres zum Bildungstitel S.54 Rudolf W. KECK, Stefanie KOLLMANN, Christian RITZI: Pictura Paedagogica Online - Konzeption und Verwirklichung S.65 Jadranka LASIC-LASIC, Aida SLAVIC, Mihaela BANEK: Gemeinsame Ausbildung der IT Spezialisten an der Universität Zagreb: Vorteile und Probleme S.76 3. Informationsdesign und Visualisierung Maximilian EIBL, Thomas MANDL: Die Qualität von Visualisierungen: Eine Methode zum Vergleich zweidimensionaler Karten S.89 Udo L. FIGGE: Technische Anleitungen und der Erwerb kohärenten Wissens S.116 Monika WITSCH: Ästhetische Zeichenanalyse - eine Methode zur Analyse fundamentalistischer Agitation im Internet S.123 Oliver GERSTHEIMER, Christian LUPP: Systemdesign - Wissen um den Menschen: Bedürfnisorientierte Produktentwicklung im Mobile Business S.135 Philip ZERWECK: Mehrdimensionale Ordnungssysteme im virtuellen Raum anhand eines Desktops S.141
  11. Exploring artificial intelligence in the new millennium (2003) 0.00
    0.0017990718 = product of:
      0.008995359 = sum of:
        0.008995359 = weight(_text_:technology in 2099) [ClassicSimilarity], result of:
          0.008995359 = score(doc=2099,freq=2.0), product of:
            0.13667917 = queryWeight, product of:
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.04589033 = queryNorm
            0.065813676 = fieldWeight in 2099, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.015625 = fieldNorm(doc=2099)
      0.2 = coord(1/5)
    
    Footnote
    The book does achieve its aim of being a starting point for someone interested in the state of some areas of AI research at the beginning of the new millennium. The book's most irritating feature is the different writing styles of the authors. The book is organized as a collection of papers similar to a typical graduate survey course packet, and as a result the book does not possess a narrative flow. Also, the book contains a number of other major weaknesses, such as the lack of an introductory or concluding chapter. The book could greatly benefit from an introductory chapter that would introduce readers to the areas of AI, explain why such a book is needed, and explain why each author's research is important. The manner in which the book currently handles these issues is a preface that talks about some of the above issues in a superficial manner. Such an introductory chapter could also be used to expound on what level of AI mathematical and statistical knowledge is expected from readers in order to gain maximum benefit from this book. A concluding chapter would be useful to readers interested in the other areas of AI not covered by the book, as well as open issues common to all of the research presented. In addition, most of the contributors come exclusively from the computer science field, which heavily slants the work toward the computer science community. A great deal of the research presented is being used by a number of research communities outside of computer science, such as biotechnology and information technology. A wider audience for this book could have been achieved by including a more diverse range of authors showing the interdisciplinary nature of many of these fields. Also, the book's editors state, "The reader is expected to have basic knowledge of AI at the level of an introductory course to the field" (p. vii), which is not the case for this book. Readers need at least a strong familiarity with many of the core concepts within AI, because a number of the chapters are shallow and terse in their historical overviews. Overall, this book would be a useful tool for a professor putting together a survey course on AI research. Most importantly, the book would be useful for eager graduate students in need of a starting point for their thesis research. This book is best suited as a reference guide to be used by individuals with a strong familiarity with AI."
  12. Knowledge: creation, organization and use : Proceedings of the 62nd Annual Meeting of the American Society for Information Science, Washington, DC, 31.10.-4.11.1999. Ed.: Larry Woods (1999) 0.00
    0.0015543768 = product of:
      0.0077718836 = sum of:
        0.0077718836 = product of:
          0.015543767 = sum of:
            0.015543767 = weight(_text_:22 in 6721) [ClassicSimilarity], result of:
              0.015543767 = score(doc=6721,freq=2.0), product of:
                0.16070013 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04589033 = queryNorm
                0.09672529 = fieldWeight in 6721, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=6721)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    22. 6.2005 9:44:50
  13. Wissensorganisation und Verantwortung : Gesellschaftliche, ökonomische und technische Aspekte. Proceedings der 9. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation Duisburg, 5.-7. November 2004 (2006) 0.00
    0.0015543768 = product of:
      0.0077718836 = sum of:
        0.0077718836 = product of:
          0.015543767 = sum of:
            0.015543767 = weight(_text_:22 in 1672) [ClassicSimilarity], result of:
              0.015543767 = score(doc=1672,freq=2.0), product of:
                0.16070013 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04589033 = queryNorm
                0.09672529 = fieldWeight in 1672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1672)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Content
    Contains the contributions: 1. Die Grundlagen der Wissensorganisation Ingetraut Dahlberg: Zur Begriffskultur in den Sozialwissenschaften. Evaluation einer Herausforderung S.2 Gerhard Budin: Begriffliche Wissensorganisation in den Sozialwissenschaften: Theorien und Methodenvielfalt S.12 Gerd Bauer: Die vielseitigen Anwendungsmöglichkeiten des Kategorienprinzips bei der Wissensorganisation S.22 Robert Fugmann: Die Nützlichkeit von semantischen Kategorien auf dem Gebiet der Informationsbereitstellung S.34 Gerhard Rahmstorf: Wege zur Ontologie S.37 2. Wissensordnung und Gesellschaft Raphael Beer: Ungleiches Wissen und demokratische Legitimation S.50 Elisabeth Wallnöfer Köstlin: Zum Charakter chiasmatischen Wissens S.66 Maik Adomßent: Konstitutive Elemente nachhaltiger Wissensgenerierung und -organisation S.70 Walther Umstätter: Knowledge Economy und die Privatisierung von Bibliotheken S.85 Peter Ohly: Bibliometrie in der Postmoderne S.103 Marthinus S. van der Walt: Ethics in Indexing and Classification S.115 Heike Winschiers, Jens Felder & Barbara Paterson: Nachhaltige Wissensorganisation durch kulturelle Synthese S.122 3. Pädagogische Wissensorganisation Henry Milder: Knowledge related policy and civic literacy S.130 Christian Swertz: Globalisierung und Individualisierung als Bildungsziele S.140 Wolfgang David: Der Einfluss epistemologischer Überzeugungen auf Wissenserwerb S.147 Monika Witsch: Cyberlaw für den Jugendschutz - Eine pädagogische Bewertung von Internetzensur vor dem Hintergrund rechtsextremer Homepages S.152 Nicole Zillien: "Nächste Folie, bitte!" - Der Einsatz von Präsentationsprogrammen zur Wissensvermittlung und Wissensbewahrung S.159 Wolfgang Semar: Kollaborative Leistungsevaluation beim Einsatz von Wissensmanagementsystemen in der Ausbildung S.169
  14. Information visualization in data mining and knowledge discovery (2002) 0.00
    0.0012435013 = product of:
      0.0062175067 = sum of:
        0.0062175067 = product of:
          0.012435013 = sum of:
            0.012435013 = weight(_text_:22 in 1789) [ClassicSimilarity], result of:
              0.012435013 = score(doc=1789,freq=2.0), product of:
                0.16070013 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04589033 = queryNorm
                0.07738023 = fieldWeight in 1789, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1789)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    23. 3.2008 19:10:22
