Search (363 results, page 18 of 19)

  • type_ss:"s"
  1. Informationspolitik ist machbar!? : Reflexionen zum IuD-Programm 1974-1977 nach 30 Jahren (2005) 0.00
    0.0037885536 = product of:
      0.018942768 = sum of:
        0.018942768 = weight(_text_:7 in 4380) [ClassicSimilarity], result of:
          0.018942768 = score(doc=4380,freq=2.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.109803796 = fieldWeight in 4380, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.0234375 = fieldNorm(doc=4380)
      0.2 = coord(1/5)
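The per-hit scoring trees in this listing are Lucene "explain" output for its ClassicSimilarity (TF-IDF) model. As a minimal sketch of how the first hit's value comes about (assuming Lucene's documented ClassicSimilarity formulas; queryNorm, fieldNorm and the coord factor are simply taken as given from the output above):

```python
import math

# Reproduce the explain tree for term "7" in doc 4380 (ClassicSimilarity).

def idf(doc_freq: int, max_docs: int) -> float:
    # idf(t) = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def tf(freq: float) -> float:
    # tf(t in d) = sqrt(freq)
    return math.sqrt(freq)

query_norm = 0.052075688   # given in the output; depends on the full query
field_norm = 0.0234375     # length norm stored at index time

idf_7 = idf(doc_freq=4376, max_docs=44218)       # ≈ 3.3127685
query_weight = idf_7 * query_norm                # ≈ 0.17251469
field_weight = tf(2.0) * idf_7 * field_norm      # ≈ 0.109803796
weight = query_weight * field_weight             # ≈ 0.018942768
score = weight * (1 / 5)                         # coord(1/5): 1 of 5 query clauses matched
```

Because tf, idf and the norms are identical across many of these hits, whole runs of results share the score 0.0037885536, so the ordering within this page carries little information.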
    
    Footnote
    Rez. in: Information - Wissenschaft und Praxis. 56(2005) H.7, S.391-392 (H. Lenk): "The 'Sputnik shock', which after 1957 made scientific and technical information an object of policy in many countries, triggered a wave of planning in Germany that also seized the hitherto not very prominent field of specialist information policy. This makes its evaluation highly instructive from today's perspective, for information policy, even when confined to specialist information, remains a delicate and controversial matter. With the technology-driven transformation of the overall system for supplying specialist information, conflicts are becoming virulent today that were already latent back then. The reviewer, who himself spoke up on this subject a quarter of a century ago, had repeatedly considered carrying out a comprehensive evaluation, but dropped the plan again and again. Such an evaluation would be needed in order to learn from it. Unfortunately, the present volume does not fulfil the wish for a comprehensive evaluation of the IuD programme as the centrepiece of specialist information policy; the individual contributions are too disparate. What they offer is a securing of traces, interspersed with a number of perspectival asides. The introduction points out that the volume was originally conceived more broadly, but that interest in the 'old stories' of specialist information policy has waned and important records can no longer be found."
  2. Benchmarks in distance education : the LIS experience (2003) 0.00
    0.0037885536 = product of:
      0.018942768 = sum of:
        0.018942768 = weight(_text_:7 in 4605) [ClassicSimilarity], result of:
          0.018942768 = score(doc=4605,freq=2.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.109803796 = fieldWeight in 4605, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.0234375 = fieldNorm(doc=4605)
      0.2 = coord(1/5)
    
    Isbn
    1-56308-722-7
  3. CSCL-Kompendium : Lehr und Handbuch zum computerunterstützten kooperativen Lernen (2004) 0.00
    0.0037885536 = product of:
      0.018942768 = sum of:
        0.018942768 = weight(_text_:7 in 3972) [ClassicSimilarity], result of:
          0.018942768 = score(doc=3972,freq=2.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.109803796 = fieldWeight in 3972, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3972)
      0.2 = coord(1/5)
    
    Footnote
    The compendium closes with boldly formulated - and promising - perspectives from the editors. It is by no means lacking, however, in a critical view of deficits and of still existing cultural, methodological, organizational and technical barriers. The interdisciplinarity of CSCL is pointed out, as is the heterogeneity of the concepts and methods in use, which results not least from it. Under the main thesis "CSCL is coming of age!" the editors formulate, among others, the following theses for the future of CSCL: 1. Learning spaces will become CSCL-capable. 2. CSCL systems will integrate individual and cooperative learning phases. 3. Learning platforms will become CSCL-capable. 4. CSCL will be taken up into the standard repertoire (?). 5. CSCL will support learning processes flexibly. 6. Cooperative learning will become an important building block of lifelong learning. 7. Learning, working and playing will merge. 8. Novel usage scenarios for CSCL will emerge. With more than 50 contributing authors, it is inevitable that a critical reader will come across occasional contradictory formulations and divergent readings of terms. Tracking these down could be a rewarding task for university teachers and their students in seminars. For many CSCL users, researchers and students, as well as practitioners from very different disciplines, the CSCL compendium will soon be indispensable as a reference work, as a textbook and as a medium for lifelong learning with and about CSCL."
  4. Towards the Semantic Web : ontology-driven knowledge management (2004) 0.00
    0.0037885536 = product of:
      0.018942768 = sum of:
        0.018942768 = weight(_text_:7 in 4401) [ClassicSimilarity], result of:
          0.018942768 = score(doc=4401,freq=2.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.109803796 = fieldWeight in 4401, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.0234375 = fieldNorm(doc=4401)
      0.2 = coord(1/5)
    
    Isbn
    0-470-84867-7
  5. Semantic web & linked data : Elemente zukünftiger Informationsinfrastrukturen ; 1. DGI-Konferenz ; 62. Jahrestagung der DGI ; Frankfurt am Main, 7. - 9. Oktober 2010 ; Proceedings / Deutsche Gesellschaft für Informationswissenschaft und Informationspraxis (2010) 0.00
    0.0037885536 = product of:
      0.018942768 = sum of:
        0.018942768 = weight(_text_:7 in 1516) [ClassicSimilarity], result of:
          0.018942768 = score(doc=1516,freq=2.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.109803796 = fieldWeight in 1516, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1516)
      0.2 = coord(1/5)
    
  6. Understanding FRBR : what it is and how it will affect our retrieval tools (2007) 0.00
    0.0037885536 = product of:
      0.018942768 = sum of:
        0.018942768 = weight(_text_:7 in 1675) [ClassicSimilarity], result of:
          0.018942768 = score(doc=1675,freq=2.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.109803796 = fieldWeight in 1675, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1675)
      0.2 = coord(1/5)
    
    Content
    1. An Introduction to Functional Requirements for Bibliographic Records (FRBR) - Arlene G. Taylor (1-20) 2. An Introduction to Functional Requirements for Authority Data (FRAD) - Glenn E. Patton (21-28) 3. Understanding the Relationship between FRBR and FRAD - Glenn E. Patton (29-34) 4. FRBR and the History of Cataloging - William Denton (35-58) 5. The Impact of Research on the Development of FRBR - Edward T. O'Neill (59-72) 6. Bibliographic Families and Superworks - Richard P. Smiraglia (73-86) 7. FRBR and RDA (Resource Description and Access) - Barbara B. Tillett (87-96) 8. FRBR and Archival Materials - Alexander C. Thurman (97-102) 9. FRBR and Works of Art, Architecture, and Material Culture - Murtha Baca and Sherman Clarke (103-110) 10. FRBR and Cartographic Materials - Mary Lynette Larsgaard (111-116) 11. FRBR and Moving Image Materials - Martha M. Yee (117-130) 12. FRBR and Music - Sherry L. Vellucci (131-152) 13. FRBR and Serials - Steven C. Shadle (153-174)
  7. Theorie, Semantik und Organisation von Wissen : Proceedings der 13. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und dem 13. Internationalen Symposium der Informationswissenschaft der Higher Education Association for Information Science (HI) Potsdam (19.-20.03.2013): 'Theory, Information and Organization of Knowledge' / Proceedings der 14. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und Natural Language & Information Systems (NLDB) Passau (16.06.2015): 'Lexical Resources for Knowledge Organization' / Proceedings des Workshops der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) auf der SEMANTICS Leipzig (1.09.2014): 'Knowledge Organization and Semantic Web' / Proceedings des Workshops der Polnischen und Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) Cottbus (29.-30.09.2011): 'Economics of Knowledge Production and Organization' (2017) 0.00
    0.0037885536 = product of:
      0.018942768 = sum of:
        0.018942768 = weight(_text_:7 in 3471) [ClassicSimilarity], result of:
          0.018942768 = score(doc=3471,freq=2.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.109803796 = fieldWeight in 3471, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3471)
      0.2 = coord(1/5)
    
    Content
    7. Wissenstransfer / Knowledge Transfer I. Kijeńska-Dąbrowska, K. Lipiec: Knowledge Brokers as Modern Facilitators of Research Commercialization - M. Ostaszewski: Open academic community in Poland: social aspects of new scholarly communication as observed during the transformation period - M. Świgoń, K. Weber: Knowledge and Information Management by Individuals: A Report on Empirical Studies Among German Students 8. Wissenschaftsgemeinschaften / Science Communities D. Tunger: Bibliometrie: Quo vadis? - T. Möller: Woher stammt das Wissen über die Halbwertzeiten des Wissens? - M. Riechert, J. Schmitz: Qualitätssicherung von Forschungsinformationen durch visuelle Repräsentation - Das Fallbeispiel des "Informationssystems Promotionsnoten" - E. Ortoll Espinet, M. Garcia Alsina: Networks of scientific collaboration in competitive intelligence studies
  8. Exploring artificial intelligence in the new millennium (2003) 0.00
    0.0035718828 = product of:
      0.017859414 = sum of:
        0.017859414 = weight(_text_:7 in 2099) [ClassicSimilarity], result of:
          0.017859414 = score(doc=2099,freq=4.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.103524014 = fieldWeight in 2099, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.015625 = fieldNorm(doc=2099)
      0.2 = coord(1/5)
    
    Footnote
    In Chapter 7, Jeff Rickel and W. Lewis Johnson have created a virtual environment with virtual humans for team training. The system is designed to allow a digital character to replace team members that may not be present, and to allow students to acquire the skills needed to occupy a designated role and to coordinate their activities with their teammates. The paper presents a complex concept in a very manageable fashion. In Chapter 8, Jonathan Yedidia et al. study the basic issues that make up reasoning under uncertainty. This type of reasoning, in which the system takes in facts about a patient's condition and makes predictions about the patient's future condition, is a key issue being looked at by many medical expert system developers. Their research is based on a new form of belief propagation, derived by generalizing existing probabilistic inference methods that are widely used in AI and in numerous other areas such as statistical physics. The ninth chapter, by David McAllester and Robert E. Schapire, looks at the basic problem of learning a language model. This is something that would not be challenging for most people, but can be quite arduous for a machine. The research focuses on a new technique, the leave-one-out estimator, that was used to investigate why statistical language models have had such success in this area of research. In Chapter 10, Peter Baumgartner extends simplified theorem-proving techniques, which have been applied very effectively in propositional logic, to the first-order case. The author demonstrates how his new technique surpasses existing techniques in this area of AI research. The chapter simplifies a complex subject area, so that almost any reader with a basic background in AI could understand the theorem proving. In Chapter 11, David Cohen et al. analyze complexity issues in constraint satisfaction, which is a common problem-solving paradigm.
The authors lay out how tractable classes of constraint solvers create new classes that are tractable and more expressive than previous ones. This is not a chapter for an inexperienced student or researcher in AI. In Chapter 12, Jaana Kekäläinen and Kalervo Järvelin examine the question of finding the most important documents for any given query in text-based retrieval. The authors put forth two new measures of relevance and attempt to show how expanding user queries based on facets of the domain benefits retrieval. This is a great interdisciplinary chapter for readers who do not have a strong AI background but would like to gain some insights into practical AI research. In Chapter 13, Tony Fountain et al. used machine learning techniques to help lower the cost of functional tests for ICs (integrated circuits) during the manufacturing process. The researchers used a probabilistic model of failure patterns extracted from existing data, which allowed them to generate a decision-theoretic policy for guiding and optimizing the testing of ICs. This is another great interdisciplinary chapter for a reader interested in an actual physical example of an AI system, though it requires some AI knowledge.
    Isbn
    1-55860-811-7
  9. XML in libraries (2002) 0.00
    0.0035718828 = product of:
      0.017859414 = sum of:
        0.017859414 = weight(_text_:7 in 3100) [ClassicSimilarity], result of:
          0.017859414 = score(doc=3100,freq=4.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.103524014 = fieldWeight in 3100, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.015625 = fieldNorm(doc=3100)
      0.2 = coord(1/5)
    
    Footnote
    Rez. in: JASIST 55(2004) no.14, S.1304-1305 (Z. Holbrooks): "The eXtensible Markup Language (XML) and its family of enabling technologies (XPath, XPointer, XLink, XSLT, et al.) were the new "new thing" only a couple of years ago. Happily, XML is now a W3C standard, and its enabling technologies are rapidly proliferating and maturing. Together, they are changing the way data is handled on the Web and how legacy data is accessed and leveraged in corporate archives, and offering the Semantic Web community a powerful toolset. Library and information professionals need a basic understanding of what XML is and what its impacts will be on the library community as content vendors and publishers convert to the new standards. Norman Desmarais aims to provide librarians with an overview of XML and some potential library applications. The ABCs of XML contains the useful basic information that most general XML works cover. It is addressed to librarians, as evidenced by the occasional reference to periodical vendors, MARC, and OPACs. However, librarians without SGML, HTML, database, or programming experience may find the work daunting. The snippets of code - most incomplete and unaccompanied by screenshots illustrating the result of the code's execution - obscure more often than they enlighten. A single code sample (p. 91, a book purchase order) is immediately recognizable and sensible. There are no figures, illustrations, or screenshots. Subsection headings are used conservatively. Readers are confronted with page after page of unbroken technical text, and occasionally oddly formatted text (in some of the code samples). The author concentrates on commercial products and projects. Library and agency initiatives - for example, the National Institutes of Health HL-7 and the U.S. Department of Education's GEM project - are notable for their absence. 
The Library of Congress USMARC-to-SGML effort is discussed in chapter 1, which covers the relationship of XML to its parent SGML, the XML processor, and document type definitions, using MARC as its illustrative example. Chapter 3 addresses the stylesheet options for XML, including DSSSL, CSS, and XSL. The Document Style Semantics and Specification Language (DSSSL) was created for use with SGML, and pruned into DSSSL-Lite and further into DSSSL-online. Cascading Style Sheets (CSS) were created for use with HTML. The Extensible Stylesheet Language (XSL) is a further revision (and extension) of DSSSL-online specifically for use with XML. Discussion of aural stylesheets and the Synchronized Multimedia Integration Language (SMIL) rounds out the chapter.
    Tennant's collection covers a variety of well- and lesser-known XML-based pilot and prototype projects undertaken by libraries around the world. Some of the projects included are: Stanford's XMLMARC conversion, Oregon State's use of XML in interlibrary loaning, e-books (California Digital Library) and electronic scholarly publishing (University of Michigan), the Washington Research Library Consortium's XML-based Web services, and using TEI Lite to support indexing (Halton Hills Public Library). Of the 13 projects presented, nine are sited in academe, three are state library endeavors, and one is an American public library initiative. The projects are gathered into sections grouped by seven library applications: the use of XML in library catalog records, interlibrary loan, cataloging and indexing, collection building, databases, data migration, and systems interoperability. Each project is introduced with a few paragraphs of background information. The project reports - averaging about 13 pages each - include project goals and justification, project description, challenges and lessons learned (successes and failures), future plans, implications of the work, contact information for the individual(s) responsible for the project, and relevant Web links and resources. The clear strengths of this collection are in the details and the consistency of presentation. The concise project write-ups flow well and encourage interested readers to follow up via personal contacts and URLs. The sole weakness is the price. XML in Libraries will excite and inspire institutions and organizations with technically adept staff resources and visionary leaders. Erik Ray has written a how-to book. Unlike most, Learning XML is not aimed at the professional programming community. The intended audience is readers familiar with structured markup (HTML, TeX, etc.) and Web concepts (hypertext links, data representation). 
In the first six chapters, Ray introduces XML's main concepts and tools for writing, viewing, testing, and transforming XML (chapter 1), describes basic syntax (chapter 2), discusses linking with XLink and XPointer (chapter 3), introduces Cascading Style Sheets for use with XML (chapter 4), explains document type definitions (DTDs) and schemas (chapter 5), and covers XSLT stylesheets and XPath (chapter 6). Chapter 7 introduces Unicode, internationalization and language support, including CSS and XSLT encoding. Chapter 8 is an overview of writing software for processing XML, and includes the Perl code for an XML syntax checker. This work is written very accessibly for nonprogrammers. Writers, designers, and students just starting to acquire Web technology skills will find Ray's style approachable. Concepts are introduced in a logical flow and explained clearly. Code samples (130+), illustrations and screen shots (50+), and numerous tables are distributed throughout the text. Ray uses a modified DocBook DTD and a checkbook example throughout, introducing concepts in early chapters and adding new ones to them. Readers become familiar with the code and its evolution through repeated exposure. The code for converting the "barebones DocBook" DTD (10 pages of code) to HTML via an XSLT stylesheet occupies 19 pages. Both code examples allow the learner to see an accumulation of snippets incorporated into a sensible whole. While experienced programmers might not need this type of support, nonprogrammers certainly do. Using the checkbook example is an inspired choice: most of us are familiar with personal checking, even if few of us would build an XML application for it. Learning XML is an excellent textbook. I've used it for several years as a recommended text for adult continuing education courses and workshops."
  10. Information und Sprache : Beiträge zu Informationswissenschaft, Computerlinguistik, Bibliothekswesen und verwandten Fächern. Festschrift für Harald H. Zimmermann. Herausgegeben von Ilse Harms, Heinz-Dirk Luckhardt und Hans W. Giessen (2006) 0.00
    0.0035718828 = product of:
      0.017859414 = sum of:
        0.017859414 = weight(_text_:7 in 91) [ClassicSimilarity], result of:
          0.017859414 = score(doc=91,freq=4.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.103524014 = fieldWeight in 91, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.015625 = fieldNorm(doc=91)
      0.2 = coord(1/5)
    
    Content
    Jiri Panyr: Thesauri, Semantische Netze, Frames, Topic Maps, Taxonomien, Ontologien - begriffliche Verwirrung oder konzeptionelle Vielfalt? Heinz-Dieter Maas: Indexieren mit AUTINDEX Wilhelm Gaus, Rainer Kaluscha: Maschinelle inhaltliche Erschließung von Arztbriefen und Auswertung von Reha-Entlassungsberichten Klaus Lepsky: Automatische Indexierung des Reallexikons zur Deutschen Kunstgeschichte - Analysen und Entwicklungen Ilse Harms: Die computervermittelte Kommunikation als ein Instrument des Wissensmanagements in Organisationen August-Wilhelm Scheer, Dirk Werth: Geschäftsregel-basiertes Geschäftsprozessmanagement Thomas Seeger: Akkreditierung und Evaluierung von Hochschullehre und -forschung in Großbritannien. Hinweise für die Situation in Deutschland Bernd Hagenau: Gehabte Sorgen hab' ich gern? Ein Blick zurück auf die Deutschen Bibliothekartage 1975 bis 1980 - Persönliches Jorgo Chatzimarkakis: Sprache und Information in Europa Alfred Gulden: 7 Briefe und eine Anmerkung Günter Scholdt: Der Weg nach Europa im Spiegel von Mundartgedichten Alfred Guldens Wolfgang Müller: Prof. Dr. Harald H. Zimmermann - Seit 45 Jahren der Universität des Saarlandes verbunden Heinz-Dirk Luckhardt: Computerlinguistik und Informationswissenschaft: Facetten des wissenschaftlichen Wirkens von Harald H. Zimmermann Schriftenverzeichnis Harald H. Zimmermanns 1967-2005 - Projekte in Verantwortung von Harald H. Zimmermann - Adressen der Beiträgerinnen und Beiträger
    Footnote
    In Information und kulturelles Gedächtnis (pp. 7-15), the communication scientist Winfried Lenders (Bonn) argues that information should not be identified with what is today called (cultural) memory. Information is a process rather than a manifest substrate; it does, however, presuppose such a substrate, namely the knowledge stored in (cultural) memory. Yet not every act of informing adds to cultural memory; the necessary selection criterion lies neither in the mere possibility of storing content nor exclusively in formalized weeding mechanisms such as deaccessioning, citation indexes and relevance rankings, but in societal communication as such. Nor is cultural memory bound to the availability of the written word, since oral cultures also preserve what is socially important. In Anmerkungen zur Grundlegung der Informationsethik (pp. 17-27), Rainer Hammwöhner (Regensburg) first addresses the "oversupply" of the information sector with special ethics, discussing, alongside information ethics (seen as the broader field), competing domain ethics such as media ethics, computer ethics and net ethics/cyberethics, and considering their overlaps, demarcation, hierarchization etc. Attempts to ground information ethics in discourse ethics or in an ethics of norms are, according to Hammwöhner, doomed to fail, so he adopts a pragmatist standpoint according to which information ethics simply has to provide "the analysis and systematization of the normative patterns of action established in the context of digital communication". In this connection, questions such as that of the good are raised, as are aspects such as preserving the cultural heritage for later generations and maintaining cultural diversity.
The contribution by the recently deceased founding father of German information science, Gernot Wersig (Berlin), is entitled Vereinheitlichte Medientheorie und ihre Sicht auf das Internet (pp. 35-46). In it, the author gives a brief overview of previous approaches in media theory and then attempts, starting from the works of Niklas Luhmann and Herbert Stachowiak, to develop a "unified media theory". The factors discussed are communication, media, media platforms and typologies, media evolution and, finally, the digital revolution. The Internet, Wersig concludes, is a media platform with the potential to shape an entire epoch. Following the well-known term "Gutenberg galaxy", he also speaks here of an "Internet galaxy". Although this article contains many interesting ideas, it is unfortunately hard for the reader to penetrate, since much is taken for granted and the chosen sociologists' jargon is not to everyone's taste.
  11. Knowledge: creation, organization and use : Proceedings of the 62nd Annual Meeting of the American Society for Information Science, Washington, DC, 31.10.-4.11.1999. Ed.: Larry Woods (1999) 0.00
    0.0035277684 = product of:
      0.017638842 = sum of:
        0.017638842 = weight(_text_:22 in 6721) [ClassicSimilarity], result of:
          0.017638842 = score(doc=6721,freq=2.0), product of:
            0.18236019 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052075688 = queryNorm
            0.09672529 = fieldWeight in 6721, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.01953125 = fieldNorm(doc=6721)
      0.2 = coord(1/5)
    
    Date
    22. 6.2005 9:44:50
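The hits matching the query term "22" (items 11 and 12) show a different queryWeight (0.18236019) over the same queryNorm as the "7" hits. A small sketch (assuming the ClassicSimilarity idf formula as Lucene documents it) shows that the difference comes entirely from idf, i.e. from the two terms' document frequencies:

```python
import math

# queryWeight = idf * queryNorm; queryNorm (0.052075688) is shared across
# the whole query, so the difference between the "7" and "22" clauses is pure idf.

def idf(doc_freq: int, max_docs: int) -> float:
    # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

query_norm = 0.052075688

qw_7 = idf(4376, 44218) * query_norm    # ≈ 0.17251469
qw_22 = idf(3622, 44218) * query_norm   # ≈ 0.18236019
# "22" occurs in fewer documents (3622 < 4376), so it is the rarer,
# higher-weighted term.
```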
  12. ¬Die Zukunft des Wissens : Vorträge und Kolloquien: XVIII. Deutscher Kongress für Philosophie, Konstanz, 4. - 8. Oktober 1999 (2000) 0.00
    0.0035277684 = product of:
      0.017638842 = sum of:
        0.017638842 = weight(_text_:22 in 733) [ClassicSimilarity], result of:
          0.017638842 = score(doc=733,freq=2.0), product of:
            0.18236019 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052075688 = queryNorm
            0.09672529 = fieldWeight in 733, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.01953125 = fieldNorm(doc=733)
      0.2 = coord(1/5)
    
    Date
    22. 6.2005 15:30:21
  13. ¬Die Bibliothek zwischen Autor und Leser : 92. Deutscher Bibliothekartag in Augsburg 2002 (2003) 0.00
    0.0031571276 = product of:
      0.015785638 = sum of:
        0.015785638 = weight(_text_:7 in 2040) [ClassicSimilarity], result of:
          0.015785638 = score(doc=2040,freq=2.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.09150316 = fieldWeight in 2040, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2040)
      0.2 = coord(1/5)
    
    Isbn
    3-465-03252-7
  14. Software for Indexing (2003) 0.00
    0.0031571276 = product of:
      0.015785638 = sum of:
        0.015785638 = weight(_text_:7 in 2294) [ClassicSimilarity], result of:
          0.015785638 = score(doc=2294,freq=2.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.09150316 = fieldWeight in 2294, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2294)
      0.2 = coord(1/5)
    
    Footnote
    Rez. in: Knowledge organization 30(2003) no.2, S.115-116 (C. Jacobs): "This collection of articles by indexing practitioners, software designers and vendors is divided into five sections: Dedicated Software, Embedded Software, Online and Web Indexing Software, Database and Image Software, and Voice-activated, Automatic, and Machine-aided Software. This diversity is its strength. Part 1 is introduced by two chapters on choosing dedicated software, highlighting the issues involved and providing tips on evaluating requirements. The second chapter includes a fourteen-page chart that analyzes the attributes of Authex Plus, three versions of CINDEX 1.5, MACREX 7, two versions of SKY Index (5.1 and 6.0) and wINDEX. The lasting value of this chart lies in making the prospective user aware of the various attributes and capabilities that are possible and that should be considered. The following chapters consist of 16 testimonials for these software packages, completed by a final chapter on specialized/customized software. The point is made that if a particular software function could increase your efficiency, it can probably be created. The chapters in Part 2, Embedded Software, go into a great deal more detail about how the programs work, and are less reviews than illustrations of functionality. Perhaps this is because they are not really stand-alones, but functions within, or add-ons used with, larger word processing or publishing programs. The software considered are Microsoft Word, FrameMaker, PageMaker, IndexTension 3.1.5 (used with QuarkXPress), and Index Tools Professional and IXgen (used with FrameMaker). The advantages and disadvantages of embedded indexing are made very clear, but the actual illustrations are difficult to follow if one has not worked with embedded software at all. Nonetheless, the section is valuable, as it highlights issues and provides pointers and solutions to embedded indexing problems.
  15. Context: nature, impact, and role : 5th International Conference on Conceptions of Library and Information Science, CoLIS 2005, Glasgow 2005; Proceedings (2005) 0.00
    0.0031571276 = product of:
      0.015785638 = sum of:
        0.015785638 = weight(_text_:7 in 42) [ClassicSimilarity], result of:
          0.015785638 = score(doc=42,freq=2.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.09150316 = fieldWeight in 42, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.01953125 = fieldNorm(doc=42)
      0.2 = coord(1/5)
    
    Footnote
    The most interesting and important contribution, to this reviewer, is the programmatic article by Peter Ingwersen and Kalervo Järvelin (Copenhagen/Tampere), The sense of information: Understanding the cognitive conditional information concept in relation to information acquisition (pp. 7-19). Here the authors attempt, by means of an extended model, to widen the concept of "conditional cognitive information" - originally proposed by Ingwersen1 and at that time used exclusively in connection with interactive information retrieval - not only to the whole field of "information seeking and retrieval" (IS&R), but also to human information acquisition through sensory perception, for instance in everyday life or in the course of scientific inquiry. Alternative concepts of information and the relationship between information and meaning are discussed as well. The contribution by Birger Larsen (Copenhagen) takes up another approach going back to Ingwersen, addressing the Principle of Polyrepresentation he published2 more than ten years ago. The principle rests on the hypothesis that the overlap between different cognitive representations - namely those of the information seeker's situation and those of the documents - can be exploited to reduce the uncertainty inherent in a retrieval situation and thus to improve the performance of the IR system. It places the documents, their authors and indexers, and also the IT solution that makes them accessible, into a comprehensive and coherent theoretical framework that seeks to integrate the user-oriented "information seeking" line of research with system-oriented IR research. 
On the basis of theoretical considerations and the (few) empirical studies available on the subject, however, Larsen regards the model - which Ingwersen intended for both "exact match" and "best match" IR - as already "Boolean" (i.e. exact-match oriented) in its fundamentals, and proposes a "polyrepresentation continuum" as a possible improvement.
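The overlap hypothesis behind the Principle of Polyrepresentation can be sketched in a few lines. This is an illustration of the idea only: the field names and the simple counting score below are hypothetical stand-ins, not Ingwersen's or Larsen's actual formalization.

```python
# Illustrative sketch: score a document by how many of its independent
# representations (hypothetical fields) overlap the searcher's query terms.
# More overlapping representations = less uncertainty about relevance.

def polyrep_score(query_terms, representations):
    """Count the distinct representations sharing at least one term with the query."""
    q = {t.lower() for t in query_terms}
    return sum(1 for terms in representations.values()
               if q & {t.lower() for t in terms})

doc = {
    "title":       ["information", "seeking", "context"],
    "descriptors": ["information", "retrieval", "cognitive", "models"],
    "citations":   ["ingwersen", "jarvelin"],
}

# The query overlaps the title and descriptor representations, not the citations.
score = polyrep_score(["information", "context"], doc)
```

Ranking candidates by such an overlap count (rather than by a single-field match) is the intuition the principle builds on; the real model reasons about cognitively different actors (authors, indexers, system designers) rather than mere field names.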
  16. TREC: experiment and evaluation in information retrieval (2005) 0.00
    0.0031571276 = product of:
      0.015785638 = sum of:
        0.015785638 = weight(_text_:7 in 636) [ClassicSimilarity], result of:
          0.015785638 = score(doc=636,freq=2.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.09150316 = fieldWeight in 636, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.01953125 = fieldNorm(doc=636)
      0.2 = coord(1/5)
    
    Content
    Contains the contributions:
    1. The Text REtrieval Conference - Ellen M. Voorhees and Donna K. Harman
    2. The TREC Test Collections - Donna K. Harman
    3. Retrieval System Evaluation - Chris Buckley and Ellen M. Voorhees
    4. The TREC Ad Hoc Experiments - Donna K. Harman
    5. Routing and Filtering - Stephen Robertson and Jamie Callan
    6. The TREC Interactive Tracks: Putting the User into Search - Susan T. Dumais and Nicholas J. Belkin
    7. Beyond English - Donna K. Harman
    8. Retrieving Noisy Text - Ellen M. Voorhees and John S. Garofolo
    9. The Very Large Collection and Web Tracks - David Hawking and Nick Craswell
    10. Question Answering in TREC - Ellen M. Voorhees
    11. The University of Massachusetts and a Dozen TRECs - James Allan, W. Bruce Croft and Jamie Callan
    12. How Okapi Came to TREC - Stephen Robertson
    13. The SMART Project at TREC - Chris Buckley
    14. Ten Years of Ad Hoc Retrieval at TREC Using PIRCS - Kui-Lam Kwok
    15. MultiText Experiments for TREC - Gordon V. Cormack, Charles L. A. Clarke, Christopher R. Palmer and Thomas R. Lynam
    16. A Language-Modeling Approach to TREC - Djoerd Hiemstra and Wessel Kraaij
    17. IBM Research Activities at TREC - Eric W. Brown, David Carmel, Martin Franz, Abraham Ittycheriah, Tapas Kanungo, Yoelle Maarek, J. Scott McCarley, Robert L. Mack, John M. Prager, John R. Smith, Aya Soffer, Jason Y. Zien and Alan D. Marwick
    Epilogue: Metareflections on TREC - Karen Sparck Jones
  17. Web 2.0 in der Unternehmenspraxis : Grundlagen, Fallstudien und Trends zum Einsatz von Social-Software (2009) 0.00
    0.0031571276 = product of:
      0.015785638 = sum of:
        0.015785638 = weight(_text_:7 in 2917) [ClassicSimilarity], result of:
          0.015785638 = score(doc=2917,freq=2.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.09150316 = fieldWeight in 2917, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2917)
      0.2 = coord(1/5)
    
    Isbn
    978-3-486-58579-7
  18. Wissensgesellschaft : Neue Medien und ihre Konsequenzen (2004) 0.00
    0.0031571276 = product of:
      0.015785638 = sum of:
        0.015785638 = weight(_text_:7 in 3988) [ClassicSimilarity], result of:
          0.015785638 = score(doc=3988,freq=2.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.09150316 = fieldWeight in 3988, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3988)
      0.2 = coord(1/5)
    
    Isbn
    3-89331-552-7
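Every hit in this list carries Lucene's ClassicSimilarity "explain" breakdown (tf, idf, queryNorm, fieldNorm, coord). As a sketch, the arithmetic behind those numbers can be reproduced in a few lines; the function name is ours, and Lucene's 32-bit floats make the trailing digits differ very slightly from a 64-bit recomputation.

```python
import math

def classic_similarity(term_freq, doc_freq, max_docs, query_norm, field_norm, coord):
    """Recompute a single-term Lucene ClassicSimilarity (TF-IDF) score
    as shown in the explain blocks above."""
    tf = math.sqrt(term_freq)                        # 1.4142135 for termFreq=2
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # ~3.3127685 for docFreq=4376
    query_weight = idf * query_norm                  # ~0.17251469
    field_weight = tf * idf * field_norm             # ~0.09150316
    return coord * query_weight * field_weight       # 0.2 = coord(1/5)

# Figures shared by entries 16-18 above (term "7", fieldNorm 0.01953125):
score = classic_similarity(2, 4376, 44218, 0.052075688, 0.01953125, 0.2)
```

With these inputs the sketch reproduces the displayed score 0.0031571276 to within float-rounding error, and swapping in the figures for the term "22" (docFreq 3622, fieldNorm 0.015625, as in entries 19 and 20) likewise reproduces 0.0028222147.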
  19. National Seminar on Classification in the Digital Environment : Papers contributed to the National Seminar on Classification in the Digital Environment, Bangalore, 9-11 August 2001 (2001) 0.00
    0.0028222147 = product of:
      0.014111074 = sum of:
        0.014111074 = weight(_text_:22 in 2047) [ClassicSimilarity], result of:
          0.014111074 = score(doc=2047,freq=2.0), product of:
            0.18236019 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052075688 = queryNorm
            0.07738023 = fieldWeight in 2047, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.015625 = fieldNorm(doc=2047)
      0.2 = coord(1/5)
    
    Date
    2. 1.2004 10:35:22
  20. Subject retrieval in a networked environment : Proceedings of the IFLA Satellite Meeting held in Dublin, OH, 14-16 August 2001 and sponsored by the IFLA Classification and Indexing Section, the IFLA Information Technology Section and OCLC (2003) 0.00
    0.0028222147 = product of:
      0.014111074 = sum of:
        0.014111074 = weight(_text_:22 in 3964) [ClassicSimilarity], result of:
          0.014111074 = score(doc=3964,freq=2.0), product of:
            0.18236019 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052075688 = queryNorm
            0.07738023 = fieldWeight in 3964, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.015625 = fieldNorm(doc=3964)
      0.2 = coord(1/5)
    
    Footnote
    Rez. in: KO 31(2004) no.2, S.117-118 (D. Campbell): "This excellent volume offers 22 papers delivered at an IFLA Satellite meeting in Dublin, Ohio in 2001. The conference gathered together information and computer scientists to discuss an important and difficult question: in what specific ways can the accumulated skills, theories and traditions of librarianship be mobilized to face the challenges of providing subject access to information in present and future networked information environments? The papers which grapple with this question are organized in a surprisingly deft and coherent way. Many conferences and proceedings have unhappy sessions that contain a hodge-podge of papers that didn't quite fit any other categories. As befits a good classificationist, editor I.C. McIlwaine has kept this problem to a minimum. The papers are organized into eight sessions, which split into two broad categories. The first five sessions deal with subject domains, and the last three deal with subject access tools. The five sessions and thirteen papers that discuss access in different domains appear in order of increasing intension. The first papers deal with access in multilingual environments, followed by papers on access across multiple vocabularies and across sectors, ending up with studies of domain-specific retrieval (primarily education). Some of the papers offer predictably strong work by scholars engaged in ongoing, long-term research. Gerard Riesthuis offers a clear analysis of the complexities of negotiating non-identical thesauri, particularly in cases where hierarchical structure varies across different languages. Hope Olson and Dennis Ward use Olson's familiar and welcome method of using provocative and unconventional theory to generate meliorative approaches to bias in general subject access schemes. 
Many papers, on the other hand, deal with specific ongoing projects: Renardus, the High Level Thesaurus Project, the Colorado Digitization Project and the Iter Bibliography for medieval and Renaissance material. Most of these papers display a similar structure: an explanation of the theory and purpose of the project, an account of problems encountered in the implementation, and a discussion of the results, both promising and disappointing, thus far. Of these papers, the account of the Multilanguage Access to Subjects Project in Europe (MACS) deserves special mention. In describing how the project is founded on the principle of the equality of languages, with each subject heading language maintained in its own database and with no single language used as a pivot for the others, Elisabeth Freyre and Max Naudi offer a particularly vivid example of the way the ethics of librarianship translate into pragmatic contexts and concrete procedures. The three sessions and nine papers devoted to subject access tools split into two kinds: papers that discuss the use of theory and research to generate new tools for a networked environment, and those that discuss the transformation of traditional subject access tools in this environment. In the new-tool development area, Mary Burke provides a promising example of the bidirectional approach that is so often necessary: in her case study of user-driven classification of photographs, she uses personal construct theory to clarify the practice of classification, while at the same time using practice to test the theory. Carol Bean and Rebecca Green offer an intriguing combination of librarianship and computer science, importing frame representation techniques from artificial intelligence to standardize syntagmatic relationships to enhance recall and precision.

Languages

  • e 235
  • d 114
  • m 17
  • i 2
  • f 1
  • nl 1

Types

  • m 174
  • el 5
  • i 1
  • r 1
