Search (235 results, page 12 of 12)

  • language_ss:"e"
  • type_ss:"s"
  1. Benchmarks in distance education : the LIS experience (2003) 0.00
    0.0037885536 = product of:
      0.018942768 = sum of:
        0.018942768 = weight(_text_:7 in 4605) [ClassicSimilarity], result of:
          0.018942768 = score(doc=4605,freq=2.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.109803796 = fieldWeight in 4605, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.0234375 = fieldNorm(doc=4605)
      0.2 = coord(1/5)
    
    Isbn
    1-56308-722-7
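    The indented breakdown under each hit is Lucene's "explain" output for ClassicSimilarity (TF-IDF) scoring. As a reading aid only - a minimal sketch, not part of the record, assuming the documented defaults tf = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq + 1)), with queryNorm, fieldNorm and coord taken from the output above - the factors for entry 1 recombine as follows:

      import math

      # Assumed ClassicSimilarity defaults: tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)).
      freq, doc_freq, max_docs = 2.0, 4376, 44218
      query_norm, field_norm = 0.052075688, 0.0234375   # copied from the explain output

      tf = math.sqrt(freq)                               # 1.4142135
      idf = 1 + math.log(max_docs / (doc_freq + 1))      # 3.3127685
      query_weight = idf * query_norm                    # 0.17251469
      field_weight = tf * idf * field_norm               # 0.109803796
      score = query_weight * field_weight * 0.2          # coord(1/5) -> 0.0037885536
      print(score)

    Only one of the five query clauses matched, hence the coord(1/5) factor; it is this small product, rounded to two decimals, that shows as the 0.00 beside every title on this page.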
  2. Towards the Semantic Web : ontology-driven knowledge management (2004) 0.00
    0.0037885536 = product of:
      0.018942768 = sum of:
        0.018942768 = weight(_text_:7 in 4401) [ClassicSimilarity], result of:
          0.018942768 = score(doc=4401,freq=2.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.109803796 = fieldWeight in 4401, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.0234375 = fieldNorm(doc=4401)
      0.2 = coord(1/5)
    
    Isbn
    0-470-84867-7
  3. Understanding FRBR : what it is and how it will affect our retrieval tools (2007) 0.00
    0.0037885536 = product of:
      0.018942768 = sum of:
        0.018942768 = weight(_text_:7 in 1675) [ClassicSimilarity], result of:
          0.018942768 = score(doc=1675,freq=2.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.109803796 = fieldWeight in 1675, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1675)
      0.2 = coord(1/5)
    
    Content
    1. An Introduction to Functional Requirements for Bibliographic Records (FRBR) - Arlene G. Taylor (1-20) 2. An Introduction to Functional Requirements for Authority Data (FRAD) - Glenn E. Patton (21-28) 3. Understanding the Relationship between FRBR and FRAD - Glenn E. Patton (29-34) 4. FRBR and the History of Cataloging - William Denton (35-58) 5. The Impact of Research on the Development of FRBR - Edward T. O'Neill (59-72) 6. Bibliographic Families and Superworks - Richard P. Smiraglia (73-86) 7. FRBR and RDA (Resource Description and Access) - Barbara B. Tillett (87-96) 8. FRBR and Archival Materials - Alexander C. Thurman (97-102) 9. FRBR and Works of Art, Architecture, and Material Culture - Murtha Baca and Sherman Clarke (103-110) 10. FRBR and Cartographic Materials - Mary Lynette Larsgaard (111-116) 11. FRBR and Moving Image Materials - Martha M. Yee (117-130) 12. FRBR and Music - Sherry L. Vellucci (131-152) 13. FRBR and Serials - Steven C. Shadle (153-174)
  4. Theorie, Semantik und Organisation von Wissen : Proceedings der 13. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und dem 13. Internationalen Symposium der Informationswissenschaft der Higher Education Association for Information Science (HI) Potsdam (19.-20.03.2013): 'Theory, Information and Organization of Knowledge' / Proceedings der 14. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und Natural Language & Information Systems (NLDB) Passau (16.06.2015): 'Lexical Resources for Knowledge Organization' / Proceedings des Workshops der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) auf der SEMANTICS Leipzig (1.09.2014): 'Knowledge Organization and Semantic Web' / Proceedings des Workshops der Polnischen und Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) Cottbus (29.-30.09.2011): 'Economics of Knowledge Production and Organization' (2017) 0.00
    0.0037885536 = product of:
      0.018942768 = sum of:
        0.018942768 = weight(_text_:7 in 3471) [ClassicSimilarity], result of:
          0.018942768 = score(doc=3471,freq=2.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.109803796 = fieldWeight in 3471, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3471)
      0.2 = coord(1/5)
    
    Content
    7. Wissenstransfer / Knowledge Transfer: I. Kijeńska-Dąbrowska, K. Lipiec: Knowledge Brokers as Modern Facilitators of Research Commercialization - M. Ostaszewski: Open academic community in Poland: social aspects of new scholarly communication as observed during the transformation period - M. Świgoń, K. Weber: Knowledge and Information Management by Individuals - A Report on Empirical Studies Among German Students. 8. Wissenschaftsgemeinschaften / Science Communities: D. Tunger: Bibliometrie: Quo vadis? - T. Möller: Woher stammt das Wissen über die Halbwertzeiten des Wissens? - M. Riechert, J. Schmitz: Qualitätssicherung von Forschungsinformationen durch visuelle Repräsentation - Das Fallbeispiel des "Informationssystems Promotionsnoten" - E. Ortoll Espinet, M. Garcia Alsina: Networks of scientific collaboration in competitive intelligence studies
  5. Exploring artificial intelligence in the new millennium (2003) 0.00
    0.0035718828 = product of:
      0.017859414 = sum of:
        0.017859414 = weight(_text_:7 in 2099) [ClassicSimilarity], result of:
          0.017859414 = score(doc=2099,freq=4.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.103524014 = fieldWeight in 2099, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.015625 = fieldNorm(doc=2099)
      0.2 = coord(1/5)
    
    Footnote
    In Chapter 7, Jeff Rickel and W. Lewis Johnson have created a virtual environment, with virtual humans, for team training. The system is designed to allow a digital character to replace team members that may not be present. The system is also designed to allow students to acquire skills to occupy a designated role and help coordinate their activities with their teammates. The paper presents a complex concept in a very manageable fashion. In Chapter 8, Jonathan Yedidia et al. study the initial issues that make up reasoning under uncertainty. This type of reasoning, in which the system takes in facts about a patient's condition and makes predictions about the patient's future condition, is a key issue being looked at by many medical expert system developers. Their research is based on a new form of belief propagation, derived by generalizing existing probabilistic inference methods that are widely used in AI and numerous other areas such as statistical physics. The ninth chapter, by David McAllester and Robert E. Schapire, looks at the basic problem of learning a language model. This is something that would not be challenging for most people, but can be quite arduous for a machine. The research focuses on a new technique called the leave-one-out estimator that was used to investigate why statistical language models have had such success in this area of research. In Chapter 10, Peter Baumgartner looks at extending simplified theorem proving techniques, which have been applied very effectively in propositional logic, to the first-order case. The author demonstrates how his new technique surpasses existing techniques in this area of AI research. The chapter simplifies a complex subject area, so that almost any reader with a basic background in AI could understand the theorem proving. In Chapter 11, David Cohen et al. analyze complexity issues in constraint satisfaction, which is a common problem-solving paradigm. The authors lay out how tractable classes of constraint solvers create new classes that are tractable and more expressive than previous classes. This is not a chapter for an inexperienced student or researcher in AI. In Chapter 12, Jaana Kekäläinen and Kalervo Järvelin examine the question of finding the most important documents for any given query in text-based retrieval. The authors put forth two new measures of relevance and attempt to show how expanding user queries based on facets about the domain benefits retrieval. This is a great interdisciplinary chapter for readers who do not have a strong AI background but would like to gain some insights into practical AI research. In Chapter 13, Tony Fountain et al. used machine learning techniques to help lower the cost of functional tests for ICs (integrated circuits) during the manufacturing process. The researchers used a probabilistic model of failure patterns extracted from existing data, which allowed the generation of a decision-theoretic policy that is used to guide and optimize the testing of ICs. This is another great interdisciplinary chapter for a reader interested in an actual physical example of an AI system, but this chapter would require some AI knowledge.
    Isbn
    1-55860-811-7
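    Entry 5 matches the term "7" twice (freq=4.0) yet ranks just below the freq=2.0 entries above: under the same assumed formulas, doubling the raw frequency only raises tf from sqrt(2) to sqrt(4), while the longer record's smaller fieldNorm (0.015625 vs. 0.0234375) pulls the weight back down. A quick comparison, again only a sketch based on values from the explain output:

      import math

      idf, query_norm = 3.3127685, 0.052075688                       # from the explain output
      for freq, field_norm in [(2.0, 0.0234375), (4.0, 0.015625)]:   # entry 1 vs. entry 5
          weight = (idf * query_norm) * (math.sqrt(freq) * idf * field_norm)
          print(weight * 0.2)                                        # 0.0037885536 vs. 0.0035718828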
  6. XML in libraries (2002) 0.00
    0.0035718828 = product of:
      0.017859414 = sum of:
        0.017859414 = weight(_text_:7 in 3100) [ClassicSimilarity], result of:
          0.017859414 = score(doc=3100,freq=4.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.103524014 = fieldWeight in 3100, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.015625 = fieldNorm(doc=3100)
      0.2 = coord(1/5)
    
    Footnote
    Rez. in: JASIST 55(2004) no.14, S.1304-1305 (Z. Holbrooks): "The eXtensible Markup Language (XML) and its family of enabling technologies (XPath, XPointer, XLink, XSLT, et al.) were the new "new thing" only a couple of years ago. Happily, XML is now a W3C standard, and its enabling technologies are rapidly proliferating and maturing. Together, they are changing the way data is handled on the Web and how legacy data is accessed and leveraged in corporate archives, and they offer the Semantic Web community a powerful toolset. Library and information professionals need a basic understanding of what XML is, and what its impacts will be on the library community as content vendors and publishers convert to the new standards. Norman Desmarais aims to provide librarians with an overview of XML and some potential library applications. The ABCs of XML contains the useful basic information that most general XML works cover. It is addressed to librarians, as evidenced by the occasional reference to periodical vendors, MARC, and OPACs. However, librarians without SGML, HTML, database, or programming experience may find the work daunting. The snippets of code - most incomplete and unattended by screenshots to illustrate the result of the code's execution - obscure more often than they enlighten. A single code sample (p. 91, a book purchase order) is immediately recognizable and sensible. There are no figures, illustrations, or screenshots. Subsection headings are used conservatively. Readers are confronted with page after page of unbroken technical text, and occasionally oddly formatted text (in some of the code samples). The author concentrates on commercial products and projects. Library and agency initiatives - for example, the National Institutes of Health HL-7 and the U.S. Department of Education's GEM project - are notable for their absence. The Library of Congress USMARC to SGML effort is discussed in chapter 1, which covers the relationship of XML to its parent SGML, the XML processor, and document type definitions, using MARC as its illustrative example. Chapter 3 addresses the stylesheet options for XML, including DSSSL, CSS, and XSL. The Document Style Semantics and Specification Language (DSSSL) was created for use with SGML, and pruned into DSSSL-Lite and further (DSSSL-online). Cascading Style Sheets (CSS) were created for use with HTML. Extensible Style Language (XSL) is a further revision (and extension) of DSSSL-o specifically for use with XML. Discussion of aural stylesheets and the Synchronized Multimedia Integration Language (SMIL) rounds out the chapter.
    Tennant's collection covers a variety of well- and lesser-known XML-based pilot and prototype projects undertaken by libraries around the world. Some of the projects included are: Stanford's XMLMARC conversion, Oregon State's use of XML in interlibrary loaning, e-books (California Digital Library) and electronic scholarly publishing (University of Michigan), the Washington Research Library Consortium's XML-based Web Services, and using TEI Lite to support indexing (Halton Hills Public Library). Of the 13 projects presented, nine are sited in academe, three are state library endeavors, and one is an American public library initiative. The projects are gathered into sections grouped by seven library applications: the use of XML in library catalog records, interlibrary loan, cataloging and indexing, collection building, databases, data migration, and systems interoperability. Each project is introduced with a few paragraphs of background information. The project reports - averaging about 13 pages each - include project goals and justification, project description, challenges and lessons learned (successes and failures), future plans, implications of the work, contact information for individual(s) responsible for the project, and relevant Web links and resources. The clear strengths of this collection are in the details and the consistency of presentation. The concise project write-ups flow well and encourage interested readers to follow up via personal contacts and URLs. The sole weakness is the price. XML in Libraries will excite and inspire institutions and organizations with technically adept staff resources and visionary leaders. Erik Ray has written a how-to book. Unlike most, Learning XML is not aimed at the professional programming community. The intended audience is readers familiar with structured markup (HTML, TeX, etc.) and Web concepts (hypertext links, data representation). In the first six chapters, Ray introduces XML's main concepts and tools for writing, viewing, testing, and transforming XML (chapter 1), describes basic syntax (chapter 2), discusses linking with XLink and XPointer (chapter 3), introduces Cascading Style Sheets for use with XML (chapter 4), explains document type definitions (DTDs) and schemas (chapter 5), and covers XSLT stylesheets and XPath (chapter 6). Chapter 7 introduces Unicode, internationalization and language support, including CSS and XSLT encoding. Chapter 8 is an overview of writing software for processing XML, and includes the Perl code for an XML syntax checker. This work is written very accessibly for nonprogrammers. Writers, designers, and students just starting to acquire Web technology skills will find Ray's style approachable. Concepts are introduced in a logical flow, and explained clearly. Code samples (130+), illustrations and screen shots (50+), and numerous tables are distributed throughout the text. Ray uses a modified DocBook DTD and a checkbook example throughout, introducing concepts in early chapters and adding new concepts to them. Readers become familiar with the code and its evolution through repeated exposure. The code for converting the "barebones DocBook" DTD (10 pages of code) to HTML via an XSLT stylesheet occupies 19 pages. Both code examples allow the learner to see an accumulation of snippets incorporated into a sensible whole. While experienced programmers might not need this type of support, nonprogrammers certainly do.
Using the checkbook example is an inspired choice: most of us are familiar with personal checking, even if few of us would build an XML application for it. Learning XML is an excellent textbook. I've used it for several years as a recommended text for adult continuing education courses and workshops."
  7. Knowledge: creation, organization and use : Proceedings of the 62nd Annual Meeting of the American Society for Information Science, Washington, DC, 31.10.-4.11.1999. Ed.: Larry Woods (1999) 0.00
    0.0035277684 = product of:
      0.017638842 = sum of:
        0.017638842 = weight(_text_:22 in 6721) [ClassicSimilarity], result of:
          0.017638842 = score(doc=6721,freq=2.0), product of:
            0.18236019 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052075688 = queryNorm
            0.09672529 = fieldWeight in 6721, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.01953125 = fieldNorm(doc=6721)
      0.2 = coord(1/5)
    
    Date
    22. 6.2005 9:44:50
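    Entry 7 (and entries 11-13 below) match on the term "22" rather than "7" - here presumably picked up from the record's date - which is why its queryWeight (0.18236019) differs from the 0.17251469 seen above. The gap traces back entirely to document frequency; a quick check under the same assumed idf formula, again only a sketch:

      import math

      max_docs, query_norm = 44218, 0.052075688
      for term, doc_freq in [("7", 4376), ("22", 3622)]:
          idf = 1 + math.log(max_docs / (doc_freq + 1))
          print(term, round(idf, 7), round(idf * query_norm, 8))   # idf and queryWeight per term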
  8. Software for Indexing (2003) 0.00
    0.0031571276 = product of:
      0.015785638 = sum of:
        0.015785638 = weight(_text_:7 in 2294) [ClassicSimilarity], result of:
          0.015785638 = score(doc=2294,freq=2.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.09150316 = fieldWeight in 2294, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2294)
      0.2 = coord(1/5)
    
    Footnote
    Rez. in: Knowledge organization 30(2003) no.2, S.115-116 (C. Jacobs): "This collection of articles by indexing practitioners, software designers and vendors is divided into five sections: Dedicated Software, Embedded Software, Online and Web Indexing Software, Database and Image Software, and Voice-activated, Automatic, and Machine-aided Software. This diversity is its strength. Part 1 is introduced by two chapters on choosing dedicated software, highlighting the issues involved and providing tips on evaluating requirements. The second chapter includes a fourteen-page chart that analyzes the attributes of Authex Plus, three versions of CINDEX 1.5, MACREX 7, two versions of SKY Index (5.1 and 6.0) and wINDEX. The lasting value in this chart is its utility in making the prospective user aware of the various attributes/capabilities that are possible and that should be considered. The following chapters consist of 16 testimonials for these software packages, completed by a final chapter on specialized/customized software. The point is made that if a particular software function could increase your efficiency, it can probably be created. The chapters in Part 2, Embedded Software, go into a great deal more detail about how the programs work, and are less reviews than illustrations of functionality. Perhaps this is because they are not really stand-alones, but are functions within, or add-ons used with, larger word processing or publishing programs. The software packages considered are Microsoft Word, FrameMaker, PageMaker, IndexTension 3.1.5 (used with QuarkXPress), and Index Tools Professional and IXgen (used with FrameMaker). The advantages and disadvantages of embedded indexing are made very clear, but the actual illustrations are difficult to follow if one has not worked at all with embedded software. Nonetheless, the section is valuable as it highlights issues and provides pointers to solutions for embedded indexing problems.
  9. Context: nature, impact, and role : 5th International Conference on Conceptions of Library and Information Science, CoLIS 2005, Glasgow 2005; Proceedings (2005) 0.00
    0.0031571276 = product of:
      0.015785638 = sum of:
        0.015785638 = weight(_text_:7 in 42) [ClassicSimilarity], result of:
          0.015785638 = score(doc=42,freq=2.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.09150316 = fieldWeight in 42, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.01953125 = fieldNorm(doc=42)
      0.2 = coord(1/5)
    
    Footnote
    The most interesting and important contribution, in my view, is the programmatic article by Peter Ingwersen and Kalervo Järvelin (Copenhagen/Tampere), The sense of information: Understanding the cognitive conditional information concept in relation to information acquisition (pp. 7-19). Here the authors attempt, by means of an extended model, to broaden the concept of "conditional cognitive information" - originally proposed by Ingwersen and at that time used exclusively in the context of interactive information retrieval - not only to the entire field of information seeking and retrieval (IS&R), but also to human information acquisition through sense perception, for instance in everyday life or in the course of scholarly inquiry. Alternative concepts of information and the relationship between information and meaning are discussed as well. The contribution by Birger Larsen (Copenhagen) takes up another approach that goes back to Ingwersen, namely his Principle of Polyrepresentation, published more than ten years earlier. It rests on the hypothesis that the overlap between different cognitive representations - namely those of the information seeker's situation and those of the documents - can be exploited to reduce the uncertainty inherent in a retrieval situation and thus to improve the performance of the IR system. The principle places the documents, their authors and indexers, and also the IT solution that makes them accessible, into a comprehensive and coherent theoretical frame of reference that seeks to integrate the user-oriented "information seeking" research tradition with system-oriented IR research. On the basis of theoretical considerations and the (few) empirical studies available, however, Larsen regards the model, which Ingwersen intended for both exact-match and best-match IR, as Boolean (i.e. exact-match oriented) in its very foundations, and proposes a "polyrepresentation continuum" as a possible improvement.
  10. TREC: experiment and evaluation in information retrieval (2005) 0.00
    0.0031571276 = product of:
      0.015785638 = sum of:
        0.015785638 = weight(_text_:7 in 636) [ClassicSimilarity], result of:
          0.015785638 = score(doc=636,freq=2.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.09150316 = fieldWeight in 636, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.01953125 = fieldNorm(doc=636)
      0.2 = coord(1/5)
    
    Content
    Contains the contributions: 1. The Text REtrieval Conference - Ellen M. Voorhees and Donna K. Harman 2. The TREC Test Collections - Donna K. Harman 3. Retrieval System Evaluation - Chris Buckley and Ellen M. Voorhees 4. The TREC Ad Hoc Experiments - Donna K. Harman 5. Routing and Filtering - Stephen Robertson and Jamie Callan 6. The TREC Interactive Tracks: Putting the User into Search - Susan T. Dumais and Nicholas J. Belkin 7. Beyond English - Donna K. Harman 8. Retrieving Noisy Text - Ellen M. Voorhees and John S. Garofolo 9. The Very Large Collection and Web Tracks - David Hawking and Nick Craswell 10. Question Answering in TREC - Ellen M. Voorhees 11. The University of Massachusetts and a Dozen TRECs - James Allan, W. Bruce Croft and Jamie Callan 12. How Okapi Came to TREC - Stephen Robertson 13. The SMART Project at TREC - Chris Buckley 14. Ten Years of Ad Hoc Retrieval at TREC Using PIRCS - Kui-Lam Kwok 15. MultiText Experiments for TREC - Gordon V. Cormack, Charles L. A. Clarke, Christopher R. Palmer and Thomas R. Lynam 16. A Language-Modeling Approach to TREC - Djoerd Hiemstra and Wessel Kraaij 17. IBM Research Activities at TREC - Eric W. Brown, David Carmel, Martin Franz, Abraham Ittycheriah, Tapas Kanungo, Yoelle Maarek, J. Scott McCarley, Robert L. Mack, John M. Prager, John R. Smith, Aya Soffer, Jason Y. Zien and Alan D. Marwick Epilogue: Metareflections on TREC - Karen Sparck Jones
  11. National Seminar on Classification in the Digital Environment : Papers contributed to the National Seminar on Classification in the Digital Environment, Bangalore, 9-11 August 2001 (2001) 0.00
    0.0028222147 = product of:
      0.014111074 = sum of:
        0.014111074 = weight(_text_:22 in 2047) [ClassicSimilarity], result of:
          0.014111074 = score(doc=2047,freq=2.0), product of:
            0.18236019 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052075688 = queryNorm
            0.07738023 = fieldWeight in 2047, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.015625 = fieldNorm(doc=2047)
      0.2 = coord(1/5)
    
    Date
    2. 1.2004 10:35:22
  12. Subject retrieval in a networked environment : Proceedings of the IFLA Satellite Meeting held in Dublin, OH, 14-16 August 2001 and sponsored by the IFLA Classification and Indexing Section, the IFLA Information Technology Section and OCLC (2003) 0.00
    0.0028222147 = product of:
      0.014111074 = sum of:
        0.014111074 = weight(_text_:22 in 3964) [ClassicSimilarity], result of:
          0.014111074 = score(doc=3964,freq=2.0), product of:
            0.18236019 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052075688 = queryNorm
            0.07738023 = fieldWeight in 3964, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.015625 = fieldNorm(doc=3964)
      0.2 = coord(1/5)
    
    Footnote
    Rez. in: KO 31(2004) no.2, S.117-118 (D. Campbell): "This excellent volume offers 22 papers delivered at an IFLA Satellite meeting in Dublin, Ohio, in 2001. The conference gathered together information and computer scientists to discuss an important and difficult question: in what specific ways can the accumulated skills, theories and traditions of librarianship be mobilized to face the challenges of providing subject access to information in present and future networked information environments? The papers that grapple with this question are organized in a surprisingly deft and coherent way. Many conferences and proceedings have unhappy sessions that contain a hodge-podge of papers that didn't quite fit any other categories. As befits a good classificationist, editor I.C. McIlwaine has kept this problem to a minimum. The papers are organized into eight sessions, which split into two broad categories. The first five sessions deal with subject domains, and the last three deal with subject access tools. The five sessions and thirteen papers that discuss access in different domains appear in order of increasing intension. The first papers deal with access in multilingual environments, followed by papers on access across multiple vocabularies and across sectors, ending up with studies of domain-specific retrieval (primarily education). Some of the papers offer predictably strong work by scholars engaged in ongoing, long-term research. Gerard Riesthuis offers a clear analysis of the complexities of negotiating non-identical thesauri, particularly in cases where hierarchical structure varies across different languages. Hope Olson and Dennis Ward use Olson's familiar and welcome method of using provocative and unconventional theory to generate meliorative approaches to bias in general subject access schemes. Many papers, on the other hand, deal with specific ongoing projects: Renardus, The High Level Thesaurus Project, The Colorado Digitization Project and The Iter Bibliography for medieval and Renaissance material. Most of these papers display a similar structure: an explanation of the theory and purpose of the project, an account of problems encountered in the implementation, and a discussion of the results, both promising and disappointing, thus far. Of these papers, the account of the Multilingual Access to Subjects project in Europe (MACS) deserves special mention. In describing how the project is founded on the principle of the equality of languages, with each subject heading language maintained in its own database, and with no single language used as a pivot for the others, Elisabeth Freyre and Max Naudi offer a particularly vivid example of the way the ethics of librarianship translate into pragmatic contexts and concrete procedures. The three sessions and nine papers devoted to subject access tools split into two kinds: papers that discuss the use of theory and research to generate new tools for a networked environment, and those that discuss the transformation of traditional subject access tools in this environment. In the new tool development area, Mary Burke provides a promising example of the bidirectional approach that is so often necessary: in her case study of user-driven classification of photographs, she uses personal construct theory to clarify the practice of classification, while at the same time using practice to test the theory.
Carol Bean and Rebecca Green offer an intriguing combination of librarianship and computer science, importing the frame representation technique from artificial intelligence to standardize syntagmatic relationships and enhance recall and precision.
  13. Information visualization in data mining and knowledge discovery (2002) 0.00
    0.0028222147 = product of:
      0.014111074 = sum of:
        0.014111074 = weight(_text_:22 in 1789) [ClassicSimilarity], result of:
          0.014111074 = score(doc=1789,freq=2.0), product of:
            0.18236019 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052075688 = queryNorm
            0.07738023 = fieldWeight in 1789, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.015625 = fieldNorm(doc=1789)
      0.2 = coord(1/5)
    
    Date
    23. 3.2008 19:10:22
  14. Challenges in knowledge representation and organization for the 21st century : integration of knowledge across boundaries. Proceedings of the 7th ISKO International Conference, 10-13 July 2002, Granada, Spain (2003) 0.00
    0.0025257024 = product of:
      0.0126285115 = sum of:
        0.0126285115 = weight(_text_:7 in 2679) [ClassicSimilarity], result of:
          0.0126285115 = score(doc=2679,freq=2.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.07320253 = fieldWeight in 2679, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.015625 = fieldNorm(doc=2679)
      0.2 = coord(1/5)
    
    Content
    6. Organization of Integrated Knowledge in the Electronic Environment. The Internet: José Antonio SALVADOR OLIVÁN, José Maria ANGÓS ULLATE and Maria Jesús FERNÁNDEZ RUÍZ: Organization of the Information about Health Resources on the Internet; Eduardo PEIS, Antonio RUIZ, Francisco J. MUNOZ-FERNÁNDEZ and Francisco de ALBA QUINONES: Practical Method to Code Archive Finding Aids in Internet; Marthinus S. VAN DER WALT: An Integrated Model For The Organization Of Electronic Information/Knowledge in Small, Medium and Micro Enterprises (Smme's) in South Africa; Ricardo EITO BRUN: Software Development and Reuse as Knowledge Management Practice; Roberto POLI: Framing Information; 7. Models and Methods for Knowledge Organization and Conceptual Relationships: Terence R. SMITH, Marcia Lei ZENG, and ADEPT Knowledge Organization Team: Structured Models of Scientific Concepts for Organizing, Accessing, and Using Learning Materials; M. OUSSALAH, F. GIRET and T. KHAMMACI: A KR Multi-hierarchies/Multi-Views Model for the Development of Complex Systems; Jonathan FURNER: A Unifying Model of Document Relatedness for Hybrid Search Engines; José Manuel BARRUECO and Vicente Julián INGLADA: Reference Linking in Economics: The Citec Project; Allyson CARLYLE and Lisa M. FUSCO: Equivalence in Tillett's Bibliographic Relationships Taxonomy: a Revision; José Antonio FRÍAS and Ana Belén RÍOS HILARIO: Visibility and Invisibility of the Kinship Relationships in Bibliographic Families of the Library Catalogue; 8. Integration of Knowledge in the Internet. Representing Knowledge in Web Sites: Houssem ASSADI and Thomas BEAUVISAGE: A Comparative Study of Six French-Speaking Web Directories; Barbara H. KWASNIK: Commercial Web Sites and The Use of Classification Schemes: The Case of Amazon.Com; Jorge SERRANO COBOS and Ana Mª QUINTERO ORTA: Design, Development and Management of an Information Recovery System for an Internet Website: from Documentary Theory to Practice; José Luis HERRERA MORILLAS and Mª del Rosario FERNÁNDEZ FALERO: Information and Resources About Bibliographic Heritage on The Web Sites of the Spanish Universities; J.F. ALDANA, A.C. GÓMEZ, N. MORENO, A. J. NEBRO, M.M. ROLDÁN: Metadata Functionality for Semantic Web Integration; Uta PRISS: Alternatives to the "Semantic Web": Multi-Strategy Knowledge Representation; 9. Models and Methods for Knowledge Integration in Information Systems: Rebecca GREEN, Carol A. BEAN and Michele HUDON: Universality And Basic Level Concepts; Grant CAMPBELL: Chronotope And Classification: How Space-Time Configurations Affect the Gathering of Industrial Statistical Data; Marianne LYKKE NIELSEN and Anna GJERLUF ESLAU: Corporate Thesauri - How to Ensure Integration of Knowledge and Reflections of Diversity; Nancy WILLIAMSON: Knowledge Integration and Classification Schemes; M.V. HURTADO, L. GARCIA and J. PARETS: Semantic Views over Heterogeneous and Distributed Data Repositories: Integration of Information System Based on Ontologies; Fernando ELICHIRIGOITY and Cheryl KNOTT MALONE: Representing the Global Economy: the North American Industry Classification System;
  15. Theories of information behavior (2005) 0.00
    0.0025257024 = product of:
      0.0126285115 = sum of:
        0.0126285115 = weight(_text_:7 in 68) [ClassicSimilarity], result of:
          0.0126285115 = score(doc=68,freq=2.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.07320253 = fieldWeight in 68, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.015625 = fieldNorm(doc=68)
      0.2 = coord(1/5)
    
    Footnote
    1. historical (understanding the present out of the past) 2. constructivist (individuals construct their understanding of their worlds under the influence of their social context) 3. discourse-analytic (language constitutes the construction of identity and the formation of meaning) 4. philosophical-analytic (rigorous analysis of concepts and theses) 5. critical theory (analysis of hidden patterns of power and domination) 6. ethnographic (understanding people by immersing oneself in their cultures) 7. socio-cognitive (both the individual's thinking and its social or disciplinary environment influence information use) 8. cognitive (focus on the thinking of individuals in connection with searching for, finding and using information) 9. bibliometric (statistical properties of information) 10. physical (signal transmission, information theory) 11. technological (satisfying information needs through ever better systems and services) 12. user-centered design ("usability", human-computer interaction) 13. evolutionary (application of results from biology and evolutionary psychology to information-related phenomena). Bates' contribution is, as always, well thought out, didactically well prepared and written in clear language, so that one reads it with pleasure and profit. The extensive list of references also contributes to the latter; it is particularly well suited to following up on the thirteen metatheories listed. . . .

Types

  • m 99
  • el 4
  • i 1
  • r 1
