Search (1019 results, page 51 of 51)

  • language_ss:"e"
  • type_ss:"s"
  1. Conceptual structures : logical, linguistic, and computational issues. 8th International Conference on Conceptual Structures, ICCS 2000, Darmstadt, Germany, August 14-18, 2000 (2000) 0.00
    6.381033E-4 = product of:
      0.0012762066 = sum of:
        0.0012762066 = product of:
          0.0025524131 = sum of:
            0.0025524131 = weight(_text_:s in 691) [ClassicSimilarity], result of:
              0.0025524131 = score(doc=691,freq=4.0), product of:
                0.05008241 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046063907 = queryNorm
                0.050964262 = fieldWeight in 691, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=691)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Pages
    XI,568 S
    Type
    s
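The nested score breakdown under result 1 is Lucene's "explain" output for the ClassicSimilarity (TF-IDF) model. As a minimal sketch, the factors in that tree multiply together as follows; all constants are taken directly from the explain output above:

```python
import math

# Constants from the explain tree for doc 691 (term "s", termFreq=4.0)
freq = 4.0
idf = 1.0872376           # idf(docFreq=40523, maxDocs=44218)
query_norm = 0.046063907  # queryNorm
field_norm = 0.0234375    # fieldNorm(doc=691)

tf = math.sqrt(freq)                  # 2.0 = tf(freq=4.0)
query_weight = idf * query_norm       # 0.05008241 = queryWeight
field_weight = tf * idf * field_norm  # 0.050964262 = fieldWeight
score = query_weight * field_weight   # 0.0025524131

# The two coord(1/2) factors each halve the score, yielding the
# displayed document score of 6.381033E-4.
final_score = score * 0.5 * 0.5
```

The same arithmetic accounts for the other results on this page: result 8, for example, has termFreq=8.0, so tf = sqrt(8) = 2.828427, combined with a smaller fieldNorm (0.015625) for its longer field.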
  2. Sprachtechnologie, mobile Kommunikation und linguistische Ressourcen : Beiträge zur GLDV Tagung 2005 in Bonn (2005) 0.00
    
    Pages
    680 S.
    Type
    s
  3. Die Macht der Suchmaschinen (2007) 0.00
    
    Pages
    350 S
    Type
    s
  4. Knull-Schlomann, Kristina (Red.): New perspectives on subject indexing and classification : essays in honour of Magda Heiner-Freiling (2008) 0.00
    
    Pages
    333 S
    Type
    s
  5. Towards the Semantic Web : ontology-driven knowledge management (2004) 0.00
    
    Pages
    XX, 288 S
    Type
    s
  6. Proceedings of the 2nd International Workshop on Semantic Digital Archives held in conjunction with the 16th Int. Conference on Theory and Practice of Digital Libraries (TPDL) on September 27, 2012 in Paphos, Cyprus (2012) 0.00
    
    Pages
    105 S
    Type
    s
  7. Understanding FRBR : what it is and how it will affect our retrieval tools (2007) 0.00
    
    Pages
    VIII, 186 S
    Type
    s
  8. Exploring artificial intelligence in the new millennium (2003) 0.00
    
    Footnote
    Rez. in: JASIST 55(2004) no.2, S.180-181 (J. Walker): "My initial reaction to this book was that it would be a useful tool for researchers and students outside of the computer science community who would like a primer on some of the many specialized research areas of artificial intelligence (AI). The book's authors note that over the last couple of decades the AI community has seen significant growth and suffers from a great deal of fragmentation. Someone trying to survey some of the most important research literature from the community would find it difficult to navigate the enormous amount of materials, journal articles, conference papers, and technical reports. There is a genuine need for a book such as this one that attempts to connect the numerous research pieces into a coherent reference source for students and researchers. The papers contained within the text were selected from the International Joint Conference on AI 2001 (IJCAI-2001). The preface warns that it is not an attempt to create a comprehensive book on the numerous areas of research in AI or its subfields, but instead a reference source for individuals interested in the current state of some research areas within AI in the new millennium. Chapter 1 of the book surveys major robot mapping algorithms; it opens with a brilliant historical overview of robot mapping and a discussion of the most significant problems that exist in the field, with a focus on indoor navigation. The major approaches surveyed are the Kalman filter and an alternative to it, expectation maximization. Sebastian Thrun examines how all modern approaches to robotic mapping are probabilistic in nature. In addition, the chapter concludes with a very insightful discussion of what research issues still exist in the robotic mapping community, specifically in the area of indoor navigation. The second chapter contains very interesting research on developing digital characters based on the lessons learned from dog behavior.
The chapter begins similarly to chapter one in that the reasoning and history of such research is presented in an insightful and concise manner. Bruce M. Blumberg takes his readers on a tour of why developing digital characters in this manner is important by showing how they benefit from the modeling of dog training patterns, and transparently demonstrates how these behaviors are emulated.
    In the third chapter, the authors present a preliminary statistical system for identifying the semantic roles of elements contained within a sentence, such as the topic or the individual(s) speaking. The historical context necessary for a reader to gain a true understanding of why the work is needed and what already exists is adequate, but lacking in many areas. For example, the authors examine the tension that exists between statistical systems and logic-based systems in natural language understanding in a trivial manner. A high expectation is placed on the reader to have a strong knowledge of these two areas of natural language understanding in AI research. In the fourth chapter, Derek Long and Maria Fox examine the debate that has occurred within the AI community regarding automatically extracting domain-specific constraints for planning. The authors discuss two major planning approaches, knowledge-sparse and knowledge-rich. They introduce their own approach, which reuses common features from many planning problems with specialized problem-solvers, a process of recognizing common patterns of behavior using automated technologies. The authors construct a clear and coherent picture of the field of planning within AI as well as demonstrate a clear need for their research. Also, throughout the chapter there are numerous examples that provide readers with a clearer understanding of planning research. The major weakness of this chapter is the lack of discussion of the researchers' earlier versions of their planning system STAN (Static Analysis Planner). They make reference to previous papers that discuss them, but offer little to no direct discussion. As a result, the reader is left wondering how the researchers arrived at the current version, STAN5. In Chapter 5, David J. Fleet et al. look at visual motion analysis focusing on occlusion boundaries, applying probabilistic techniques like Bayesian inference and particle filtering.
The work is most applicable in the area of robotic vision. The authors do an outstanding job of developing a smooth narrative flow while simplifying complex models for visual motion analysis. This would be a good chapter for a graduate student who is looking for a research topic in AI. In the sixth chapter, Frank Wolter and Michael Zakharyaschev deal with reasoning about time and space, which is a very difficult area of AI research. These two issues have been examined as separate entities in the past. The authors attempt to explore the two entities as one unit, using different methods to generate qualitative spatiotemporal calculi and drawing on previous work in the area of modal logic. The research is presented in such a way that a reader with inadequate knowledge of AI concepts will quickly be lost in the miasma of the research.
    Pages
    404 S
    Type
    s
  9. Challenges in knowledge representation and organization for the 21st century : integration of knowledge across boundaries. Proceedings of the 7th ISKO International Conference, 10-13 July 2002, Granada, Spain (2003) 0.00
    
    Content
    6. Organization of Integrated Knowledge in the Electronic Environment. The Internet: José Antonio SALVADOR OLIVÁN, José Maria ANGÓS ULLATE and Maria Jesús FERNÁNDEZ RUÍZ: Organization of the Information about Health Resources on the Internet; Eduardo PEIS, Antonio RUIZ, Francisco J. MUNOZ-FERNÁNDEZ and Francisco de ALBA QUINONES: Practical Method to Code Archive Finding Aids in Internet; Marthinus S. VAN DER WALT: An Integrated Model For The Organization Of Electronic Information/Knowledge in Small, Medium and Micro Enterprises (SMMEs) in South Africa; Ricardo EITO BRUN: Software Development and Reuse as Knowledge Management Practice; Roberto POLI: Framing Information; 7. Models and Methods for Knowledge Organization and Conceptual Relationships: Terence R. SMITH, Marcia Lei ZENG, and the ADEPT Knowledge Organization Team: Structured Models of Scientific Concepts for Organizing, Accessing, and Using Learning Materials; M. OUSSALAH, F. GIRET and T. KHAMMACI: A KR Multi-hierarchies/Multi-Views Model for the Development of Complex Systems; Jonathan FURNER: A Unifying Model of Document Relatedness for Hybrid Search Engines; José Manuel BARRUECO and Vicente Julián INGLADA: Reference Linking in Economics: The Citec Project; Allyson CARLYLE and Lisa M. FUSCO: Equivalence in Tillett's Bibliographic Relationships Taxonomy: a Revision; José Antonio FRÍAS and Ana Belén RÍOS HILARIO: Visibility and Invisibility of the Kinship Relationships in Bibliographic Families of the Library Catalogue; 8. Integration of Knowledge in the Internet. Representing Knowledge in Web Sites: Houssem ASSADI and Thomas BEAUVISAGE: A Comparative Study of Six French-Speaking Web Directories; Barbara H.
KWASNIK: Commercial Web Sites and The Use of Classification Schemes: The Case of Amazon.Com; Jorge SERRANO COBOS and Ana Mª QUINTERO ORTA: Design, Development and Management of an Information Recovery System for an Internet Website: from Documentary Theory to Practice; José Luis HERRERA MORILLAS and Mª del Rosario FERNÁNDEZ FALERO: Information and Resources About Bibliographic Heritage on The Web Sites of the Spanish Universities; J.F. ALDANA, A.C. GÓMEZ, N. MORENO, A. J. NEBRO, M.M. ROLDÁN: Metadata Functionality for Semantic Web Integration; Uta PRISS: Alternatives to the "Semantic Web": Multi-Strategy Knowledge Representation; 9. Models and Methods for Knowledge Integration in Information Systems: Rebecca GREEN, Carol A. BEAN and Michele HUDON: Universality And Basic Level Concepts; Grant CAMPBELL: Chronotope And Classification: How Space-Time Configurations Affect the Gathering of Industrial Statistical Data; Marianne LYKKE NIELSEN and Anna GJERLUF ESLAU: Corporate Thesauri - How to Ensure Integration of Knowledge and Reflections of Diversity; Nancy WILLIAMSON: Knowledge Integration and Classification Schemes; M.V. HURTADO, L. GARCIA and J. PARETS: Semantic Views over Heterogeneous and Distributed Data Repositories: Integration of Information Systems Based on Ontologies; Fernando ELICHIRIGOITY and Cheryl KNOTT MALONE: Representing the Global Economy: the North American Industry Classification System;
    Footnote
    See also the report on the conference by N. Williamson in: KO 29(2002) no.2, S.94-102
    Pages
    640 S
    Type
    s
  10. XML in libraries (2002) 0.00
    
    Footnote
    Rez. in: JASIST 55(2004) no.14, S.1304-1305 (Z. Holbrooks): "The eXtensible Markup Language (XML) and its family of enabling technologies (XPath, XPointer, XLink, XSLT, et al.) were the new "new thing" only a couple of years ago. Happily, XML is now a W3C standard, and its enabling technologies are rapidly proliferating and maturing. Together, they are changing the way data is handled on the Web, how legacy data is accessed and leveraged in corporate archives, and offering the Semantic Web community a powerful toolset. Library and information professionals need a basic understanding of what XML is and what its impacts will be on the library community as content vendors and publishers convert to the new standards. Norman Desmarais aims to provide librarians with an overview of XML and some potential library applications. The ABCs of XML contains the useful basic information that most general XML works cover. It is addressed to librarians, as evidenced by the occasional reference to periodical vendors, MARC, and OPACs. However, librarians without SGML, HTML, database, or programming experience may find the work daunting. The snippets of code, most incomplete and unaccompanied by screenshots illustrating the result of the code's execution, obscure more often than they enlighten. A single code sample (p. 91, a book purchase order) is immediately recognizable and sensible. There are no figures, illustrations, or screenshots. Subsection headings are used conservatively. Readers are confronted with page after page of unbroken technical text, and occasionally oddly formatted text (in some of the code samples). The author concentrates on commercial products and projects. Library and agency initiatives, for example the National Institutes of Health HL-7 and the U.S. Department of Education's GEM project, are notable for their absence.
The Library of Congress USMARC to SGML effort is discussed in chapter 1, which covers the relationship of XML to its parent SGML, the XML processor, and data type definitions, using MARC as its illustrative example. Chapter 3 addresses the stylesheet options for XML, including DSSSL, CSS, and XSL. The Document Style Semantics and Specification Language (DSSSL) was created for use with SGML, and pruned into DSSSL-Lite and further into DSSSL-online. Cascading Style Sheets (CSS) were created for use with HTML. Extensible Style Language (XSL) is a further revision (and extension) of DSSSL-online, specifically for use with XML. Discussion of aural stylesheets and Synchronized Multimedia Integration Language (SMIL) rounds out the chapter.
    Tennant's collection covers a variety of well- and lesser-known XML-based pilot and prototype projects undertaken by libraries around the world. Some of the projects included are: Stanford's XMLMARC conversion, Oregon State's use of XML in interlibrary loaning, e-books (California Digital Library) and electronic scholarly publishing (University of Michigan), the Washington Research Library Consortium's XML-based Web Services, and using TEI Lite to support indexing (Halton Hills Public Library). Of the 13 projects presented, nine are sited in academe, three are state library endeavors, and one is an American public library initiative. The projects are gathered into sections grouped by seven library applications: the use of XML in library catalog records, interlibrary loan, cataloging and indexing, collection building, databases, data migration, and systems interoperability. Each project is introduced with a few paragraphs of background information. The project reports, averaging about 13 pages each, include project goals and justification, project description, challenges and lessons learned (successes and failures), future plans, implications of the work, contact information for the individual(s) responsible for the project, and relevant Web links and resources. The clear strengths of this collection are in the details and the consistency of presentation. The concise project write-ups flow well and encourage interested readers to follow up via personal contacts and URLs. The sole weakness is the price. XML in Libraries will excite and inspire institutions and organizations with technically adept staff resources and visionary leaders. Erik Ray has written a how-to book. Unlike most, Learning XML is not aimed at the professional programming community. The intended audience is readers familiar with structured markup (HTML, TeX, etc.) and Web concepts (hypertext links, data representation).
In the first six chapters, Ray introduces XML's main concepts and tools for writing, viewing, testing, and transforming XML (chapter 1), describes basic syntax (chapter 2), discusses linking with XLink and XPointer (chapter 3), introduces Cascading Style Sheets for use with XML (chapter 4), explains document type definitions (DTDs) and schemas (chapter 5), and covers XSLT stylesheets and XPath (chapter 6). Chapter 7 introduces Unicode, internationalization, and language support, including CSS and XSLT encoding. Chapter 8 is an overview of writing software for processing XML, and includes the Perl code for an XML syntax checker. This work is written very accessibly for nonprogrammers. Writers, designers, and students just starting to acquire Web technology skills will find Ray's style approachable. Concepts are introduced in a logical flow and explained clearly. Code samples (130+), illustrations and screen shots (50+), and numerous tables are distributed throughout the text. Ray uses a modified DocBook DTD and a checkbook example throughout, introducing concepts in early chapters and adding new concepts to them. Readers become familiar with the code and its evolution through repeated exposure. The code for converting the "barebones DocBook" DTD (10 pages of code) to HTML via an XSLT stylesheet occupies 19 pages. Both code examples allow the learner to see an accumulation of snippets incorporated into a sensible whole. While experienced programmers might not need this type of support, nonprogrammers certainly do. Using the checkbook example is an inspired choice: most of us are familiar with personal checking, even if few of us would build an XML application for it. Learning XML is an excellent textbook. I've used it for several years as a recommended text for adult continuing education courses and workshops."
    Pages
    XI, 212 S
    Type
    s
  11. Boeuf, P. le: Functional Requirements for Bibliographic Records (FRBR) : hype or cure-all (2005) 0.00
    
    Content
    Contains the contributions: Zumer, M.: Dedication [to Zlata Dimec]; P. Le Boeuf: FRBR: Hype or Cure-All? Introduction; O.M.A. Madison: The origins of the IFLA study on functional requirements for bibliographic records; G.E. Patton: Extending FRBR to authorities; T. Delsey: Modeling subject access: extending the FRBR and FRANAR conceptual models; S. Gradmann: rdfs:frbr - Towards an implementation model for library catalogs using semantic web technology; G. Jonsson: Cataloguing of hand press materials and the concept of expression in FRBR; K. Kilner: The AustLit Gateway and scholarly bibliography: a specialist implementation of the FRBR; P. Le Boeuf: Musical works in the FRBR model or "Quasi la Stessa Cosa": variations on a theme by Umberto Eco; K. Albertsen, C. van Nuys: Paradigma: FRBR and digital documents; D. Miller, P. Le Boeuf: "Such stuff as dreams are made on": How does FRBR fit performing arts?; Y. Nicolas: Folklore requirements for bibliographic records: oral traditions and FRBR; B.B. Tillett: FRBR and cataloging for the future; Z. Dimec, M. Zumer, G.J.A. Riesthuis: Slovenian cataloguing practice and Functional Requirements for Bibliographic Records: a comparative analysis; M. Zumer: Implementation of FRBR: European research initiative; T.B. Hickey, E.T. O'Neill: FRBRizing OCLC's WorldCat; R. Sturman: Implementing the FRBR conceptual approach in the ISIS software environment: IFPA (ISIS FRBR prototype application); J. Radebaugh, C. Keith: FRBR display tool; D.R. Miller: XOBIS - an experimental schema for unifying bibliographic and authority records
    Footnote
    Rez. in: KO 33(2006) no.1, S.57-58 (V. Francu): "The work is a collection of major contributions by qualified professionals to the issues raised by the most controversial alternative for organizing the bibliographic universe today: the conceptual model promoted by the International Federation of Library Associations and Institutions (IFLA) known by the name of Functional Requirements for Bibliographic Records (FRBR). The main goals of the work are to clarify the fundamental concepts and terminology that the model operates with, inform the audience about the applicability of the model to different kinds of library materials, and bring closer to those interested the experiments undertaken and the implementation of the model in library systems worldwide. In the beginning, Patrick Le Boeuf, the chair of the IFLA FRBR Review Group, editor of the work and author of two of the articles included in the collection, puts together in a meaningful way articles about the origins and development of the FRBR model and how it will evolve, thus facilitating a gradual understanding of its structure and functionalities. He describes in the Introduction the FRBR entities as images of bibliographic realities, insisting on the "expression debate". Further, he concentrates on the ongoing or planned work still needed (p. 6) for the model to be fully accomplished and ultimately offer the desired bibliographic control over actual computerized catalogues. The FRBR model, associated with but not reduced to the "FRBR tree", makes it possible to map existing linear catalogues to an ontology, or semantic Web, by providing a multitude of relationships among the bibliographic entities it comprises.
    Pages
    xxii, 303 S
    Type
    s
  12. Theorizing digital cultural heritage : a critical discourse (2005) 0.00
    
    Editor
    Cameron, F. and S. Kenderdine
    Footnote
    Rez. in: JASIST 59(2008) no.8, S.1360-1361 (A. Japzon): "This is the first book since The Wired Museum to address the theoretical discourse on cultural heritage and digital media (Jones-Garmil, 1997). The editors, Fiona Cameron, a Research Fellow in Museum and Cultural Heritage Studies at the Centre for Cultural Research at the University of Western Sydney, and Sarah Kenderdine, the Director of Special Projects for the Museum Victoria, bring together 30 authors from the international cultural heritage community to provide a foundation from which to explore and to understand the evolving significance of digital media to cultural heritage. The editors offer the collection of essays as a reference work to be used by professionals, academics, and students working and researching in all fields of cultural heritage including museums, libraries, galleries, archives, and archeology. Further, they recommend the work as a primary or a secondary text for undergraduate and graduate education for these fields. The work succeeds on these counts owing to the range of cultural heritage topics covered and the depth of description on these topics. Additionally, this work would be of value to those individuals working and researching in the fields of human computer interaction and educational technology. The book is divided into three sections: Replicants/Object Morphologies; Knowledge Systems and Management: Shifting Paradigms and Models; and Cultural Heritage and Virtual Systems. Many of the themes in the first section resonate throughout the book providing consistency of language and conceptual understandings, which ultimately offers a shared knowledge base from which to engage in the theoretical discussion on cultural heritage. This review will briefly summarize selected themes and concepts from each of the sections as the work is vast in thought and rich in detail. ...
    Pages
    X, 465 S
    Type
    s
  13. Theory of subject analysis : A sourcebook (1985) 0.00
    
    Pages
    XV,415 S
    Type
    s
  14. SIGIR'92 : Proceedings of the 15th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (1992) 0.00
    Type
    s
  15. Research methods for students, academics and professionals : information management and systems (2002) 0.00
    Footnote
    Rez. in: JASIST 54(2003) no.10, S.982-983 (L. Schamber): "This book is the most recent of only about half a dozen research methods textbooks published for information science since 1980. Like the others, it is directed toward students and information professionals at an introductory level. Unlike the others, it describes an unusually wide variety of research methods, especially qualitative methods. This book is Australian, with a concern for human behavior in keeping with that country's reputation for research in the social sciences and development of qualitative data analysis software. The principal author is Kirsty Williamson, who wrote or co-wrote half the chapters. Eleven other authors contributed: Amanda Bow, Frada Burstein, Peta Darke, Ross Harvey, Graeme Johanson, Sue McKemmish, Majola Oosthuizen, Solveiga Saule, Don Schauder, Graeme Shanks, and Kerry Tanner. These writers, most of whom are affiliated with Monash University or Charles Sturt University, represent multidisciplinary and international backgrounds. The field they call information management and systems merges the interests of information management or information studies (including librarianship, archives, and records management) and information systems, a subdiscipline of computing that focuses on information and communication technologies. The stated purpose of the book is to help information professionals become informed and critical consumers of research, not necessarily skilled researchers. It is geared toward explaining not only methodology, but also the philosophy, relevance, and process of research as a whole. The Introduction and Section 1 establish these themes. Chapter 1, on research and professional practice, explains the value of research for solving practical problems, maintaining effective services, demonstrating accountability, and generally contributing to useful knowledge in the field. 
Chapter 2, on major research traditions, presents a broad picture of positivist and interpretivist paradigms, along with a middle ground of post-positivism, in such a way as to help the new researcher grasp the assumptions underlying research. Woven into this chapter is an explanation of how quantitative and qualitative methods complement each other, and how methodological triangulation provides confirmatory benefits. Chapter 3 offers instructions for beginning a research project, from development of the research problem, questions, and hypotheses to understanding the role of theory and synthesizing the literature review. Chapter 4, on research ethics, covers unethical use of power positions by researchers, falsifying data, and plagiarism, along with general information on human subjects protections and the roles of ethics committees. It includes intriguing examples of ethics cases to stimulate discussion.
    Pages
    352 S
    Type
    s
  16. XML data management : native XML and XML-enabled database systems (2003) 0.00
    Footnote
    Rez. in: JASIST 55(2004) no.1, S.90-91 (N. Rhodes): "The recent near-exponential increase in XML-based technologies has exposed a gap between these technologies and those that are concerned with more fundamental data management issues. This very comprehensive and well-organized book has quite neatly filled the gap, thus achieving most of its stated intentions. The target audiences are database and XML professionals wishing to combine XML with modern database technologies, and such is the breadth of scope of this book that few would not find it useful in some way. The editors have assembled a collection of chapters from a wide selection of industry heavyweights and, as with most books of this type, it exhibits many disparate styles, but thanks to careful editing it reads well as a cohesive whole. Certain sections have already appeared in print elsewhere and there is a good deal of corporate flag-waving, but nowhere does it become over-intrusive. The preface provides only the very briefest of introductions to XML but instead sets the tone for the remainder of the book. The twin terms of data- and document-centric XML (Bourret, 2003) that have achieved so much recent currency are reiterated before XML data management issues are considered. It is here that the book's aims are stated, mostly concerned with the approaches and features of the various available XML data management solutions. Not surprisingly, in a specialized book such as this one an introduction to XML consists of a single chapter. For issues such as syntax, DTDs and XML Schemas the reader is referred elsewhere; here, Chris Brandin provides a practical guide to achieving good grammar and style and argues convincingly for the use of XML as an information-modeling tool. Using a well-chosen and simple example, a practical guide to modeling information is developed, replete with examples of the pitfalls. 
This brief but illuminating chapter (incidentally available as a "taster" from the publisher's web site) notes that one of the most promising aspects of XML is that applications can be built to use a single mutable information model, obviating the need to change the application code but that good XML design is the basis of such mutability.
    Pages
    641 S
    Type
    s
  17. Current theory in library and information science (2002) 0.00
    Footnote
    Rez. in JASIST 54(2003) no.4, S.358-359 (D.O. Case): "Having recently written a chapter on theories applied in information-seeking research (Case, 2002), I was eager to read this issue of Library Trends devoted to "Current Theory." Once in hand, I found the individual articles in the issue to be of widely varying quality, and the scope to be disappointingly narrow. A more accurate title might be "Some Articles about Theory, with Even More on Bibliometrics." Eight of the thirteen articles (not counting the Editor's brief introduction) are about quantifying the growth, quality and/or authorship of literature (mostly in the sciences, with one example from the humanities). Social and psychological theories are hardly mentioned, even though one of the articles claims that nearly half of all theory invoked in LIS emanates from the social sciences. The editor, SUNY Professor Emeritus William E. McGrath, claims that the first six articles are about theory, while the rest are original research that applies theory to some problem, a characterization that I find odd. Reading his Introduction provides some clues to the curious composition of this issue. McGrath states that only in "physics and other exact sciences" are definitions of theory "well understood" (p. 309), a view I think most psychologists and sociologists would contest, and restricts his own definition of theory to "an explanation for a quantifiable phenomenon" (p. 310). In his own chapter in the issue, "Explanation and Prediction," McGrath makes it clear that he holds out hope for a "unified theory of librarianship" that would resemble those regarding "fundamental forces in physics and astronomy." However, isn't it wishful thinking to hope for a physics-like theory to emerge from particular practices (e.g., citation) and settings (e.g., libraries) when broad generalizations do not easily accrue from observation of more basic human behaviors? 
Perhaps this is where the emphasis on documents, rather than people, entered into the choice of material for "Current Theory." Artifacts of human behavior, such as documents, are more amenable to prediction in ways that allow for the development of theory: witness Zipf's Principle of Least Effort, the Bradford Distribution, Lotka's Law, etc. I imagine that McGrath would say that "librarianship," at least, is more about materials than people. McGrath's own contribution to this issue emphasizes measures of libraries, books and journals. By citing exemplar studies, he makes it clear that much has been done to advance measurement of library operations, and he eloquently argues for an overarching view of the various library functions and their measures. But we have all heard similar arguments before; other disciplines, in earlier times, have made the argument that a solid foundation of empirical observation had been laid down, which would lead inevitably to a grand theory of "X." McGrath admits that "some may say the vision [of a unified theory] is naive" (p. 367), but concludes that "It remains for researchers to tie the various levels together more formally . . . in constructing a comprehensive unified theory of librarianship."
    Source
    Library trends. 50(2002) no.3, S.309-574
    Type
    s
  18. Net effects : how librarians can manage the unintended consequences of the Internet (2003) 0.00
    Footnote
    Rez. in: JASIST 55(2004) no.11, S.1025-1026 (D.E. Agosto): ""Did you ever feel as though the Internet has caused you to lose control of your library?" So begins the introduction to this volume of over 50 articles, essays, library policies, and other documents from a variety of sources, most of which are library journals aimed at practitioners. Volume editor Block has a long history of library service as well as an active career as an online journalist. From 1977 to 1999 she was the Associate Director of Public Services at the St. Ambrose University library in Davenport, Iowa. She was also a Fox News Online weekly columnist from 1998 to 2000. She currently writes for and publishes the weekly ezine Exlibris, which focuses on the use of computers, the Internet, and digital databases to improve library services. Despite the promising premise of this book, the final product is largely a disappointment because of the superficial coverage of its issues. A listing of the most frequently represented sources serves to express the general level and style of the entries: nine articles are reprinted from Computers in Libraries, five from Library Journal, four from Library Journal NetConnect, four from ExLibris, four from American Libraries, three from College & Research Libraries News, two from Online, and two from The Chronicle of Higher Education. Most of the authors included contributed only one item, although Roy Tennant (manager of the California Digital Library) authored three of the pieces, and Janet L. Balas (library information systems specialist at the Monroeville Public Library in Pennsylvania) and Karen G. Schneider (coordinator of lii.org, the Librarians' Index to the Internet) each wrote two. Volume editor Block herself wrote six of the entries, most of which have been reprinted from ExLibris. Reading the volume is much like reading an issue of one of these journals: a pleasant experience that discusses issues in the field without presenting much research. 
Net Effects doesn't offer much in the way of theory or research, but then again it doesn't claim to. Instead, it claims to be an "idea book" (p. 5) with practical solutions to Internet-generated library problems. While the idea is a good one, little of the material is revolutionary or surprising (or even very creative), and most of the solutions offered will already be familiar to most of the book's intended audience.
    Pages
    xiii, 380 S
    Type
    s
  19. OWLED 2009; OWL: Experiences and Directions, Sixth International Workshop, Chantilly, Virginia, USA, 23-24 October 2009, Co-located with ISWC 2009. (2009) 0.00
    Type
    s
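The near-zero relevance scores attached to each hit in this listing come from Lucene's classic TF-IDF scoring (ClassicSimilarity), whose per-field contribution is fieldWeight = tf · idf · fieldNorm. As a minimal sketch, assuming the standard ClassicSimilarity definitions tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)) (the function name below is ours, not part of any API), the fieldWeight values reported in this listing can be recomputed:

```python
import math

def classic_field_weight(freq, doc_freq, max_docs, field_norm):
    """Recompute Lucene ClassicSimilarity's fieldWeight = tf * idf * fieldNorm."""
    tf = math.sqrt(freq)                               # tf(freq) = sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))    # idf(docFreq, maxDocs)
    return tf * idf * field_norm

# Values from one explain block in this listing:
# freq=4, docFreq=40523, maxDocs=44218, fieldNorm=0.0234375
print(classic_field_weight(4.0, 40523, 44218, 0.0234375))  # ~ 0.050964262
```

With freq=2 and fieldNorm=0.02734375 the same function reproduces the 0.04204337 weight shown for the SIGIR'92 entry, which suggests the listing was generated by a Solr/Lucene index using ClassicSimilarity with `debugQuery` explain output enabled.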

Languages

  • d 20
  • m 4
  • es 2
  • nl 1

Types

  • m 301
  • el 17
  • i 6
  • r 2
  • a 1