Search (168 results, page 9 of 9)

  • language_ss:"e"
  • type_ss:"s"
  • year_i:[2000 TO 2010}
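  The active filters above are shown in Solr/Lucene field-query syntax; in the year range [2000 TO 2010} the square bracket marks an inclusive lower bound and the curly brace an exclusive upper bound. Below is a minimal sketch of how this filtered page could be requested from a Solr backend; the endpoint and core name are hypothetical, and the page size of 20 is only inferred from 168 results spread over nine pages (eight records remain on this last page). Only the field names and filter values are taken from the header.

      # Minimal sketch, assuming a Solr backend with a hypothetical endpoint and core name.
      import requests

      SOLR_SELECT = "http://localhost:8983/solr/literature/select"  # hypothetical

      params = {
          "q": "*:*",                      # match everything; the filters narrow the set
          "fq": [                          # one filter query per active facet
              'language_ss:"e"',
              'type_ss:"s"',
              "year_i:[2000 TO 2010}",     # [ = 2000 inclusive, } = 2010 exclusive
          ],
          "rows": 20,                      # assumed page size (168 results over 9 pages)
          "start": 160,                    # offset for page 9
          "wt": "json",
      }

      resp = requests.get(SOLR_SELECT, params=params)
      resp.raise_for_status()
      print(resp.json()["response"]["numFound"])  # expect 168 for this filter set

  With these parameters the final page would return the eight records listed below.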
  1. Software for Indexing (2003) 0.00
    
    Footnote
    Part 3, Online and Web Indexing Software, opens with a chapter in which the functionalities of HTML/Prep, HTML Indexer, and RoboHELP HTML Edition are compared. The following three chapters look at them individually. This section helps clarify the basic types of non-database web indexing - that used for back-of-the-book style indexes, and that used for online help indexes. The first chapter of Part 4, Database and image software, begins with a good discussion of what database indexing is, but fails to carry through with any listing of general characteristics, problems and attributes that should be considered when choosing database indexing software. It does include the results of an informal survey on the Yahoogroups database indexing site, as well as three short case studies on database indexing projects. The survey provides interesting information about freelancing, but it is not very useful if you are trying to gather information about different software. For example, the most common type of software used by those surveyed turns out to be word-processing software. This seems an odd choice, and it would have been helpful to know how and why the non-specialized software is being used. The survey serves as a snapshot of a particular segment of database indexing practice, but is not helpful if you are thinking about purchasing, adapting, or commissioning software. The three case studies give an idea of the complexity of database indexing, and there is a helpful bibliography.
    A chapter on image indexing starts with a useful discussion of the elements of bibliographic description needed for visual materials and of the variations in the functioning and naming of functions in different software packages. Sample features are discussed in light of four different software systems: MAVIS, Convera Screening Room, CONTENTdm, and Virage speech and pattern recognition programs. The chapter concludes with an overview of what one has to consider when choosing a system. The last chapter in this section is an oddball one on creating a back-of-the-book index using Microsoft Excel. The author warns: "It is not pretty, and it is not recommended" (p.209). A curiosity, but it should have been included as a counterpoint in the first part, not as part of the database indexing section. The final section begins with an excellent article on voice recognition software (Dragon Naturally Speaking Preferred), followed by a look at "automatic indexing" through a critique of Sonar Bookends Automatic Indexing Generator. The final two chapters deal with Data Harmony's Machine Aided Indexer; one of them refers specifically to a news content indexing system. In terms of scope, this reviewer would have liked to see thesaurus management software included, since thesaurus management and the integration of thesauri with database indexing software are common and time-consuming concerns. There are also a few editorial glitches, such as the placement of the oddball article and inconsistent uses of fonts and caps (e.g., VIRAGE and Virage), but achieving consistency with this many authors is, indeed, a difficult task. More serious is the fact that the index is inconsistent. It reads as if authors submitted their own keywords which were then harmonized, so that the level of indexing varies by chapter. For example, there is an entry for "controlled vocabulary" (p.265) (singular) with one locator and no cross-references. There is an entry for "thesaurus software" (p.274) with two locators, plus a separate one for "Thesaurus Master" (p.274) with three locators. There are also references to thesauri/controlled vocabularies/taxonomies that are not mentioned in the index (e.g., the section Thesaurus management on p.204). This is sad. All too often indexing texts have poor indexes, I suppose because we are as prone to having to work under time pressures as the rest of the authors and editors in the world. But a good index that meets basic criteria should be a highlight in any book related to indexing. Overall this is a useful, if uneven, collection of articles written over the past few years. Because of the great variation between articles both in subject and in approach, there is something for everyone. The collection will be interesting to anyone who wants to be aware of how indexing software works and what it can do. I also definitely recommend it for information science teaching collections, since the explanations of the software carry implicit in them descriptions of how the indexing process itself is approached. However, the book's utility as a guide to purchasing choices is limited because of the unevenness; the vendor-written articles and testimonials are interesting and can certainly be helpful, but there are not nearly enough objective reviews. This is not a straight listing and comparison of software packages, but it deserves wide circulation since it presents an overall picture of the state of indexing software used by freelancers."
    Imprint
    Medford, NJ : Information Today, in association with the American Society of Indexers
  2. Working with conceptual structures : contributions to ICCS 2000. 8th International Conference on Conceptual Structures: Logical, Linguistic, and Computational Issues. Darmstadt, August 14-18, 2000 (2000) 0.00
    
    Content
    Concepts & Language: Knowledge organization by procedures of natural language processing. A case study using the method GABEK (J. Zelger, J. Gadner) - Computer aided narrative analysis using conceptual graphs (H. Schärfe, P. Øhrstrøm) - Pragmatic representation of argumentative text: a challenge for the conceptual graph approach (H. Irandoust, B. Moulin) - Conceptual graphs as a knowledge representation core in a complex language learning environment (G. Angelova, A. Nenkova, S. Boycheva, T. Nikolov) - Conceptual Modeling and Ontologies: Relationships and actions in conceptual categories (Ch. Landauer, K.L. Bellman) - Concept approximations for formal concept analysis (J. Saquer, J.S. Deogun) - Faceted information representation (U. Priß) - Simple concept graphs with universal quantifiers (J. Tappe) - A framework for comparing methods for using or reusing multiple ontologies in an application (J. van Zyl, D. Corbett) - Designing task/method knowledge-based systems with conceptual graphs (M. Leclère, F. Trichet, Ch. Choquet) - A logical ontology (J. Farkas, J. Sarbo) - Algorithms and Tools: Fast concept analysis (Ch. Lindig) - A framework for conceptual graph unification (D. Corbett) - Visual CP representation of knowledge (H.D. Pfeiffer, R.T. Hartley) - Maximal isojoin for representing software textual specifications and detecting semantic anomalies (Th. Charnois) - Troika: using grids, lattices and graphs in knowledge acquisition (H.S. Delugach, B.E. Lampkin) - Open world theorem prover for conceptual graphs (J.E. Heaton, P. Kocura) - NetCare: a practical conceptual graphs software tool (S. Polovina, D. Strang) - CGWorld - a web based workbench for conceptual graphs management and applications (P. Dobrev, K. Toutanova) - Position papers: The edition project: Peirce's existential graphs (R. Müller) - Mining association rules using formal concept analysis (N. Pasquier) - Contextual logic summary (R. Wille) - Information channels and conceptual scaling (K.E. Wolff) - Spatial concepts - a rule exploration (S. Rudolph) - The TEXT-TO-ONTO learning environment (A. Mädche, St. Staab) - Controlling the semantics of metadata on audio-visual documents using ontologies (Th. Dechilly, B. Bachimont) - Building the ontological foundations of a terminology from natural language to conceptual graphs with Ribosome, a knowledge extraction system (Ch. Jacquelinet, A. Burgun) - CharGer: some lessons learned and new directions (H.S. Delugach) - Knowledge management using conceptual graphs (W.K. Pun)
  3. Net effects : how librarians can manage the unintended consequences of the Internet (2003) 0.00
    
    Abstract
    In this collection of nearly 50 articles written by librarians, computer specialists, and other information professionals, the reader finds 10 chapters, each devoted to a problem or a side effect that has emerged since the introduction of the Internet: control over selection, survival of the book, training users, adapting to users' expectations, access issues, cost of technology, continuous retraining, legal issues, disappearing data, and how to avoid becoming blindsided. After stating a problem, each chapter offers solutions that are subsequently supported by articles. The editor's comments, which appear throughout the text, are an added bonus, as are the sections concluding the book, among them a listing of useful URLs, a works-cited section, and a comprehensive index. This book has much to recommend it, especially the articles, which are not only informative, thought-provoking, and interesting but highly readable and accessible as well. An indispensable tool for all librarians.
    Footnote
    Rez. in: JASIST 55(2004) no.11, S.1025-1026 (D.E. Agosto): ""Did you ever feel as though the Internet has caused you to lose control of your library?" So begins the introduction to this volume of over 50 articles, essays, library policies, and other documents from a variety of sources, most of which are library journals aimed at practitioners. Volume editor Block has a long history of library service as well as an active career as an online journalist. From 1977 to 1999 she was the Associate Director of Public Services at the St. Ambrose University library in Davenport, Iowa. She was also a Fox News Online weekly columnist from 1998 to 2000. She currently writes for and publishes the weekly ezine Exlibris, which focuses on the use of computers, the Internet, and digital databases to improve library services. Despite the promising premise of this book, the final product is largely a disappointment because of the superficial coverage of its issues. A listing of the most frequently represented sources serves to express the general level and style of the entries: nine articles are reprinted from Computers in Libraries, five from Library Journal, four from Library Journal NetConnect, four from ExLibris, four from American Libraries, three from College & Research Libraries News, two from Online, and two from The Chronicle of Higher Education. Most of the authors included contributed only one item, although Roy Tennant (manager of the California Digital Library) authored three of the pieces, and Janet L. Balas (library information systems specialist at the Monroeville Public Library in Pennsylvania) and Karen G. Schneider (coordinator of lii.org, the Librarians' Index to the Internet) each wrote two. Volume editor Block herself wrote six of the entries, most of which have been reprinted from ExLibris. Reading the volume is much like reading an issue of one of these journals - a pleasant experience that discusses issues in the field without presenting much research. Net Effects doesn't offer much in the way of theory or research, but then again it doesn't claim to. Instead, it claims to be an "idea book" (p. 5) with practical solutions to Internet-generated library problems. While the idea is a good one, little of the material is revolutionary or surprising (or even very creative), and most of the solutions offered will already be familiar to most of the book's intended audience.
    Unlike much of the professional library literature, Net Effects is not an open-armed embrace of technology. Block even suggests that it is helpful to have a Luddite or two on each library staff to identify the setbacks associated with technological advances in the library. Each of the book's 10 chapters deals with one Internet-related problem, such as "Chapter 4-The Shifted Librarian: Adapting to the Changing Expectations of Our Wired (and Wireless) Users," or "Chapter 8-Up to Our Ears in Lawyers: Legal Issues Posed by the Net." For each of these 10 problems, multiple solutions are offered. For example, for "Chapter 9-Disappearing Data," four solutions are offered. These include "Link-checking," "Have a technological disaster plan," "Advise legislators on the impact proposed laws will have," and "Standards for preservation of digital information." One article is given to explicate each of these four solutions. A short bibliography of recommended further reading is also included for each chapter. Block provides a short introduction to each chapter, and she comments on many of the entries. Some of these comments seem to be intended to provide a research basis for the proposed solutions, but they tend to be vague generalizations without citations, such as, "We know from research that students would rather ask each other for help than go to adults. We can use that" (p. 91). The original publication dates of the entries range from 1997 to 2002, with the bulk falling into the 2000-2002 range. At up to 6 years old, some of the articles seem outdated, such as a 2000 news brief announcing the creation of the first "customizable" public library Web site (www.brarydog.net). These critiques are not intended to dismiss the volume entirely. Some of the entries are likely to find receptive audiences, such as a nuts-and-bolts instructive article for making Web sites accessible to people with disabilities. "Providing Equitable Access," by Cheryl H. Kirkpatrick and Catherine Buck Morgan, offers very specific instructions, such as how to renovate OPAL workstations to suit users with "a wide range of functional impairments." It also includes a useful list of 15 things to do to make a Web site readable to most people with disabilities, such as, "You can use empty (alt) tags (alt="") for images that serve a purely decorative function. Screen readers will skip empty (alt) tags" (p. 157). Information at this level of specificity can be helpful to those who are faced with creating a technological solution for which they lack sufficient technical knowledge or training.
    Some of the pieces are more captivating than others and less "how-to" in nature, providing contextual discussions as well as pragmatic advice. For example, Darlene Fichter's "Blogging Your Life Away" is an interesting discussion about creating and maintaining blogs. (For those unfamiliar with the term, blogs are frequently updated Web pages that list thematically tied annotated links or lists, such as a blog of "Great Websites of the Week" or of "Fun Things to Do This Month in Patterson, New Jersey.") Fichter's article includes descriptions of sample blogs and a comparison of commercially available blog creation software. Another article of note is Kelly Broughton's detailed account of her library's experiences in initiating Web-based reference in an academic library. "Our Experiment in Online Real-Time Reference" details the decisions and issues that the Jerome Library staff at Bowling Green State University faced in setting up a chat reference service. It might be useful to those finding themselves in the same situation. This volume is at its best when it eschews pragmatic information and delves into the deeper, less ephemeral library-related issues created by the rise of the Internet and of the Web. One of the most thought-provoking topics covered is the issue of "the serials pricing crisis," or the increase in subscription prices to journals that publish scholarly work. The pros and cons of moving toward a more free-access Web-based system for the dissemination of peer-reviewed material and of using university Web sites to house scholars' other works are discussed. However, deeper discussions such as these are few, leaving the volume subject to rapid aging, and leaving it with an audience limited to librarians looking for fast technological fixes."
    Imprint
    Medford, NJ : Information Today
  4. Facets: a fruitful notion in many domains : special issue on facet analysis (2008) 0.00
    
    Footnote
    Rez. in: KO 36(2009) no.1, S.62-63 (K. La Barre): "This special issue of Axiomathes presents an ambitious dual agenda. It attempts to highlight aspects of facet analysis (as used in LIS) that are shared by cognate approaches in philosophy, psychology, linguistics and computer science. Secondarily, the issue aims to attract others to the study and use of facet analysis. The authors represent a blend of lifetime involvement with facet analysis, such as Vickery, Broughton, Beghtol, and Dahlberg; those with well-developed research agendas, such as Tudhope and Priss; and relative newcomers, such as Gnoli, Cheti and Paradisi, and Slavic. Omissions are inescapable, but a more balanced issue would have resulted from inclusion of at least one researcher from the Indian school of facet theory. Another valuable addition might have been a reaction to the issue by one of the chief critics of facet analysis. Potentially useful, but absent, is a comprehensive bibliography of resources for those wishing to engage in further study; such resources now lie scattered throughout the issue. Several of the papers assume relative familiarity with facet analytical concepts and definitions, some of which are contested even within LIS. Gnoli's introduction (p. 127-130) traces the trajectory, extensions and new developments of this analytico-synthetic approach to subject access, while providing a laundry list of cognate approaches that are similar to facet analysis. This brief essay and the article by Priss (p. 243-255) directly address this first part of Gnoli's agenda. Priss provides detailed discussion of facet-like structures in computer science (p. 245-246), and outlines the similarity between Formal Concept Analysis and facets. This comparison is equally fruitful for researchers in computer science and library and information science. By bridging into a discussion of visualization challenges for facet display, further research is also invited. Many of the remaining papers comprehensively detail the intellectual heritage of facet analysis (Beghtol; Broughton, p. 195-198; Dahlberg; Tudhope and Binding, p. 213-215; Vickery). Beghtol's (p. 131-144) examination of the origins of facet theory through the lens of the textbooks written by Ranganathan's mentor W.C.B. Sayers (1881-1960), Manual of Classification (1926, 1944, 1955), and a textbook written by Mills, A Modern Outline of Classification (1964), serves to reveal the deep intellectual heritage of the changes in classification theory over time, as well as Ranganathan's own influence on and debt to Sayers.
    Several of the papers are clearly written as primers and neatly address the second agenda item: attracting others to the study and use of facet analysis. The most valuable papers are written in clear, approachable language. Vickery's paper (p. 145-160) is a clarion call for faceted classification and facet analysis. The heart of the paper is a primer for central concepts and techniques. Vickery explains the value of using faceted classification in document retrieval. Also provided are potential solutions to thorny interface and display issues with facets. Vickery looks to complementary themes in knowledge organization, such as thesauri and ontologies as potential areas for extending the facet concept. Broughton (p. 193-210) describes a rigorous approach to the application of facet analysis in the creation of a compatible thesaurus from the schedules of the 2nd edition of the Bliss Classification (BC2). This discussion of exemplary faceted thesauri, recent standards work, and difficulties encountered in the project will provide valuable guidance for future research in this area. Slavic (p. 257-271) provides a challenge to make faceted classification come 'alive' through promoting the use of machine-readable formats for use and exchange in applications such as Topic Maps and SKOS (Simple Knowledge Organization Systems), and as supported by the standard BS8723 (2005) Structured Vocabulary for Information Retrieval. She also urges designers of faceted classifications to get involved in standards work. Cheti and Paradisi (p. 223-241) outline a basic approach to converting an existing subject indexing tool, the Nuovo Soggetario, into a faceted thesaurus through the use of facet analysis. This discussion, well grounded in the canonical literature, may well serve as a primer for future efforts. Also useful for those who wish to construct faceted thesauri is the article by Tudhope and Binding (p. 211-222). This contains an outline of basic elements to be found in exemplar faceted thesauri, and a discussion of project FACET (Faceted Access to Cultural heritage Terminology) with algorithmically-based semantic query expansion in a dataset composed of items from the National Museum of Science and Industry indexed with AAT (Art and Architecture Thesaurus). This paper looks to the future hybridization of ontologies and facets through standards developments such as SKOS because of the "lightweight semantics" inherent in facets.
    Two of the papers revisit the interaction of facets with the theory of integrative levels, which posits that the organization of the natural world reflects increasingly interdependent complexity. This approach was tested as a basis for the creation of faceted classifications in the 1960s. These contemporary treatments of integrative levels are not discipline-driven as were the early approaches, but instead are ontological and phenomenological in focus. Dahlberg (p. 161-172) outlines the creation of the ICC (Information Coding System) and the application of the Systematifier in the generation of facets and the creation of a fully faceted classification. Gnoli (p. 177-192) proposes the use of fundamental categories as a way to redefine facets and fundamental categories in "more universal and level-independent ways" (p. 192). Given that Axiomathes has a stated focus on "contemporary issues in cognition and ontology" and the following thesis: "that real advances in contemporary science may depend upon a consideration of the origins and intellectual history of ideas at the forefront of current research," this venue seems well suited for the implementation of the stated agenda, to illustrate complementary approaches and to stimulate research. As situated, this special issue may well serve as a bridge to a more interdisciplinary dialogue about facet analysis than has previously been the case."
  5. Semantic role universals and argument linking : theoretical, typological, and psycholinguistic perspectives (2006) 0.00
    
    Abstract
    The concept of semantic roles has been central to linguistic theory for many decades. More specifically, the assumption of such representations as mediators in the correspondence between a linguistic form and its associated meaning has helped to address a number of critical issues related to grammatical phenomena. Furthermore, in addition to featuring in all major theories of grammar, semantic (or 'thematic') roles have been referred to extensively within a wide range of other linguistic subdisciplines, including language typology and psycho-/neurolinguistics. This volume brings together insights from these different perspectives and thereby, for the first time, seeks to build upon the obvious potential for cross-fertilisation between hitherto autonomous approaches to a common theme. To this end, a view on semantic roles is adopted that goes beyond the mere assumption of generalised roles, but also focuses on their hierarchical organisation. The book is thus centred around the interdisciplinary examination of how these hierarchical dependencies subserve argument linking - both in terms of linguistic theory and with respect to real-time language processing - and how they interact with other information types in this process. Furthermore, the contributions examine the interaction between the role hierarchy and the conceptual content of (generalised) semantic roles and investigate their cross-linguistic applicability and psychological reality, as well as their explanatory potential in accounting for phenomena in the domain of language disorders. In bridging the gap between different disciplines, the book provides a valuable overview of current thought on semantic roles and argument linking, and may further serve as a point of departure for future interdisciplinary research in this area. As such, it will be of interest to scientists and advanced students in all domains of linguistics and cognitive science.
  6. OWLED 2009; OWL: Experiences and Directions, Sixth International Workshop, Chantilly, Virginia, USA, 23-24 October 2009, Co-located with ISWC 2009. (2009) 0.00
    
    Content
    Short Papers * A Database Backend for OWL, Jörg Henss, Joachim Kleb and Stephan Grimm. * Unifying SysML and OWL, Henson Graves. * The OWLlink Protocol, Thorsten Liebig, Marko Luther and Olaf Noppens. * A Reasoning Broker Framework for OWL, Juergen Bock, Tuvshintur Tserendorj, Yongchun Xu, Jens Wissmann and Stephan Grimm. * Change Representation For OWL 2 Ontologies, Raul Palma, Peter Haase, Oscar Corcho and Asunción Gómez-Pérez. * Practical Aspects of Query Rewriting for OWL 2, Héctor Pérez-Urbina, Ian Horrocks and Boris Motik. * CSage: Use of a Configurable Semantically Attributed Graph Editor as Framework for Editing and Visualization, Lawrence Levin. * A Conformance Test Suite for the OWL 2 RL/RDF Rules Language and the OWL 2 RDF-Based Semantics, Michael Schneider and Kai Mainzer. * Improving the Data Quality of Relational Databases using OBDA and OWL 2 QL, Olivier Cure. * Temporal Classes and OWL, Natalya Keberle. * Using Ontologies for Medical Image Retrieval - An Experiment, Jasmin Opitz, Bijan Parsia and Ulrike Sattler. * Task Representation and Retrieval in an Ontology-Guided Modelling System, Yuan Ren, Jens Lemcke, Andreas Friesen, Tirdad Rahmani, Srdjan Zivkovic, Boris Gregorcic, Andreas Bartho, Yuting Zhao and Jeff Z. Pan. * A platform for reasoning with OWL-EL knowledge bases in a Peer-to-Peer environment, Alexander De Leon and Michel Dumontier. * Axiomé: a Tool for the Elicitation and Management of SWRL Rules, Saeed Hassanpour, Martin O'Connor and Amar Das. * SQWRL: A Query Language for OWL, Martin O'Connor and Amar Das. * Classifying ELH Ontologies In SQL Databases, Vincent Delaitre and Yevgeny Kazakov. * A Semantic Web Approach to Represent and Retrieve Information in a Corporate Memory, Ana B. Rios-Alvarado, R. Carolina Medina-Ramirez and Ricardo Marcelin-Jimenez. * Towards a Graphical Notation for OWL 2, Elisa Kendall, Roy Bell, Roger Burkhart, Mark Dutra and Evan Wallace.
  7. Boeuf, P. le: Functional Requirements for Bibliographic Records (FRBR) : hype or cure-all (2005) 0.00
    
    Footnote
    What is, after all, the FRBR model? The question is asked in the subtitle itself: is it "hype or cure-all"? It certainly is the talk of the day in libraries and similar institutions, a very popular topic for professional meetings, a challenging task for system vendors and food for thought for scholars both in terminology and in content. As for the solutions it offers, they enable simplified and more structured catalogues of large collections and perhaps easier ways of cataloguing resources of many different types. Once the model is implemented in catalogues, the benefits will be felt both on the librarian's side and on the end user's side. According to Patrick Le Boeuf, the model is a beginning, and there are two directions for its development as the authors of the articles imply: the first, oriented to the configuration of FRANAR or FRAR; the second, oriented to what has already been established and defined as FRSAR (Functional Requirements for Subject Authority Records). The latter is meant to build a conceptual model for Group 3 entities within the FRBR framework related to the aboutness of the work and assist in an assessment of the potential for international sharing and use of subject authority data both within the library sector and beyond. A third direction, not present in the work considered, yet mentioned by the editor, is oriented towards the development of "the CIDOC CRM semantic model for cultural heritage information in museums and assimilated institutions" (p. 6). By merging the FRBR working group with the CIDOC CRM Special Interest Group, a FRBR/CRM Harmonization Group has been created, its scope being the "translation" of FRBR into object-oriented formalism. The work under review is the expected and welcome completion of the FRBR Final Report of 1998, addressing librarians, library science teaching staff, students, and library system vendors, a comprehensive source of information on theoretical aspects and practical application of the FRBR conceptual model. A good companion clarifying many FRBR issues, the collection is remarkably well structured and offers a step-by-step insight into the model. An additional feature of the work is the very helpful index at the back of the book, providing easy access to the main topics discussed."
  8. Exploring artificial intelligence in the new millennium (2003) 0.00
    
    Footnote
    The book does achieve its aim of being a starting point for someone interested in the state of some areas of AI research at the beginning of the new millennium. The book's most irritating feature is the different writing styles of the authors. The book is organized as a collection of papers similar to a typical graduate survey course packet, and as a result the book does not possess a narrative flow. The book also contains a number of other major weaknesses, such as the lack of an introductory or concluding chapter. The book could greatly benefit from an introductory chapter that would introduce readers to the areas of AI, explain why such a book is needed, and explain why each author's research is important. As it stands, the book handles these issues only in a preface that discusses them superficially. Such an introductory chapter could also be used to expound on what level of AI mathematical and statistical knowledge is expected from readers in order to gain maximum benefit from this book. A concluding chapter would be useful to readers interested in the other areas of AI not covered by the book, as well as open issues common to all of the research presented. In addition, most of the contributors come exclusively from the computer science field, which heavily slants the work toward the computer science community. A great deal of the research presented is being used by a number of research communities outside of computer science, such as biotechnology and information technology. A wider audience for this book could have been achieved by including a more diverse range of authors showing the interdisciplinary nature of many of these fields. Also, the book's editors state, "The reader is expected to have basic knowledge of AI at the level of an introductory course to the field" (p. vii), which is not the case for this book. Readers need at least a strong familiarity with many of the core concepts within AI, because a number of the chapters are shallow and terse in their historical overviews. Overall, this book would be a useful tool for a professor putting together a survey course on AI research. Most importantly, the book would be useful for eager graduate students in need of a starting point for their thesis research. This book is best suited as a reference guide to be used by individuals with a strong familiarity with AI."
