Search (31190 results, page 1560 of 1560)

  1. Brand, A.: CrossRef turns one (2001) 0.00
    9.8665434E-5 = product of:
      0.0014799815 = sum of:
        0.0014799815 = product of:
          0.002959963 = sum of:
            0.002959963 = weight(_text_:information in 1222) [ClassicSimilarity], result of:
              0.002959963 = score(doc=1222,freq=2.0), product of:
                0.050870337 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.028978055 = queryNorm
                0.058186423 = fieldWeight in 1222, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1222)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
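The explanation tree above appears to be Lucene "explain" output for the classic TF-IDF similarity. A minimal sketch reproducing its arithmetic, assuming Lucene's ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), with the coord factors applied afterwards):

```python
import math

def classic_similarity(freq, doc_freq, max_docs, query_norm, field_norm):
    """Reproduce one leaf of a Lucene ClassicSimilarity explanation tree."""
    tf = math.sqrt(freq)                             # 1.4142135 for freq=2.0
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 1.7554779 here
    query_weight = idf * query_norm                  # 0.050870337
    field_weight = tf * idf * field_norm             # 0.058186423
    return query_weight * field_weight               # 0.002959963

raw = classic_similarity(2.0, 20772, 44218, 0.028978055, 0.0234375)
final = raw * 0.5 * (1.0 / 15.0)  # coord(1/2), then coord(1/15)
print(final)  # ~9.8665434e-05, the document score shown above
```

The same leaf formula, with different freq and fieldNorm values, accounts for every score on this page.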
    
    Abstract
    CrossRef, the only full-blown application of the Digital Object Identifier (DOI®) System to date, is now a little over a year old. What started as a cooperative effort among publishers and technologists to prototype DOI-based linking of citations in e-journals evolved into an independent, non-profit enterprise in early 2000. We have made considerable headway during our first year, but there is still much to be done. When CrossRef went live with its collaborative linking service last June, it had enabled reference links in roughly 1,100 journals from a member base of 33 publishers, using a functional prototype system. The DOI-X prototype was described in an article published in D-Lib Magazine in February of 2000. On the occasion of CrossRef's first birthday as a live service, this article provides a non-technical overview of our progress to date and the major hurdles ahead. The electronic medium enriches the research literature arena for all players -- researchers, librarians, and publishers -- in numerous ways. Information has been made easier to discover, to share, and to sell. To take a simple example, the aggregation of book metadata by electronic booksellers was a huge boon to scholars seeking out obscure backlist titles, or discovering books they would never otherwise have known to exist. It was equally a boon for the publishers of those books, who saw an unprecedented surge in sales of backlist titles with the advent of centralized electronic bookselling. In the serials sphere, even in spite of price increases and the turmoil surrounding site licenses for some prime electronic content, libraries overall are now able to offer more content to more of their patrons. Yet undoubtedly, the key enrichment for academics and others navigating a scholarly corpus is linking, and in particular the linking that takes the reader out of one document and into another in the matter of a click or two. 
Since references are how authors make explicit the links between their work and precedent scholarship, what could be more fundamental to the reader than making those links immediately actionable? That said, automated linking is only really useful from a research perspective if it works across publications and across publishers. Not only do academics think about their own writings and those of their colleagues in terms of "author, title, rough date" -- the name of the journal itself is usually not high on the list of crucial identifying features -- but they are oblivious as to the identity of the publishers of all but their very favorite books and journals.
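Since the actionable links described above rest on the DOI, the mechanics can be sketched in a few lines: a DOI becomes a clickable link by prefixing the central resolver. The helper below is an illustrative sketch, not CrossRef code, and the DOI value is hypothetical:

```python
from urllib.parse import quote

def doi_to_url(doi: str) -> str:
    """Make a DOI actionable as a link via the central doi.org resolver.
    DOIs may contain characters that need percent-encoding in a URL path."""
    return "https://doi.org/" + quote(doi, safe="/")

# Hypothetical DOI for illustration only:
print(doi_to_url("10.1234/example(2001)1"))
```

Resolving such a URL redirects the reader to whatever location the publisher has registered for that DOI, which is what makes cross-publisher reference linking possible.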
  2. OWLED 2009; OWL: Experiences and Directions, Sixth International Workshop, Chantilly, Virginia, USA, 23-24 October 2009, Co-located with ISWC 2009. (2009) 0.00
    
    Content
Short Papers:
* A Database Backend for OWL, Jörg Henss, Joachim Kleb and Stephan Grimm
* Unifying SysML and OWL, Henson Graves
* The OWLlink Protocol, Thorsten Liebig, Marko Luther and Olaf Noppens
* A Reasoning Broker Framework for OWL, Juergen Bock, Tuvshintur Tserendorj, Yongchun Xu, Jens Wissmann and Stephan Grimm
* Change Representation For OWL 2 Ontologies, Raul Palma, Peter Haase, Oscar Corcho and Asunción Gómez-Pérez
* Practical Aspects of Query Rewriting for OWL 2, Héctor Pérez-Urbina, Ian Horrocks and Boris Motik
* CSage: Use of a Configurable Semantically Attributed Graph Editor as Framework for Editing and Visualization, Lawrence Levin
* A Conformance Test Suite for the OWL 2 RL/RDF Rules Language and the OWL 2 RDF-Based Semantics, Michael Schneider and Kai Mainzer
* Improving the Data Quality of Relational Databases using OBDA and OWL 2 QL, Olivier Cure
* Temporal Classes and OWL, Natalya Keberle
* Using Ontologies for Medical Image Retrieval - An Experiment, Jasmin Opitz, Bijan Parsia and Ulrike Sattler
* Task Representation and Retrieval in an Ontology-Guided Modelling System, Yuan Ren, Jens Lemcke, Andreas Friesen, Tirdad Rahmani, Srdjan Zivkovic, Boris Gregorcic, Andreas Bartho, Yuting Zhao and Jeff Z. Pan
* A platform for reasoning with OWL-EL knowledge bases in a Peer-to-Peer environment, Alexander De Leon and Michel Dumontier
* Axiomé: a Tool for the Elicitation and Management of SWRL Rules, Saeed Hassanpour, Martin O'Connor and Amar Das
* SQWRL: A Query Language for OWL, Martin O'Connor and Amar Das
* Classifying ELH Ontologies In SQL Databases, Vincent Delaitre and Yevgeny Kazakov
* A Semantic Web Approach to Represent and Retrieve Information in a Corporate Memory, Ana B. Rios-Alvarado, R. Carolina Medina-Ramirez and Ricardo Marcelin-Jimenez
* Towards a Graphical Notation for OWL 2, Elisa Kendall, Roy Bell, Roger Burkhart, Mark Dutra and Evan Wallace
  3. Evens, M.W.: Natural language interface for an expert system (2002) 0.00
    
    Source
    Encyclopedia of library and information science. Vol.71, [=Suppl.34]
  4. Boeuf, P. le: Functional Requirements for Bibliographic Records (FRBR) : hype or cure-all (2005) 0.00
    
    Footnote
What is, after all, the FRBR model? The question is asked in the subtitle itself: is it "hype or cure-all"? It certainly is the talk of the day in libraries and similar institutions, a very popular topic for professional meetings, a challenging task for system vendors, and food for thought for scholars, both in terminology and in content. As for the solutions it offers, they enable simplified and more structured catalogues of large collections, and perhaps easier ways of cataloguing resources of many different types. Once implemented in catalogues, the benefits will accrue both to the librarian and to the end user. According to Patrick Le Boeuf, the model is a beginning, and the authors of the articles imply two directions for its development: the first oriented to the configuration of FRANAR or FRAR, the second to what has already been established and defined as FRSAR (Functional Requirements for Subject Authority Records). The latter is meant to build a conceptual model for Group 3 entities within the FRBR framework, related to the aboutness of the work, and to assist in assessing the potential for international sharing and use of subject authority data both within the library sector and beyond. A third direction, not present in the work considered yet mentioned by the editor, is oriented towards the development of "the CIDOC CRM semantic model for cultural heritage information in museums and assimilated institutions" (p. 6). By merging the FRBR working group with the CIDOC CRM Special Interest Group, a FRBR/CRM Harmonization Group has been created, its scope being the "translation" of FRBR into object-oriented formalism. The work under review is the expected and welcome completion of the FRBR Final Report of 1998, addressing librarians, library science teaching staff, students, and library system vendors; it is a comprehensive source of information on the theoretical aspects and practical application of the FRBR conceptual model.
A good companion clarifying many FRBR issues, the collection is remarkably well structured and offers a step-by-step insight into the model. An additional feature of the work is the very helpful index at the back of the book, providing easy access to the main topics discussed."
  5. Koch, C.: Consciousness : confessions of a romantic reductionist (2012) 0.00
    
    Content
* In which I introduce the ancient mind-body problem, explain why I am on a quest to use reason and empirical inquiry to solve it, acquaint you with Francis Crick, explain how he relates to this quest, make a confession, and end on a sad note
* In which I write about the wellsprings of my inner conflict between religion and reason, why I grew up wanting to be a scientist, why I wear a lapel pin of Professor Calculus, and how I acquired a second mentor late in life
* In which I explain why consciousness challenges the scientific view of the world, how consciousness can be investigated empirically with both feet firmly planted on the ground, why animals share consciousness with humans, and why self-consciousness is not as important as many people think it is
* In which you hear tales of scientist-magicians that make you look but not see, how they track the footprints of consciousness by peering into your skull, why you don't see with your eyes, and why attention and consciousness are not the same
* In which you learn from neurologists and neurosurgeons that some neurons care a great deal about celebrities, that cutting the cerebral cortex in two does not reduce consciousness by half, that color is leached from the world by the loss of a small cortical region, and that the destruction of a sugar cube-sized chunk of brain stem or thalamic tissue leaves you undead
* In which I defend two propositions that my younger self found nonsense: you are unaware of most of the things that go on in your head, and zombie agents control much of your life, even though you confidently believe that you are in charge
* In which I throw caution to the wind, bring up free will, Der Ring des Nibelungen, and what physics says about determinism, explain the impoverished ability of your mind to choose, show that your will lags behind your brain's decision, and that freedom is just another word for feeling
* In which I argue that consciousness is a fundamental property of complex things, rhapsodize about integrated information theory, how it explains many puzzling facts about consciousness and provides a blueprint for building sentient machines
* In which I outline an electromagnetic gadget to measure consciousness, describe efforts to harness the power of genetic engineering to track consciousness in mice, and find myself building cortical observatories
* In which I muse about final matters considered off-limits to polite scientific discourse: to wit, the relationship between science and religion, the existence of God, whether this God can intervene in the universe, the death of my mentor, and my recent tribulations.
    Footnote
Now it might seem that this is a fairly well-defined scientific task: just figure out how the brain does it. In the end I think that is the right attitude to have. But our peculiar history makes it difficult to have exactly that attitude - to take consciousness as a biological phenomenon like digestion or photosynthesis, and figure out how exactly it works as a biological phenomenon. Two philosophical obstacles cast a shadow over the whole subject. The first is the tradition of God, the soul, and immortality. Consciousness is not a part of the ordinary biological world of digestion and photosynthesis: it is part of a spiritual world. It is sometimes thought to be a property of the soul, and the soul is definitely not a part of the physical world. The other tradition, almost as misleading, is a certain conception of Science with a capital "S." Science is said to be "reductionist" and "materialist," and so construed there is no room for consciousness in Science. If it really exists, consciousness must really be something else. It must be reducible to something else, such as neuron firings, computer programs running in the brain, or dispositions to behavior. There are also a number of purely technical difficulties to neurobiological research. The brain is an extremely complicated mechanism with about a hundred billion neurons in ... (remainder not freely available)." [https://www.nybooks.com/articles/2013/01/10/can-information-theory-explain-consciousness/]
  6. Dodge, M.: What does the Internet look like, Jellyfish perhaps? : Exploring a visualization of the Internet by Young Hyun of CAIDA (2001) 0.00
    
    Content
"The Internet is often likened to an organic entity, and this analogy seems particularly appropriate in the light of some striking new visualizations of the complex mesh of Internet pathways. The images are results of a new graph visualization tool, code-named Walrus, being developed by researcher Young Hyun at the Cooperative Association for Internet Data Analysis (CAIDA) [1]. Although Walrus is still in the early days of development, I think these preliminary results are some of the most intriguing and evocative images of the Internet's structure that we have seen in the last year or two. A few years back I spent an enjoyable afternoon at the Monterey Bay Aquarium, and I particularly remember a stunning exhibit of jellyfish, which were illuminated with UV light to show their incredibly delicate organic structures, gently pulsing in tanks of inky black water. Jellyfish are some of the strangest, most alien, and yet most beautiful living creatures [2]. Having looked at the Walrus images I began to wonder: perhaps the backbone networks of the Internet look like jellyfish? The image above is a screengrab of a Walrus visualization of a huge graph. The graph data in this particular example depicts Internet topology, as measured by CAIDA's skitter monitor [3] based in London, showing 535,000-odd Internet nodes and over 600,000 links. The nodes, represented by the yellow dots, are a large sample of computers from across the whole range of Internet addresses. Walrus is an interactive visualization tool that allows the analyst to view massive graphs from any position. The graph is projected inside a 3D sphere using a special kind of space based on hyperbolic geometry. This is a non-Euclidean space, which has the useful distorting property of making elements at the center of the display much larger than those on the periphery.
You interact with the graph in Walrus by selecting a node of interest, which is smoothly moved into the center of the display, and that region of the graph becomes greatly enlarged, enabling you to focus on the fine detail. Yet the rest of the graph remains visible, providing valuable context for the overall structure. (There are some animations available on the website showing Walrus graphs being moved, which give some sense of what this is like.) Hyperbolic space projection is commonly known as "focus+context" in the field of information visualization and has been used to display all kinds of data that can be represented as large graphs in either two or three dimensions [4]. It can be thought of as a moveable fish-eye lens. The Walrus visualization tool draws much from the hyperbolic research by Tamara Munzner [5] as part of her PhD at Stanford. (Map of the Month examined some of Munzner's work from 1996 in an earlier article, Internet Arcs Around The Globe.) Walrus is being developed as a general-purpose visualization tool able to cope with massive directed graphs, on the order of a million nodes. Providing useful and interactively usable visualization of such large volumes of graph data is a tough challenge, and is particularly apposite to the task of mapping Internet backbone infrastructures. In a recent email Map of the Month asked Walrus developer Young Hyun what had been the hardest part of the project thus far. "The greatest difficulty was in determining precisely what Walrus should be about," said Hyun. Crucially, "... we had to face the question of what it means to visualize a large graph. It would defeat the aim of a visualization to overload a user with the large volume of data that is likely to be associated with a large graph." I think the preliminary results available show that Walrus is heading in the right direction in tackling these challenges.
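Walrus itself works in 3-D hyperbolic space; as a loose 2-D analogue of the "moveable fish-eye lens" idea described above, one might sketch a focus+context distortion like this (the function and its parameter k are hypothetical illustrations, not Walrus code):

```python
import math

def fisheye(points, focus, k=2.0):
    """Toy 2-D focus+context distortion: distances from the focus are
    remapped with tanh, so nodes near the focus are magnified while the
    rest of the graph is compressed toward the unit circle yet stays
    visible, preserving overall context."""
    out = []
    for x, y in points:
        dx, dy = x - focus[0], y - focus[1]
        r = math.hypot(dx, dy)
        if r == 0.0:
            out.append((0.0, 0.0))
            continue
        r_new = math.tanh(k * r)  # maps [0, inf) onto [0, 1)
        out.append((dx / r * r_new, dy / r * r_new))
    return out

# A node close to the focus is enlarged; a distant one is squeezed
# toward (but never past) the edge of the display.
near, far = fisheye([(0.1, 0.0), (5.0, 0.0)], focus=(0.0, 0.0))
```

Re-centering on a different node, as Walrus does when you select one, amounts to re-running the same mapping with a new focus.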
  7. Slavic, A.: Mapping intricacies : UDC to DDC (2010) 0.00
    
    Content
Another challenge appears when, e.g., mapping Dewey class 890 Literatures of other specific languages and language families, which does not make sense in UDC, in which all languages and literatures have equal status. Standard UDC schedules make no selection of preferred literatures versus other literatures. In principle, UDC does not allow classes entitled 'others' which do not have defined semantic content. If entities are subdivided and there is no provision for an item outside the listed subclasses, then this item is subsumed under a top class or a broader class where all unspecified or general members of that class may be expected. If specification is needed, this can be devised by adding an alphabetical extension to the broader class. Here we have to find and list in the UDC Summary all literatures that are 'unpreferred', i.e. lumped into the 890 classes, and map them again as a many-to-one specific-to-broader match. The example below illustrates another interesting case. Dewey class 061 and UDC class 06 cover roughly the same semantic field, but in the subdivision the Dewey Summaries list combinations of subject and place and, as an enumerative classification, provide ready-made numbers for the combinations of place that are most common in an average (American?) library. This is a frequent approach in schemes created with the physical book arrangement, i.e. library shelves, in mind. UDC, designed as an indexing language for information retrieval, keeps subject and place in separate tables and allows any concept of place, such as (7) North America, to be used in combination with any subject, as these may coincide in documents. Thus combinations such as Newspapers in North America, or Organizations in North America, would not be offered as ready-made combinations. There is no selection of 'preferred' or 'most needed' countries, languages, or cultures in the standard UDC edition: <Table>
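The synthetic combination described above is mechanical: UDC appends a common auxiliary of place, written in parentheses, to any main number. A minimal sketch, using the classes from the example in the text (the helper function itself is hypothetical):

```python
def udc_compound(subject: str, place: str) -> str:
    """Combine a UDC main number with a common auxiliary of place,
    which UDC notates in parentheses, e.g. (7) = North America."""
    return f"{subject}({place})"

# Organizations (06) in North America (7):
print(udc_compound("06", "7"))  # -> 06(7)
```

Because any subject can be paired with any place at indexing time, no enumerated list of "most needed" combinations is required, which is precisely the contrast with the Dewey Summaries drawn above.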
  8. Dahlberg, I.: How to improve ISKO's standing : ten desiderata for knowledge organization (2011) 0.00
    
    Content
6. Establishment of national Knowledge Organization Institutes should be scheduled by national chapters, planned energetically, and submitted to the corresponding administrative authorities for support. They could be attached to research institutions, e.g., the Max Planck or Fraunhofer Institutes in Germany, or to universities. Their scope and research areas relate to the elaboration of knowledge systems of subject-related concepts, according to Desideratum 1, and may be connected to training activities and KO subject-related research work. 7. ISKO experts should not let themselves be overawed by the Internet and computer science, but should demonstrate their expertise more actively in the public sphere. They should aim to take a leading part in the ISKO secretariats and the KO institutes, and act as consultants and informants, as well as editors of statistics and other publications. 8. All colleagues trained in the field of classification/indexing and thesaurus construction and active in different countries should be identified and approached for membership in ISKO. This would have to be accomplished by the General Secretariat with the collaboration of the experts in the secretariats of the different countries, as soon as they start to work. The more members ISKO has, the greater its reputation and influence will be. But it will also prove its professionalism by the quality of its products, especially its innovative conceptual order systems to come. 9. ISKO should - especially in view of global expansion - intensify the promotion of knowledge about its own subject area through the publications mentioned here and further publications as deemed necessary. It should be made clear that, especially in ISKO's own publications, professional subject indexes are a sine qua non. 10.
1) Knowledge Organization, having arisen from librarianship and documentation, and having contents with many points of contact with numerous application fields, should - although still linked with its areas of descent - be recognized in the long run as an independent, autonomous discipline located under the science of science, since only thereby can it fully play its role as an equal partner in all application fields; and, 2) An "at-a-first-glance knowledge order" could be implemented through the Information Coding Classification (ICC), as this system is based on an entirely new approach, namely on general object areas, thus deviating from the discipline-oriented main classes of the current universal classification systems. It can therefore recover, through a simple on-screen display, the hitherto lost overview of all knowledge areas and fields. At one look, one perceives 9 object areas subdivided into 9 aspects, which break down into 81 subject areas with their 729 subject fields, including further special fields. The synthesis and ordering of all knowledge thus become evident at a glance to everybody. Nobody would any longer be irritated by the abundance of singular, apparently unrelated knowledge fields, or become hesitant in his or her understanding of the world.
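The 9/81/729 layering claimed for the ICC is simple positional arithmetic: nine object areas, each split into nine aspects, each split nine ways again. A sketch enumerating the counts (the digit codes are illustrative placeholders, not actual ICC notation):

```python
# Nine object areas; each x9 aspects -> 81 subject areas;
# each of those x9 again -> 729 subject fields.
object_areas = [str(a) for a in range(1, 10)]
subject_areas = [a + str(b) for a in object_areas for b in range(1, 10)]
subject_fields = [sa + str(c) for sa in subject_areas for c in range(1, 10)]
print(len(object_areas), len(subject_areas), len(subject_fields))  # 9 81 729
```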
  9. Exploring artificial intelligence in the new millennium (2003) 0.00
    
    Footnote
The book does achieve its aim of being a starting point for someone interested in the state of some areas of AI research at the beginning of the new millennium. The book's most irritating feature is the differing writing styles of the authors. The book is organized as a collection of papers, similar to a typical graduate survey course packet, and as a result it does not possess a narrative flow. The book also has a number of other major weaknesses, such as the lack of an introductory or concluding chapter. It could greatly benefit from an introductory chapter that would introduce readers to the areas of AI, explain why such a book is needed, and explain why each author's research is important. The manner in which the book currently handles these issues is a preface that addresses some of them superficially. Such an introductory chapter could also be used to expound on what level of mathematical and statistical knowledge of AI is expected from readers in order to gain maximum benefit from this book. A concluding chapter would be useful to readers interested in the other areas of AI not covered by the book, as well as in open issues common to all of the research presented. In addition, most of the contributors come exclusively from the computer science field, which heavily slants the work toward the computer science community. A great deal of the research presented is being used by research communities outside of computer science, such as biotechnology and information technology. A wider audience for this book could have been achieved by including a more diverse range of authors, showing the interdisciplinary nature of many of these fields. The book's editors also state, "The reader is expected to have basic knowledge of AI at the level of an introductory course to the field" (p. vii), which is not the case for this book.
Readers need at least a strong familiarity with many of the core concepts within AI, because a number of the chapters are shallow and terse in their historical overviews. Overall, this book would be a useful tool for a professor putting together a survey course on AI research. Most importantly, the book would be useful for eager graduate students in need of a starting point for their thesis research. This book is best suited as a reference guide for individuals with a strong familiarity with AI."
  10. Gonzalez, L.: What is FRBR? (2005) 0.00
    
    Content
National FRBR experiments
The larger the bibliographic database, the greater the effect of "FRBR-like" design in reducing the appearance of duplicate records. LC, RLG, and OCLC, all influenced by FRBR, are experimenting with the redesign of their databases. LC's Network Development and MARC Standards Office has posted at its web site the results of some of its investigations into FRBR and MARC, including possible display options for bibliographic information. The design of RLG's public catalog, RedLightGreen, has been described as "FRBR-ish" by Merrilee Proffitt, RLG's program officer. If you try a search for a prolific author or much-published title in RedLightGreen, you'll probably find that the display of search results is much different from what you would expect. OCLC Research has developed a prototype "FRBRized" database for fiction, OCLC FictionFinder. Try a title search for a classic title like Romeo and Juliet and observe that OCLC includes, in the initial display of results (described as "works"), a graphic indicator (stars, ranging from one to five). These show in rough terms how many libraries own the work - Romeo and Juliet clearly gets a five. Indicators like this are something resource-sharing staff can consider an "ILL quality rating." If you're intrigued by FRBR's possibilities and what they could mean for resource-sharing workflow, start talking. Now is the time to connect with colleagues, your local and/or consortial system vendor, RLG, OCLC, and your professional organizations. Have input into how systems develop in the FRBR world."
