Search (26237 results, page 1312 of 1312)

  1. Exploring artificial intelligence in the new millennium (2003) 0.00
    6.9088055E-5 = product of:
      0.0020035536 = sum of:
        0.0020035536 = product of:
          0.004007107 = sum of:
            0.004007107 = weight(_text_:1 in 2099) [ClassicSimilarity], result of:
              0.004007107 = score(doc=2099,freq=4.0), product of:
                0.05219918 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.02124939 = queryNorm
                0.07676571 = fieldWeight in 2099, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2099)
          0.5 = coord(1/2)
      0.03448276 = coord(1/29)
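The indented breakdown above is Lucene's ClassicSimilarity (TF-IDF) explanation output. A minimal Python sketch can reproduce its arithmetic from the stated factors; the constants are taken from the tree itself, and coord(1/2) and coord(1/29) mean that one of two clauses and one of twenty-nine query terms matched:

```python
import math

# Lucene ClassicSimilarity (TF-IDF) building blocks, as named in the tree.
def tf(freq):
    """Term-frequency factor: sqrt of the raw term count."""
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    """Inverse document frequency: 1 + ln(maxDocs / (docFreq + 1))."""
    return 1.0 + math.log(max_docs / (doc_freq + 1))

# Constants quoted in the explanation for doc 2099.
QUERY_NORM = 0.02124939
FIELD_NORM = 0.015625          # encodes the field's length normalization

i = idf(10304, 44218)          # ~2.4565027, as in the tree
query_weight = i * QUERY_NORM              # ~0.05219918
field_weight = tf(4.0) * i * FIELD_NORM    # 2.0 * idf * fieldNorm, ~0.07676571
raw = query_weight * field_weight          # ~0.004007107

# coord(1/2): one of two clauses matched; coord(1/29): one of 29 query terms.
score = raw * (1 / 2) * (1 / 29)
print(f"{score:.7E}")          # ~6.9088055E-5, the entry's total score
```

The same arithmetic, with tf(2.0) = 1.4142135 in place of tf(4.0) = 2.0, yields the 6.106579E-5 scores of the freq=2.0 entries below.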
    
    Footnote
    Rez. in: JASIST 55(2004) no.2, S.180-181 (J. Walker): "My initial reaction to this book was that it would be a useful tool for researchers and students outside of the computer science community who would like a primer on some of the many specialized research areas of artificial intelligence (AI). The book's authors note that over the last couple of decades the AI community has seen significant growth and suffers from a great deal of fragmentation. Someone trying to survey some of the most important research literature from the community would find it difficult to navigate the enormous amount of materials: journal articles, conference papers, and technical reports. There is a genuine need for a book such as this one that attempts to connect the numerous research pieces into a coherent reference source for students and researchers. The papers contained within the text were selected from the International Joint Conference on AI 2001 (IJCAI-2001). The preface warns that it is not an attempt to create a comprehensive book on the numerous areas of research in AI or its subfields, but instead is a reference source for individuals interested in the current state of some research areas within AI in the new millennium. Chapter 1 of the book surveys major robot mapping algorithms; it opens with a brilliant historical overview of robot mapping and a discussion of the most significant problems that exist in the field, with a focus on indoor navigation. The major approaches surveyed are the Kalman filter and an alternative to the Kalman, expectation maximization. Sebastian Thrun examines how all modern approaches to robotic mapping are probabilistic in nature. In addition, the chapter concludes with a very insightful discussion of what research issues still exist in the robotic mapping community, specifically in the area of indoor navigation. The second chapter contains very interesting research on developing digital characters based on the lessons learned from dog behavior. 
The chapter begins similarly to chapter one in that the reasoning and history of such research are presented in an insightful and concise manner. Bruce M. Blumberg takes his readers on a tour of why developing digital characters in this manner is important by showing how they benefit from the modeling of dog training patterns, and transparently demonstrates how these behaviors are emulated.
    Isbn
    1-55860-811-7
  2. Lambe, P.: Organising knowledge : taxonomies, knowledge and organisational effectiveness (2007) 0.00
    6.9088055E-5 = product of:
      0.0020035536 = sum of:
        0.0020035536 = product of:
          0.004007107 = sum of:
            0.004007107 = weight(_text_:1 in 1804) [ClassicSimilarity], result of:
              0.004007107 = score(doc=1804,freq=4.0), product of:
                0.05219918 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.02124939 = queryNorm
                0.07676571 = fieldWeight in 1804, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1804)
          0.5 = coord(1/2)
      0.03448276 = coord(1/29)
    
    Isbn
    1-84334-227-8 (pb)
    1-84334-228-6 (hb)
  3. Hjoerland, B.; Hartel, J.: Introduction to a Special Issue of Knowledge Organization (2003) 0.00
    6.106579E-5 = product of:
      0.0017709078 = sum of:
        0.0017709078 = product of:
          0.0035418156 = sum of:
            0.0035418156 = weight(_text_:1 in 3013) [ClassicSimilarity], result of:
              0.0035418156 = score(doc=3013,freq=2.0), product of:
                0.05219918 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.02124939 = queryNorm
                0.06785194 = fieldWeight in 3013, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3013)
          0.5 = coord(1/2)
      0.03448276 = coord(1/29)
    
    Abstract
    It is with very great pleasure that we introduce this special issue of Knowledge Organization on Domain Analysis (DA). Domain analysis is an approach to information science (IS) that emphasizes the social, historical, and cultural dimensions of information. It asserts that collective fields of knowledge, or "domains," form the unit of analysis of information science (IS). DA, elsewhere referred to as a sociocognitive (Hjoerland, 2002b; Jacob & Shaw, 1998) or collectivist (Talja et al., 2004) approach, is one of the major metatheoretical perspectives available to IS scholars to orient their thinking and research. DA's focus on domains stands in contrast to the alternative metatheories of cognitivism and information systems, which direct attention to psychological processes and technological processes, respectively. The first comprehensive international formulation of DA as an explicit point of view was Hjoerland and Albrechtsen (1995). However, a concern for information in the context of a community can be traced back to American library historian and visionary Jesse Shera, and is visible a century ago in the earliest practices of special librarians and European documentalists. More recently, Hjoerland (1998) produced a domain analytic study of the field of psychology; Jacob and Shaw (1998) made an important interpretation and historical review of DA; while Hjoerland (2002a) offered a seminal formulation of eleven approaches to the study of domains, receiving the ASLIB 2003 Award. Fjordback Soendergaard, Andersen and Hjoerland (2003) suggested an approach based on an updated version of the UNISIST model of scientific communication. In fall 2003, under the conference theme of "Humanizing Information Technology," DA was featured in a keynote address at the annual meeting of the American Society for Information Science and Technology (Hjoerland, 2004). These publications and events are evidence of growth in representation of the DA view. 
To date, informal criticism of domain analysis has followed two tracks: firstly, that DA assumes its communities to be academic in nature, leaving much of human experience unexplored; secondly, that there is a lack of case studies illustrating the methods of domain analytic empirical research. Importantly, this special collection marks progress by addressing both issues. In the articles that follow, domains are perceived to be hobbies, professions, and realms of popular culture. Further, other papers serve as models of different ways to execute domain analytic scholarship, whether through traditional empirical methods, or historical and philosophical techniques. Eleven authors have contributed to this special issue, and their backgrounds reflect the diversity of interest in DA. Contributors come from North America, Europe, and the Middle East. Academics from leading research universities are represented. One writer is newly retired, several are in their heyday as scholars, and some are doctoral students just entering this field. This range of perspectives enriches the collection. The first two papers in this issue are invited papers and are, in our opinion, very important. Anders Oerom was a senior lecturer at the Royal School of Library and Information Science in Denmark, Aalborg Branch. He retired from this position on March 1, 2004, and this paper is his last contribution in this position. We are grateful that he took the time to complete "Knowledge Organization in the Domain of Art Studies - History, Transition and Conceptual Changes" in spite of many other duties. Versions of the paper have previously been presented at a Ph.D. course in knowledge organization, and related versions have been published in Danish and Spanish. In many respects, it represents a model of how a domain could, or should, be investigated from the DA point of view.
  4. Information systems and the economies of innovation (2003) 0.00
    6.106579E-5 = product of:
      0.0017709078 = sum of:
        0.0017709078 = product of:
          0.0035418156 = sum of:
            0.0035418156 = weight(_text_:1 in 3586) [ClassicSimilarity], result of:
              0.0035418156 = score(doc=3586,freq=2.0), product of:
                0.05219918 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.02124939 = queryNorm
                0.06785194 = fieldWeight in 3586, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3586)
          0.5 = coord(1/2)
      0.03448276 = coord(1/29)
    
    Isbn
    1-84376-018-5
  5. Spitzer, K.L.; Eisenberg, M.B.; Lowe, C.A.: Information literacy : essential skills for the information age (2004) 0.00
    6.106579E-5 = product of:
      0.0017709078 = sum of:
        0.0017709078 = product of:
          0.0035418156 = sum of:
            0.0035418156 = weight(_text_:1 in 3686) [ClassicSimilarity], result of:
              0.0035418156 = score(doc=3686,freq=2.0), product of:
                0.05219918 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.02124939 = queryNorm
                0.06785194 = fieldWeight in 3686, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3686)
          0.5 = coord(1/2)
      0.03448276 = coord(1/29)
    
    Isbn
    1-59158-143-5
  6. Antoniou, G.; Harmelen, F. van: ¬A semantic Web primer (2004) 0.00
    6.106579E-5 = product of:
      0.0017709078 = sum of:
        0.0017709078 = product of:
          0.0035418156 = sum of:
            0.0035418156 = weight(_text_:1 in 468) [ClassicSimilarity], result of:
              0.0035418156 = score(doc=468,freq=2.0), product of:
                0.05219918 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.02124939 = queryNorm
                0.06785194 = fieldWeight in 468, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=468)
          0.5 = coord(1/2)
      0.03448276 = coord(1/29)
    
    Date
    1. 2.1997 9:16:32
  7. Kaiser, M.; Lieder, H.J.; Majcen, K.; Vallant, H.: New ways of sharing and using authority information : the LEAF project (2003) 0.00
    6.106579E-5 = product of:
      0.0017709078 = sum of:
        0.0017709078 = product of:
          0.0035418156 = sum of:
            0.0035418156 = weight(_text_:1 in 1166) [ClassicSimilarity], result of:
              0.0035418156 = score(doc=1166,freq=2.0), product of:
                0.05219918 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.02124939 = queryNorm
                0.06785194 = fieldWeight in 1166, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1166)
          0.5 = coord(1/2)
      0.03448276 = coord(1/29)
    
    Abstract
    This article presents an overview of the LEAF project (Linking and Exploring Authority Files)1, which has set out to provide a framework for international, collaborative work in the sector of authority data with respect to authority control. Elaborating the virtues of authority control in today's Web environment is an almost futile exercise, since so much has been said and written about it in the last few years.2 The World Wide Web is generally understood to be poorly structured - both with regard to content and to locating required information. Highly structured databases might be viewed as small islands of precision within this chaotic environment. Though the Web in general or any particular structured database would greatly benefit from increased authority control, it should be noted that our following considerations only refer to authority control with regard to databases of "memory institutions" (i.e., libraries, archives, and museums). Moreover, when talking about authority records, we exclusively refer to personal name authority records that describe a specific person. Although different types of authority records could indeed be used in similar ways to the ones presented in this article, discussing those different types is outside the scope of both the LEAF project and this article. Personal name authority records - as are all other "authorities" - are maintained as separate records and linked to various kinds of descriptive records. Name authority records are usually either kept in independent databases or in separate tables in the database containing the descriptive records. This practice points to a crucial benefit: by linking any number of descriptive records to an authorized name record, the records related to this entity are collocated in the database. Variant forms of the authorized name are referenced in the authority records and thus ensure the consistency of the database while enabling search and retrieval operations that produce accurate results. 
On one hand, authority control may be viewed as a positive prerequisite of a consistent catalogue; on the other, the creation of new authority records is a very time-consuming and expensive undertaking. As a consequence, various models of providing access to existing authority records have emerged: the Library of Congress and the French National Library (Bibliothèque nationale de France), for example, make their authority records available to all via a web-based search service.3 In Germany, the Personal Name Authority File (PND, Personennamendatei4) maintained by the German National Library (Die Deutsche Bibliothek, Frankfurt/Main) offers a different approach to shared access: within a closed network, participating institutions have online access to their pooled data. The number of recent projects and initiatives that have addressed the issue of authority control in one way or another is considerable.5 Two important current initiatives should be mentioned here: the Name Authority Cooperative (NACO) and the Virtual International Authority File (VIAF).
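The linking practice the abstract describes - name authority records kept in a separate table and referenced by descriptive records - can be sketched as a hypothetical relational layout. Table and column names here are invented for illustration, not LEAF's actual schema:

```python
import sqlite3

# Hypothetical two-table layout: a name authority file, plus descriptive
# records that each point at one authority record.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE authority (
    id INTEGER PRIMARY KEY,
    authorized_form TEXT,      -- the controlled heading
    variant_forms   TEXT       -- references from variant spellings
);
CREATE TABLE descriptive (
    id INTEGER PRIMARY KEY,
    title TEXT,
    authority_id INTEGER REFERENCES authority(id)
);
""")
con.execute("INSERT INTO authority VALUES "
            "(1, 'Goethe, Johann Wolfgang von', 'Göthe; Göthe, J.W.')")
con.executemany("INSERT INTO descriptive VALUES (?, ?, ?)",
                [(1, 'Faust', 1),
                 (2, 'Die Leiden des jungen Werthers', 1)])

# Collocation: one authorized name record pulls together every linked
# descriptive record, whether the user searched the authorized form
# or one of the variant forms referenced in the authority record.
rows = con.execute("""
    SELECT d.title FROM descriptive d
    JOIN authority a ON d.authority_id = a.id
    WHERE a.authorized_form = 'Goethe, Johann Wolfgang von'
       OR a.variant_forms LIKE '%Göthe%'
    ORDER BY d.id
""").fetchall()
print([t for (t,) in rows])
```

Because the variant forms live in the authority record rather than in each descriptive record, correcting or extending a name is a single-row update that is immediately reflected across all linked records - the consistency benefit the abstract points to.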
  8. Paskin, N.: Identifier interoperability : a report on two recent ISO activities (2006) 0.00
    6.106579E-5 = product of:
      0.0017709078 = sum of:
        0.0017709078 = product of:
          0.0035418156 = sum of:
            0.0035418156 = weight(_text_:1 in 1179) [ClassicSimilarity], result of:
              0.0035418156 = score(doc=1179,freq=2.0), product of:
                0.05219918 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.02124939 = queryNorm
                0.06785194 = fieldWeight in 1179, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1179)
          0.5 = coord(1/2)
      0.03448276 = coord(1/29)
    
    Abstract
    Section 2 below is based extensively on the report of the output from that workshop, with minor editorial changes to reflect points raised in the subsequent discussion. The second activity, not yet widely appreciated as being related, is the development of a content-focussed data dictionary within MPEG. ISO/IEC JTC 1/SC29, The Moving Picture Experts Group (MPEG), is formally a joint working group of ISO and the International Electrotechnical Commission. Originally best known for compression standards for audio, MPEG now includes the MPEG-21 "Multimedia Framework", which includes several components of digital rights management technology standardisation. Some of the components are already being used in digital library activities. One component is a Rights Data Dictionary that was established as a component to support activities such as the MPEG Rights Expression Language. In April 2005, the ISO/IEC Technical Management Board appointed a Registration Authority for the MPEG 21 Rights Data Dictionary (ISO/IEC Information technology - Multimedia framework (MPEG-21) - Part 6: Rights Data Dictionary, ISO/IEC 21000-6), and an implementation of the dictionary is about to be launched. However, the Dictionary design is based on a generic interoperability framework, and it will offer extensive additional possibilities. The design of the dictionary goes back to one of the major studies of the conceptual model of interoperability, <indecs>. Section 3 below provides a brief summary of the origins and possible applications of the ISO/IEC 21000-6 Dictionary.
  9. Facets: a fruitful notion in many domains : special issue on facet analysis (2008) 0.00
    6.106579E-5 = product of:
      0.0017709078 = sum of:
        0.0017709078 = product of:
          0.0035418156 = sum of:
            0.0035418156 = weight(_text_:1 in 3262) [ClassicSimilarity], result of:
              0.0035418156 = score(doc=3262,freq=2.0), product of:
                0.05219918 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.02124939 = queryNorm
                0.06785194 = fieldWeight in 3262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3262)
          0.5 = coord(1/2)
      0.03448276 = coord(1/29)
    
    Footnote
    Rez. in: KO 36(2009) no.1, S.62-63 (K. La Barre): "This special issue of Axiomathes presents an ambitious dual agenda. It attempts to highlight aspects of facet analysis (as used in LIS) that are shared by cognate approaches in philosophy, psychology, linguistics and computer science. Secondarily, the issue aims to attract others to the study and use of facet analysis. The authors represent a blend of lifetime involvement with facet analysis, such as Vickery, Broughton, Beghtol, and Dahlberg; those with well-developed research agendas, such as Tudhope and Priss; and relative newcomers, such as Gnoli, Cheti and Paradisi, and Slavic. Omissions are inescapable, but a more balanced issue would have resulted from the inclusion of at least one researcher from the Indian school of facet theory. Another valuable addition might have been a reaction to the issue by one of the chief critics of facet analysis. Potentially useful, but absent, is a comprehensive bibliography of resources for those wishing to engage in further study, which now lie scattered throughout the issue. Several of the papers assume relative familiarity with facet analytical concepts and definitions, some of which are contested even within LIS. Gnoli's introduction (p. 127-130) traces the trajectory, extensions and new developments of this analytico-synthetic approach to subject access, while providing a laundry list of cognate approaches that are similar to facet analysis. This brief essay and the article by Priss (p. 243-255) directly address the first part of Gnoli's agenda. Priss provides a detailed discussion of facet-like structures in computer science (p. 245-246), and outlines the similarity between Formal Concept Analysis and facets. This comparison is equally fruitful for researchers in computer science and library and information science. By bridging into a discussion of visualization challenges for facet display, further research is also invited. 
Many of the remaining papers comprehensively detail the intellectual heritage of facet analysis (Beghtol; Broughton, p. 195-198; Dahlberg; Tudhope and Binding, p. 213-215; Vickery). Beghtol's (p. 131-144) examination of the origins of facet theory through the lens of the textbooks written by Ranganathan's mentor W.C.B. Sayers (1881-1960), Manual of Classification (1926, 1944, 1955), and a textbook written by Mills, A Modern Outline of Classification (1964), serves to reveal the deep intellectual heritage of the changes in classification theory over time, as well as Ranganathan's own influence on and debt to Sayers.
  10. Slavic, A.: Mapping intricacies : UDC to DDC (2010) 0.00
    6.106579E-5 = product of:
      0.0017709078 = sum of:
        0.0017709078 = product of:
          0.0035418156 = sum of:
            0.0035418156 = weight(_text_:1 in 3370) [ClassicSimilarity], result of:
              0.0035418156 = score(doc=3370,freq=2.0), product of:
                0.05219918 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.02124939 = queryNorm
                0.06785194 = fieldWeight in 3370, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3370)
          0.5 = coord(1/2)
      0.03448276 = coord(1/29)
    
    Content
    Precombined subjects, such as those shown above from Dewey, may be expressed in UDC Summary as examples of combination within various records. To express an exact match, UDC class 07 has to contain the example of combination 07(7) Journals. The Press - North America. In some cases we have, therefore, added examples to UDC Summary that represent an exact match to Dewey Summaries. It is unfortunate that DDC has so many classes on the top level that deal with a selection of countries or languages that are given a preferred status in the scheme, and repeating these preferences in examples of combinations of UDC emulates an unwelcome cultural bias which we have to balance out somehow. This brings us to another challenge. UDC 913(7) Regional Geography - North America [contains two concepts, each of which has its own URI] is an exact match to Dewey 917 [represented as one concept, one URI]. It seems that, because they represent an exact match to Dewey numbers, these UDC examples of combinations may also need separate URIs so that they can be published as SKOS data. Albeit challenging, mapping proves to be a very useful exercise, and I am looking forward to future work here, especially in relation to our plans to map UDC Summary to Colon Classification. We are discussing this project with colleagues from DRTC in Bangalore (India)."
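The URI question raised above - a two-concept UDC combination matching a single precombined Dewey concept - can be sketched as SKOS-style mapping triples. The namespaces, local identifiers, and the choice of properties below are illustrative placeholders, not the actual published UDC or DDC URIs:

```python
# Placeholder namespaces; real published URIs would differ.
UDC = "http://example.org/udc/"
DDC = "http://example.org/ddc/"
SKOS = "http://www.w3.org/2004/02/skos/core#"

triples = [
    # The UDC combination 913(7) needs a URI of its own before it can be
    # the subject of a mapping; it relates back to its two components.
    (UDC + "913_7", SKOS + "broader", UDC + "913"),  # Regional geography
    (UDC + "913_7", SKOS + "related", UDC + "_7"),   # North America
    # The exact match to the single precombined Dewey concept 917.
    (UDC + "913_7", SKOS + "exactMatch", DDC + "917"),
]

# Publishing the mapping then amounts to filtering the exactMatch links.
exact = [(s, o) for (s, p, o) in triples if p == SKOS + "exactMatch"]
print(exact)
```

The point the passage makes falls out of the data model: skos:exactMatch holds between two URIs, so a combination that exists only as an "example of combination" inside another record has nothing to put on the left-hand side until it is minted a URI of its own.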
  11. Hartel, J.: ¬The case against Information and the Body in Library and Information Science (2018) 0.00
    6.106579E-5 = product of:
      0.0017709078 = sum of:
        0.0017709078 = product of:
          0.0035418156 = sum of:
            0.0035418156 = weight(_text_:1 in 5523) [ClassicSimilarity], result of:
              0.0035418156 = score(doc=5523,freq=2.0), product of:
                0.05219918 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.02124939 = queryNorm
                0.06785194 = fieldWeight in 5523, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=5523)
          0.5 = coord(1/2)
      0.03448276 = coord(1/29)
    
    Footnote
    Cf. DOI: 10.1353/lib.2018.0018. See also the comment in: Lueg, C.: To be or not to be (embodied): that is not the question. In: Journal of the Association for Information Science and Technology. 71(2020) no.1, S.114-117. (Opinion paper) Two articles in a recent special issue on Information and the Body published in the journal Library Trends stand out because of the way they identify, albeit indirectly, a formidable challenge to library and information science (LIS). In her contribution, Bates warns that understanding information behavior demands recognizing and studying "any one important element of the ecology [in which humans are embedded]." Hartel, on the other hand, suggests that LIS would not lose much but would have lots to gain by focusing on core LIS themes instead of embodied information, since the latter may be unproductive, as LIS scholars are "latecomer[s] to a mature research domain." I would argue that LIS as a discipline cannot avoid dealing with those pesky mammals aka patrons or users; like the cognate discipline and "community of communities" human-computer interaction (HCI), LIS needs interdisciplinarity to succeed. LIS researchers are uniquely positioned to help bring together LIS's deep understanding of "information" and embodiment perspectives that may or may not have been developed in other disciplines. LIS researchers need to be more explicit about what their original contribution is, though, and what may have been appropriated from other disciplines.
  12. XML data management : native XML and XML-enabled database systems (2003) 0.00
    4.885263E-5 = product of:
      0.0014167263 = sum of:
        0.0014167263 = product of:
          0.0028334525 = sum of:
            0.0028334525 = weight(_text_:1 in 2073) [ClassicSimilarity], result of:
              0.0028334525 = score(doc=2073,freq=2.0), product of:
                0.05219918 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.02124939 = queryNorm
                0.05428155 = fieldWeight in 2073, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2073)
          0.5 = coord(1/2)
      0.03448276 = coord(1/29)
    
    Footnote
    Rez. in: JASIST 55(2004) no.1, S.90-91 (N. Rhodes): "The recent near-exponential increase in XML-based technologies has exposed a gap between these technologies and those that are concerned with more fundamental data management issues. This very comprehensive and well-organized book has quite neatly filled the gap, thus achieving most of its stated intentions. The target audiences are database and XML professionals wishing to combine XML with modern database technologies, and such is the breadth of scope of this book that few would not find it useful in some way. The editors have assembled a collection of chapters from a wide selection of industry heavyweights and, as with most books of this type, it exhibits many disparate styles, but thanks to careful editing it reads well as a cohesive whole. Certain sections have already appeared in print elsewhere and there is a degree of corporate flag-waving, but nowhere does it become over-intrusive. The preface provides only the very briefest of introductions to XML but instead sets the tone for the remainder of the book. The twin terms of data- and document-centric XML (Bourret, 2003) that have achieved so much recent currency are re-iterated before XML data management issues are considered. It is here that the book's aims are stated, mostly concerned with the approaches and features of the various available XML data management solutions. Not surprisingly, in a specialized book such as this one, an introduction to XML consists of a single chapter. For issues such as syntax, DTDs and XML Schemas the reader is referred elsewhere; here, Chris Brandin provides a practical guide to achieving good grammar and style and argues convincingly for the use of XML as an information-modeling tool. Using a well-chosen and simple example, a practical guide to modeling information is developed, replete with examples of the pitfalls. 
This brief but illuminating chapter (incidentally available as a "taster" from the publisher's web site) notes that one of the most promising aspects of XML is that applications can be built to use a single mutable information model, obviating the need to change the application code, and that good XML design is the basis of such mutability.
  13. Burnett, R.: How images think (2004) 0.00
    4.885263E-5 = product of:
      0.0014167263 = sum of:
        0.0014167263 = product of:
          0.0028334525 = sum of:
            0.0028334525 = weight(_text_:1 in 3884) [ClassicSimilarity], result of:
              0.0028334525 = score(doc=3884,freq=2.0), product of:
                0.05219918 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.02124939 = queryNorm
                0.05428155 = fieldWeight in 3884, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3884)
          0.5 = coord(1/2)
      0.03448276 = coord(1/29)
    
    Date
    1. 2.1997 9:16:32
  14. Boeuf, P. le: Functional Requirements for Bibliographic Records (FRBR) : hype or cure-all (2005) 0.00
    4.885263E-5 = product of:
      0.0014167263 = sum of:
        0.0014167263 = product of:
          0.0028334525 = sum of:
            0.0028334525 = weight(_text_:1 in 175) [ClassicSimilarity], result of:
              0.0028334525 = score(doc=175,freq=2.0), product of:
                0.05219918 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.02124939 = queryNorm
                0.05428155 = fieldWeight in 175, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.015625 = fieldNorm(doc=175)
          0.5 = coord(1/2)
      0.03448276 = coord(1/29)
    
    Footnote
    Rez. in: KO 33(2006) no.1, S.57-58 (V. Francu): "The work is a collection of major contributions by qualified professionals to the issues raised by the most controversial alternative for organizing the bibliographic universe today: the conceptual model promoted by the International Federation of Library Associations and Institutions (IFLA) known by the name of Functional Requirements for Bibliographic Records (FRBR). The main goals of the work are to clarify the fundamental concepts and terminology that the model operates with, inform the audience about the applicability of the model to different kinds of library materials, and bring closer to those interested the experiments undertaken and the implementation of the model in library systems worldwide. In the beginning, Patrick LeBoeuf, the chair of the IFLA FRBR Review Group, editor of the work and author of two of the articles included in the collection, puts together in a meaningful way articles about the origins and development of the FRBR model and how it will evolve, thus facilitating a gradual understanding of its structure and functionalities. He describes in the Introduction the FRBR entities as images of bibliographic realities, insisting on the "expression debate". Further, he concentrates on the ongoing or planned work still needed (p. 6) for the model to be fully accomplished and ultimately offer the desired bibliographic control over the actual computerized catalogues. The FRBR model, associated with but not reduced to the "FRBR tree", makes it possible to map the existing linear catalogues to an ontology, or semantic Web, by providing a multitude of relationships among the bibliographic entities it comprises.
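The "FRBR tree" the review refers to is the Work-Expression-Manifestation-Item hierarchy of FRBR's Group 1 entities. A minimal sketch of how those entities nest follows; the entity names are FRBR's own, but the attributes and sample data are invented for illustration:

```python
from dataclasses import dataclass, field
from typing import List

# FRBR Group 1: a Work is realized through Expressions, embodied in
# Manifestations, and exemplified by Items.
@dataclass
class Item:
    shelfmark: str

@dataclass
class Manifestation:
    isbn: str
    items: List[Item] = field(default_factory=list)

@dataclass
class Expression:
    language: str
    manifestations: List[Manifestation] = field(default_factory=list)

@dataclass
class Work:
    title: str
    expressions: List[Expression] = field(default_factory=list)

# One work, two expressions (original and a translation), each in print.
w = Work("Hamlet", [
    Expression("en", [Manifestation("0-00-000000-0", [Item("PR2807 .A1")])]),
    Expression("de", [Manifestation("0-00-000000-1", [Item("PR2807 .A2")])]),
])

# The collocation a flat catalogue cannot express directly: every physical
# item that ultimately realizes the one work, across all expressions.
items = [i.shelfmark
         for e in w.expressions
         for m in e.manifestations
         for i in m.items]
print(items)
```

The tree structure is what lets a catalogue answer "show me everything that is, in any form, this work" with a single traversal - the multitude of entity relationships the reviewer credits the model with.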
  15. Current theory in library and information science (2002) 0.00
    4.885263E-5 = product of:
      0.0014167263 = sum of:
        0.0014167263 = product of:
          0.0028334525 = sum of:
            0.0028334525 = weight(_text_:1 in 822) [ClassicSimilarity], result of:
              0.0028334525 = score(doc=822,freq=2.0), product of:
                0.05219918 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.02124939 = queryNorm
                0.05428155 = fieldWeight in 822, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.015625 = fieldNorm(doc=822)
          0.5 = coord(1/2)
      0.03448276 = coord(1/29)
    
    Footnote
    However, for well over a century, major libraries in developed nations have been engaging in sophisticated measurement of their operations, and thoughtful scholars have been involved along the way; if no "unified theory" has emerged thus far, why would it happen in the near future? What if "libraries" are a historically determined conglomeration of distinct functions, some of which are much less important than others? It is telling that McGrath cites as many studies on brittle paper as he does investigations of reference services among his constellation of measurable services, even while acknowledging that the latter (as an aspect of "circulation") is more "essential." If one were to include in a unified theory similar phenomena outside of libraries - e.g., what happens in bookstores and WWW searches - it can be seen how difficult a coordinated explanation might become. Ultimately the value of McGrath's chapter is not in convincing the reader that a unified theory might emerge, but rather in highlighting the best in recent studies that examine library operations, identifying robust conclusions, and arguing for the necessity of clarifying and coordinating common variables and units of analysis. McGrath's article is one that would be useful for a general course in LIS methodology, and certainly for more specific lectures on the evaluation of libraries. I'm going to focus most of my comments on the remaining articles about theory, rather than the others that offer empirical results about the growth or quality of literature. I'll describe the latter only briefly. The best way to approach this issue is by first reading McKechnie and Pettigrew's thorough survey of the "Use of Theory in LIS research." Earlier results of their extensive content analysis of 1,160 LIS articles have been published in other journals before, but the survey is especially pertinent here.
These authors find that only a third of LIS literature makes overt reference to theory, and that both usage and type of theory are correlated with the specific domain of the research (e.g., historical treatments versus user studies versus information retrieval). Lynne McKechnie and Karen Pettigrew identify four general sources of theory: LIS, the Humanities, Social Sciences and Sciences. This approach makes it obvious that the predominant source of theory is the social sciences (45%), followed by LIS (30%), the sciences (19%) and the humanities (5%) - despite a predominance (almost 60%) of articles with science-related content. The authors discuss interdisciplinarity at some length, noting the great many non-LIS authors and theories which appear in the LIS literature, and the tendency for native LIS theories to go uncited outside of the discipline. Two other articles emphasize the ways in which theory has evolved. The more general of the two is Jack Glazier and Robert Grover's update of their classic 1986 Taxonomy of Theory in LIS. This article describes an elaborated version, called the "Circuits of Theory," offering definitions of a hierarchy of terms ranging from "world view" through "paradigm," "grand theory" and (ultimately) "symbols." Glazier and Grover's one-paragraph example of how theory was applied in their study of city managers is much too brief and is at odds with the emphasis on quantitative indicators of literature found in the rest of the volume. The second article about the evolution of theory, Richard Smiraglia's "The progress of theory in knowledge organization," restricts itself to the history of thinking about cataloging and indexing. Smiraglia traces the development of theory from a pragmatic concern with "what works," to a reliance on empirical tests, to an emerging flirtation with historicist approaches to knowledge.
  16. Introducing information management : an information research reader (2005) 0.00
    4.885263E-5 = product of:
      0.0014167263 = sum of:
        0.0014167263 = product of:
          0.0028334525 = sum of:
            0.0028334525 = weight(_text_:1 in 440) [ClassicSimilarity], result of:
              0.0028334525 = score(doc=440,freq=2.0), product of:
                0.05219918 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.02124939 = queryNorm
                0.05428155 = fieldWeight in 440, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.015625 = fieldNorm(doc=440)
          0.5 = coord(1/2)
      0.03448276 = coord(1/29)
    
    Isbn
    1-85604-561-7
  17. Williamson, N.: Classification research issues (2004) 0.00
    4.2746047E-5 = product of:
      0.0012396354 = sum of:
        0.0012396354 = product of:
          0.0024792708 = sum of:
            0.0024792708 = weight(_text_:1 in 3727) [ClassicSimilarity], result of:
              0.0024792708 = score(doc=3727,freq=2.0), product of:
                0.05219918 = queryWeight, product of:
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.02124939 = queryNorm
                0.047496356 = fieldWeight in 3727, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4565027 = idf(docFreq=10304, maxDocs=44218)
                  0.013671875 = fieldNorm(doc=3727)
          0.5 = coord(1/2)
      0.03448276 = coord(1/29)
    
    Content
    "Universal Decimal Classification Extensions and Corrections to the UDC (E&C) is published in November of each year by the UDC Consortium in The Hague. It documents the additions and changes to the system between printed editions. Changes which have been fully approved are applied immediately to the Master Reference file (MRF). For this reason it is essential that UDC users have access to the changes as they take place. Licensed and Consortium users will become aware of the changes as they use MRF. However, for those who rely an the printed volumes, E&C is an essential tool in the application of UDC. Each issue contains three sections: 1. Comments & Communications, consisting of a collection of articles and notes an research, developments and applications of the UDC system across the world. Also included is a bibliography of recent publications an UDC for the year; 2. Revised UDC Tables, i.e. extensions and corrections to the system, fully approved for use and applied in the MRF; and, 3. Proposals, i.e. preliminary drafts of tables in the process of revision, an which UDC users are encouraged to comment and make suggestions that could affect the final result.result.
