Search (20096 results, page 1005 of 1005)

  1. Information visualization in data mining and knowledge discovery (2002) 0.00
    2.5624852E-4 = product of:
      0.0038437278 = sum of:
        0.0038437278 = product of:
          0.0076874555 = sum of:
            0.0076874555 = weight(_text_:22 in 1789) [ClassicSimilarity], result of:
              0.0076874555 = score(doc=1789,freq=2.0), product of:
                0.0993465 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028369885 = queryNorm
                0.07738023 = fieldWeight in 1789, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1789)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Date
    23. 3.2008 19:10:22
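The indented tree under each hit is standard Lucene ClassicSimilarity (TF-IDF) explain output. As a sketch, the printed leaves can be recombined using Lucene's documented formulas (the function and parameter names below are invented for illustration; the arithmetic is Lucene's):

```python
import math

def explain_score(freq, doc_freq, max_docs, query_norm, field_norm,
                  coord_inner, coord_outer):
    """Recompute a Lucene ClassicSimilarity explain tree from its leaves."""
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq, maxDocs)
    tf = math.sqrt(freq)                             # tf(freq)
    query_weight = idf * query_norm                  # queryWeight
    field_weight = tf * idf * field_norm             # fieldWeight
    term_score = query_weight * field_weight         # weight(_text_:term ...)
    return term_score * coord_inner * coord_outer    # coord(1/2) * coord(1/15)

# Leaves of the first explain tree (term "22" in doc 1789):
score = explain_score(freq=2.0, doc_freq=3622, max_docs=44218,
                      query_norm=0.028369885, field_norm=0.015625,
                      coord_inner=0.5, coord_outer=1 / 15)
# score ≈ 2.5624852e-4, matching the total printed for hit 1
```

The same function reproduces the 2.2765891E-4 totals of the following hits by substituting their leaves (doc_freq=6276 for the "internet" term, field_norm=0.01953125).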
  2. Davis, M.: The universal computer : the road from Leibniz to Turing (2000) 0.00
    2.2765891E-4 = product of:
      0.0034148835 = sum of:
        0.0034148835 = product of:
          0.006829767 = sum of:
            0.006829767 = weight(_text_:internet in 2072) [ClassicSimilarity], result of:
              0.006829767 = score(doc=2072,freq=2.0), product of:
                0.0837547 = queryWeight, product of:
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.028369885 = queryNorm
                0.081544876 = fieldWeight in 2072, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=2072)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Footnote
The stability of logic over time, from Aristotle to Boole, and the continual change since Boole is noted. For information science, the relative stability of forms of writing over the same period and the intensive developments in writing and message transmission (shorthand, the telegraph, codes for telegraphic transmission, the telephone, and the Internet) since the mid-nineteenth century represent parallel developments. Communication models, from the Aristotelian view of writing as a secondary symbolism for oral speech to Shannon's information theory, have characteristically developed after the technologies they can be used to describe. Information theory developed from wartime cryptography, played a part in the creation of information science, and remained influential in its early development, with some indications of revival. A theme explored by Davis in the logicians' biographies is the discordance between the qualities required for intellectual eminence and those required for social adjustment. An insistence on questioning practices and seeking fundamental issues can be socially disruptive: Gödel, for instance, famously questioned the consistency of the United States Constitution during his citizenship examination (p. 137). In some instances, the logical contradictions explored reflect or project the logician's biography: most obviously, Russell was both a member of a class and not a member of a class; themes of exile can be detected in Gödel's work. The intensity with which these paradoxes are pursued may indicate the extent to which their possible biographical source was not fully known to the pursuer (Freud, 1990). A crucial issue might revolve around the idea of acceptance: the potential productivity of questioning, and possibly rejecting, the current state of affairs in intellectual activities; and the destructiveness of continual questioning in social life.
In conclusion, the book is to be recommended for its lucidity and intelligibility and for the interest of its personalization. It could be used as supplementary reading for historical awareness in information science programs."
  3. Denton, W.: Putting facets on the Web : an annotated bibliography (2003) 0.00
    2.2765891E-4 = product of:
      0.0034148835 = sum of:
        0.0034148835 = product of:
          0.006829767 = sum of:
            0.006829767 = weight(_text_:internet in 2467) [ClassicSimilarity], result of:
              0.006829767 = score(doc=2467,freq=2.0), product of:
                0.0837547 = queryWeight, product of:
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.028369885 = queryNorm
                0.081544876 = fieldWeight in 2467, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=2467)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    Consider movie listings in newspapers. Most Canadian newspapers list movie showtimes in two large blocks, for the two major theatre chains. The listings are ordered by region (in large cities), then theatre, then movie, and finally by showtime. Anyone wondering where and when a particular movie is playing must scan the complete listings. Determining what movies are playing in the next half hour is very difficult. When movie listings went onto the web, most sites used a simple faceted organization, always with movie name and theatre, and perhaps with region or neighbourhood (thankfully, theatre chains were left out). They make it easy to pick a theatre and see what movies are playing there, or to pick a movie and see what theatres are showing it. To complete the system, the sites should allow users to browse by neighbourhood and showtime, and to order the results in any way they desire. People could then easily find answers to such questions as, "Where is the new James Bond movie playing?" "What's showing at the Roxy tonight?" "I'm going to be out in Little Finland this afternoon with three hours to kill starting at 2 ... is anything interesting playing?" A hypertext, faceted classification system makes more useful information more easily available to the user. Reading the books and articles below in chronological order will show a certain progression: suggestions that faceting and hypertext might work well, confidence that facets would work well if only someone would make such a system, and finally the beginning of serious work on actually designing, building, and testing faceted web sites. There is a solid basis of how to make faceted classifications (see Vickery in Recommended), but their application online is just starting. Work on XFML, the Exchangeable Faceted Metadata Language (see Van Dijck's work in Recommended), will make this easier.
If it follows previous patterns, parts of the Internet community will embrace the idea and make open source software available for others to reuse. It will be particularly beneficial if professionals in both information studies and computer science can work together to build working systems, standards, and code. Each can benefit from the other's expertise in what can be a very complicated and technical area. One particularly nice thing about this area of research is that people interested in combining facets and the web often have web sites where they post their writings.
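The faceted browsing Denton describes can be sketched in a few lines: each showing carries independent facet values (movie, theatre, neighbourhood, showtime), and a query simply intersects the chosen values. The listings and field names below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Showing:
    movie: str
    theatre: str
    neighbourhood: str
    showtime: str  # "HH:MM", 24-hour

# Hypothetical listings, for illustration only.
LISTINGS = [
    Showing("Die Another Day", "Roxy", "Little Finland", "14:30"),
    Showing("Die Another Day", "Odeon", "Downtown", "19:00"),
    Showing("Solaris", "Roxy", "Little Finland", "21:15"),
]

def browse(**facets):
    """Return the showings matching every supplied facet value."""
    return [s for s in LISTINGS
            if all(getattr(s, k) == v for k, v in facets.items())]

roxy_tonight = browse(theatre="Roxy")        # pick a theatre, see its movies
bond_venues = browse(movie="Die Another Day")  # pick a movie, see its theatres
```

Because the facets are independent, adding a new axis (say, showtime ranges) never requires reorganizing the data, which is the practical advantage over the fixed region-theatre-movie-showtime hierarchy of the print listings.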
  4. Gaining insight from research information (CRIS2002) : Proceedings of the 6th International Conference on Current Research Information Systems, University of Kassel, August 29 - 31, 2002 (2002) 0.00
    2.2765891E-4 = product of:
      0.0034148835 = sum of:
        0.0034148835 = product of:
          0.006829767 = sum of:
            0.006829767 = weight(_text_:internet in 3592) [ClassicSimilarity], result of:
              0.006829767 = score(doc=3592,freq=2.0), product of:
                0.0837547 = queryWeight, product of:
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.028369885 = queryNorm
                0.081544876 = fieldWeight in 3592, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3592)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Content
    Workshops: Data Collectors meet Data Suppliers on the Internet (Dirk Hennig, Wolfgang Sander-Beuermann) CERIF-2000 (Common European Research Information Format) (Andrei Lopatenko) Embedding of CRIS in a university research information management system (Jostein Helland Hauge) A European Research Information System (ERIS): an infrastructure tool in a European research world without boundaries? (M.L.H. Lalieu)
  5. Progress in visual information access and retrieval (1999) 0.00
    2.2765891E-4 = product of:
      0.0034148835 = sum of:
        0.0034148835 = product of:
          0.006829767 = sum of:
            0.006829767 = weight(_text_:internet in 839) [ClassicSimilarity], result of:
              0.006829767 = score(doc=839,freq=2.0), product of:
                0.0837547 = queryWeight, product of:
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.028369885 = queryNorm
                0.081544876 = fieldWeight in 839, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=839)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    Since 1988, two issues of Library Trends have been devoted to various aspects of image and multimedia information retrieval. In each issue, the editors call for a synergy across the disciplines that develop image retrieval systems and those that utilize these systems. Stam and Giral, in the 1988 issue of Library Trends titled "Linking Art Objects and Art Information," emphasize the need for a thorough understanding of the visual information-seeking behaviors of image database users. Writing in a 1990 issue of Library Trends devoted to graphical information retrieval, Mark Rorvig takes up the fundamental issue that "what can be listed cannot always be found" and uses that statement as a framework for examining progress in intellectual access to visual information. In the ensuing decade, several critical events have unfolded that have brought about some of the needed collaboration across disciplines and have enhanced the potential for advancements in the area of visual information retrieval. First, the field of computer vision has grown exponentially within the past decade, producing tools that enable the retrieval of visual information, especially for objects with no accompanying structural, administrative, or descriptive text information. Second, the Internet, more specifically the Web, has become a common channel for the transmission of graphical information, thus moving visual information retrieval rapidly from stand-alone workstations and databases into a networked environment. Third, the use of the Web to provide access to the search and retrieval mechanisms for visual and other forms of information has spawned the development of emerging standards for metadata about these objects as well as the creation of commonly employed methods to achieve interoperability across the searching of visual, textual, and other multimedia repositories. 
Practicality has begun to dictate that indexing huge collections of images by hand is both labor intensive and expensive - in many cases more than can be afforded simply to provide some method of intellectual access to digital image collections. In the world of text retrieval, text "speaks for itself," whereas image analysis requires a combination of high-level concept creation as well as the processing and interpretation of inherent visual features. In the area of intellectual access to visual information, the interplay between human and machine image indexing methods has begun to influence the development of visual information retrieval systems. Research and application by the visual information retrieval (VIR) community suggest that the most fruitful approaches to VIR involve analysis of the type of information being sought, the domain in which it will be used, and systematic testing to identify optimal retrieval methods.
  6. Paskin, N.: Identifier interoperability : a report on two recent ISO activities (2006) 0.00
    2.2765891E-4 = product of:
      0.0034148835 = sum of:
        0.0034148835 = product of:
          0.006829767 = sum of:
            0.006829767 = weight(_text_:internet in 1179) [ClassicSimilarity], result of:
              0.006829767 = score(doc=1179,freq=2.0), product of:
                0.0837547 = queryWeight, product of:
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.028369885 = queryNorm
                0.081544876 = fieldWeight in 1179, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1179)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    Two significant activities within ISO, the International Organisation for Standardization, are underway, each of which has potential implications for the management of content by digital libraries and their users. Moreover these two activities are complementary and have the potential to provide tools for significantly improved identifier interoperability. This article presents a report on these: the first activity investigates the practical implications of interoperability across the family of ISO TC46/SC9 identifiers (better known as the ISBN and related identifiers); the second activity is the implementation of an ontology-based data dictionary that could provide a mechanism for this, the ISO/IEC 21000-6 standard. ISO/TC 46 is the ISO Technical Committee responsible for standards of "Information and documentation". Subcommittee 9 (SC9) of that body is responsible for "Presentation, identification and description of documents": the standards that it manages are identifiers familiar to the content and digital library communities, including the International Standard Book Number (ISBN); International Standard Serial Number (ISSN); International Standard Recording Code (ISRC); International Standard Music Number (ISMN); International Standard Audio-visual Number (ISAN) and the related Version identifier for Audio-visual Works (V-ISAN); and the International Standard Musical Work Code (ISWC). Most recently ISO has introduced the International Standard Text Code (ISTC), and is about to consider standardisation of the DOI system. The ISO identifier schemes provide numbering schemes as labels of entities of "content": many of the identifiers have as referents abstract content entities ("works" rather than a specific physical or digital form: e.g., ISAN, ISWC, ISTC). 
The existing schemes are numbering management schemes, not tied to any specific implementation (hence for internet "actionability", these identifiers may be incorporated into URN, URI, or DOI formats, etc.). Recently SC9 has requested that new and revised identifier schemes specify mandatory structured metadata to specify the item identified; that metadata is now becoming key to interoperability.
  7. Heery, R.; Carpenter, L.; Day, M.: Renardus project developments and the wider digital library context (2001) 0.00
    2.2765891E-4 = product of:
      0.0034148835 = sum of:
        0.0034148835 = product of:
          0.006829767 = sum of:
            0.006829767 = weight(_text_:internet in 1219) [ClassicSimilarity], result of:
              0.006829767 = score(doc=1219,freq=2.0), product of:
                0.0837547 = queryWeight, product of:
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.028369885 = queryNorm
                0.081544876 = fieldWeight in 1219, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1219)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    The Renardus project has brought together gateways that are 'large-scale national initiatives'. Within the European context this immediately introduces a diversity of organisations, as responsibility for national gateway initiatives is located differently, for example, in national libraries, national agencies with responsibility for educational technology infrastructure, and within universities or consortia of universities. Within the project, gateways are in some cases represented directly by their own personnel, in some cases by other departments or research centres, but not always by the people responsible for providing the gateway service. For example, the UK Resource Discovery Network (RDN) is represented in the project by UKOLN (formerly part of the Resource Discovery Network Centre) and the Institute of Learning and Research Technology (ILRT), University of Bristol -- an RDN 'hub' service provider -- who are primarily responsible for dissemination. Since the start of the project there have been changes within the organisational structures providing gateways and within the service ambitions of gateways themselves. Such lack of stability is inherent within the Internet service environment, and this presents challenges to Renardus activity that has to be planned for a three-year period. For example, within the gateway's funding environment there is now an exploration of 'subject portals' offering more extended services than gateways. There is also potential commercial interest for including gateways as a value-added component to existing commercial services, and new offerings from possible competitors such as Google's Web Directory and country based services. This short update on the Renardus project intends to inform the reader of progress within the project and to give some wider context to its main themes by locating the project within the broader arena of digital library activity. 
There are twelve partners in the project from Denmark, Finland, France, Germany, the Netherlands and Sweden, as well as the UK. In particular we will focus on the specific activity in which UKOLN is involved: the architectural design, the specification of functional requirements, reaching consensus on a collaborative business model, etc. We will also consider issues of metadata management where all partners have interests. We will highlight implementation issues that connect to areas of debate elsewhere. In particular we see connections with activity related to establishing architectural models for digital library services, connections to the services that may emerge from metadata sharing using the Open Archives Initiative metadata sharing protocol, and links with work elsewhere on navigation of digital information spaces by means of controlled vocabularies.
  8. Kochtanek, T.R.; Matthews, J.R.: Library information systems : from library automation to distributed information systems (2002) 0.00
    2.2765891E-4 = product of:
      0.0034148835 = sum of:
        0.0034148835 = product of:
          0.006829767 = sum of:
            0.006829767 = weight(_text_:internet in 1792) [ClassicSimilarity], result of:
              0.006829767 = score(doc=1792,freq=2.0), product of:
                0.0837547 = queryWeight, product of:
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.028369885 = queryNorm
                0.081544876 = fieldWeight in 1792, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1792)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Footnote
    Review in: JASIST 54(2003) no.12, S.1166-1167 (Brenda Chawner): "Kochtanek and Matthews have written a welcome addition to the small set of introductory texts on applications of information technology to library and information services. The book has fourteen chapters grouped into four sections: "The Broader Context," "The Technologies," "Management Issues," and "Future Considerations." Two chapters provide the broader context, with the first giving a historical overview of the development and adoption of "library information systems." Kochtanek and Matthews define this as "a wide array of solutions that previously might have been considered separate industries with distinctly different marketplaces" (p. 3), referring specifically to integrated library systems (ILS, often called library management systems in this part of the world) and online databases, plus the more recent developments of Web-based resources, digital libraries, ebooks, and ejournals. They characterize technology adoption patterns in libraries as ranging from "bleeding edge" to "leading edge" to "in the wedge" to "trailing edge" - a catchy restatement of adopter categories from Rogers' diffusion of innovation theory, where they are more conventionally known as "early adopters," "early majority," "late majority," and "laggards." This chapter concludes with a look at more general technology trends that have affected library applications, including developments in hardware (moving from mainframes to minicomputers to personal computers), changes in software development (from in-house to packages), and developments in communications technology (from dedicated host computers to more open networks to the current distributed environment found with the Internet). This is followed by a chapter describing the ILS and online database industries in some detail.
"The Technologies" begins with a chapter on the structure and functionality of integrated library systems, which also includes a brief discussion of precision versus recall, managing access to internal documents, indexing and searching, and catalogue maintenance. This is followed by a chapter on open systems, which concludes with a useful list of questions to consider in determining an organization's readiness to adopt open source solutions. As one would expect, this section also includes a detailed chapter on telecommunications and networking, covering types of networks, transmission media, network topologies, and switching techniques (ranging from dial-up and leased lines to ISDN/DSL, frame relay, and ATM). It concludes with a chapter on the role and importance of standards, which covers the need for standards and standards organizations and gives examples of different types of standards, such as MARC, Dublin Core, Z39.50, and markup standards such as SGML, HTML, and XML. Unicode is also covered, but only briefly. This section would be strengthened by a chapter on hardware concepts - the authors assume that their reader is already familiar with these, which may not be true in all cases (for example, the phrase "client-server" is first used on page 11, but only given a brief definition in the glossary). Burke's Library Technology Companion: A Basic Guide for Library Staff (New York: Neal-Schuman, 2001) might be useful to fill this gap at an introductory level, and Saffady's Introduction to Automation for Librarians, 4th ed. (Chicago: American Library Association, 1999) would be better for those interested in more detail. The final two sections, however, are the book's real strength, with a strong focus on management issues, and this content distinguishes it from other books on this topic, such as Ferguson and Hebel's Computers for Librarians: An Introduction to Systems and Applications (Wagga Wagga, NSW: Centre for Information Studies, Charles Sturt University, 1998).
...
  9. Dahlberg, I.: How to improve ISKO's standing : ten desiderata for knowledge organization (2011) 0.00
    2.2765891E-4 = product of:
      0.0034148835 = sum of:
        0.0034148835 = product of:
          0.006829767 = sum of:
            0.006829767 = weight(_text_:internet in 4300) [ClassicSimilarity], result of:
              0.006829767 = score(doc=4300,freq=2.0), product of:
                0.0837547 = queryWeight, product of:
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.028369885 = queryNorm
                0.081544876 = fieldWeight in 4300, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=4300)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Content
    6. The establishment of national Knowledge Organization Institutes should be scheduled by national chapters, planned energetically, and submitted to the corresponding administrative authorities for support. They could be attached to research institutions, e.g., the Max Planck or Fraunhofer Institutes in Germany, or to universities. Their scope and research areas relate to the elaboration of knowledge systems of subject-related concepts, according to Desideratum 1, and may be connected to training activities and KO subject-related research work. 7. ISKO experts should not allow themselves to be overawed by the Internet and computer science, but should demonstrate their expertise more actively in public. They should tend to take a leading part in the ISKO Secretariats and the KO Institutes, and act as consultants and informants, as well as editors of statistics and other publications. 8. All colleagues trained in the field of classification/indexing and thesaurus construction and active in different countries should be identified and approached for membership in ISKO. This would have to be accomplished by the General Secretariat with the collaboration of the experts in the different secretariats of the countries, as soon as they start to work. The more members ISKO has, the greater its reputation and influence will be. But it will also prove its professionalism by the quality of its products, especially its innovative conceptual order systems to come. 9. ISKO should, especially in view of global expansion, intensify the promotion of knowledge about its own subject area through the publications mentioned here and in further publications as deemed necessary. It should be made clear that, especially in ISKO's own publications, professional subject indexes are a sine qua non. 10.
1) Knowledge Organization, having arisen from librarianship and documentation, the contents of which have many points of contact with numerous application fields, should, although still linked with its areas of descent, be recognized in the long run as an independent, autonomous discipline located under the science of science, since only thereby can it fully play its role as an equal partner in all application fields; and 2) an "at-a-first-glance knowledge order" could be implemented through the Information Coding Classification (ICC), as this system is based on an entirely new approach, namely on general object areas, thus deviating from the discipline-oriented main classes of the current universal classification systems. It can therefore recover, by simple display on screen, the hitherto lost overview of all knowledge areas and fields. At one look, one perceives 9 object areas subdivided into 9 aspects, which break down into 81 subject areas with their 729 subject fields, including further special fields. The synthesis and place of order of all knowledge thus become evident at a glance to everybody. Nobody would any longer be irritated by the abundance of singular, apparently unrelated knowledge fields or become hesitant in his or her understanding of the world.
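The 9 / 81 / 729 arithmetic follows from a positional notation in which each level of the hierarchy appends one digit from 1 to 9 to its parent's code. A hypothetical sketch (the digit coding is assumed for illustration, not taken from the ICC schedules):

```python
# Each level appends one digit 1-9 to its parent's code.
object_areas = [str(i) for i in range(1, 10)]                              # 9
subject_areas = [a + str(j) for a in object_areas for j in range(1, 10)]   # 81
subject_fields = [s + str(k) for s in subject_areas for k in range(1, 10)] # 729
```

Each code is a prefix of all its descendants, which is what makes the whole order surveyable "at a glance": truncating a code always lands on a valid broader class.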
  10. Markoff, J.: Researchers announce advance in image-recognition software (2014) 0.00
    2.2765891E-4 = product of:
      0.0034148835 = sum of:
        0.0034148835 = product of:
          0.006829767 = sum of:
            0.006829767 = weight(_text_:internet in 1875) [ClassicSimilarity], result of:
              0.006829767 = score(doc=1875,freq=2.0), product of:
                0.0837547 = queryWeight, product of:
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.028369885 = queryNorm
                0.081544876 = fieldWeight in 1875, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1875)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Content
    "Until now, so-called computer vision has largely been limited to recognizing individual objects. The new software, described on Monday by researchers at Google and at Stanford University, teaches itself to identify entire scenes: a group of young men playing Frisbee, for example, or a herd of elephants marching on a grassy plain. The software then writes a caption in English describing the picture. Compared with human observations, the researchers found, the computer-written descriptions are surprisingly accurate. The advances may make it possible to better catalog and search for the billions of images and hours of video available online, which are often poorly described and archived. At the moment, search engines like Google rely largely on written language accompanying an image or video to ascertain what it contains. "I consider the pixel data in images and video to be the dark matter of the Internet," said Fei-Fei Li, director of the Stanford Artificial Intelligence Laboratory, who led the research with Andrej Karpathy, a graduate student. "We are now starting to illuminate it." Dr. Li and Mr. Karpathy published their research as a Stanford University technical report. The Google team published their paper on arXiv.org, an open source site hosted by Cornell University.
  11. XML in libraries (2002) 0.00
    2.0529801E-4 = product of:
      0.00307947 = sum of:
        0.00307947 = weight(_text_:und in 3100) [ClassicSimilarity], result of:
          0.00307947 = score(doc=3100,freq=2.0), product of:
            0.06287808 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.028369885 = queryNorm
            0.048975255 = fieldWeight in 3100, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.015625 = fieldNorm(doc=3100)
      0.06666667 = coord(1/15)
    
    Content
    Combined review of: (1) The ABCs of XML: The Librarian's Guide to the eXtensible Markup Language. Norman Desmarais. Houston, TX: New Technology Press, 2000. 206 pp. $28.00. (ISBN: 0-9675942-0-0) and (2) Learning XML. Erik T. Ray. Sebastopol, CA: O'Reilly & Associates, 2003. 400 pp. $34.95. (ISBN: 0-596-00420-6)
  12. XML data management : native XML and XML-enabled database systems (2003) 0.00
    1.8212713E-4 = product of:
      0.0027319067 = sum of:
        0.0027319067 = product of:
          0.0054638134 = sum of:
            0.0054638134 = weight(_text_:internet in 2073) [ClassicSimilarity], result of:
              0.0054638134 = score(doc=2073,freq=2.0), product of:
                0.0837547 = queryWeight, product of:
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.028369885 = queryNorm
                0.0652359 = fieldWeight in 2073, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2073)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Theme
    Internet
  13. Burnett, R.: How images think (2004) 0.00
    1.8212713E-4 = product of:
      0.0027319067 = sum of:
        0.0027319067 = product of:
          0.0054638134 = sum of:
            0.0054638134 = weight(_text_:internet in 3884) [ClassicSimilarity], result of:
              0.0054638134 = score(doc=3884,freq=2.0), product of:
                0.0837547 = queryWeight, product of:
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.028369885 = queryNorm
                0.0652359 = fieldWeight in 3884, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3884)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Footnote
    Review in: JASIST 56(2005) no.10, p.1126-1128 (P.K. Nayar): "How Images Think is an exercise both in philosophical meditation and critical theorizing about media, images, affects, and cognition. Burnett combines the insights of neuroscience with theories of cognition and the computer sciences. He argues that contemporary metaphors - biological or mechanical - about either cognition, images, or computer intelligence severely limit our understanding of the image. He suggests in his introduction that "image" refers to the "complex set of interactions that constitute everyday life in image-worlds" (p. xviii). For Burnett, the fact that increasing amounts of intelligence are being programmed into technologies and devices that use images as their main form of interaction and communication-computers, for instance-suggests that images are interfaces, structuring interaction, people, and the environment they share. New technologies are not simply extensions of human abilities and needs-they literally enlarge cultural and social preconceptions of the relationship between body and mind. The flow of information today is part of a continuum, with exceptional events standing as punctuation marks. This flow connects a variety of sources, some of which are continuous - available 24 hours - or "live" and radically alters issues of memory and history. Television and the Internet, notes Burnett, are not simply a simulated world-they are the world, and the distinctions between "natural" and "non-natural" have disappeared. Increasingly, we immerse ourselves in the image, as if we are there. We rarely become conscious of the fact that we are watching images of events-for all perceptive, cognitive, and interpretive purposes, the image is the event for us. The proximity and distance of viewer from/with the viewed have altered so significantly that the screen is us. However, this is not to suggest that we are simply passive consumers of images. 
As Burnett points out, painstakingly, issues of creativity are involved in the process of visualization-viewers generate what they see in the images. This involves the historical moment of viewing-such as viewing images of the WTC bombings-and the act of re-imagining. As Burnett puts it, "the questions about what is pictured and what is real have to do with vantage points [of the viewer] and not necessarily what is in the image" (p. 26). In his second chapter Burnett moves on to a discussion of "imagescapes." Analyzing the analogue-digital programming of images, Burnett uses the concept of "reverie" to describe the viewing experience. The reverie is a "giving in" to the viewing experience, a "state" in which conscious ("I am sitting down on this sofa to watch TV") and unconscious (pleasure, pain, anxiety) processes interact. Meaning emerges in the not-always easy or "clean" process of hybridization. This "enhances" the thinking process beyond the boundaries of either image or subject. Hybridization is the space of intelligence, exchange, and communication.
  14. Janes, J.: Introduction to reference work in the digital age. (2003) 0.00
    1.8212713E-4 = product of:
      0.0027319067 = sum of:
        0.0027319067 = product of:
          0.0054638134 = sum of:
            0.0054638134 = weight(_text_:internet in 3993) [ClassicSimilarity], result of:
              0.0054638134 = score(doc=3993,freq=2.0), product of:
                0.0837547 = queryWeight, product of:
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.028369885 = queryNorm
                0.0652359 = fieldWeight in 3993, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3993)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Theme
    Internet
  15. Current theory in library and information science (2002) 0.00
    1.8212713E-4 = product of:
      0.0027319067 = sum of:
        0.0027319067 = product of:
          0.0054638134 = sum of:
            0.0054638134 = weight(_text_:internet in 822) [ClassicSimilarity], result of:
              0.0054638134 = score(doc=822,freq=2.0), product of:
                0.0837547 = queryWeight, product of:
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.028369885 = queryNorm
                0.0652359 = fieldWeight in 822, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.015625 = fieldNorm(doc=822)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Footnote
    There is only one article in the issue that claims to offer a theory of the scope discussed by McGrath, and I am sorry that it appears in this issue. Bor-Sheng Tsai's "Theory of Information Genetics" is an almost incomprehensible combination of four different "models" with names like "Möbius Twist" and "Clipping-Jointing." Tsai starts by posing the question "What is it that makes the `UNIVERSAL' information generating, representation, and transfer happen?" From this ungrammatical beginning, things get rapidly worse. Tsai makes side trips into the history of defining information, offers three-dimensional plots of citation data, a formula for "bonding relationships," hypothetical data on food consumption, sample pages from a web-based "experts directory" and dozens of citations from works which are peripheral to the discussion. The various sections of the article seem to have little to do with one another. I can't believe that the University of Illinois would publish something so poorly edited. Now I will turn to the dominant, "bibliometric" articles in this issue, in order of their appearance: Judit Bar-Ilan and Bluma Peritz write about "Informetric Theories and Methods for Exploring the Internet." Theirs is a survey of research on patterns of electronic publication, including different ways of sampling, collecting and analyzing data on the Web. Their contribution to the "theory" theme lies in noting that some existing bibliometric laws apply to the Web. William Hood and Concepción Wilson's article, "Solving Problems ... Using Fuzzy Set Theory," demonstrates the widespread applicability of this mathematical tool for library-related problems, such as making decisions about the binding of documents, or improving document retrieval. Ronald Rousseau's piece on "Journal Evaluation" discusses the strengths and weaknesses of various indicators for determining impact factors and rankings for journals. 
His is an exceptionally well-written article that has everything to do with measurement but almost nothing to do with theory, to my way of thinking. "The Matthew Effect for Countries" is the topic of Manfred Bonitz's paper on citations to scientific publications, analyzed by nation of origin. His research indicates that publications from certain countries-such as Switzerland, Denmark, the USA and the UK-receive more than the expected number of citations; correspondingly, some rather large countries like China receive many fewer than might be expected. Bonitz provides an extensive discussion of how the "MEC" measure came about, and what it means, relating it to efficiency in scientific research. A bonus is his detour into the origins of the Matthew Effect in the Bible, and the subsequent popularization of the name by the sociologist Robert Merton. Wolfgang Glänzel's "Coauthorship patterns and trends in the sciences (1980-1998)" is, as the title implies, another citation analysis. He compares the number of authors on papers in three fields-Biomedical research, Chemistry and Mathematics-at six-year intervals. Among other conclusions, Glänzel notes that the percentage of publications with four or more authors has been growing in all three fields, and that multiauthored papers are more likely to be cited.
  16. Broughton, V.: Essential classification (2004) 0.00
    1.8212713E-4 = product of:
      0.0027319067 = sum of:
        0.0027319067 = product of:
          0.0054638134 = sum of:
            0.0054638134 = weight(_text_:internet in 2824) [ClassicSimilarity], result of:
              0.0054638134 = score(doc=2824,freq=2.0), product of:
                0.0837547 = queryWeight, product of:
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.028369885 = queryNorm
                0.0652359 = fieldWeight in 2824, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.9522398 = idf(docFreq=6276, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2824)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Footnote
    Essential Classification is also an exercise book. Indeed, it contains a number of practical exercises and activities in every chapter, along with suggested answers. Unfortunately, the answers are too often provided without the justifications and explanations that students would no doubt demand. The author has taken great care to explain all technical terms in her text, but formal definitions are also gathered in an extensive 172-term Glossary; appropriately, these terms appear in bold type the first time they are used in the text. A short, very short, annotated bibliography of standard classification textbooks and of manuals for the use of major classification schemes is provided. A detailed 11-page index completes the set of learning aids which will be useful to an audience of students in their effort to grasp the basic concepts of the theory and the practice of document classification in a traditional environment. Essential Classification is a fine textbook. However, this reviewer deplores the fact that it presents only a very "traditional" view of classification, without much reference to newer environments such as the Internet where classification also manifests itself in various forms. In Essential Classification, books are always used as examples, and we have to take the author's word that traditional classification practices and tools can also be applied to other types of documents and elsewhere than in the traditional library. Vanda Broughton writes, for example, that "Subject headings can't be used for physical arrangement" (p. 101), but this is not entirely true. Subject headings can be used for physical arrangement of vertical files, for example, with each folder bearing a simple or complex heading which is then used for internal organization. And if it is true that subject headings cannot be reproduced on the spine of [physical] books (p. 
93), the situation is certainly different on the World Wide Web where subject headings as metadata can be most useful in ordering a collection of hot links. The emphasis is also on the traditional paper-based, rather than on the electronic, version of classification schemes, with excellent justifications of course. The reality is, however, that supporting organizations (LC, OCLC, etc.) are now providing great quality services online, and that updates are now available only in an electronic format and no longer on paper. E-based versions of classification schemes could be safely ignored in a theoretical text, but they have to be described and explained in a textbook published in 2005. One last comment: Professor Broughton tends to use the same term, "classification," to represent the process (as in classification is grouping) and the tool (as in constructing a classification, using a classification, etc.). Even in the Glossary where classification is first well-defined as a process, and classification scheme as "a set of classes ...", the definition of classification scheme continues: "the classification consists of a vocabulary (...) and syntax..." (p. 296-297). Such an ambiguous use of the term classification seems unfortunate and unnecessarily confusing in an otherwise very good basic textbook on categorization of concepts and subjects, document organization and subject representation."

Authors

Languages

Types

  • a 15170
  • m 2816
  • el 1264
  • s 805
  • x 639
  • i 207
  • r 153
  • b 102
  • ? 82
  • n 60
  • p 30
  • l 26
  • h 25
  • d 18
  • u 16
  • fi 11
  • z 4
  • v 2
  • au 1
  • ms 1

Themes

Subjects

Classifications