Search (6271 results, page 314 of 314)

  • Filter: language_ss:"e"
  1. Spitzer, K.L.; Eisenberg, M.B.; Lowe, C.A.: Information literacy : essential skills for the information age (2004) 0.00
    0.0036002535 = product of:
      0.007200507 = sum of:
        0.007200507 = product of:
          0.014401014 = sum of:
            0.014401014 = weight(_text_:data in 3686) [ClassicSimilarity], result of:
              0.014401014 = score(doc=3686,freq=2.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.08734013 = fieldWeight in 3686, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3686)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
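    The breakdown above (and the similar ones under the entries below) appears to be Lucene/Solr "explain" output for the classic TF-IDF similarity it names. The following is a minimal Python sketch of that arithmetic using only the figures printed for entry 1 (doc 3686, term "data"); the idf reconstruction from docFreq and maxDocs is an assumed formula, shown because it matches the printed value:

    from math import log, sqrt

    # Figures copied from the explain output above (doc 3686, term "data").
    freq = 2.0
    doc_freq, max_docs = 5088, 44218
    query_norm = 0.052144732
    field_norm = 0.01953125

    tf = sqrt(freq)                               # 1.4142135
    idf = 1.0 + log(max_docs / (doc_freq + 1.0))  # ~3.1620505 (assumed classic idf formula)
    query_weight = idf * query_norm               # ~0.16488427
    field_weight = tf * idf * field_norm          # ~0.08734013
    term_score = query_weight * field_weight      # ~0.014401014 = weight(_text_:data)

    # Two coord(1/2) factors: only one of two query clauses matched at each boolean level.
    final_score = term_score * 0.5 * 0.5          # ~0.0036002535, the score shown for entry 1
    print(f"{final_score:.10f}")

    Run as-is, this prints a value that agrees with the displayed score to the precision shown; the other entries differ only in freq, idf, and fieldNorm.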
    
    Footnote
    Chapter two delves more deeply into the historical evolution of the concept of information literacy, and chapter three summarizes selected information literacy research. Researchers generally agree that information literacy is a process, rather than a set of skills to be learned (despite the unfortunate use of the word "skills" in the ALA definition). Researchers also generally agree that information literacy should be taught across the curriculum, as opposed to limiting it to the library or any other single educational context or discipline. Chapter four discusses economic ties to information literacy, suggesting that countries with information literate populations will better succeed economically in the current and future information-based world economy. A recent report issued by the Basic Education Coalition, an umbrella group of 19 private and nongovernmental development and relief organizations, supports this claim based on a meta-analysis of large bodies of data collected by the World Bank, the United Nations, and other international organizations. Teach a Child, Transform a Nation (Basic Education Coalition, 2004) concluded that no modern nation has achieved sustained economic growth without providing near universal basic education for its citizens. It also concluded that countries that improve their literacy rates by 20 to 30% see subsequent GDP increases of 8 to 16%. In light of the Coalition's finding that one fourth of adults in the world's developing countries are unable to read or write, the goal of worldwide information literacy seems sadly unattainable for the present, a present in which even universal basic literacy is still a pipe dream. Chapter five discusses information literacy across the curriculum as an interpretation of national standards. The many examples of school and university information literacy programs, standards, and policies detailed throughout the volume would be very useful to educators and administrators engaging in program planning and review. For example, the authors explain that the economics standards included in the Goals 2000: Educate America Act comprise 20 benchmark content standards. They quote a two-pronged grade 12 benchmark that first entails students being able to discuss how a high school senior's working 20 hours a week while attending school might result in a reduced overall lifetime income, and second requires students to be able to describe how increasing the federal minimum wage might result in reduced income for some workers. The authors tie this benchmark to information literacy as follows: "Economic decision making requires complex thinking skills because the variables involved are interdependent.
  2. Culture and identity in knowledge organization : Proceedings of the Tenth International ISKO Conference 5-8 August 2008, Montreal, Canada (2008) 0.00
    0.0036002535 = product of:
      0.007200507 = sum of:
        0.007200507 = product of:
          0.014401014 = sum of:
            0.014401014 = weight(_text_:data in 2494) [ClassicSimilarity], result of:
              0.014401014 = score(doc=2494,freq=2.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.08734013 = fieldWeight in 2494, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=2494)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    MULTILINGUAL AND MULTICULTURAL ENVIRONMENTS: K. S. Raghavan and A. Neelameghan. Design and Development of a Bilingual Thesaurus for Classical Tamil Studies: Experiences and Issues. - Elaine Menard. Indexing and Retrieving Images in a Multilingual World. - Maria Odaisa Espinheiro de Oliveira. Knowledge Representation Focusing Amazonian Culture. - Agnes Hajdu Barát. Knowledge Organization in the Cross-cultural and Multicultural Society. - Joan S. Mitchell, Ingebjorg Rype and Magdalena Svanberg. Mixed Translation Models for the Dewey Decimal Classification (DDC) System. - Kathryn La Barre. Discovery and Access Systems for Websites and Cultural Heritage Sites: Reconsidering the Practical Application of Facets. - Mats Dahlström and Joacim Hansson. On the Relation Between Qualitative Digitization and Library Institutional Identity. - Amelia Abreu. "Every Bit Informs Another": Framework Analysis for Descriptive Practice and Linked Information. - Jenn Riley. Moving from a Locally-developed Data Model to a Standard Conceptual Model. - Jan Pisanski and Maja Zumer. How Do Non-librarians See the Bibliographic Universe?
  3. Williamson, N.: Classification research issues (2004) 0.00
    0.003564069 = product of:
      0.007128138 = sum of:
        0.007128138 = product of:
          0.014256276 = sum of:
            0.014256276 = weight(_text_:data in 3727) [ClassicSimilarity], result of:
              0.014256276 = score(doc=3727,freq=4.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.08646232 = fieldWeight in 3727, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.013671875 = fieldNorm(doc=3727)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    With each issue of E&C (Extensions and Corrections to the UDC), another step takes place in the modernization of the system. The year 2004 is no exception. The Editorial Committee has been marking its 10th anniversary with a "proper springcleaning and tidying up of the many textual inconsistencies and typographical mistakes that were inherited and have crept in over that period" (E&C 2004, 5). These corrections will not appear in E&C but were to be reflected in the files to be released to Consortium members and licence holders in January 2005. With E&C 2004, the work on Table 1e, Common Auxiliaries of Place, continued, with Slovakia and Slovenia in Eastern Europe and many of the countries in Africa. Each country is introduced with an editorial note (EN) explaining its origin and the nature of its internal division. The work on the Auxiliaries of Place is expected to be completed in 2005. "This will then mean that all the parts of the world previously designated by alphabetical extensions have now been listed properly and it is possible to use the classification for gazetteer information as well as a means of arranging data" (E&C 2004, 5). Also in this edition, there is a complete revision of 37 Education "so as to incorporate more up-to-date concepts than was previously the case and to eliminate the enumeration of compound concepts by a single notation symbol" (E&C 2004, 5). Major changes have been made to the History of Scotland and the History of Ireland. In 2004, under "Proposed UDC Tables," the work on Class 61 Medical Sciences continues with proposals for the Nervous system and the Sense organs and special senses. The hope is that this phase of the work - the conversion of the tables to the structure used in the Bliss Bibliographic Classification - will be completed early in 2006 and final editing of Class 61 can begin. An "Annex" to the 2004 volume contains "(The first part of) An extended place table." This annex recognizes the fact that the Auxiliaries of Place, as currently being developed, are related to the "medium" (now "standard") edition of the English Text, while some UDC users continue to work with versions of the old "full edition" level of detail. It addresses the need to bridge the gap between the two, in lieu of a needed "authoritative standard version" (E&C 2004, 176) which, one hopes, will be published at a future date. This extended version of Table 1e "derives from the old "full editions" but is updated in accordance with E&C." The author recognizes that it may contain inconsistencies, but deems it useful to have this table as a statement of all that is valid in Table 1e, including details of older editions that have not been cancelled. As indicated by the words "part of," space did not permit the publication of the whole table in E&C 2004. It is intended that the remainder will be published in E&C 2005.
    Victoria Frâncu, in her article "UDC-based thesauri and multiple access to information," compares the performance of two UDC structures in retrieval from an experimental database. Also related to UDC and retrieval is the article by Wouter Schallier, "What a subject search interface can do." In this research, carried out at the K.U. Leuven University Library in Belgium, an experimental interface was developed for subject searching by UDC in an OPAC. The user searches by subject terms and obtains results in which he/she can browse the terms displayed in a hierarchy. Two of the papers are in languages other than English. "Summary of the activities of VINITI in the field of UDC," by Professor Y. Arskiy, is in Russian, and "AENOR y la offerta de productos CDU," by Ana López, is in Spanish. The latter describes several products of AENOR which are supportive of the application of the Spanish version of UDC. An article by Barbara Holder of the Forintek Canada Corporation discusses "Updating the Global Forest Decimal Classification (GFDC)." This system is described as a sister classification to UDC, designed to handle forestry-related information resources. It can be used in conjunction with UDC to provide for non-forestry related materials. In addition there is a bibliography of UDC publications for the year prepared by Aida Slavic, who has also prepared a paper entitled "UDC translations: A 2004 survey report and bibliography." This discussion paper, accompanied by a table, summarizes data on 38 translations, all but seven of which were published since the last survey of UDC translations carried out in 1982. Her article updates the previous work and brings together important information about the history and development of the various versions of UDC.
  4. National Seminar on Classification in the Digital Environment : Papers contributed to the National Seminar on Classification in the Digital Environment, Bangalore, 9-11 August 2001 (2001) 0.00
    0.0035324458 = product of:
      0.0070648915 = sum of:
        0.0070648915 = product of:
          0.014129783 = sum of:
            0.014129783 = weight(_text_:22 in 2047) [ClassicSimilarity], result of:
              0.014129783 = score(doc=2047,freq=2.0), product of:
                0.18260197 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052144732 = queryNorm
                0.07738023 = fieldWeight in 2047, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=2047)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    2. 1.2004 10:35:22
  5. Subject retrieval in a networked environment : Proceedings of the IFLA Satellite Meeting held in Dublin, OH, 14-16 August 2001 and sponsored by the IFLA Classification and Indexing Section, the IFLA Information Technology Section and OCLC (2003) 0.00
    0.0035324458 = product of:
      0.0070648915 = sum of:
        0.0070648915 = product of:
          0.014129783 = sum of:
            0.014129783 = weight(_text_:22 in 3964) [ClassicSimilarity], result of:
              0.014129783 = score(doc=3964,freq=2.0), product of:
                0.18260197 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052144732 = queryNorm
                0.07738023 = fieldWeight in 3964, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3964)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Rez. in: KO 31(2004) no.2, S.117-118 (D. Campbell): "This excellent volume offers 22 papers delivered at an IFLA Satellite meeting in Dublin, Ohio, in 2001. The conference gathered together information and computer scientists to discuss an important and difficult question: in what specific ways can the accumulated skills, theories and traditions of librarianship be mobilized to face the challenges of providing subject access to information in present and future networked information environments? The papers which grapple with this question are organized in a surprisingly deft and coherent way. Many conferences and proceedings have unhappy sessions that contain a hodge-podge of papers that didn't quite fit any other categories. As befits a good classificationist, editor I.C. McIlwaine has kept this problem to a minimum. The papers are organized into eight sessions, which split into two broad categories. The first five sessions deal with subject domains, and the last three deal with subject access tools. The five sessions and thirteen papers that discuss access in different domains appear in order of increasing intension. The first papers deal with access in multilingual environments, followed by papers on access across multiple vocabularies and across sectors, ending up with studies of domain-specific retrieval (primarily education). Some of the papers offer predictably strong work by scholars engaged in ongoing, long-term research. Gerard Riesthuis offers a clear analysis of the complexities of negotiating non-identical thesauri, particularly in cases where hierarchical structure varies across different languages. Hope Olson and Dennis Ward use Olson's familiar and welcome method of using provocative and unconventional theory to generate meliorative approaches to bias in general subject access schemes. Many papers, on the other hand, deal with specific ongoing projects: Renardus, The High Level Thesaurus Project, The Colorado Digitization Project and The Iter Bibliography for medieval and Renaissance material. Most of these papers display a similar structure: an explanation of the theory and purpose of the project, an account of problems encountered in the implementation, and a discussion of the results, both promising and disappointing, thus far. Of these papers, the account of the Multilanguage Access to Subjects Project in Europe (MACS) deserves special mention. In describing how the project is founded on the principle of the equality of languages, with each subject heading language maintained in its own database, and with no single language used as a pivot for the others, Elisabeth Freyre and Max Naudi offer a particularly vivid example of the way the ethics of librarianship translate into pragmatic contexts and concrete procedures. The three sessions and nine papers devoted to subject access tools split into two kinds: papers that discuss the use of theory and research to generate new tools for a networked environment, and those that discuss the transformation of traditional subject access tools in this environment. In the new tool development area, Mary Burke provides a promising example of the bidirectional approach that is so often necessary: in her case study of user-driven classification of photographs, she uses personal construct theory to clarify the practice of classification, while at the same time using practice to test the theory. Carol Bean and Rebecca Green offer an intriguing combination of librarianship and computer science, importing frame representation techniques from artificial intelligence to standardize syntagmatic relationships and enhance recall and precision.
  6. Ewbank, L.: Crisis in subject cataloging and retrieval (1996) 0.00
    0.0035324458 = product of:
      0.0070648915 = sum of:
        0.0070648915 = product of:
          0.014129783 = sum of:
            0.014129783 = weight(_text_:22 in 5580) [ClassicSimilarity], result of:
              0.014129783 = score(doc=5580,freq=2.0), product of:
                0.18260197 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052144732 = queryNorm
                0.07738023 = fieldWeight in 5580, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=5580)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Cataloging and classification quarterly. 22(1996) no.2, S.90-97
  7. Ratzan, L.: Understanding information systems : what they do and why we need them (2004) 0.00
    0.0028802026 = product of:
      0.005760405 = sum of:
        0.005760405 = product of:
          0.01152081 = sum of:
            0.01152081 = weight(_text_:data in 4581) [ClassicSimilarity], result of:
              0.01152081 = score(doc=4581,freq=2.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.0698721 = fieldWeight in 4581, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.015625 = fieldNorm(doc=4581)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    In "Organizing Information" various fundamental organizational schemes are compared. These include hierarchical, relational, hypertext, and random access models. Each is described initially and then expanded an by listing advantages and disadvantages. This comparative format-not found elsewhere in the book-improves access to the subject and overall understanding. The author then affords considerable space to Boolean searching in the chapter "Retrieving Information." Throughout this chapter, the intricacies and problems of pattern matching and relevance are highlighted. The author elucidates the fact that document retrieval by simple pattern matching is not the same as problem solving. Therefore, "always know the nature of the problem you are trying to solve" (p. 56). This chapter is one of the more important ones in the book, covering a large topic swiftly and concisely. Chapters 5 through 11 then delve deeper into various specific issues of information systems. The chapters an securing and concealing information are exceptionally good. Without mentioning specific technologies, Mr. Ratzan is able to clearly present fundamental aspects of information security. Principles of backup security, password management, and encryption are also discussed in some detail. The latter is illustrated with some fascinating examples, from the Navajo Code Talkers to invisible ink and others. The chapters an measuring, counting, and numbering information complement each other well. Some of the more math-centric discussions and examples are found here. "Measuring Information" begins with a brief overview of bibliometrics and then moves quickly through Lotka's law, Zipf's law, and Bradford's law. For an LIS student, exposure to these topics is invaluable. Baseball statistics and web metrics are used for illustration purposes towards the end. In "counting Information," counting devices and methods are first presented, followed by discussion of the Fibonacci sequence and golden ratio. This relatively long chapter ends with examples of the tower of Hanoi, the changes of winning the lottery, and poker odds. The bulk of "Numbering Information" centers an prime numbers and pi. This chapter reads more like something out of an arithmetic book and seems somewhat extraneous here. Three specific types of information systems are presented in the second half of the book, each afforded its own chapter. These examples are universal as not to become dated or irrelevant over time. "The Computer as an Information System" is relatively short and focuses an bits, bytes, and data compression. Considering the Internet as an information system-chapter 13-is an interesting illustration. It brings up issues of IP addressing and the "privilege-vs.-right" access issue. We are reminded that the distinction between information rights and privileges is often unclear. A highlight of this chapter is the discussion of metaphors people use to describe the Internet, derived from the author's own research. He has found that people have varying mental models of the Internet, potentially affecting its perception and subsequent use.
  8. Lazar, J.: Web usability : a user-centered design approach (2006) 0.00
    0.0028802026 = product of:
      0.005760405 = sum of:
        0.005760405 = product of:
          0.01152081 = sum of:
            0.01152081 = weight(_text_:data in 340) [ClassicSimilarity], result of:
              0.01152081 = score(doc=340,freq=2.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.0698721 = fieldWeight in 340, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.015625 = fieldNorm(doc=340)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Besides the major well-known software applications such as FrontPage and Dreamweaver (pp. 191-194), many useful software tools can be adopted to assist and accelerate the Web-development process, improving the productivity of the Web industry. Web Usability mentions such tools as the "code validator" (p. 189) to identify problematic areas of handwritten code against spelling and usage, the tool available at a given URL address to convert portable document format (PDF) files into hypertext markup language (HTML) files (p. 201), WEBXACT, WebSAT, A-Prompt, Dottie, InFocus, and RAMP (pp. 226-227) to automate usability testing, and ClickTracks, NetTracker, WebTrends, and Spotfire (p. 263) to summarize Web-usage data and analyze trends. Thus, Web developers are able to find these tools and benefit from them. Other strengths of the book include the layout of each page, which has a wide margin in which readers may easily place notes, and the fact that the book is easy to read and understand. Although there are many strengths in this book, a few weaknesses are evident. All chapter wrap-ups should have an identical layout. Without numbering for sections and subsections, it is very likely that readers will lose their sense of where they are in the overall information architecture of the book. At present, the only solution is to refer frequently to the table of contents to confirm the location. The hands-on example on p. 39 would be better placed in chap. 4 because it focuses on a requirements gathering method, the interview. There are two similar phrases, namely "user population" and "user group," that are used widely in this book. A user population is composed of user groups; however, the two terms are not used consistently in this book. The section title "Using a Search Engine" (p. 244) should be on the same level as that of the section "Linking to a URL," and not on that of the section entitled "Marketing: Bringing Users to Your Web Site," according to the author's own argument at the top of p. 236.
  9. Rogers, R.: Information politics on the Web (2004) 0.00
    0.0028802026 = product of:
      0.005760405 = sum of:
        0.005760405 = product of:
          0.01152081 = sum of:
            0.01152081 = weight(_text_:data in 442) [ClassicSimilarity], result of:
              0.01152081 = score(doc=442,freq=2.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.0698721 = fieldWeight in 442, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.015625 = fieldNorm(doc=442)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Chapter 2 examines the politics of information retrieval in the context of collaborative filtering techniques. Rogers begins by discussing the underpinnings of modern search engine design by examining medieval practices of knowledge seeking, following up with a critique of collaborative filtering techniques. Rogers's major contention is that collaborative filtering rids us of user idiosyncrasies, as search query strings, preferences, and recommendations are shared among users without much care for the differences among them, both in terms of their innate characteristics and their search goals. To illustrate Rogers's critiques of collaborative filtering, he describes an information searching experiment that he conducted with students at the University of Vienna and the University of Amsterdam. Students were asked to search for information on Viagra. As one can imagine, depending on a number of issues, not the least of which is which sources one extracts information from, a student would find different accounts of reality about Viagra, everything from a medical drug to a black-market drug ideal for underground trade. Rogers described how information on the Web differed from official accounts of certain events. The information on the Web served as an alternative reality. Chapter 3 describes the Web as a dynamic debate-mapping tool, a political instrument. Rogers introduces the "Issue Barometer," an information instrument that measures the social pressure on a topic being debated by analyzing data available from the Web. Measures used by the Issue Barometer include the temperature of the issue (cold to hot), the activity level of the debate (mild to intense), and territorialization (one country to many countries). The Issue Barometer is applied to an illustrative case of the public debate surrounding food safety in the Netherlands in 2001. Chapter 4 introduces "The Web Issue Index," which provides an indication of leading societal issues discussed on the Web. The empirical research on the Web Issue Index was conducted on the Genoa G8 Summit in 1999 and the anti-globalization movement. Rogers's focus here was to examine the changing nature of prominent issues over time, i.e., how issues gained and lost attention and traction over time.
  10. Fuller, M.: Media ecologies : materialist energies in art and technoculture (2005) 0.00
    0.0028802026 = product of:
      0.005760405 = sum of:
        0.005760405 = product of:
          0.01152081 = sum of:
            0.01152081 = weight(_text_:data in 469) [ClassicSimilarity], result of:
              0.01152081 = score(doc=469,freq=2.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.0698721 = fieldWeight in 469, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.015625 = fieldNorm(doc=469)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Moving on to Web pages - Heath Bunting's cctv-world wide watch, where users watching four Webcams are encouraged to report crimes on an HTML form, which is then sent to the nearest police station - Fuller shows how cultural and technological components mesh uneasily in the project. Fuller argues that the "meme" (a kind of replicator that mutates as it passes from person to person or media to media, and works in combination with its immediate environment) or "bit" of identity constitutes a problem for surveillance. The packet of information - often the most common "meme" in Web technology - is, for Fuller, the standard object around which an ecology gets built. Networks check packets as they pass, isolating passwords, URLs, credit data, and items of interest. The packet is the threshold of operations. The meme's "monitorability" enables not only its dissemination through the network, but also its control. Memes, or what Fuller calls "flecks of identity," are referents in the flows of information - they "locate" and "situate" a user. Fuller's work is full of rich insights, especially into the ways in which forces of power operate within media ecologies. Even when the material/technological object, such as the camera or the Webcam, turns in on itself, it is situated within a series of interrelated forces, some of which are antagonistic to the object. This insight - that contemporary media technology works within a field of antagonistic forces too - is Fuller's major contribution. Fuller is also alert to the potential within such force fields for subversion. Pirate radio and phreaking, therefore, emblematize how media ecologies create the context, possibility, and even modalities of political and social protest. Unfortunately, Fuller's style is a shade too digressive and aleatory for us to discover these insights. In his eagerness to incorporate as many theorists and philosophers of media/technology as possible - he moves from Nietzsche to Susan Blackmore, sometimes within the space of a single paragraph - Fuller often takes a long time to get to his contribution to the debate or analysis. The problem, therefore, is mainly with style rather than content, and the arguments would have been perfectly fine if they had been couched in easier forms."
  11. Theorizing digital cultural heritage : a critical discourse (2005) 0.00
    0.0028802026 = product of:
      0.005760405 = sum of:
        0.005760405 = product of:
          0.01152081 = sum of:
            0.01152081 = weight(_text_:data in 1929) [ClassicSimilarity], result of:
              0.01152081 = score(doc=1929,freq=2.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.0698721 = fieldWeight in 1929, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1929)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    The major strength of Theorizing Digital Cultural Heritage: A Critical Discourse is the balance of theory and practice achieved by its authors through the inclusion of discussion on digital culture exhibits and programs. By describing the work being done at diverse cultural institutions, the essays give life to theoretical discussions. By relating theory to practice, the work becomes accessible to a broader range of readers. Further, these essays provide many examples of how libraries and museums could partner with each other in the realm of digital culture. The field of museum studies is dealing with the same issues as information and library science with regard to data organization, user behavior, object classification, and documentation schemas. Also, the emphasis on the users of digital cultural heritage and how individuals make meaningful connections with art, history, and geography is another asset of the book. Each chapter is well researched, resulting in helpful and extensive bibliographies on various aspects of digital culture. Overall, the work is rich in discussion, description and illustrative examples that cover the subject of digital cultural heritage in both depth and breadth. The primary weakness of the title is its focus on museum studies in the discourse on digital cultural heritage. There is much to be shared and discovered across other cultural institutions such as libraries and local historical societies, and a more interdisciplinary approach to the essays included would have captured this. The overwhelming emphasis on museums, unfortunately, may cause some who are researching and studying digital cultural heritage from another perspective to overlook this work, thereby further dividing the efforts and communication of knowledge in this area. This work is highly recommended for collections on museum studies, cultural heritage, art history and documentation, library and information science, and archival science. This work would be most useful to educators and researchers interested in a theoretical understanding of cultural institutions and user interactions in view of the social and political impact of the evolving digital state of cultural heritage, rather than in the specific technologies and specific user studies on digital cultural heritage. Theorizing Digital Cultural Heritage is an insightful work that will encourage further discourse and research."

Types

  • a 5517
  • m 399
  • el 337
  • s 247
  • r 40
  • b 31
  • x 19
  • n 11
  • p 8
  • i 6
  • ? 4
  • l 3
  • d 2
  • h 1