Search (23 results, page 1 of 2)

  • theme_ss:"Literaturübersicht"
  • type_ss:"a"
  • year_i:[2000 TO 2010}
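
  The three active filters above are Solr filter queries; in year_i:[2000 TO 2010} the mixed brackets are Solr's range syntax, with "[" inclusive and "}" exclusive, i.e. 2000 <= year < 2010. A minimal sketch of how such a search might be issued is shown below. The host, core name, and free-text query are assumptions for illustration (the dump does not show the full query; the per-result score explanations only reveal the clauses _text_:systems and _text_:22), while fq, rows, debugQuery, and wt are standard Solr parameters; debugQuery=true is what produces the score breakdowns shown under each result.

    import requests  # any HTTP client works; requests is assumed to be installed

    # Hypothetical endpoint: host and core name are illustrative, not from the dump.
    SOLR_SELECT = "http://localhost:8983/solr/literature/select"

    params = {
        # Placeholder free-text query: the dump does not show the full query,
        # only that the clauses _text_:systems and _text_:22 were scored.
        "q": "systems 22",
        "fq": [  # the three active filters listed above
            'theme_ss:"Literaturübersicht"',
            'type_ss:"a"',
            "year_i:[2000 TO 2010}",  # [ inclusive, } exclusive: 2000 <= year < 2010
        ],
        "rows": 20,
        "debugQuery": "true",  # asks Solr to emit per-document score explanations
        "wt": "json",
    }

    docs = requests.get(SOLR_SELECT, params=params).json()["response"]["docs"]
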
  1. Miksa, S.D.: The challenges of change : a review of cataloging and classification literature, 2003-2004 (2007) 0.05
    0.050061855 = product of:
      0.10012371 = sum of:
        0.10012371 = sum of:
          0.043561947 = weight(_text_:systems in 266) [ClassicSimilarity], result of:
            0.043561947 = score(doc=266,freq=2.0), product of:
              0.16037072 = queryWeight, product of:
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.052184064 = queryNorm
              0.2716328 = fieldWeight in 266, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.0625 = fieldNorm(doc=266)
          0.056561764 = weight(_text_:22 in 266) [ClassicSimilarity], result of:
            0.056561764 = score(doc=266,freq=2.0), product of:
              0.1827397 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052184064 = queryNorm
              0.30952093 = fieldWeight in 266, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=266)
      0.5 = coord(1/2)
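
    The breakdown above is Lucene ClassicSimilarity (TF-IDF) explain output, and its arithmetic can be re-checked directly: tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and each matching term contributes queryWeight * fieldWeight. A minimal sketch in Python (the helper below is for verification only; queryNorm is copied from the output because it depends on the full query, which the dump does not show):

      import math

      MAX_DOCS = 44218
      QUERY_NORM = 0.052184064  # taken from the explain output above

      def idf(doc_freq):
          # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

      def term_score(freq, doc_freq, field_norm):
          tf = math.sqrt(freq)                       # tf(freq) = sqrt(freq)
          query_weight = idf(doc_freq) * QUERY_NORM  # 0.16037072 for "systems"
          field_weight = tf * idf(doc_freq) * field_norm
          return query_weight * field_weight

      # Entry 1 (doc 266): freq 2.0 for both terms, fieldNorm 0.0625
      w_systems = term_score(2.0, 5561, 0.0625)  # ~0.043561947
      w_22 = term_score(2.0, 3622, 0.0625)       # ~0.056561764
      print((w_systems + w_22) * 0.5)            # coord(1/2) -> ~0.050061855

    The same computation reproduces every entry below; only freq, docFreq, and fieldNorm vary. The discrete fieldNorm values (0.125, 0.0625, 0.046875, ...) come from Lucene's single-byte encoding of the length norm, 1/sqrt(number of terms in the field).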
    
    Abstract
    This paper reviews the enormous changes in cataloging and classification reflected in the literature of 2003 and 2004, and discusses major themes and issues. Traditional cataloging and classification tools have been revamped and new resources have emerged. The most notable themes are: the continuing influence of the Functional Requirements for Bibliographic Records (FRBR); the struggle to understand the ever-broadening concept of an "information entity"; steady developments in metadata-encoding standards; and the globalization of information systems, including multilingual challenges.
    Date
    10. 9.2000 17:38:22
  2. Genereux, C.: Building connections : a review of the serials literature 2004 through 2005 (2007) 0.04
    0.044312872 = product of:
      0.088625744 = sum of:
        0.088625744 = sum of:
          0.04620442 = weight(_text_:systems in 2548) [ClassicSimilarity], result of:
            0.04620442 = score(doc=2548,freq=4.0), product of:
              0.16037072 = queryWeight, product of:
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.052184064 = queryNorm
              0.28811008 = fieldWeight in 2548, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.046875 = fieldNorm(doc=2548)
          0.042421322 = weight(_text_:22 in 2548) [ClassicSimilarity], result of:
            0.042421322 = score(doc=2548,freq=2.0), product of:
              0.1827397 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052184064 = queryNorm
              0.23214069 = fieldWeight in 2548, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2548)
      0.5 = coord(1/2)
    
    Abstract
    This review of 2004 and 2005 serials literature covers the themes of cost, management, and access. Interwoven through the serials literature of these two years are the importance of collaboration, communication, and linkages between scholars, publishers, subscription agents and other intermediaries, and librarians. The emphasis in the literature is on electronic serials and their impact on publishing, libraries, and vendors. In response to the crisis of escalating journal prices and libraries' dissatisfaction with Big Deal licensing agreements, Open Access journals and publishing models were promoted. Libraries subscribed to or licensed increasing numbers of electronic serials. As a result, libraries sought ways to better manage licensing and subscription data (not handled by traditional integrated library systems) by implementing electronic resources management systems. In order to provide users with better, faster, and more current information on and access to electronic serials, libraries implemented tools and services providing A-Z title lists, title-by-title coverage data, MARC records, and OpenURL link resolvers.
    Date
    10. 9.2000 17:38:22
  3. Corbett, L.E.: Serials: review of the literature 2000-2003 (2006) 0.03
    0.03128866 = product of:
      0.06257732 = sum of:
        0.06257732 = sum of:
          0.027226217 = weight(_text_:systems in 1088) [ClassicSimilarity], result of:
            0.027226217 = score(doc=1088,freq=2.0), product of:
              0.16037072 = queryWeight, product of:
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.052184064 = queryNorm
              0.1697705 = fieldWeight in 1088, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1088)
          0.0353511 = weight(_text_:22 in 1088) [ClassicSimilarity], result of:
            0.0353511 = score(doc=1088,freq=2.0), product of:
              0.1827397 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052184064 = queryNorm
              0.19345059 = fieldWeight in 1088, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1088)
      0.5 = coord(1/2)
    
    Abstract
    The topic of electronic journals (e-journals) dominated the serials literature from 2000 to 2003. This review is limited to the events and issues within the broad topics of cost, management, and archiving. Coverage of cost includes such initiatives as PEAK, JACC, BioMed Central, SPARC, open access, the "Big Deal," and "going e-only." Librarians combated the continued price increase trend for journals, fueled in part by publisher mergers, with the economies found with bundled packages and consortial subscriptions. Serials management topics include usage statistics; core title lists; staffing needs; the "A-Z list" and other services from such companies as Serials Solutions; "deep linking"; link resolvers such as SFX; development of standards or guidelines, such as COUNTER and ERMI; tracking of license terms; vendor mergers; and the demise of integrated library systems and a subscription agent's bankruptcy. Librarians archived print volumes in storage facilities due to space shortages. Librarians and publishers struggled with electronic archiving concepts, discussing questions of who, where, and how. Projects such as LOCKSS tested potential solutions, but missing online content due to the Tasini court case and retractions posed more archiving difficulties. The serials literature captured much of the upheaval resulting from the rapid pace of changes, many linked to the advent of e-journals.
    Date
    10. 9.2000 17:38:22
  4. Enser, P.G.B.: Visual image retrieval (2008) 0.03
    0.028280882 = product of:
      0.056561764 = sum of:
        0.056561764 = product of:
          0.11312353 = sum of:
            0.11312353 = weight(_text_:22 in 3281) [ClassicSimilarity], result of:
              0.11312353 = score(doc=3281,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.61904186 = fieldWeight in 3281, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=3281)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 1.2012 13:01:26
  5. Morris, S.A.: Mapping research specialties (2008) 0.03
    0.028280882 = product of:
      0.056561764 = sum of:
        0.056561764 = product of:
          0.11312353 = sum of:
            0.11312353 = weight(_text_:22 in 3962) [ClassicSimilarity], result of:
              0.11312353 = score(doc=3962,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.61904186 = fieldWeight in 3962, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=3962)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    13. 7.2008 9:30:22
  6. Fallis, D.: Social epistemology and information science (2006) 0.03
    0.028280882 = product of:
      0.056561764 = sum of:
        0.056561764 = product of:
          0.11312353 = sum of:
            0.11312353 = weight(_text_:22 in 4368) [ClassicSimilarity], result of:
              0.11312353 = score(doc=4368,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.61904186 = fieldWeight in 4368, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=4368)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    13. 7.2008 19:22:28
  7. Nicolaisen, J.: Citation analysis (2007) 0.03
    0.028280882 = product of:
      0.056561764 = sum of:
        0.056561764 = product of:
          0.11312353 = sum of:
            0.11312353 = weight(_text_:22 in 6091) [ClassicSimilarity], result of:
              0.11312353 = score(doc=6091,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.61904186 = fieldWeight in 6091, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=6091)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    13. 7.2008 19:53:22
  8. Hunter, J.: Collaborative semantic tagging and annotation systems (2009) 0.02
    0.021780973 = product of:
      0.043561947 = sum of:
        0.043561947 = product of:
          0.08712389 = sum of:
            0.08712389 = weight(_text_:systems in 7382) [ClassicSimilarity], result of:
              0.08712389 = score(doc=7382,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.5432656 = fieldWeight in 7382, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.125 = fieldNorm(doc=7382)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  9. Vakkari, P.: Task-based information searching (2002) 0.01
    0.014147157 = product of:
      0.028294314 = sum of:
        0.028294314 = product of:
          0.056588627 = sum of:
            0.056588627 = weight(_text_:systems in 4288) [ClassicSimilarity], result of:
              0.056588627 = score(doc=4288,freq=6.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.35286134 = fieldWeight in 4288, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4288)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The rationale for using information systems is to find information that helps us in our daily activities, be they tasks or interests. Systems are expected to support us in searching for and identifying useful information. Although the activities and tasks performed by humans generate information needs and searching, they have attracted little attention in studies of information searching. Such studies have concentrated on search tasks rather than the activities that trigger them. It is obvious that our understanding of information searching is only partial if we are not able to connect aspects of searching to the related task. The expected contribution of information to the task is reflected in relevance assessments of the information items found, and in the search tactics and use of the system in general. Taking the task into account seems to be a necessary condition for understanding and explaining information searching, and, by extension, for effective systems design.
  10. Kim, K.-S.: Recent work in cataloging and classification, 2000-2002 (2003) 0.01
    0.014140441 = product of:
      0.028280882 = sum of:
        0.028280882 = product of:
          0.056561764 = sum of:
            0.056561764 = weight(_text_:22 in 152) [ClassicSimilarity], result of:
              0.056561764 = score(doc=152,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.30952093 = fieldWeight in 152, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=152)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    10. 9.2000 17:38:22
  11. El-Sherbini, M.A.: Cataloging and classification : review of the literature 2005-06 (2008) 0.01
    0.014140441 = product of:
      0.028280882 = sum of:
        0.028280882 = product of:
          0.056561764 = sum of:
            0.056561764 = weight(_text_:22 in 249) [ClassicSimilarity], result of:
              0.056561764 = score(doc=249,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.30952093 = fieldWeight in 249, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=249)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    10. 9.2000 17:38:22
  12. Nielsen, M.L.: Thesaurus construction : key issues and selected readings (2004) 0.01
    0.012372886 = product of:
      0.024745772 = sum of:
        0.024745772 = product of:
          0.049491543 = sum of:
            0.049491543 = weight(_text_:22 in 5006) [ClassicSimilarity], result of:
              0.049491543 = score(doc=5006,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2708308 = fieldWeight in 5006, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5006)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    18. 5.2006 20:06:22
  13. Weiss, A.K.; Carstens, T.V.: The year's work in cataloging, 1999 (2001) 0.01
    0.012372886 = product of:
      0.024745772 = sum of:
        0.024745772 = product of:
          0.049491543 = sum of:
            0.049491543 = weight(_text_:22 in 6084) [ClassicSimilarity], result of:
              0.049491543 = score(doc=6084,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2708308 = fieldWeight in 6084, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6084)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    10. 9.2000 17:38:22
  14. Chowdhury, G.G.: Natural language processing (2002) 0.01
    0.011551105 = product of:
      0.02310221 = sum of:
        0.02310221 = product of:
          0.04620442 = sum of:
            0.04620442 = weight(_text_:systems in 4284) [ClassicSimilarity], result of:
              0.04620442 = score(doc=4284,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.28811008 = fieldWeight in 4284, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4284)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Natural Language Processing (NLP) is an area of research and application that explores how computers can be used to understand and manipulate natural language text or speech to do useful things. NLP researchers aim to gather knowledge on how human beings understand and use language so that appropriate tools and techniques can be developed to make computer systems understand and manipulate natural languages to perform desired tasks. The foundations of NLP lie in a number of disciplines, namely, computer and information sciences, linguistics, mathematics, electrical and electronic engineering, artificial intelligence and robotics, and psychology. Applications of NLP include a number of fields of study, such as machine translation, natural language text processing and summarization, user interfaces, multilingual and cross-language information retrieval (CLIR), speech recognition, artificial intelligence, and expert systems. One important application area that is relatively new and has not been covered in previous ARIST chapters on NLP relates to the proliferation of the World Wide Web and digital libraries.
  15. Downie, J.S.: Music information retrieval (2002) 0.01
    0.008167865 = product of:
      0.01633573 = sum of:
        0.01633573 = product of:
          0.03267146 = sum of:
            0.03267146 = weight(_text_:systems in 4287) [ClassicSimilarity], result of:
              0.03267146 = score(doc=4287,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2037246 = fieldWeight in 4287, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4287)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Imagine a world where you walk up to a computer and sing the song fragment that has been plaguing you since breakfast. The computer accepts your off-key singing, corrects your request, and promptly suggests to you that "Camptown Races" is the cause of your irritation. You confirm the computer's suggestion by listening to one of the many MP3 files it has found. Satisfied, you kindly decline the offer to retrieve all extant versions of the song, including a recently released Italian rap rendition and an orchestral score featuring a bagpipe duet. Does such a system exist today? No. Will it in the future? Yes. Will such a system be easy to produce? Most decidedly not. Myriad difficulties remain to be overcome before the creation, deployment, and evaluation of robust, large-scale, and content-based Music Information Retrieval (MIR) systems become reality. The dizzyingly complex interaction of music's pitch, temporal, harmonic, timbral, editorial, textual, and bibliographic "facets," for example, demonstrates just one of MIR's perplexing problems. The choice of music representation (whether symbol-based, audio-based, or both) further compounds matters, as each choice determines bandwidth, computation, storage, retrieval, and interface requirements and capabilities.
  16. Marsh, S.; Dibben, M.R.: The role of trust in information science and technology (2002) 0.01
    0.008167865 = product of:
      0.01633573 = sum of:
        0.01633573 = product of:
          0.03267146 = sum of:
            0.03267146 = weight(_text_:systems in 4289) [ClassicSimilarity], result of:
              0.03267146 = score(doc=4289,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2037246 = fieldWeight in 4289, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4289)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This chapter discusses the notion of trust as it relates to information science and technology, specifically user interfaces, autonomous agents, and information systems. We first present an in-depth discussion of the concept of trust in and of itself, moving on to applications and considerations of trust in relation to information technologies. We consider trust from a "soft" perspective: thus, although security concepts such as cryptography, virus protection, authentication, and so forth reinforce (or damage) the feelings of trust we may have in a system, they are not themselves constitutive of "trust." We discuss information technology from a human-centric viewpoint, where trust is a less well-structured but much more powerful phenomenon. With the proliferation of electronic commerce (e-commerce) and the World Wide Web (WWW, or Web), much has been made of the ability of individuals to explore the vast quantities of information available to them, to purchase goods (as diverse as vacations and cars) online, and to publish information on their personal Web sites.
  17. Solomon, S.: Discovering information in context (2002) 0.01
    0.008167865 = product of:
      0.01633573 = sum of:
        0.01633573 = product of:
          0.03267146 = sum of:
            0.03267146 = weight(_text_:systems in 4294) [ClassicSimilarity], result of:
              0.03267146 = score(doc=4294,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2037246 = fieldWeight in 4294, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4294)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This chapter has three purposes: to illuminate the ways in which people discover, shape, or create information as part of their lives and work; to consider how the resources and rules of people's situations facilitate or limit discovery of information; and to introduce the idea of a sociotechnical systems design science that is founded in part on understanding the discovery of information in context. In addressing these purposes the chapter focuses on both theoretical and research works in information studies and related fields that shed light on information as something that is embedded in the fabric of people's lives and work. Thus, the discovery of information view presented here characterizes information as being constructed through involvement in life's activities, problems, tasks, and social and technological structures, as opposed to being independent and context free. Given this process view, discovering information entails engagement, reflection, learning, and action (all the behaviors that research subjects often speak of as making sense), above and beyond the traditional focus of the information studies field: seeking without consideration of connections across time.
  18. Williams, P.; Nicholas, D.; Gunter, B.: E-learning: what the literature tells us about distance education : an overview (2005) 0.01
    0.007700737 = product of:
      0.015401474 = sum of:
        0.015401474 = product of:
          0.030802948 = sum of:
            0.030802948 = weight(_text_:systems in 662) [ClassicSimilarity], result of:
              0.030802948 = score(doc=662,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.19207339 = fieldWeight in 662, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.03125 = fieldNorm(doc=662)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - The CIBER group at University College London are currently evaluating a distance education initiative funded by the Department of Health, providing in-service training to NHS staff via DiTV and satellite to PC systems. This paper aims to provide the context for the project by outlining a short history of distance education, describing the media used in providing remote education, and reviewing the research literature on achievement, attitude, barriers to learning, and learner characteristics. Design/methodology/approach - Literature review, with particular, although not exclusive, emphasis on health. Findings - The literature shows little difference in achievement between distance and traditional learners, although using a variety of media, both to deliver pedagogic material and to facilitate communication, does seem to enhance learning. Similarly, attitudinal studies appear to show that the greater the number of channels offered, the more positive students are about their experiences. With regard to barriers to completing courses, the main problems appear to be family or work obligations. Research limitations/implications - The research work this review seeks to contextualize examines "on-demand" showing of filmed lectures via a DiTV system. The literature on DiTV applications research, however, is dominated by studies of simultaneous viewing by on-site and remote students, rather than "on-demand" viewing. Practical implications - Current research being carried out by the authors should enhance the findings accrued by the literature, by exploring the impact of "on-demand" video material delivered by DiTV - something no previous research appears to have examined. Originality/value - Discusses different electronic systems and their exploitation for distance education, and cross-references these with several aspects evaluated in the literature: achievement, attitude, and barriers to take-up or success, to provide a holistic picture hitherto missing from the literature.
  19. Zhu, B.; Chen, H.: Information visualization (2004) 0.01
    0.0067381454 = product of:
      0.013476291 = sum of:
        0.013476291 = product of:
          0.026952581 = sum of:
            0.026952581 = weight(_text_:systems in 4276) [ClassicSimilarity], result of:
              0.026952581 = score(doc=4276,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.16806422 = fieldWeight in 4276, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4276)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Advanced technology has resulted in the generation of about one million terabytes of information every year. Ninety-nine percent of this is available in digital format (Keim, 2001). More information will be generated in the next three years than was created during all of previous human history (Keim, 2001). Collecting information is no longer a problem, but extracting value from information collections has become progressively more difficult. Various search engines have been developed to make it easier to locate information of interest, but these work well only for a person who has a specific goal and who understands what and how information is stored. This usually is not the case. Visualization was commonly thought of in terms of representing human mental processes (MacEachren, 1991; Miller, 1984). The concept is now associated with the amplification of these mental processes (Card, Mackinlay, & Shneiderman, 1999). Human eyes can process visual cues rapidly, whereas advanced information analysis techniques transform the computer into a powerful means of managing digitized information. Visualization offers a link between these two potent systems, the human eye and the computer (Gershon, Eick, & Card, 1998), helping to identify patterns and to extract insights from large amounts of information. The identification of patterns is important because it may lead to a scientific discovery, an interpretation of clues to solve a crime, the prediction of catastrophic weather, a successful financial investment, or a better understanding of human behavior in a computer-mediated environment. Visualization technology shows considerable promise for increasing the value of large-scale collections of information, as evidenced by several commercial applications of TreeMap (e.g., http://www.smartmoney.com) and Hyperbolic tree (e.g., http://www.inxight.com) to visualize large-scale hierarchical structures. Although the proliferation of visualization technologies dates from the 1990s, when sophisticated hardware and software made increasingly faster generation of graphical objects possible, the role of visual aids in facilitating the construction of mental images has a long history. Visualization has been used to communicate ideas, to monitor trends implicit in data, and to explore large volumes of data for hypothesis generation. Imagine traveling to a strange place without a map, having to memorize physical and chemical properties of an element without Mendeleyev's periodic table, trying to understand the stock market without statistical diagrams, or browsing a collection of documents without interactive visual aids. A collection of information can lose its value simply because of the effort required for exhaustive exploration. Such frustrations can be overcome by visualization.
    Visualization can be classified as scientific visualization, software visualization, or information visualization. Although the data differ, the underlying techniques have much in common. They use the same elements (visual cues) and follow the same rules of combining visual cues to deliver patterns. They all involve understanding human perception (Encarnacao, Foley, Bryson, & Feiner, 1994) and require domain knowledge (Tufte, 1990). Because most decisions are based on unstructured information, such as text documents, Web pages, or e-mail messages, this chapter focuses on the visualization of unstructured textual documents. The chapter reviews information visualization techniques developed over the last decade and examines how they have been applied in different domains. The first section provides the background by describing visualization history and giving overviews of scientific, software, and information visualization as well as the perceptual aspects of visualization. The next section assesses important visualization techniques that convert abstract information into visual objects and facilitate navigation through displays on a computer screen. It also explores information analysis algorithms that can be applied to identify or extract salient visualizable structures from collections of information. Information visualization systems that integrate different types of technologies to address problems in different domains are then surveyed; and we move on to a survey and critique of visualization system evaluation studies. The chapter concludes with a summary and identification of future research directions.
  20. Dumais, S.T.: Latent semantic analysis (2003) 0.01
    0.0057755527 = product of:
      0.011551105 = sum of:
        0.011551105 = product of:
          0.02310221 = sum of:
            0.02310221 = weight(_text_:systems in 2462) [ClassicSimilarity], result of:
              0.02310221 = score(doc=2462,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.14405504 = fieldWeight in 2462, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=2462)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Latent Semantic Analysis (LSA) was first introduced in Dumais, Furnas, Landauer, and Deerwester (1988) and Deerwester, Dumais, Furnas, Landauer, and Harshman (1990) as a technique for improving information retrieval. The key insight in LSA was to reduce the dimensionality of the information retrieval problem. Most approaches to retrieving information depend on a lexical match between words in the user's query and those in documents. Indeed, this lexical matching is the way that the popular Web and enterprise search engines work. Such systems are, however, far from ideal. We are all aware of the tremendous amount of irrelevant information that is retrieved when searching. We also fail to find much of the existing relevant material. LSA was designed to address these retrieval problems, using dimension reduction techniques. Fundamental characteristics of human word usage underlie these retrieval failures. People use a wide variety of words to describe the same object or concept (synonymy). Furnas, Landauer, Gomez, and Dumais (1987) showed that people generate the same keyword to describe well-known objects only 20 percent of the time. Poor agreement was also observed in studies of inter-indexer consistency (e.g., Chan, 1989; Tarr & Borko, 1974), in the generation of search terms (e.g., Fidel, 1985; Bates, 1986), and in the generation of hypertext links (Furner, Ellis, & Willett, 1999). Because searchers and authors often use different words, relevant materials are missed. Someone looking for documents on "human-computer interaction" will not find articles that use only the phrase "man-machine studies" or "human factors." People also use the same word to refer to different things (polysemy). Words like "saturn," "jaguar," or "chip" have several different meanings. A short query like "saturn" will thus return many irrelevant documents. The query "Saturn car" will return fewer irrelevant items, but it will miss some documents that use only the term "Saturn automobile." In searching, there is a constant tension between being overly specific and missing relevant information, and being more general and returning irrelevant information.
    With the advent of large-scale collections of full text, statistical approaches are being used more and more to analyze the relationships among terms and documents. LSA takes this approach. LSA induces knowledge about the meanings of documents and words by analyzing large collections of texts. The approach simultaneously models the relationships among documents based on their constituent words, and the relationships between words based on their occurrence in documents. By using fewer dimensions for representation than there are unique words, LSA induces similarities among terms that are useful in solving the information retrieval problems described earlier. LSA is a fully automatic statistical approach to extracting relations among words by means of their contexts of use in documents, passages, or sentences. It makes no use of natural language processing techniques for analyzing morphological, syntactic, or semantic relations. Nor does it use humanly constructed resources like dictionaries, thesauri, lexical reference systems (e.g., WordNet), semantic networks, or other knowledge representations. Its only input is large amounts of texts. LSA is an unsupervised learning technique. It starts with a large collection of texts, builds a term-document matrix, and tries to uncover some similarity structures that are useful for information retrieval and related text-analysis problems. Several recent ARIST chapters have focused on text mining and discovery (Benoit, 2002; Solomon, 2002; Trybula, 2000). These chapters provide complementary coverage of the field of text analysis.
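
    The dimension reduction the chapter describes is, concretely, a truncated singular value decomposition (SVD) of that term-document matrix. A toy sketch of the idea follows (the matrix, vocabulary, and dimension count are invented for illustration; numpy is assumed):

      import numpy as np

      # Toy term-document matrix: rows are terms, columns are documents.
      # "car" and "automobile" never co-occur, but both co-occur with "engine".
      terms = ["car", "automobile", "engine", "library"]
      X = np.array([
          [1, 0, 1, 0],   # car        (docs 1 and 3)
          [0, 1, 0, 0],   # automobile (doc 2 only)
          [1, 1, 1, 0],   # engine     (docs 1-3)
          [0, 0, 0, 2],   # library    (doc 4 only)
      ], dtype=float)

      # Keep k=2 latent dimensions instead of one per unique word.
      U, s, Vt = np.linalg.svd(X, full_matrices=False)
      k = 2
      term_vecs = U[:, :k] * s[:k]  # term representations in the reduced space

      def cosine(a, b):
          return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

      print(cosine(X[0], X[1]))                  # 0.0: no raw co-occurrence
      print(cosine(term_vecs[0], term_vecs[1]))  # ~1.0: synonyms merge after reduction
      print(cosine(term_vecs[0], term_vecs[3]))  # ~0.0: unrelated terms stay apart

    In the raw space "car" and "automobile" are orthogonal because they never share a document, which is exactly the synonymy failure described above; after truncation, their shared context ("engine") pulls their representations together.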