Search (27 results, page 1 of 2)

  • × language_ss:"e"
  • × theme_ss:"Literaturübersicht"
  • × year_i:[2000 TO 2010}
  1. Rogers, Y.: New theoretical approaches for human-computer interaction (2003) 0.05
    0.04719945 = product of:
      0.1258652 = sum of:
        0.052558206 = weight(_text_:supported in 4270) [ClassicSimilarity], result of:
          0.052558206 = score(doc=4270,freq=2.0), product of:
            0.22949564 = queryWeight, product of:
              5.9223356 = idf(docFreq=321, maxDocs=44218)
              0.03875087 = queryNorm
            0.22901614 = fieldWeight in 4270, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.9223356 = idf(docFreq=321, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4270)
        0.05311966 = weight(_text_:cooperative in 4270) [ClassicSimilarity], result of:
          0.05311966 = score(doc=4270,freq=2.0), product of:
            0.23071818 = queryWeight, product of:
              5.953884 = idf(docFreq=311, maxDocs=44218)
              0.03875087 = queryNorm
            0.23023613 = fieldWeight in 4270, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.953884 = idf(docFreq=311, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4270)
        0.020187335 = weight(_text_:work in 4270) [ClassicSimilarity], result of:
          0.020187335 = score(doc=4270,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.14193363 = fieldWeight in 4270, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4270)
      0.375 = coord(3/8)
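The explain tree above can be reproduced arithmetically. A minimal plain-Python sketch (not Lucene itself) of how ClassicSimilarity combines the values shown: tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), and each term's score is queryWeight × fieldWeight, summed and scaled by the coord factor:

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    # score = queryWeight * fieldWeight
    #       = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)
    i = idf(doc_freq, max_docs)
    return (i * query_norm) * (math.sqrt(freq) * i * field_norm)

QUERY_NORM, FIELD_NORM, MAX_DOCS = 0.03875087, 0.02734375, 44218
# docFreq values for "supported", "cooperative", and "work" from the tree above
partial = sum(term_score(2.0, df, MAX_DOCS, QUERY_NORM, FIELD_NORM)
              for df in (321, 311, 3060))
score = partial * 3 / 8  # coord(3/8): 3 of the 8 query clauses matched
```

Running this reproduces the document score of 0.04719945 for result 1.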
    
    Abstract
    A problem with allowing a field to expand eclectically is that it can easily lose coherence. No one really knows what its purpose is anymore or what criteria to use in assessing its contribution and value to both knowledge and practice. For example, among the many new approaches, ideas, methods, and goals now being proposed, how do we know which are acceptable, reliable, useful, and generalizable? Moreover, how do researchers and designers know which of the many tools and techniques to use when doing design and research? To be able to address these concerns, a young field in a state of flux (as is HCI) needs to take stock and begin to reflect on the changes that are happening. The purpose of this chapter is to assess and reflect on the role of theory in contemporary HCI and the extent to which it is used in design practice. Over the last ten years, a range of new theories has been imported into the field. A key question is whether such attempts have been productive in terms of "knowledge transfer." Here knowledge transfer means the translation of research findings (e.g., theory, empirical results, descriptive accounts, cognitive models) from one discipline (e.g., cognitive psychology, sociology) into another (e.g., human-computer interaction, computer supported cooperative work).
  2. Kim, K.-S.: Recent work in cataloging and classification, 2000-2002 (2003) 0.02
    0.016785828 = product of:
      0.06714331 = sum of:
        0.04614248 = weight(_text_:work in 152) [ClassicSimilarity], result of:
          0.04614248 = score(doc=152,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.32441974 = fieldWeight in 152, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0625 = fieldNorm(doc=152)
        0.021000832 = product of:
          0.042001665 = sum of:
            0.042001665 = weight(_text_:22 in 152) [ClassicSimilarity], result of:
              0.042001665 = score(doc=152,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.30952093 = fieldWeight in 152, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=152)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Date
    10. 9.2000 17:38:22
  3. Weiss, A.K.; Carstens, T.V.: ¬The year's work in cataloging, 1999 (2001) 0.01
    0.0146876 = product of:
      0.0587504 = sum of:
        0.04037467 = weight(_text_:work in 6084) [ClassicSimilarity], result of:
          0.04037467 = score(doc=6084,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.28386727 = fieldWeight in 6084, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6084)
        0.018375728 = product of:
          0.036751457 = sum of:
            0.036751457 = weight(_text_:22 in 6084) [ClassicSimilarity], result of:
              0.036751457 = score(doc=6084,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.2708308 = fieldWeight in 6084, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6084)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Date
    10. 9.2000 17:38:22
  4. Cornelius, I.: Theorizing information for information science (2002) 0.01
    0.006676334 = product of:
      0.05341067 = sum of:
        0.05341067 = weight(_text_:work in 4244) [ClassicSimilarity], result of:
          0.05341067 = score(doc=4244,freq=14.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.37552112 = fieldWeight in 4244, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4244)
      0.125 = coord(1/8)
    
    Abstract
    Does information science have a theory of information? There seems to be a tendency within information science to seek a theory of information, but the search is apparently unproductive (Hjørland, 1998; Saracevic, 1999). This review brings together work from inside and outside the field of information science, showing that other perspectives on information theory could be of assistance. Constructivist claims that emphasize the uniqueness of the individual experience of information, maintaining that there is no information independent of our social practices (Cornelius, 1996a), are also mentioned. Such a position would be echoed in a symbolic interactionist approach. Conventionally, the history of attempts to develop a theory of information dates from the publication of Claude Shannon's work in 1948, and his joint publication of that work with an essay by Warren Weaver in 1949 (Shannon & Weaver, 1949/1963). Information science found itself alongside many other disciplines attempting to develop a theory of information (Machlup & Mansfield, 1983). From Weaver's essay stems the claim that the basic concepts of Shannon's mathematical theory of communication, which Shannon later referred to as a theory of information, can be applied in disciplines outside electrical engineering, even in the social sciences.
    Shannon provides a model whereby an information source selects a desired message, out of a set of possible messages, that is then formed into a signal. The signal is sent over the communication channel to a receiver, which then transforms the signal back to a message that is relayed to its destination (Shannon & Weaver, 1949/1963, p. 7). Problems connected with this model have remained with us. Some of the concepts are ambiguous; the identification of information with a process has spancelled the debate; the problems of measuring the amount of information, the relation of information to meaning, and questions about the truth value of information have remained. Balancing attention between the process and the act of receiving information, and determining the character of the receiver, has also been the focus of work and debate. Information science has mined work from other disciplines involving information theory and has also produced its own theory. The desire for theory remains (Hjørland, 1998; Saracevic, 1999), but what theory will deliver is unclear. The distinction between data and information, or communication and information, is not of concern here. The convention that data, at some point of use, become information, and that information is transferred in a process of communication suffices for this discussion. Substitution of any of these terms is not a problem. More problematic is the relationship between information and knowledge. It seems accepted that at some point the data by perception, or selection, become information, which feeds and alters knowledge structures in a human recipient. What that process of alteration is, and its implications, remain problematic. This review considers the following questions: 1. What can be gleaned from the history of reviews of information in information science? 2. What current maps, guides, and surveys are available to elaborate our understanding of the issues? 3. 
Is there a parallel development of work outside information science on information theory of use to us? 4. Is there a dominant view of information within information science? 5. What can we say about issues like measurement, meaning, and misinformation? 6. Is there other current work of relevance that can assist attempts, in information science, to develop a theory of information?
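The "amount of information" in Shannon's sense mentioned above can be stated concretely as entropy over a message distribution. A minimal illustration (the probability values are invented for the example):

```python
import math

def entropy(probs):
    # Shannon entropy in bits: H = -sum p * log2(p)
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A source selecting among four equally likely messages carries
# log2(4) = 2 bits per selection; a skewed source carries less.
print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
print(entropy([0.7, 0.1, 0.1, 0.1]))
```

The second value is lower, reflecting that a more predictable source conveys less information per message.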
  5. Solomon, S.: Discovering information in context (2002) 0.01
    0.0061176866 = product of:
      0.048941493 = sum of:
        0.048941493 = weight(_text_:work in 4294) [ClassicSimilarity], result of:
          0.048941493 = score(doc=4294,freq=4.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.3440991 = fieldWeight in 4294, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.046875 = fieldNorm(doc=4294)
      0.125 = coord(1/8)
    
    Abstract
    This chapter has three purposes: to illuminate the ways in which people discover, shape, or create information as part of their lives and work; to consider how the resources and rules of people's situations facilitate or limit discovery of information; and to introduce the idea of a sociotechnical systems design science that is founded in part on understanding the discovery of information in context. In addressing these purposes the chapter focuses on both theoretical and research works in information studies and related fields that shed light on information as something that is embedded in the fabric of people's lives and work. Thus, the discovery of information view presented here characterizes information as being constructed through involvement in life's activities, problems, tasks, and social and technological structures, as opposed to being independent and context free. Given this process view, discovering information entails engagement, reflection, learning, and action-all the behaviors that research subjects often speak of as making sense-above and beyond the traditional focus of the information studies field: seeking without consideration of connections across time.
  6. Enser, P.G.B.: Visual image retrieval (2008) 0.01
    0.005250208 = product of:
      0.042001665 = sum of:
        0.042001665 = product of:
          0.08400333 = sum of:
            0.08400333 = weight(_text_:22 in 3281) [ClassicSimilarity], result of:
              0.08400333 = score(doc=3281,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.61904186 = fieldWeight in 3281, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=3281)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    22. 1.2012 13:01:26
  7. Morris, S.A.: Mapping research specialties (2008) 0.01
    0.005250208 = product of:
      0.042001665 = sum of:
        0.042001665 = product of:
          0.08400333 = sum of:
            0.08400333 = weight(_text_:22 in 3962) [ClassicSimilarity], result of:
              0.08400333 = score(doc=3962,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.61904186 = fieldWeight in 3962, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=3962)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    13. 7.2008 9:30:22
  8. Fallis, D.: Social epistemology and information science (2006) 0.01
    0.005250208 = product of:
      0.042001665 = sum of:
        0.042001665 = product of:
          0.08400333 = sum of:
            0.08400333 = weight(_text_:22 in 4368) [ClassicSimilarity], result of:
              0.08400333 = score(doc=4368,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.61904186 = fieldWeight in 4368, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=4368)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    13. 7.2008 19:22:28
  9. Nicolaisen, J.: Citation analysis (2007) 0.01
    0.005250208 = product of:
      0.042001665 = sum of:
        0.042001665 = product of:
          0.08400333 = sum of:
            0.08400333 = weight(_text_:22 in 6091) [ClassicSimilarity], result of:
              0.08400333 = score(doc=6091,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.61904186 = fieldWeight in 6091, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=6091)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    13. 7.2008 19:53:22
  10. Denton, W.: Putting facets on the Web : an annotated bibliography (2003) 0.00
    0.00476881 = product of:
      0.03815048 = sum of:
        0.03815048 = weight(_text_:work in 2467) [ClassicSimilarity], result of:
          0.03815048 = score(doc=2467,freq=14.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.26822937 = fieldWeight in 2467, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2467)
      0.125 = coord(1/8)
    
    Abstract
    Consider movie listings in newspapers. Most Canadian newspapers list movie showtimes in two large blocks, for the two major theatre chains. The listings are ordered by region (in large cities), then theatre, then movie, and finally by showtime. Anyone wondering where and when a particular movie is playing must scan the complete listings. Determining what movies are playing in the next half hour is very difficult. When movie listings went onto the web, most sites used a simple faceted organization, always with movie name and theatre, and perhaps with region or neighbourhood (thankfully, theatre chains were left out). They make it easy to pick a theatre and see what movies are playing there, or to pick a movie and see what theatres are showing it. To complete the system, the sites should allow users to browse by neighbourhood and showtime, and to order the results in any way they desired. Thus could people easily find answers to such questions as, "Where is the new James Bond movie playing?" "What's showing at the Roxy tonight?" "I'm going to be out in Little Finland this afternoon with three hours to kill starting at 2 ... is anything interesting playing?" A hypertext, faceted classification system makes more useful information more easily available to the user. Reading the books and articles below in chronological order will show a certain progression: suggestions that faceting and hypertext might work well, confidence that facets would work well if only someone would make such a system, and finally the beginning of serious work on actually designing, building, and testing faceted web sites. There is a solid basis of how to make faceted classifications (see Vickery in Recommended), but their application online is just starting. Work on XFML (see Van Dijck's work in Recommended) the Exchangeable Faceted Metadata Language, will make this easier. 
If it follows previous patterns, parts of the Internet community will embrace the idea and make open source software available for others to reuse. It will be particularly beneficial if professionals in both information studies and computer science can work together to build working systems, standards, and code. Each can benefit from the other's expertise in what can be a very complicated and technical area. One particularly nice thing about this area of research is that people interested in combining facets and the web often have web sites where they post their writings.
    This bibliography is not meant to be exhaustive, but unfortunately it is not as complete as I wanted. Some books and articles are not included, but they may be used in my future work. (These include two books and one article by B.C. Vickery: Faceted Classification Schemes (New Brunswick, NJ: Rutgers, 1966), Classification and Indexing in Science, 3rd ed. (London: Butterworths, 1975), and "Knowledge Representation: A Brief Review" (Journal of Documentation 42 no. 3 (September 1986): 145-159); and A.C. Foskett's "The Future of Faceted Classification" in The Future of Classification, edited by Rita Marcella and Arthur Maltby (Aldershot, England: Gower, 2000): 69-80). Nevertheless, I hope this bibliography will be useful for those both new to or familiar with faceted hypertext systems. Some very basic resources are listed, as well as some very advanced ones. Some example web sites are mentioned, but there is no detailed technical discussion of any software. The user interface to any web site is extremely important, and this is briefly mentioned in two or three places (for example the discussion of lawforwa.org (see Example Web Sites)). The larger question of how to display information graphically and with hypertext is outside the scope of this bibliography. There are five sections: Recommended, Background, Not Relevant, Example Web Sites, and Mailing Lists. Background material is either introductory, advanced, or of peripheral interest, and can be read after the Recommended resources if the reader wants to know more. The Not Relevant category contains articles that may appear in bibliographies but are not relevant for my purposes.
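The movie-listings scenario in the abstract above can be sketched as conjunctive faceted filtering over records. A toy illustration (titles, venues, and times are invented):

```python
def browse(records, **facets):
    # Keep only the records matching every selected facet value
    return [r for r in records if all(r.get(k) == v for k, v in facets.items())]

# Invented sample listings
listings = [
    {"movie": "Film X", "theatre": "Roxy",  "neighbourhood": "Downtown",       "time": "19:00"},
    {"movie": "Film Y", "theatre": "Roxy",  "neighbourhood": "Downtown",       "time": "21:30"},
    {"movie": "Film X", "theatre": "Grand", "neighbourhood": "Little Finland", "time": "14:00"},
]

roxy_showings  = browse(listings, theatre="Roxy")    # "What's showing at the Roxy?"
film_x_venues  = browse(listings, movie="Film X")    # "Where is Film X playing?"
afternoon_show = browse(listings, neighbourhood="Little Finland", time="14:00")
```

Because every facet is an independent key, the same records answer theatre-first, movie-first, or neighbourhood-first questions without re-sorting a fixed hierarchy.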
  11. Kling, R.: ¬The Internet and unrefereed scholarly publishing (2003) 0.00
    0.004325858 = product of:
      0.034606863 = sum of:
        0.034606863 = weight(_text_:work in 4272) [ClassicSimilarity], result of:
          0.034606863 = score(doc=4272,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.2433148 = fieldWeight in 4272, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.046875 = fieldNorm(doc=4272)
      0.125 = coord(1/8)
    
    Abstract
    In the early 1990s, much of the enthusiasm for the use of electronic media to enhance scholarly communication focused on electronic journals, especially electronic-only, (pure) e-journals (see for example, Peek & Newby's [1996] anthology). Much of the systematic research on the use of electronic media to enhance scholarly communication also focused on electronic journals. However, by the late 1990s, numerous scientific publishers had transformed their paper journals (p-journals) into paper and electronic journals (p-e journals) and sold them via subscription models that did not provide the significant costs savings, speed of access, or breadth of audience that pure e-journal advocates had expected (Okerson, 1996). In 2001, a group of senior life scientists led a campaign to have publishers make their journals freely available online six months after publication (Russo, 2001). The campaign leaders, using the name "Public Library of Science," asked scientists to boycott journals that did not comply with these demands for open access. Although the proposal was discussed in scientific magazines and conferences, it apparently did not persuade any journal publishers to comply (Young, 2002). Most productive scientists, who work for major universities and research institutes
  12. Benoit, G.: Data mining (2002) 0.00
    0.004325858 = product of:
      0.034606863 = sum of:
        0.034606863 = weight(_text_:work in 4296) [ClassicSimilarity], result of:
          0.034606863 = score(doc=4296,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.2433148 = fieldWeight in 4296, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.046875 = fieldNorm(doc=4296)
      0.125 = coord(1/8)
    
    Abstract
    Data mining (DM) is a multistaged process of extracting previously unanticipated knowledge from large databases, and applying the results to decision making. Data mining tools detect patterns from the data and infer associations and rules from them. The extracted information may then be applied to prediction or classification models by identifying relations within the data records or between databases. Those patterns and rules can then guide decision making and forecast the effects of those decisions. However, this definition may be applied equally to "knowledge discovery in databases" (KDD). Indeed, in the recent literature of DM and KDD, a source of confusion has emerged, making it difficult to determine the exact parameters of both. KDD is sometimes viewed as the broader discipline, of which data mining is merely a component-specifically pattern extraction, evaluation, and cleansing methods (Raghavan, Deogun, & Sever, 1998, p. 397). Thurasingham (1999, p. 2) remarked that "knowledge discovery," "pattern discovery," "data dredging," "information extraction," and "knowledge mining" are all employed as synonyms for DM. Trybula, in his ARIST chapter on text mining, observed that the "existing work [in KDD] is confusing because the terminology is inconsistent and poorly defined."
  13. Williams, P.; Nicholas, D.; Gunter, B.: E-learning: what the literature tells us about distance education : an overview (2005) 0.00
    0.0040784576 = product of:
      0.03262766 = sum of:
        0.03262766 = weight(_text_:work in 662) [ClassicSimilarity], result of:
          0.03262766 = score(doc=662,freq=4.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.2293994 = fieldWeight in 662, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03125 = fieldNorm(doc=662)
      0.125 = coord(1/8)
    
    Abstract
    Purpose - The CIBER group at University College London are currently evaluating a distance education initiative funded by the Department of Health, providing in-service training to NHS staff via DiTV and satellite to PC systems. This paper aims to provide the context for the project by outlining a short history of distance education, describing the media used in providing remote education, and to review research literature on achievement, attitude, barriers to learning and learner characteristics. Design/methodology/approach - Literature review, with particular, although not exclusive, emphasis on health. Findings - The literature shows little difference in achievement between distance and traditional learners, although using a variety of media, both to deliver pedagogic material and to facilitate communication, does seem to enhance learning. Similarly, attitudinal studies appear to show that the greater number of channels offered, the more positive students are about their experiences. With regard to barriers to completing courses, the main problems appear to be family or work obligations. Research limitations/implications - The research work this review seeks to consider is examining "on-demand" showing of filmed lectures via a DiTV system. The literature on DiTV applications research, however, is dominated by studies of simultaneous viewing by on-site and remote students, rather than "on-demand". Practical implications - Current research being carried out by the authors should enhance the findings accrued by the literature, by exploring the impact of "on-demand" video material, delivered by DiTV - something no previous research appears to have examined. Originality/value - Discusses different electronic systems and their exploitation for distance education, and cross-references these with several aspects evaluated in the literature: achievement, attitude, barriers to take-up or success, to provide a holistic picture hitherto missing from the literature.
  14. Liu, X.; Croft, W.B.: Statistical language modeling for information retrieval (2004) 0.00
    0.0036048815 = product of:
      0.028839052 = sum of:
        0.028839052 = weight(_text_:work in 4277) [ClassicSimilarity], result of:
          0.028839052 = score(doc=4277,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.20276234 = fieldWeight in 4277, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4277)
      0.125 = coord(1/8)
    
    Abstract
    This chapter reviews research and applications in statistical language modeling for information retrieval (IR), which has emerged within the past several years as a new probabilistic framework for describing information retrieval processes. Generally speaking, statistical language modeling, or more simply language modeling (LM), involves estimating a probability distribution that captures statistical regularities of natural language use. Applied to information retrieval, language modeling refers to the problem of estimating the likelihood that a query and a document could have been generated by the same language model, given the language model of the document either with or without a language model of the query. The roots of statistical language modeling date to the beginning of the twentieth century when Markov tried to model letter sequences in works of Russian literature (Manning & Schütze, 1999). Zipf (1929, 1932, 1949, 1965) studied the statistical properties of text and discovered that the frequency of words decays as a power function of each word's rank. However, it was Shannon's (1951) work that inspired later research in this area. In 1951, eager to explore the applications of his newly founded information theory to human language, Shannon used a prediction game involving n-grams to investigate the information content of English text. He evaluated n-gram models' performance by comparing their cross-entropy on texts with the true entropy estimated using predictions made by human subjects. For many years, statistical language models have been used primarily for automatic speech recognition. Since 1980, when the first significant language model was proposed (Rosenfeld, 2000), statistical language modeling has become a fundamental component of speech recognition, machine translation, and spelling correction.
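The query-document likelihood idea described above can be sketched as a unigram query-likelihood model. A minimal illustration using Jelinek-Mercer smoothing, one common variant of the framework (corpus and query are invented, and the smoothing weight is an arbitrary choice):

```python
import math
from collections import Counter

def lm_score(query, doc, collection, lam=0.5):
    # Query likelihood with Jelinek-Mercer smoothing:
    #   log P(q | d) = sum over query words w of
    #                  log( lam * P(w | d) + (1 - lam) * P(w | collection) )
    # The collection model keeps unseen words from zeroing out the score.
    d_counts, c_counts = Counter(doc), Counter(collection)
    d_len, c_len = len(doc), len(collection)
    return sum(math.log(lam * d_counts[w] / d_len
                        + (1 - lam) * c_counts[w] / c_len)
               for w in query)

# Invented toy corpus: two tokenized "documents"
docs = [["language", "model", "retrieval"],
        ["speech", "recognition", "model"]]
collection = [w for d in docs for w in d]
query = ["language", "model"]
scores = [lm_score(query, d, collection) for d in docs]
```

The document whose language model better explains the query receives the higher log-likelihood and would be ranked first.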
  15. Fischer, K.S.: Critical views of LCSH, 1990-2001 : the third bibliographic essay (2005) 0.00
    0.0036048815 = product of:
      0.028839052 = sum of:
        0.028839052 = weight(_text_:work in 5738) [ClassicSimilarity], result of:
          0.028839052 = score(doc=5738,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.20276234 = fieldWeight in 5738, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5738)
      0.125 = coord(1/8)
    
    Abstract
    This classified critical bibliography continues the work initiated by Monika Kirtland and Pauline Cochrane, and furthered by Steven Blake Shubert. Kirtland and Cochrane published a bibliography surveying the literature critical of LCSH from 1944-1979 titled "Critical Views of LCSH - Library of Congress Subject Headings, A Bibliographic and Bibliometric Essay." Shubert analyzed another decade of literature in his article titled "Critical Views of LCSH-Ten Years Later: A Bibliographic Essay." This current bibliography compiles the next twelve years of critical literature from 1990-2001. Persistent concerns of the past fifty-seven years include inadequate syndetic structure, currency or bias of the headings, and lack of specificity in the subject heading list. New developments and research are in the areas of subdivisions, mapping, indexer inconsistency, and post-coordination. LCSH must become more flexible and easier to use in order to increase its scalability and interoperability as an online subject searching tool.
  16. Rader, H.B.: Information literacy 1973-2002 : a selected literature review (2002) 0.00
    0.0036048815 = product of:
      0.028839052 = sum of:
        0.028839052 = weight(_text_:work in 43) [ClassicSimilarity], result of:
          0.028839052 = score(doc=43,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.20276234 = fieldWeight in 43, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0390625 = fieldNorm(doc=43)
      0.125 = coord(1/8)
    
    Abstract
    More than 5000 publications related to library user instruction and information literacy have been published and reviewed in the past thirty years. New developments in education and technology during the last two decades have affected user instruction and have led to the emergence of information literacy. Based on needs related to the rapid development of information technology and the evolving information society, librarians have begun teaching information skills to all types of users to ensure that they gain information fluency so they can become productive and effective information users both in the education environment and in the work environment. The number of publications related to user instruction and information literacy, like the field itself, shows phenomenal growth during the past three decades, as demonstrated by the fact that in 1973 twenty-eight publications were reviewed, and in 2002 more than 300 publications dealing with the topic of information literacy will be issued. It is noteworthy that in the last decade there has been a tremendous growth in publications related to information literacy globally. During the 1970s, publications indicate that user instruction activities were of concern primarily to librarians in the United States, Canada, the United Kingdom, Australia, and New Zealand. At the present time, publications indicate a major concern with information literacy not only in the countries mentioned above but also in China, Germany, Mexico, Scandinavia, Singapore, South Africa, South America, Spain, and others. On an annual basis, the majority of the publications have addressed information literacy in academic libraries (60 percent), followed by publications related to information literacy instruction in school media centers (20 percent).
  17. Gilliland-Swetland, A.: Electronic records management (2004) 0.00
    0.002883905 = product of:
      0.02307124 = sum of:
        0.02307124 = weight(_text_:work in 4280) [ClassicSimilarity], result of:
          0.02307124 = score(doc=4280,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.16220987 = fieldWeight in 4280, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03125 = fieldNorm(doc=4280)
      0.125 = coord(1/8)
    
    Abstract
    What is an electronic record, how should it best be preserved and made available, and to what extent do traditional, paradigmatic archival precepts such as provenance, original order, and archival custody hold when managing it? Over more than four decades of work in the area of electronic records (formerly known as machine-readable records), theorists and researchers have offered answers to these questions, or at least devised approaches for trying to answer them. However, a set of fundamental questions about the nature of the record and the applicability of traditional archival theory still confronts researchers seeking to advance knowledge and development in this increasingly active, but contested, area of research. For example, which characteristics differentiate a record from other types of information objects (such as publications or raw research data)? Are these characteristics consistently present regardless of the medium of the record? Does the record always have to have a tangible form? How does the record manifest itself within different technological and procedural contexts, and in particular, how do we determine the parameters of electronic records created in relational, distributed, or dynamic environments that bear little resemblance on the surface to traditional paper-based environments? At the heart of electronic records research lies a dual concern with the nature of the record as a specific type of information object and the nature of legal and historical evidence in a digital world. Electronic records research is relevant to the agendas of many communities in addition to that of archivists. Its emphasis on accountability and on establishing trust in records, for example, addresses concerns that are central to both digital government and e-commerce.
Research relating to electronic records is still relatively homogeneous in terms of scope, in that most major research initiatives have addressed various combinations of the following: theory building in terms of identifying the nature of the electronic record, developing alternative conceptual models, establishing the determinants of reliability and authenticity in active and preserved electronic records, identifying functional and metadata requirements for record keeping, developing and testing preservation
  18. Saracevic, T.: Relevance: a review of the literature and a framework for thinking on the notion in information science. Part II : nature and manifestations of relevance (2007) 0.00
    0.002883905 = product of:
      0.02307124 = sum of:
        0.02307124 = weight(_text_:work in 612) [ClassicSimilarity], result of:
          0.02307124 = score(doc=612,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.16220987 = fieldWeight in 612, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03125 = fieldNorm(doc=612)
      0.125 = coord(1/8)
    
    Abstract
    Relevance is a, if not even the, key notion in information science in general and information retrieval in particular. This two-part critical review traces and synthesizes the scholarship on relevance over the past 30 years and provides an updated framework within which the still widely dissonant ideas and works about relevance might be interpreted and related. It is a continuation and update of a similar review that appeared in 1975 under the same title, considered here as being Part I. The present review is organized into two parts: Part II addresses the questions related to nature and manifestations of relevance, and Part III addresses questions related to relevance behavior and effects. In Part II, the nature of relevance is discussed in terms of meaning ascribed to relevance, theories used or proposed, and models that have been developed. The manifestations of relevance are classified as to several kinds of relevance that form an interdependent system of relevances. In Part III, relevance behavior and effects are synthesized using experimental and observational works that incorporate data. In both parts, each section concludes with a summary that in effect provides an interpretation and synthesis of contemporary thinking on the topic treated or suggests hypotheses for future research. Analyses of some of the major trends that shape relevance work are offered in conclusions.
  19. El-Sherbini, M.A.: Cataloging and classification : review of the literature 2005-06 (2008) 0.00
    0.002625104 = product of:
      0.021000832 = sum of:
        0.021000832 = product of:
          0.042001665 = sum of:
            0.042001665 = weight(_text_:22 in 249) [ClassicSimilarity], result of:
              0.042001665 = score(doc=249,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.30952093 = fieldWeight in 249, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=249)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    10. 9.2000 17:38:22
  20. Miksa, S.D.: ¬The challenges of change : a review of cataloging and classification literature, 2003-2004 (2007) 0.00
    0.002625104 = product of:
      0.021000832 = sum of:
        0.021000832 = product of:
          0.042001665 = sum of:
            0.042001665 = weight(_text_:22 in 266) [ClassicSimilarity], result of:
              0.042001665 = score(doc=266,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.30952093 = fieldWeight in 266, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=266)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    10. 9.2000 17:38:22