Search (34 results, page 1 of 2)

  • theme_ss:"Literaturübersicht"
  1. Galloway, P.: Preservation of digital objects (2003) 0.08
    0.07660149 = product of:
      0.30640596 = sum of:
        0.30640596 = weight(_text_:objects in 4275) [ClassicSimilarity], result of:
          0.30640596 = score(doc=4275,freq=18.0), product of:
            0.34784988 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06544595 = queryNorm
            0.8808569 = fieldWeight in 4275, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4275)
      0.25 = coord(1/4)
    
    Abstract
    The preservation of digital objects (defined here as objects in digital form that require a computer to support their existence and display) is obviously an important practical issue for the information professions, with its importance growing daily as more information objects are produced in, or converted to, digital form. Yakel's (2001) review of the field provided a much-needed introduction. At the same time, the complexity of new digital objects continues to increase, challenging existing preservation efforts (Lee, Slattery, Lu, Tang, & McCrary, 2002). The field of information science itself is beginning to pay some reflexive attention to the creation of fragile and unpreservable digital objects. But these concerns often focus on the practical problems of short-term repurposing of digital objects rather than actual preservation, by which I mean the activity of carrying digital objects from one software generation to another, undertaken for purposes beyond the original reasons for creating the objects. For preservation in this sense to be possible, information science as a discipline needs to be active in the formulation of, and advocacy for, national information policies. Such policies will need to challenge the predominant cultural expectation of planned obsolescence for information resources, and cultural artifacts in general.
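    Note: the relevance breakdown listed with each hit is Lucene's ClassicSimilarity (TF-IDF) explain output. The short Python sketch below reproduces the arithmetic for the first hit, assuming the standard ClassicSimilarity definitions of the printed factors (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), coord = matching clauses / total clauses); it only illustrates how the listed 0.07660149 arises and is not part of the source data.

```python
import math

# Sketch: recomputing the ClassicSimilarity score reported for hit 1,
# using the factors printed in its explain tree. Assumed definitions:
#   tf(freq)    = sqrt(freq)
#   idf(df, N)  = 1 + ln(N / (df + 1))
#   queryWeight = idf * queryNorm
#   fieldWeight = tf * idf * fieldNorm
#   score       = coord * queryWeight * fieldWeight

freq = 18.0                       # termFreq of "objects" in doc 4275
doc_freq, max_docs = 590, 44218   # from idf(docFreq=590, maxDocs=44218)
query_norm = 0.06544595
field_norm = 0.0390625
coord = 1 / 4                     # coord(1/4): 1 of 4 query clauses matched

tf = math.sqrt(freq)                           # ~4.2426405
idf = 1 + math.log(max_docs / (doc_freq + 1))  # ~5.315071
query_weight = idf * query_norm                # ~0.34784988
field_weight = tf * idf * field_norm           # ~0.8808569
score = coord * query_weight * field_weight

print(f"{score:.8f}")  # ~0.07660149, matching the listed score up to rounding
```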
  2. Olson, N.B.; Swanson, E.: The year's work in nonbook processing (1989) 0.05
    0.05106766 = product of:
      0.20427065 = sum of:
        0.20427065 = weight(_text_:objects in 8330) [ClassicSimilarity], result of:
          0.20427065 = score(doc=8330,freq=2.0), product of:
            0.34784988 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06544595 = queryNorm
            0.58723795 = fieldWeight in 8330, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.078125 = fieldNorm(doc=8330)
      0.25 = coord(1/4)
    
    Abstract
    Reviews the literature, published in 1988, covering the cataloguing of audio-visual materials including: computerised files; music and sound recordings; film and video; graphic materials and 3-dimensional objects; maps; and preservation.
  3. Enser, P.G.B.: Visual image retrieval (2008) 0.04
    0.035468094 = product of:
      0.14187238 = sum of:
        0.14187238 = weight(_text_:22 in 3281) [ClassicSimilarity], result of:
          0.14187238 = score(doc=3281,freq=2.0), product of:
            0.22918057 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06544595 = queryNorm
            0.61904186 = fieldWeight in 3281, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=3281)
      0.25 = coord(1/4)
    
    Date
    22. 1.2012 13:01:26
  4. Morris, S.A.: Mapping research specialties (2008) 0.04
    0.035468094 = product of:
      0.14187238 = sum of:
        0.14187238 = weight(_text_:22 in 3962) [ClassicSimilarity], result of:
          0.14187238 = score(doc=3962,freq=2.0), product of:
            0.22918057 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06544595 = queryNorm
            0.61904186 = fieldWeight in 3962, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=3962)
      0.25 = coord(1/4)
    
    Date
    13. 7.2008 9:30:22
  5. Fallis, D.: Social epistemology and information science (2006) 0.04
    0.035468094 = product of:
      0.14187238 = sum of:
        0.14187238 = weight(_text_:22 in 4368) [ClassicSimilarity], result of:
          0.14187238 = score(doc=4368,freq=2.0), product of:
            0.22918057 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06544595 = queryNorm
            0.61904186 = fieldWeight in 4368, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=4368)
      0.25 = coord(1/4)
    
    Date
    13. 7.2008 19:22:28
  6. Nicolaisen, J.: Citation analysis (2007) 0.04
    0.035468094 = product of:
      0.14187238 = sum of:
        0.14187238 = weight(_text_:22 in 6091) [ClassicSimilarity], result of:
          0.14187238 = score(doc=6091,freq=2.0), product of:
            0.22918057 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06544595 = queryNorm
            0.61904186 = fieldWeight in 6091, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=6091)
      0.25 = coord(1/4)
    
    Date
    13. 7.2008 19:53:22
  7. Metz, A.: Community service : a bibliography (1996) 0.04
    0.035468094 = product of:
      0.14187238 = sum of:
        0.14187238 = weight(_text_:22 in 5341) [ClassicSimilarity], result of:
          0.14187238 = score(doc=5341,freq=2.0), product of:
            0.22918057 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06544595 = queryNorm
            0.61904186 = fieldWeight in 5341, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=5341)
      0.25 = coord(1/4)
    
    Date
    17.10.1996 14:22:33
  8. Belkin, N.J.; Croft, W.B.: Retrieval techniques (1987) 0.04
    0.035468094 = product of:
      0.14187238 = sum of:
        0.14187238 = weight(_text_:22 in 334) [ClassicSimilarity], result of:
          0.14187238 = score(doc=334,freq=2.0), product of:
            0.22918057 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06544595 = queryNorm
            0.61904186 = fieldWeight in 334, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=334)
      0.25 = coord(1/4)
    
    Source
    Annual review of information science and technology. 22(1987), S.109-145
  9. Smith, L.C.: Artificial intelligence and information retrieval (1987) 0.04
    0.035468094 = product of:
      0.14187238 = sum of:
        0.14187238 = weight(_text_:22 in 335) [ClassicSimilarity], result of:
          0.14187238 = score(doc=335,freq=2.0), product of:
            0.22918057 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06544595 = queryNorm
            0.61904186 = fieldWeight in 335, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=335)
      0.25 = coord(1/4)
    
    Source
    Annual review of information science and technology. 22(1987), S.41-77
  10. Warner, A.J.: Natural language processing (1987) 0.04
    0.035468094 = product of:
      0.14187238 = sum of:
        0.14187238 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
          0.14187238 = score(doc=337,freq=2.0), product of:
            0.22918057 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06544595 = queryNorm
            0.61904186 = fieldWeight in 337, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=337)
      0.25 = coord(1/4)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  11. Grudin, J.: Human-computer interaction (2011) 0.03
    0.03103458 = product of:
      0.12413832 = sum of:
        0.12413832 = weight(_text_:22 in 1601) [ClassicSimilarity], result of:
          0.12413832 = score(doc=1601,freq=2.0), product of:
            0.22918057 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06544595 = queryNorm
            0.5416616 = fieldWeight in 1601, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.109375 = fieldNorm(doc=1601)
      0.25 = coord(1/4)
    
    Date
    27.12.2014 18:54:22
  12. Rasmussen, E.M.: Indexing images (1997) 0.03
    0.030640597 = product of:
      0.122562386 = sum of:
        0.122562386 = weight(_text_:objects in 2215) [ClassicSimilarity], result of:
          0.122562386 = score(doc=2215,freq=2.0), product of:
            0.34784988 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06544595 = queryNorm
            0.35234275 = fieldWeight in 2215, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=2215)
      0.25 = coord(1/4)
    
    Abstract
    State-of-the-art review of methods available for accessing collections of digital images by means of manual and automatic indexing. Distinguishes between concept-based indexing, in which images, and the objects they represent, are manually identified and described in terms of what they are and what they represent, and content-based indexing, in which features of images (such as colours) are automatically identified and extracted. The main discussion is arranged in 6 sections: studies of image systems and their use; approaches to indexing images; image attributes; concept-based indexing; content-based indexing; and browsing in image retrieval. The performance of current image retrieval systems is largely untested, and they still lack an extensive history and tradition of evaluation and standards for assessing performance. Concludes that a significant amount of research remains to be done before image retrieval systems can reach the state of development of text retrieval systems.
  13. Davenport, E.; Hall, H.: Organizational Knowledge and Communities of Practice (2002) 0.03
    0.030640597 = product of:
      0.122562386 = sum of:
        0.122562386 = weight(_text_:objects in 4293) [ClassicSimilarity], result of:
          0.122562386 = score(doc=4293,freq=2.0), product of:
            0.34784988 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06544595 = queryNorm
            0.35234275 = fieldWeight in 4293, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=4293)
      0.25 = coord(1/4)
    
    Abstract
    A community of practice has recently been defined as "a flexible group of professionals, informally bound by common interests, who interact through interdependent tasks guided by a common purpose thereby embodying a store of common knowledge" (Jubert, 1999, p. 166). The association of communities of practice with the production of collective knowledge has long been recognized, and they have been objects of study for a number of decades in the context of professional communication, particularly communication in science (Abbott, 1988; Bazerman & Paradis, 1991). Recently, however, they have been invoked in the domain of organization studies as sites where people learn and share insights. If, as Stinchcombe suggests, an organization is "a set of stable social relations, deliberately created, with the explicit intention of continuously accomplishing some specific goals or purposes" (Stinchcombe, 1965, p. 142), where does this "flexible" and "embodied" source of knowledge fit? Can communities of practice be harnessed, engineered, and managed like other organizational groups, or does their strength lie in the fact that they operate outside the stable and persistent social relations that characterize the organization?
  14. Khoo, S.G.; Na, J.-C.: Semantic relations in information science (2006) 0.03
    0.030640597 = product of:
      0.122562386 = sum of:
        0.122562386 = weight(_text_:objects in 1978) [ClassicSimilarity], result of:
          0.122562386 = score(doc=1978,freq=8.0), product of:
            0.34784988 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06544595 = queryNorm
            0.35234275 = fieldWeight in 1978, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1978)
      0.25 = coord(1/4)
    
    Abstract
    This chapter examines the nature of semantic relations and their main applications in information science. The nature and types of semantic relations are discussed from the perspectives of linguistics and psychology. An overview of the semantic relations used in knowledge structures such as thesauri and ontologies is provided, as well as the main techniques used in the automatic extraction of semantic relations from text. The chapter then reviews the use of semantic relations in information extraction, information retrieval, question-answering, and automatic text summarization applications. Concepts and relations are the foundation of knowledge and thought. When we look at the world, we perceive not a mass of colors but objects to which we automatically assign category labels. Our perceptual system automatically segments the world into concepts and categories. Concepts are the building blocks of knowledge; relations act as the cement that links concepts into knowledge structures. We spend much of our lives identifying regular associations and relations between objects, events, and processes so that the world has an understandable structure and predictability. Our lives and work depend on the accuracy and richness of this knowledge structure and its web of relations. Relations are needed for reasoning and inferencing. Chaffin and Herrmann (1988b, p. 290) noted that "relations between ideas have long been viewed as basic to thought, language, comprehension, and memory." Aristotle's Metaphysics (Aristotle, 1961; McKeon) expounded on several types of relations. The majority of the 30 entries in a section of the Metaphysics known today as the Philosophical Lexicon referred to relations and attributes, including cause, part-whole, same and opposite, quality (i.e., attribute) and kind-of, and defined different types of each relation. Hume (1955) pointed out that there is a connection between successive ideas in our minds, even in our dreams, and that the introduction of an idea in our mind automatically recalls an associated idea. He argued that all the objects of human reasoning are divided into relations of ideas and matters of fact and that factual reasoning is founded on the cause-effect relation. His Treatise of Human Nature identified seven kinds of relations: resemblance, identity, relations of time and place, proportion in quantity or number, degrees in quality, contrariety, and causation. Mill (1974, pp. 989-1004) discoursed on several types of relations, claiming that all things are either feelings, substances, or attributes, and that attributes can be a quality (which belongs to one object) or a relation to other objects.
  15. Siqueira, J.; Martins, D.L.: Workflow models for aggregating cultural heritage data on the web : a systematic literature review (2022) 0.03
    0.02553383 = product of:
      0.10213532 = sum of:
        0.10213532 = weight(_text_:objects in 464) [ClassicSimilarity], result of:
          0.10213532 = score(doc=464,freq=2.0), product of:
            0.34784988 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06544595 = queryNorm
            0.29361898 = fieldWeight in 464, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0390625 = fieldNorm(doc=464)
      0.25 = coord(1/4)
    
    Abstract
    In recent years, different cultural institutions have made efforts to spread culture through the construction of a single search interface that integrates their digital objects and facilitates data retrieval for lay users. However, integrating cultural data is not a trivial task; therefore, this work performs a systematic literature review on data aggregation workflows, in order to answer five questions: What are the projects? What are the planned steps? Which technologies are used? Are the steps performed manually, automatically, or semi-automatically? Which perform semantic search? The searches were carried out in three databases: Networked Digital Library of Theses and Dissertations, Scopus and Web of Science. In Q01, 12 projects were selected. In Q02, 9 stages were identified: Harvesting, Ingestion, Mapping, Indexing, Storing, Monitoring, Enriching, Displaying, and Publishing LOD. In Q03, 19 different technologies were found. In Q04, we identified that most of the solutions are semi-automatic and, in Q05, that most of them perform a semantic search. The analysis of the workflows showed that there is no consensus regarding the stages, their nomenclature, or the technologies used, and that the published discussions are often superficial; it nevertheless allowed us to identify the main steps for implementing the aggregation of cultural data.
  16. Zhu, B.; Chen, H.: Information visualization (2004) 0.03
    0.025277203 = product of:
      0.10110881 = sum of:
        0.10110881 = weight(_text_:objects in 4276) [ClassicSimilarity], result of:
          0.10110881 = score(doc=4276,freq=4.0), product of:
            0.34784988 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06544595 = queryNorm
            0.29066795 = fieldWeight in 4276, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4276)
      0.25 = coord(1/4)
    
    Abstract
    Advanced technology has resulted in the generation of about one million terabytes of information every year. Ninety-nine percent of this is available in digital format (Keim, 2001). More information will be generated in the next three years than was created during all of previous human history (Keim, 2001). Collecting information is no longer a problem, but extracting value from information collections has become progressively more difficult. Various search engines have been developed to make it easier to locate information of interest, but these work well only for a person who has a specific goal and who understands what and how information is stored. This usually is not the case. Visualization was commonly thought of in terms of representing human mental processes (MacEachren, 1991; Miller, 1984). The concept is now associated with the amplification of these mental processes (Card, Mackinlay, & Shneiderman, 1999). Human eyes can process visual cues rapidly, whereas advanced information analysis techniques transform the computer into a powerful means of managing digitized information. Visualization offers a link between these two potent systems, the human eye and the computer (Gershon, Eick, & Card, 1998), helping to identify patterns and to extract insights from large amounts of information. The identification of patterns is important because it may lead to a scientific discovery, an interpretation of clues to solve a crime, the prediction of catastrophic weather, a successful financial investment, or a better understanding of human behavior in a computer-mediated environment. Visualization technology shows considerable promise for increasing the value of large-scale collections of information, as evidenced by several commercial applications of TreeMap (e.g., http://www.smartmoney.com) and Hyperbolic tree (e.g., http://www.inxight.com) to visualize large-scale hierarchical structures. Although the proliferation of visualization technologies dates from the 1990s, when sophisticated hardware and software made increasingly faster generation of graphical objects possible, the role of visual aids in facilitating the construction of mental images has a long history. Visualization has been used to communicate ideas, to monitor trends implicit in data, and to explore large volumes of data for hypothesis generation. Imagine traveling to a strange place without a map, having to memorize physical and chemical properties of an element without Mendeleyev's periodic table, trying to understand the stock market without statistical diagrams, or browsing a collection of documents without interactive visual aids. A collection of information can lose its value simply because of the effort required for exhaustive exploration. Such frustrations can be overcome by visualization.
    Visualization can be classified as scientific visualization, software visualization, or information visualization. Although the data differ, the underlying techniques have much in common. They use the same elements (visual cues) and follow the same rules of combining visual cues to deliver patterns. They all involve understanding human perception (Encarnacao, Foley, Bryson, & Feiner, 1994) and require domain knowledge (Tufte, 1990). Because most decisions are based on unstructured information, such as text documents, Web pages, or e-mail messages, this chapter focuses on the visualization of unstructured textual documents. The chapter reviews information visualization techniques developed over the last decade and examines how they have been applied in different domains. The first section provides the background by describing visualization history and giving overviews of scientific, software, and information visualization as well as the perceptual aspects of visualization. The next section assesses important visualization techniques that convert abstract information into visual objects and facilitate navigation through displays on a computer screen. It also explores information analysis algorithms that can be applied to identify or extract salient visualizable structures from collections of information. Information visualization systems that integrate different types of technologies to address problems in different domains are then surveyed; and we move on to a survey and critique of visualization system evaluation studies. The chapter concludes with a summary and identification of future research directions.
  17. Rader, H.B.: Library orientation and instruction - 1993 (1994) 0.02
    0.022167558 = product of:
      0.08867023 = sum of:
        0.08867023 = weight(_text_:22 in 209) [ClassicSimilarity], result of:
          0.08867023 = score(doc=209,freq=2.0), product of:
            0.22918057 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06544595 = queryNorm
            0.38690117 = fieldWeight in 209, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=209)
      0.25 = coord(1/4)
    
    Source
    Reference services review. 22(1994) no.4, S.81-
  18. Gilliland-Swetland, A.: Electronic records management (2004) 0.02
    0.020427063 = product of:
      0.08170825 = sum of:
        0.08170825 = weight(_text_:objects in 4280) [ClassicSimilarity], result of:
          0.08170825 = score(doc=4280,freq=2.0), product of:
            0.34784988 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06544595 = queryNorm
            0.23489517 = fieldWeight in 4280, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.03125 = fieldNorm(doc=4280)
      0.25 = coord(1/4)
    
    Abstract
    What is an electronic record, how should it best be preserved and made available, and to what extent do traditional, paradigmatic archival precepts such as provenance, original order, and archival custody hold when managing it? Over more than four decades of work in the area of electronic records (formerly known as machine-readable records), theorists and researchers have offered answers to these questions, or at least devised approaches for trying to answer them. However, a set of fundamental questions about the nature of the record and the applicability of traditional archival theory still confronts researchers seeking to advance knowledge and development in this increasingly active, but contested, area of research. For example, which characteristics differentiate a record from other types of information objects (such as publications or raw research data)? Are these characteristics consistently present regardless of the medium of the record? Does the record always have to have a tangible form? How does the record manifest itself within different technological and procedural contexts, and in particular, how do we determine the parameters of electronic records created in relational, distributed, or dynamic environments that bear little resemblance on the surface to traditional paper-based environments? At the heart of electronic records research lies a dual concern with the nature of the record as a specific type of information object and the nature of legal and historical evidence in a digital world. Electronic records research is relevant to the agendas of many communities in addition to that of archivists. Its emphasis on accountability and on establishing trust in records, for example, addresses concerns that are central to both digital government and e-commerce. Research relating to electronic records is still relatively homogeneous in terms of scope, in that most major research initiatives have addressed various combinations of the following: theory building in terms of identifying the nature of the electronic record, developing alternative conceptual models, establishing the determinants of reliability and authenticity in active and preserved electronic records, identifying functional and metadata requirements for record keeping, developing and testing preservation
  19. Rogers, Y.: New theoretical approaches for human-computer interaction (2003) 0.02
    0.01787368 = product of:
      0.07149472 = sum of:
        0.07149472 = weight(_text_:objects in 4270) [ClassicSimilarity], result of:
          0.07149472 = score(doc=4270,freq=2.0), product of:
            0.34784988 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06544595 = queryNorm
            0.20553327 = fieldWeight in 4270, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4270)
      0.25 = coord(1/4)
    
    Abstract
    "Theory weary, theory leery, why can't I be theory cheery?" (Erickson, 2002, p. 269). The field of human-computer interaction (HCI) is rapidly expanding. Alongside the extensive technological developments that are taking place, a profusion of new theories, methods, and concerns has been imported into the field from a range of disciplines and contexts. An extensive critique of recent theoretical developments is presented here together with an overview of HCI practice. A consequence of bringing new theories into the field has been much insightful explication of HCI phenomena and also a broadening of the field's discourse. However, these theoretically based approaches have had limited impact an the practice of interaction design. This chapter discusses why this is so and suggests that different kinds of mechanisms are needed that will enable both designers and researchers to better articulate and theoretically ground the challenges facing them today. Human-computer interaction is bursting at the seams. Its mission, goals, and methods, well established in the '80s, have all greatly expanded to the point that "HCI is now effectively a boundless domain" (Barnard, May, Duke, & Duce, 2000, p. 221). Everything is in a state of flux: The theory driving research is changing, a flurry of new concepts is emerging, the domains and type of users being studied are diversifying, many of the ways of doing design are new, and much of what is being designed is significantly different. Although potentially much is to be gained from such rapid growth, the downside is an increasing lack of direction, structure, and coherence in the field. What was originally a bounded problem space with a clear focus and a small set of methods for designing computer systems that were easier and more efficient to use by a single user is now turning into a diffuse problem space with less clarity in terms of its objects of study, design foci, and investigative methods. Instead, aspirations of overcoming the Digital Divide, by providing universal accessibility, have become major concerns (e.g., Shneiderman, 2002a). The move toward greater openness in the field means that many more topics, areas, and approaches are now considered acceptable in the worlds of research and practice.
  20. Hsueh, D.C.: Recon road maps : retrospective conversion literature, 1980-1990 (1992) 0.02
    0.017734047 = product of:
      0.07093619 = sum of:
        0.07093619 = weight(_text_:22 in 2193) [ClassicSimilarity], result of:
          0.07093619 = score(doc=2193,freq=2.0), product of:
            0.22918057 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06544595 = queryNorm
            0.30952093 = fieldWeight in 2193, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=2193)
      0.25 = coord(1/4)
    
    Source
    Cataloging and classification quarterly. 14(1992) nos.3/4, S.5-22