Search (35 results, page 1 of 2)

  • Filter: theme_ss:"Literaturübersicht"
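  The relevance figures under each result below are Lucene ClassicSimilarity "explain" trees of the kind a Solr-backed catalogue returns when a query is run with debugQuery enabled. A minimal sketch of such a request, assuming a hypothetical local Solr endpoint and core name ("catalog") and an illustrative query term; the request parameters themselves are standard Solr:

    import requests

    SOLR_SELECT = "http://localhost:8983/solr/catalog/select"   # hypothetical endpoint and core

    params = {
        "q": "objects",                           # illustrative free-text query term
        "fq": 'theme_ss:"Literaturübersicht"',    # the facet filter applied above
        "debugQuery": "true",                     # adds a per-document scoring explanation
        "wt": "json",
        "rows": 20,
    }

    response = requests.get(SOLR_SELECT, params=params, timeout=10)
    data = response.json()

    # debug.explain maps each document id to a textual breakdown like the
    # "product of: / sum of: / weight(_text_:...)" trees listed below.
    for doc_id, explanation in data["debug"]["explain"].items():
        print(doc_id)
        print(explanation)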
  1. Galloway, P.: Preservation of digital objects (2003) 0.07
    0.07432476 = product of:
      0.22297426 = sum of:
        0.22297426 = weight(_text_:objects in 4275) [ClassicSimilarity], result of:
          0.22297426 = score(doc=4275,freq=18.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.8808569 = fieldWeight in 4275, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4275)
      0.33333334 = coord(1/3)
    
    Abstract
    The preservation of digital objects (defined here as objects in digital form that require a computer to support their existence and display) is obviously an important practical issue for the information professions, with its importance growing daily as more information objects are produced in, or converted to, digital form. Yakel's (2001) review of the field provided a much-needed introduction. At the same time, the complexity of new digital objects continues to increase, challenging existing preservation efforts (Lee, Slattery, Lu, Tang, & McCrary, 2002). The field of information science itself is beginning to pay some reflexive attention to the creation of fragile and unpreservable digital objects. But these concerns often focus on the practical problems of short-term repurposing of digital objects rather than actual preservation, by which I mean the activity of carrying digital objects from one software generation to another, undertaken for purposes beyond the original reasons for creating the objects. For preservation in this sense to be possible, information science as a discipline needs to be active in the formulation of, and advocacy for, national information policies. Such policies will need to challenge the predominant cultural expectation of planned obsolescence for information resources, and cultural artifacts in general.
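    The arithmetic in the scoring breakdown for this result can be reproduced directly. A minimal sketch, assuming Lucene's ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), score = coord x queryWeight x fieldWeight) and the constants shown above:

      import math

      max_docs   = 44218
      doc_freq   = 590
      freq       = 18.0                             # occurrences of "objects" in doc 4275
      query_norm = 0.047625583
      field_norm = 0.0390625
      coord      = 1.0 / 3.0                        # 1 of 3 query clauses matched

      idf          = 1.0 + math.log(max_docs / (doc_freq + 1))   # 5.315071
      tf           = math.sqrt(freq)                              # 4.2426405
      query_weight = idf * query_norm                             # 0.25313336
      field_weight = tf * idf * field_norm                        # 0.8808569
      score        = coord * query_weight * field_weight

      print(round(score, 8))                        # ~0.07432476, the listed score

    The same recipe, with different freq, idf, and fieldNorm values, reproduces the "objects" scores of the other results on this page.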
  2. Olson, N.B.; Swanson, E.: The year's work in nonbook processing (1989) 0.05
    0.04954984 = product of:
      0.14864951 = sum of:
        0.14864951 = weight(_text_:objects in 8330) [ClassicSimilarity], result of:
          0.14864951 = score(doc=8330,freq=2.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.58723795 = fieldWeight in 8330, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.078125 = fieldNorm(doc=8330)
      0.33333334 = coord(1/3)
    
    Abstract
    Reviews the literature, published in 1988, covering the cataloguing of audio-visual materials including: computerised files; music and sound recordings; film and video; graphic materials and 3-dimensional objects; maps; and preservation.
  3. Rasmussen, E.M.: Indexing images (1997) 0.03
    0.0297299 = product of:
      0.0891897 = sum of:
        0.0891897 = weight(_text_:objects in 2215) [ClassicSimilarity], result of:
          0.0891897 = score(doc=2215,freq=2.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.35234275 = fieldWeight in 2215, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=2215)
      0.33333334 = coord(1/3)
    
    Abstract
    State of the art review of methods available for accessing collections of digital images by means of manual and automatic indexing. Distinguishes between concept-based indexing, in which images, and the objects represented, are manually identified and described in terms of what they are and represent, and content-based indexing, in which features of images (such as colours) are automatically identified and extracted. The main discussion is arranged in 6 sections: studies of image systems and their use; approaches to indexing images; image attributes; concept-based indexing; content-based indexing; and browsing in image retrieval. The performance of current image retrieval systems is largely untested and they still lack an extensive history and tradition of evaluation and standards for assessing performance. Concludes that there is a significant amount of research to be done before image retrieval systems can reach the state of development of text retrieval systems.
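    As an illustration of the content-based approach described in this abstract, a minimal sketch of an automatically extracted colour feature; the histogram function and the synthetic image are illustrative only and are not taken from the review:

      import numpy as np

      def colour_histogram(image, bins_per_channel=4):
          """Concatenate normalised per-channel histograms into one feature vector."""
          features = []
          for channel in range(image.shape[-1]):                  # e.g. R, G, B
              hist, _ = np.histogram(image[..., channel],
                                     bins=bins_per_channel, range=(0, 256))
              features.append(hist / hist.sum())                  # proportion of pixels per bin
          return np.concatenate(features)

      image = np.random.randint(0, 256, size=(64, 64, 3))         # stand-in for a real image
      print(colour_histogram(image))                              # 12-dimensional colour feature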
  4. Davenport, E.; Hall, H.: Organizational Knowledge and Communities of Practice (2002) 0.03
    0.0297299 = product of:
      0.0891897 = sum of:
        0.0891897 = weight(_text_:objects in 4293) [ClassicSimilarity], result of:
          0.0891897 = score(doc=4293,freq=2.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.35234275 = fieldWeight in 4293, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=4293)
      0.33333334 = coord(1/3)
    
    Abstract
    A community of practice has recently been defined as "a flexible group of professionals, informally bound by common interests, who interact through interdependent tasks guided by a common purpose thereby embodying a store of common knowledge" (Jubert, 1999, p. 166). The association of communities of practice with the production of collective knowledge has long been recognized, and they have been objects of study for a number of decades in the context of professional communication, particularly communication in science (Abbott, 1988; Bazerman & Paradis, 1991). Recently, however, they have been invoked in the domain of organization studies as sites where people learn and share insights. If, as Stinchcombe suggests, an organization is "a set of stable social relations, deliberately created, with the explicit intention of continuously accomplishing some specific goals or purposes" (Stinchcombe, 1965, p. 142), where does this "flexible" and "embodied" source of knowledge fit? Can communities of practice be harnessed, engineered, and managed like other organizational groups, or does their strength lie in the fact that they operate outside the stable and persistent social relations that characterize the organization?
  5. Khoo, S.G.; Na, J.-C.: Semantic relations in information science (2006) 0.03
    0.0297299 = product of:
      0.0891897 = sum of:
        0.0891897 = weight(_text_:objects in 1978) [ClassicSimilarity], result of:
          0.0891897 = score(doc=1978,freq=8.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.35234275 = fieldWeight in 1978, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1978)
      0.33333334 = coord(1/3)
    
    Abstract
    This chapter examines the nature of semantic relations and their main applications in information science. The nature and types of semantic relations are discussed from the perspectives of linguistics and psychology. An overview of the semantic relations used in knowledge structures such as thesauri and ontologies is provided, as well as the main techniques used in the automatic extraction of semantic relations from text. The chapter then reviews the use of semantic relations in information extraction, information retrieval, question-answering, and automatic text summarization applications. Concepts and relations are the foundation of knowledge and thought. When we look at the world, we perceive not a mass of colors but objects to which we automatically assign category labels. Our perceptual system automatically segments the world into concepts and categories. Concepts are the building blocks of knowledge; relations act as the cement that links concepts into knowledge structures. We spend much of our lives identifying regular associations and relations between objects, events, and processes so that the world has an understandable structure and predictability. Our lives and work depend on the accuracy and richness of this knowledge structure and its web of relations. Relations are needed for reasoning and inferencing. Chaffin and Herrmann (1988b, p. 290) noted that "relations between ideas have long been viewed as basic to thought, language, comprehension, and memory." Aristotle's Metaphysics (Aristotle, 1961; McKeon) expounded on several types of relations. The majority of the 30 entries in a section of the Metaphysics known today as the Philosophical Lexicon referred to relations and attributes, including cause, part-whole, same and opposite, quality (i.e., attribute) and kind-of, and defined different types of each relation. Hume (1955) pointed out that there is a connection between successive ideas in our minds, even in our dreams, and that the introduction of an idea in our mind automatically recalls an associated idea. He argued that all the objects of human reasoning are divided into relations of ideas and matters of fact and that factual reasoning is founded on the cause-effect relation. His Treatise of Human Nature identified seven kinds of relations: resemblance, identity, relations of time and place, proportion in quantity or number, degrees in quality, contrariety, and causation. Mill (1974, pp. 989-1004) discoursed on several types of relations, claiming that all things are either feelings, substances, or attributes, and that attributes can be a quality (which belongs to one object) or a relation to other objects.
  6. Siqueira, J.; Martins, D.L.: Workflow models for aggregating cultural heritage data on the web : a systematic literature review (2022) 0.02
    0.02477492 = product of:
      0.07432476 = sum of:
        0.07432476 = weight(_text_:objects in 464) [ClassicSimilarity], result of:
          0.07432476 = score(doc=464,freq=2.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.29361898 = fieldWeight in 464, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0390625 = fieldNorm(doc=464)
      0.33333334 = coord(1/3)
    
    Abstract
    In recent years, different cultural institutions have made efforts to spread culture through the construction of a unique search interface that integrates their digital objects and facilitates data retrieval for lay users. However, integrating cultural data is not a trivial task; therefore, this work performs a systematic literature review on data aggregation workflows, in order to answer five questions: What are the projects? What are the planned steps? Which technologies are used? Are the steps performed manually, automatically, or semi-automatically? Which perform semantic search? The searches were carried out in three databases: Networked Digital Library of Theses and Dissertations, Scopus and Web of Science. In Q01, 12 projects were selected. In Q02, 9 stages were identified: Harvesting, Ingestion, Mapping, Indexing, Storing, Monitoring, Enriching, Displaying, and Publishing LOD. In Q03, 19 different technologies were found. In Q04, we identified that most of the solutions are semi-automatic and, in Q05, that most of them perform a semantic search. The analysis of the workflows showed that there is no consensus regarding the stages, their nomenclatures, and technologies, and that the discussions presented are often superficial, but it did allow us to identify the main steps for implementing the aggregation of cultural data.
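    A minimal, purely illustrative sketch of the staged pipeline shape identified in Q02 (Harvesting, Mapping, Indexing, Enriching, ...); every function here is a hypothetical placeholder rather than any reviewed project's implementation:

      def harvest(providers):            # Harvesting: pull records from source APIs
          return [record for provider in providers for record in provider()]

      def map_to_model(records):         # Mapping: convert source schemas to a common model
          return [{"title": r.get("title", ""), "provider": r.get("provider")} for r in records]

      def index(records):                # Indexing/Storing: make records retrievable by id
          return {i: r for i, r in enumerate(records)}

      def enrich(indexed):               # Enriching: attach entities, vocabularies, links
          for record in indexed.values():
              record.setdefault("entities", [])
          return indexed

      def aggregate(providers):          # a pared-down Harvest -> Map -> Index -> Enrich chain
          return enrich(index(map_to_model(harvest(providers))))

      # Toy in-memory "provider" standing in for an OAI-PMH or API harvest:
      toy_provider = lambda: [{"title": "Museum object 1", "provider": "museum-a"}]
      print(aggregate([toy_provider]))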
  7. Zhu, B.; Chen, H.: Information visualization (2004) 0.02
    0.024525918 = product of:
      0.073577754 = sum of:
        0.073577754 = weight(_text_:objects in 4276) [ClassicSimilarity], result of:
          0.073577754 = score(doc=4276,freq=4.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.29066795 = fieldWeight in 4276, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4276)
      0.33333334 = coord(1/3)
    
    Abstract
    Advanced technology has resulted in the generation of about one million terabytes of information every year. Ninety-nine percent of this is available in digital format (Keim, 2001). More information will be generated in the next three years than was created during all of previous human history (Keim, 2001). Collecting information is no longer a problem, but extracting value from information collections has become progressively more difficult. Various search engines have been developed to make it easier to locate information of interest, but these work well only for a person who has a specific goal and who understands what and how information is stored. This usually is not the case. Visualization was commonly thought of in terms of representing human mental processes (MacEachren, 1991; Miller, 1984). The concept is now associated with the amplification of these mental processes (Card, Mackinlay, & Shneiderman, 1999). Human eyes can process visual cues rapidly, whereas advanced information analysis techniques transform the computer into a powerful means of managing digitized information. Visualization offers a link between these two potent systems, the human eye and the computer (Gershon, Eick, & Card, 1998), helping to identify patterns and to extract insights from large amounts of information. The identification of patterns is important because it may lead to a scientific discovery, an interpretation of clues to solve a crime, the prediction of catastrophic weather, a successful financial investment, or a better understanding of human behavior in a computer-mediated environment. Visualization technology shows considerable promise for increasing the value of large-scale collections of information, as evidenced by several commercial applications of TreeMap (e.g., http://www.smartmoney.com) and Hyperbolic tree (e.g., http://www.inxight.com) to visualize large-scale hierarchical structures. Although the proliferation of visualization technologies dates from the 1990s, when sophisticated hardware and software made increasingly faster generation of graphical objects possible, the role of visual aids in facilitating the construction of mental images has a long history. Visualization has been used to communicate ideas, to monitor trends implicit in data, and to explore large volumes of data for hypothesis generation. Imagine traveling to a strange place without a map, having to memorize physical and chemical properties of an element without Mendeleyev's periodic table, trying to understand the stock market without statistical diagrams, or browsing a collection of documents without interactive visual aids. A collection of information can lose its value simply because of the effort required for exhaustive exploration. Such frustrations can be overcome by visualization.
    Visualization can be classified as scientific visualization, software visualization, or information visualization. Although the data differ, the underlying techniques have much in common. They use the same elements (visual cues) and follow the same rules of combining visual cues to deliver patterns. They all involve understanding human perception (Encarnacao, Foley, Bryson, & Feiner, 1994) and require domain knowledge (Tufte, 1990). Because most decisions are based on unstructured information, such as text documents, Web pages, or e-mail messages, this chapter focuses on the visualization of unstructured textual documents. The chapter reviews information visualization techniques developed over the last decade and examines how they have been applied in different domains. The first section provides the background by describing visualization history and giving overviews of scientific, software, and information visualization as well as the perceptual aspects of visualization. The next section assesses important visualization techniques that convert abstract information into visual objects and facilitate navigation through displays on a computer screen. It also explores information analysis algorithms that can be applied to identify or extract salient visualizable structures from collections of information. Information visualization systems that integrate different types of technologies to address problems in different domains are then surveyed, and we move on to a survey and critique of visualization system evaluation studies. The chapter concludes with a summary and identification of future research directions.
  8. Gilliland-Swetland, A.: Electronic records management (2004) 0.02
    0.019819934 = product of:
      0.0594598 = sum of:
        0.0594598 = weight(_text_:objects in 4280) [ClassicSimilarity], result of:
          0.0594598 = score(doc=4280,freq=2.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.23489517 = fieldWeight in 4280, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.03125 = fieldNorm(doc=4280)
      0.33333334 = coord(1/3)
    
    Abstract
    What is an electronic record, how should it best be preserved and made available, and to what extent do traditional, paradigmatic archival precepts such as provenance, original order, and archival custody hold when managing it? Over more than four decades of work in the area of electronic records (formerly known as machine-readable records), theorists and researchers have offered answers to these questions, or at least devised approaches for trying to answer them. However, a set of fundamental questions about the nature of the record and the applicability of traditional archival theory still confronts researchers seeking to advance knowledge and development in this increasingly active, but contested, area of research. For example, which characteristics differentiate a record from other types of information objects (such as publications or raw research data)? Are these characteristics consistently present regardless of the medium of the record? Does the record always have to have a tangible form? How does the record manifest itself within different technological and procedural contexts, and in particular, how do we determine the parameters of electronic records created in relational, distributed, or dynamic environments that bear little resemblance on the surface to traditional paper-based environments? At the heart of electronic records research lies a dual concern with the nature of the record as a specific type of information object and the nature of legal and historical evidence in a digital world. Electronic records research is relevant to the agendas of many communities in addition to that of archivists. Its emphasis on accountability and on establishing trust in records, for example, addresses concerns that are central to both digital government and e-commerce. Research relating to electronic records is still relatively homogeneous in terms of scope, in that most major research initiatives have addressed various combinations of the following: theory building in terms of identifying the nature of the electronic record, developing alternative conceptual models, establishing the determinants of reliability and authenticity in active and preserved electronic records, identifying functional and metadata requirements for record keeping, developing and testing preservation
  9. Rogers, Y.: New theoretical approaches for human-computer interaction (2003) 0.02
    0.017342443 = product of:
      0.052027326 = sum of:
        0.052027326 = weight(_text_:objects in 4270) [ClassicSimilarity], result of:
          0.052027326 = score(doc=4270,freq=2.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.20553327 = fieldWeight in 4270, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4270)
      0.33333334 = coord(1/3)
    
    Abstract
    "Theory weary, theory leery, why can't I be theory cheery?" (Erickson, 2002, p. 269). The field of human-computer interaction (HCI) is rapidly expanding. Alongside the extensive technological developments that are taking place, a profusion of new theories, methods, and concerns has been imported into the field from a range of disciplines and contexts. An extensive critique of recent theoretical developments is presented here together with an overview of HCI practice. A consequence of bringing new theories into the field has been much insightful explication of HCI phenomena and also a broadening of the field's discourse. However, these theoretically based approaches have had limited impact an the practice of interaction design. This chapter discusses why this is so and suggests that different kinds of mechanisms are needed that will enable both designers and researchers to better articulate and theoretically ground the challenges facing them today. Human-computer interaction is bursting at the seams. Its mission, goals, and methods, well established in the '80s, have all greatly expanded to the point that "HCI is now effectively a boundless domain" (Barnard, May, Duke, & Duce, 2000, p. 221). Everything is in a state of flux: The theory driving research is changing, a flurry of new concepts is emerging, the domains and type of users being studied are diversifying, many of the ways of doing design are new, and much of what is being designed is significantly different. Although potentially much is to be gained from such rapid growth, the downside is an increasing lack of direction, structure, and coherence in the field. What was originally a bounded problem space with a clear focus and a small set of methods for designing computer systems that were easier and more efficient to use by a single user is now turning into a diffuse problem space with less clarity in terms of its objects of study, design foci, and investigative methods. Instead, aspirations of overcoming the Digital Divide, by providing universal accessibility, have become major concerns (e.g., Shneiderman, 2002a). The move toward greater openness in the field means that many more topics, areas, and approaches are now considered acceptable in the worlds of research and practice.
  10. Enser, P.G.B.: Visual image retrieval (2008) 0.02
    0.017206958 = product of:
      0.05162087 = sum of:
        0.05162087 = product of:
          0.10324174 = sum of:
            0.10324174 = weight(_text_:22 in 3281) [ClassicSimilarity], result of:
              0.10324174 = score(doc=3281,freq=2.0), product of:
                0.16677667 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047625583 = queryNorm
                0.61904186 = fieldWeight in 3281, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=3281)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 1.2012 13:01:26
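    This entry and the similarly scored ones that follow match on the "_text_:22" clause, which sits inside a nested Boolean query, so an inner coord(1/2) is applied before the outer coord(1/3). A minimal sketch reproducing the figures above under the same ClassicSimilarity assumptions:

      import math

      tf         = math.sqrt(2.0)                   # termFreq = 2
      idf        = 3.5018296                        # idf(docFreq=3622, maxDocs=44218)
      query_norm = 0.047625583
      field_norm = 0.125

      weight = (idf * query_norm) * (tf * idf * field_norm)   # 0.10324174
      score  = (1.0 / 3.0) * (0.5 * weight)                   # outer coord(1/3) x inner coord(1/2)
      print(round(score, 9))                                   # ~0.017206958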
  11. Morris, S.A.: Mapping research specialties (2008) 0.02
    0.017206958 = product of:
      0.05162087 = sum of:
        0.05162087 = product of:
          0.10324174 = sum of:
            0.10324174 = weight(_text_:22 in 3962) [ClassicSimilarity], result of:
              0.10324174 = score(doc=3962,freq=2.0), product of:
                0.16677667 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047625583 = queryNorm
                0.61904186 = fieldWeight in 3962, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=3962)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    13. 7.2008 9:30:22
  12. Fallis, D.: Social epistemology and information science (2006) 0.02
    0.017206958 = product of:
      0.05162087 = sum of:
        0.05162087 = product of:
          0.10324174 = sum of:
            0.10324174 = weight(_text_:22 in 4368) [ClassicSimilarity], result of:
              0.10324174 = score(doc=4368,freq=2.0), product of:
                0.16677667 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047625583 = queryNorm
                0.61904186 = fieldWeight in 4368, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=4368)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    13. 7.2008 19:22:28
  13. Nicolaisen, J.: Citation analysis (2007) 0.02
    0.017206958 = product of:
      0.05162087 = sum of:
        0.05162087 = product of:
          0.10324174 = sum of:
            0.10324174 = weight(_text_:22 in 6091) [ClassicSimilarity], result of:
              0.10324174 = score(doc=6091,freq=2.0), product of:
                0.16677667 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047625583 = queryNorm
                0.61904186 = fieldWeight in 6091, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=6091)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    13. 7.2008 19:53:22
  14. Metz, A.: Community service : a bibliography (1996) 0.02
    0.017206958 = product of:
      0.05162087 = sum of:
        0.05162087 = product of:
          0.10324174 = sum of:
            0.10324174 = weight(_text_:22 in 5341) [ClassicSimilarity], result of:
              0.10324174 = score(doc=5341,freq=2.0), product of:
                0.16677667 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047625583 = queryNorm
                0.61904186 = fieldWeight in 5341, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=5341)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    17.10.1996 14:22:33
  15. Belkin, N.J.; Croft, W.B.: Retrieval techniques (1987) 0.02
    0.017206958 = product of:
      0.05162087 = sum of:
        0.05162087 = product of:
          0.10324174 = sum of:
            0.10324174 = weight(_text_:22 in 334) [ClassicSimilarity], result of:
              0.10324174 = score(doc=334,freq=2.0), product of:
                0.16677667 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047625583 = queryNorm
                0.61904186 = fieldWeight in 334, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=334)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Annual review of information science and technology. 22(1987), S.109-145
  16. Smith, L.C.: Artificial intelligence and information retrieval (1987) 0.02
    0.017206958 = product of:
      0.05162087 = sum of:
        0.05162087 = product of:
          0.10324174 = sum of:
            0.10324174 = weight(_text_:22 in 335) [ClassicSimilarity], result of:
              0.10324174 = score(doc=335,freq=2.0), product of:
                0.16677667 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047625583 = queryNorm
                0.61904186 = fieldWeight in 335, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=335)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Annual review of information science and technology. 22(1987), S.41-77
  17. Warner, A.J.: Natural language processing (1987) 0.02
    0.017206958 = product of:
      0.05162087 = sum of:
        0.05162087 = product of:
          0.10324174 = sum of:
            0.10324174 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.10324174 = score(doc=337,freq=2.0), product of:
                0.16677667 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047625583 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  18. Börner, K.; Chen, C.; Boyack, K.W.: Visualizing knowledge domains (2002) 0.02
    0.016837582 = product of:
      0.050512742 = sum of:
        0.050512742 = product of:
          0.101025485 = sum of:
            0.101025485 = weight(_text_:fusion in 4286) [ClassicSimilarity], result of:
              0.101025485 = score(doc=4286,freq=2.0), product of:
                0.35273543 = queryWeight, product of:
                  7.406428 = idf(docFreq=72, maxDocs=44218)
                  0.047625583 = queryNorm
                0.28640583 = fieldWeight in 4286, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.406428 = idf(docFreq=72, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4286)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This chapter reviews visualization techniques that can be used to map the ever-growing domain structure of scientific disciplines and to support information retrieval and classification. In contrast to the comprehensive surveys conducted in traditional fashion by Howard White and Katherine McCain (1997, 1998), this survey not only reviews emerging techniques in interactive data analysis and information visualization, but also depicts the bibliographical structure of the field itself. The chapter starts by reviewing the history of knowledge domain visualization. We then present a general process flow for the visualization of knowledge domains and explain commonly used techniques. In order to visualize the domain reviewed by this chapter, we introduce a bibliographic data set of considerable size, which includes articles from the citation analysis, bibliometrics, semantics, and visualization literatures. Using tutorial style, we then apply various algorithms to demonstrate the visualization effects produced by different approaches and compare the results. The domain visualizations reveal the relationships within and between the four fields that together constitute the focus of this chapter. We conclude with a general discussion of research possibilities. Painting a "big picture" of scientific knowledge has long been desirable for a variety of reasons. Traditional approaches are brute force: scholars must sort through mountains of literature to perceive the outlines of their field. Obviously, this is time-consuming, difficult to replicate, and entails subjective judgments. The task is enormously complex. Sifting through recently published documents to find those that will later be recognized as important is labor intensive. Traditional approaches struggle to keep up with the pace of information growth. In multidisciplinary fields of study it is especially difficult to maintain an overview of literature dynamics. Painting the big picture of an ever-evolving scientific discipline is akin to the situation described in the widely known Indian legend about the blind men and the elephant. As the story goes, six blind men were trying to find out what an elephant looked like. They touched different parts of the elephant and quickly jumped to their conclusions. The one touching the body said it must be like a wall; the one touching the tail said it was like a snake; the one touching the legs said it was like a tree trunk, and so forth. But science does not stand still; the steady stream of new scientific literature creates a continuously changing structure. The resulting disappearance, fusion, and emergence of research areas add another twist to the tale: it is as if the elephant is running and dynamically changing its shape. Domain visualization, an emerging field of study, is in a similar situation. Relevant literature is spread across disciplines that have traditionally had few connections. Researchers examining the domain from a particular discipline cannot possibly have an adequate understanding of the whole. As noted by White and McCain (1997), the new generation of information scientists is technically driven in its efforts to visualize scientific disciplines. However, limited progress has been made in terms of connecting pioneers' theories and practices with the potentialities of today's enabling technologies. If the difference between past and present generations lies in the power of available technologies, what they have in common is the ultimate goal: to reveal the development of scientific knowledge.
  19. Grudin, J.: Human-computer interaction (2011) 0.02
    0.015056088 = product of:
      0.045168262 = sum of:
        0.045168262 = product of:
          0.090336524 = sum of:
            0.090336524 = weight(_text_:22 in 1601) [ClassicSimilarity], result of:
              0.090336524 = score(doc=1601,freq=2.0), product of:
                0.16677667 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047625583 = queryNorm
                0.5416616 = fieldWeight in 1601, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1601)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    27.12.2014 18:54:22
  20. Dumais, S.T.: Latent semantic analysis (2003) 0.01
    0.01486495 = product of:
      0.04459485 = sum of:
        0.04459485 = weight(_text_:objects in 2462) [ClassicSimilarity], result of:
          0.04459485 = score(doc=2462,freq=2.0), product of:
            0.25313336 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.047625583 = queryNorm
            0.17617138 = fieldWeight in 2462, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2462)
      0.33333334 = coord(1/3)
    
    Abstract
    Latent Semantic Analysis (LSA) was first introduced in Dumais, Furnas, Landauer, and Deerwester (1988) and Deerwester, Dumais, Furnas, Landauer, and Harshman (1990) as a technique for improving information retrieval. The key insight in LSA was to reduce the dimensionality of the information retrieval problem. Most approaches to retrieving information depend on a lexical match between words in the user's query and those in documents. Indeed, this lexical matching is the way that the popular Web and enterprise search engines work. Such systems are, however, far from ideal. We are all aware of the tremendous amount of irrelevant information that is retrieved when searching. We also fail to find much of the existing relevant material. LSA was designed to address these retrieval problems, using dimension reduction techniques. Fundamental characteristics of human word usage underlie these retrieval failures. People use a wide variety of words to describe the same object or concept (synonymy). Furnas, Landauer, Gomez, and Dumais (1987) showed that people generate the same keyword to describe well-known objects only 20 percent of the time. Poor agreement was also observed in studies of inter-indexer consistency (e.g., Chan, 1989; Tarr & Borko, 1974), in the generation of search terms (e.g., Fidel, 1985; Bates, 1986), and in the generation of hypertext links (Furner, Ellis, & Willett, 1999). Because searchers and authors often use different words, relevant materials are missed. Someone looking for documents on "human-computer interaction" will not find articles that use only the phrase "man-machine studies" or "human factors." People also use the same word to refer to different things (polysemy). Words like "saturn," "jaguar," or "chip" have several different meanings. A short query like "saturn" will thus return many irrelevant documents. The query "Saturn car" will return fewer irrelevant items, but it will miss some documents that use only the terms "Saturn automobile." In searching, there is a constant tension between being overly specific and missing relevant information, and being more general and returning irrelevant information.
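    A minimal sketch of the dimension-reduction idea behind LSA: factor a small term-document matrix with a truncated SVD and compare a query with documents in the reduced space, so that documents sharing no literal query term can still match. The toy vocabulary and counts are illustrative; this is not the original LSI implementation:

      import numpy as np

      terms = ["human", "computer", "interaction", "man", "machine", "studies"]
      # columns = three toy documents; rows = term counts
      A = np.array([
          [1, 0, 1],     # human
          [1, 0, 1],     # computer
          [1, 0, 0],     # interaction
          [0, 1, 1],     # man
          [0, 1, 1],     # machine
          [0, 1, 0],     # studies
      ], dtype=float)

      k = 2                                          # reduced dimensionality
      U, s, Vt = np.linalg.svd(A, full_matrices=False)
      doc_vectors = Vt[:k].T                         # document coordinates in the latent space

      query = np.array([1, 1, 1, 0, 0, 0], float)    # "human computer interaction"
      q_vec = (query @ U[:, :k]) / s[:k]             # fold the query into the same space

      sims = doc_vectors @ q_vec / (
          np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vec))
      # the document built only from "man machine studies" terms can still receive a
      # nonzero similarity, because its terms co-occur with the query terms elsewhere
      print(sims)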