Search (94 results, page 1 of 5)

  • theme_ss:"Literaturübersicht"
  • year_i:[2000 TO 2010}
  1. Corbett, L.E.: Serials: review of the literature 2000-2003 (2006) 0.06
    0.055950508 = product of:
      0.16785152 = sum of:
        0.017552461 = weight(_text_:of in 1088) [ClassicSimilarity], result of:
          0.017552461 = score(doc=1088,freq=22.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.28651062 = fieldWeight in 1088, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1088)
        0.020439833 = weight(_text_:systems in 1088) [ClassicSimilarity], result of:
          0.020439833 = score(doc=1088,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.1697705 = fieldWeight in 1088, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1088)
        0.12985922 = sum of:
          0.103319705 = weight(_text_:packages in 1088) [ClassicSimilarity], result of:
            0.103319705 = score(doc=1088,freq=2.0), product of:
              0.2706874 = queryWeight, product of:
                6.9093957 = idf(docFreq=119, maxDocs=44218)
                0.03917671 = queryNorm
              0.3816938 = fieldWeight in 1088, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.9093957 = idf(docFreq=119, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1088)
          0.026539518 = weight(_text_:22 in 1088) [ClassicSimilarity], result of:
            0.026539518 = score(doc=1088,freq=2.0), product of:
              0.13719016 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03917671 = queryNorm
              0.19345059 = fieldWeight in 1088, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1088)
      0.33333334 = coord(3/9)
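    The indented breakdowns in this result list are Lucene/Solr "explain" output for TF-IDF ranking with ClassicSimilarity. Purely as an illustration, the sketch below reproduces the top result's score from the components listed above; the field statistics are copied from the explain tree, and the helper name term_score is mine.

    ```python
    import math

    # Term statistics copied from the explain tree for doc 1088 above:
    # (term, raw frequency in the matched field, idf). fieldNorm and
    # queryNorm are shared across the terms in this breakdown.
    QUERY_NORM = 0.03917671
    FIELD_NORM = 0.0390625
    TERMS = [
        ("of",       22.0, 1.5637573),
        ("systems",   2.0, 3.0731742),
        ("packages",  2.0, 6.9093957),
        ("22",        2.0, 3.5018296),
    ]

    def term_score(freq, idf):
        """ClassicSimilarity per-term contribution:
        fieldWeight = tf * idf * fieldNorm, queryWeight = idf * queryNorm,
        and the term's score is fieldWeight * queryWeight."""
        tf = math.sqrt(freq)                  # e.g. 4.690416 for freq=22
        field_weight = tf * idf * FIELD_NORM  # e.g. 0.28651062 for "of"
        query_weight = idf * QUERY_NORM       # e.g. 0.061262865 for "of"
        return field_weight * query_weight

    # coord(3/9): the document matched 3 of the 9 query clauses
    # ("packages" and "22" are grouped as one clause in the tree above).
    coord = 3.0 / 9.0
    score = coord * sum(term_score(freq, idf) for _, freq, idf in TERMS)
    print(f"{score:.9f}")  # ~0.0559505, matching the listed 0.055950508 up to float rounding
    ```

    The same pattern holds for every other breakdown below: each matching term contributes tf x idf x fieldNorm x queryWeight, and the coord factor scales the sum by the fraction of query clauses that matched.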
    
    Abstract
    The topic of electronic journals (e-journals) dominated the serials literature from 2000 to 2003. This review is limited to the events and issues within the broad topics of cost, management, and archiving. Coverage of cost includes such initiatives as PEAK, JACC, BioMed Central, SPARC, open access, the "Big Deal," and "going e-only." Librarians combated the trend of continuing journal price increases, fueled in part by publisher mergers, with the economies of bundled packages and consortial subscriptions. Serials management topics include usage statistics; core title lists; staffing needs; the "A-Z list" and other services from such companies as Serials Solutions; "deep linking"; link resolvers such as SFX; development of standards or guidelines, such as COUNTER and ERMI; tracking of license terms; vendor mergers; and the demise of integrated library systems and a subscription agent's bankruptcy. Librarians archived print volumes in storage facilities due to space shortages. Librarians and publishers struggled with electronic archiving concepts, discussing questions of who, where, and how. Projects such as LOCKSS tested potential solutions, but missing online content due to the Tasini court case and retractions posed more archiving difficulties. The serials literature captured much of the upheaval resulting from the rapid pace of changes, many linked to the advent of e-journals.
    Date
    10. 9.2000 17:38:22
  2. Zhu, B.; Chen, H.: Information visualization (2004) 0.05
    0.050138313 = product of:
      0.1128112 = sum of:
        0.029363085 = weight(_text_:applications in 4276) [ClassicSimilarity], result of:
          0.029363085 = score(doc=4276,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.17024462 = fieldWeight in 4276, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4276)
        0.02191663 = weight(_text_:of in 4276) [ClassicSimilarity], result of:
          0.02191663 = score(doc=4276,freq=70.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.35774738 = fieldWeight in 4276, product of:
              8.3666 = tf(freq=70.0), with freq of:
                70.0 = termFreq=70.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4276)
        0.020234404 = weight(_text_:systems in 4276) [ClassicSimilarity], result of:
          0.020234404 = score(doc=4276,freq=4.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.16806422 = fieldWeight in 4276, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4276)
        0.041297078 = weight(_text_:software in 4276) [ClassicSimilarity], result of:
          0.041297078 = score(doc=4276,freq=6.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.26571283 = fieldWeight in 4276, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4276)
      0.44444445 = coord(4/9)
    
    Abstract
    Advanced technology has resulted in the generation of about one million terabytes of information every year. Ninety-nine percent of this is available in digital format (Keim, 2001). More information will be generated in the next three years than was created during all of previous human history (Keim, 2001). Collecting information is no longer a problem, but extracting value from information collections has become progressively more difficult. Various search engines have been developed to make it easier to locate information of interest, but these work well only for a person who has a specific goal and who understands what and how information is stored. This usually is not the case. Visualization was commonly thought of in terms of representing human mental processes (MacEachren, 1991; Miller, 1984). The concept is now associated with the amplification of these mental processes (Card, Mackinlay, & Shneiderman, 1999). Human eyes can process visual cues rapidly, whereas advanced information analysis techniques transform the computer into a powerful means of managing digitized information. Visualization offers a link between these two potent systems, the human eye and the computer (Gershon, Eick, & Card, 1998), helping to identify patterns and to extract insights from large amounts of information. The identification of patterns is important because it may lead to a scientific discovery, an interpretation of clues to solve a crime, the prediction of catastrophic weather, a successful financial investment, or a better understanding of human behavior in a computer-mediated environment. Visualization technology shows considerable promise for increasing the value of large-scale collections of information, as evidenced by several commercial applications of TreeMap (e.g., http://www.smartmoney.com) and Hyperbolic tree (e.g., http://www.inxight.com) to visualize large-scale hierarchical structures. Although the proliferation of visualization technologies dates from the 1990s, when sophisticated hardware and software made increasingly faster generation of graphical objects possible, the role of visual aids in facilitating the construction of mental images has a long history. Visualization has been used to communicate ideas, to monitor trends implicit in data, and to explore large volumes of data for hypothesis generation. Imagine traveling to a strange place without a map, having to memorize physical and chemical properties of an element without Mendeleyev's periodic table, trying to understand the stock market without statistical diagrams, or browsing a collection of documents without interactive visual aids. A collection of information can lose its value simply because of the effort required for exhaustive exploration. Such frustrations can be overcome by visualization.
    Visualization can be classified as scientific visualization, software visualization, or information visualization. Although the data differ, the underlying techniques have much in common. They use the same elements (visual cues) and follow the same rules of combining visual cues to deliver patterns. They all involve understanding human perception (Encarnacao, Foley, Bryson, & Feiner, 1994) and require domain knowledge (Tufte, 1990). Because most decisions are based on unstructured information, such as text documents, Web pages, or e-mail messages, this chapter focuses on the visualization of unstructured textual documents. The chapter reviews information visualization techniques developed over the last decade and examines how they have been applied in different domains. The first section provides the background by describing visualization history and giving overviews of scientific, software, and information visualization as well as the perceptual aspects of visualization. The next section assesses important visualization techniques that convert abstract information into visual objects and facilitate navigation through displays on a computer screen. It also explores information analysis algorithms that can be applied to identify or extract salient visualizable structures from collections of information. Information visualization systems that integrate different types of technologies to address problems in different domains are then surveyed, and we move on to a survey and critique of visualization system evaluation studies. The chapter concludes with a summary and identification of future research directions.
    Source
    Annual review of information science and technology. 39(2005), S.139-177
  3. El-Sherbini, M.A.: Cataloging and classification : review of the literature 2005-06 (2008) 0.03
    0.034337863 = product of:
      0.10301359 = sum of:
        0.06711562 = weight(_text_:applications in 249) [ClassicSimilarity], result of:
          0.06711562 = score(doc=249,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.38913056 = fieldWeight in 249, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0625 = fieldNorm(doc=249)
        0.014666359 = weight(_text_:of in 249) [ClassicSimilarity], result of:
          0.014666359 = score(doc=249,freq=6.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.23940048 = fieldWeight in 249, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=249)
        0.021231614 = product of:
          0.042463228 = sum of:
            0.042463228 = weight(_text_:22 in 249) [ClassicSimilarity], result of:
              0.042463228 = score(doc=249,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.30952093 = fieldWeight in 249, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=249)
          0.5 = coord(1/2)
      0.33333334 = coord(3/9)
    
    Abstract
    This paper reviews library literature on cataloging and classification published in 2005-06. It covers pertinent literature in the following areas: the future of cataloging; Functional Requirements for Bibliographic Records (FRBR); metadata and its applications and relation to Machine-Readable Cataloging (MARC); cataloging tools and standards; authority control; and recruitment, training, and the changing role of catalogers.
    Date
    10. 9.2000 17:38:22
  4. Chowdhury, G.G.: Natural language processing (2002) 0.03
    0.034328938 = product of:
      0.10298681 = sum of:
        0.050336715 = weight(_text_:applications in 4284) [ClassicSimilarity], result of:
          0.050336715 = score(doc=4284,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.2918479 = fieldWeight in 4284, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.046875 = fieldNorm(doc=4284)
        0.017962547 = weight(_text_:of in 4284) [ClassicSimilarity], result of:
          0.017962547 = score(doc=4284,freq=16.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.2932045 = fieldWeight in 4284, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=4284)
        0.034687545 = weight(_text_:systems in 4284) [ClassicSimilarity], result of:
          0.034687545 = score(doc=4284,freq=4.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.28811008 = fieldWeight in 4284, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=4284)
      0.33333334 = coord(3/9)
    
    Abstract
    Natural Language Processing (NLP) is an area of research and application that explores how computers can be used to understand and manipulate natural language text or speech to do useful things. NLP researchers aim to gather knowledge on how human beings understand and use language so that appropriate tools and techniques can be developed to make computer systems understand and manipulate natural languages to perform desired tasks. The foundations of NLP lie in a number of disciplines, namely, computer and information sciences, linguistics, mathematics, electrical and electronic engineering, artificial intelligence and robotics, and psychology. Applications of NLP include a number of fields of study, such as machine translation, natural language text processing and summarization, user interfaces, multilingual and cross-language information retrieval (CLIR), speech recognition, artificial intelligence, and expert systems. One important application area that is relatively new and has not been covered in previous ARIST chapters on NLP relates to the proliferation of the World Wide Web and digital libraries.
    Source
    Annual review of information science and technology. 37(2003), S.51-90
  5. Marsh, S.; Dibben, M.R.: The role of trust in information science and technology (2002) 0.03
    0.032587454 = product of:
      0.09776236 = sum of:
        0.050336715 = weight(_text_:applications in 4289) [ClassicSimilarity], result of:
          0.050336715 = score(doc=4289,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.2918479 = fieldWeight in 4289, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.046875 = fieldNorm(doc=4289)
        0.022897845 = weight(_text_:of in 4289) [ClassicSimilarity], result of:
          0.022897845 = score(doc=4289,freq=26.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.37376386 = fieldWeight in 4289, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=4289)
        0.0245278 = weight(_text_:systems in 4289) [ClassicSimilarity], result of:
          0.0245278 = score(doc=4289,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.2037246 = fieldWeight in 4289, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=4289)
      0.33333334 = coord(3/9)
    
    Abstract
    This chapter discusses the notion of trust as it relates to information science and technology, specifically user interfaces, autonomous agents, and information systems. We first present an in-depth discussion of the concept of trust in and of itself, moving on to applications and considerations of trust in relation to information technologies. We consider trust from a "soft" perspective: thus, although security concepts such as cryptography, virus protection, authentication, and so forth reinforce (or damage) the feelings of trust we may have in a system, they are not themselves constitutive of "trust." We discuss information technology from a human-centric viewpoint, where trust is a less well-structured but much more powerful phenomenon. With the proliferation of electronic commerce (e-commerce) and the World Wide Web (WWW, or Web), much has been made of the ability of individuals to explore the vast quantities of information available to them, to purchase goods (as diverse as vacations and cars) online, and to publish information on their personal Web sites.
    Source
    Annual review of information science and technology. 37(2003), S.465-498
  6. Miksa, S.D.: The challenges of change : a review of cataloging and classification literature, 2003-2004 (2007) 0.02
    0.024892237 = product of:
      0.07467671 = sum of:
        0.020741362 = weight(_text_:of in 266) [ClassicSimilarity], result of:
          0.020741362 = score(doc=266,freq=12.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.33856338 = fieldWeight in 266, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=266)
        0.03270373 = weight(_text_:systems in 266) [ClassicSimilarity], result of:
          0.03270373 = score(doc=266,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.2716328 = fieldWeight in 266, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0625 = fieldNorm(doc=266)
        0.021231614 = product of:
          0.042463228 = sum of:
            0.042463228 = weight(_text_:22 in 266) [ClassicSimilarity], result of:
              0.042463228 = score(doc=266,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.30952093 = fieldWeight in 266, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=266)
          0.5 = coord(1/2)
      0.33333334 = coord(3/9)
    
    Abstract
    This paper reviews the enormous changes in cataloging and classification reflected in the literature of 2003 and 2004, and discusses major themes and issues. Traditional cataloging and classification tools have been revamped and new resources have emerged. The most notable themes are: the continuing influence of the Functional Requirements for Bibliographic Records (FRBR); the struggle to understand the ever-broadening concept of an "information entity"; steady developments in metadata-encoding standards; and the globalization of information systems, including multilinguistic challenges.
    Date
    10. 9.2000 17:38:22
  7. Khoo, S.G.; Na, J.-C.: Semantic relations in information science (2006) 0.02
    0.024699276 = product of:
      0.07409783 = sum of:
        0.043592874 = weight(_text_:applications in 1978) [ClassicSimilarity], result of:
          0.043592874 = score(doc=1978,freq=6.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.2527477 = fieldWeight in 1978, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1978)
        0.018241053 = weight(_text_:of in 1978) [ClassicSimilarity], result of:
          0.018241053 = score(doc=1978,freq=66.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.2977506 = fieldWeight in 1978, product of:
              8.124039 = tf(freq=66.0), with freq of:
                66.0 = termFreq=66.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1978)
        0.0122639 = weight(_text_:systems in 1978) [ClassicSimilarity], result of:
          0.0122639 = score(doc=1978,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.1018623 = fieldWeight in 1978, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1978)
      0.33333334 = coord(3/9)
    
    Abstract
    This chapter examines the nature of semantic relations and their main applications in information science. The nature and types of semantic relations are discussed from the perspectives of linguistics and psychology. An overview of the semantic relations used in knowledge structures such as thesauri and ontologies is provided, as well as the main techniques used in the automatic extraction of semantic relations from text. The chapter then reviews the use of semantic relations in information extraction, information retrieval, question-answering, and automatic text summarization applications. Concepts and relations are the foundation of knowledge and thought. When we look at the world, we perceive not a mass of colors but objects to which we automatically assign category labels. Our perceptual system automatically segments the world into concepts and categories. Concepts are the building blocks of knowledge; relations act as the cement that links concepts into knowledge structures. We spend much of our lives identifying regular associations and relations between objects, events, and processes so that the world has an understandable structure and predictability. Our lives and work depend on the accuracy and richness of this knowledge structure and its web of relations. Relations are needed for reasoning and inferencing. Chaffin and Herrmann (1988b, p. 290) noted that "relations between ideas have long been viewed as basic to thought, language, comprehension, and memory." Aristotle's Metaphysics (Aristotle, 1961; McKeon) expounded on several types of relations. The majority of the 30 entries in a section of the Metaphysics known today as the Philosophical Lexicon referred to relations and attributes, including cause, part-whole, same and opposite, quality (i.e., attribute) and kind-of, and defined different types of each relation. Hume (1955) pointed out that there is a connection between successive ideas in our minds, even in our dreams, and that the introduction of an idea in our mind automatically recalls an associated idea. He argued that all the objects of human reasoning are divided into relations of ideas and matters of fact and that factual reasoning is founded on the cause-effect relation. His Treatise of Human Nature identified seven kinds of relations: resemblance, identity, relations of time and place, proportion in quantity or number, degrees in quality, contrariety, and causation. Mill (1974, pp. 989-1004) discoursed on several types of relations, claiming that all things are either feelings, substances, or attributes, and that attributes can be a quality (which belongs to one object) or a relation to other objects.
    Linguists in the structuralist tradition (e.g., Lyons, 1977; Saussure, 1959) have asserted that concepts cannot be defined on their own but only in relation to other concepts. Semantic relations appear to reflect a logical structure in the fundamental nature of thought (Caplan & Herrmann, 1993). Green, Bean, and Myaeng (2002) noted that semantic relations play a critical role in how we represent knowledge psychologically, linguistically, and computationally, and that many systems of knowledge representation start with a basic distinction between entities and relations. Green (2001, p. 3) said that "relationships are involved as we combine simple entities to form more complex entities, as we compare entities, as we group entities, as one entity performs a process on another entity, and so forth. Indeed, many things that we might initially regard as basic and elemental are revealed upon further examination to involve internal structure, or in other words, internal relationships." Concepts and relations are often expressed in language and text. Language is used not just for communicating concepts and relations, but also for representing, storing, and reasoning with concepts and relations. We shall examine the nature of semantic relations from a linguistic and psychological perspective, with an emphasis on relations expressed in text. The usefulness of semantic relations in information science, especially in ontology construction, information extraction, information retrieval, question-answering, and text summarization is discussed. Research and development in information science have focused on concepts and terms, but the focus will increasingly shift to the identification, processing, and management of relations to achieve greater effectiveness and refinement in information science techniques. Previous chapters in ARIST on natural language processing (Chowdhury, 2003), text mining (Trybula, 1999), information retrieval and the philosophy of language (Blair, 2003), and query expansion (Efthimiadis, 1996) provide a background for this discussion, as semantic relations are an important part of these applications.
    Source
    Annual review of information science and technology. 40(2006), S.157-228
  8. Williams, P.; Nicholas, D.; Gunter, B.: E-learning: what the literature tells us about distance education : an overview (2005) 0.02
    0.022628155 = product of:
      0.06788446 = sum of:
        0.03355781 = weight(_text_:applications in 662) [ClassicSimilarity], result of:
          0.03355781 = score(doc=662,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.19456528 = fieldWeight in 662, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03125 = fieldNorm(doc=662)
        0.011201616 = weight(_text_:of in 662) [ClassicSimilarity], result of:
          0.011201616 = score(doc=662,freq=14.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.18284513 = fieldWeight in 662, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=662)
        0.023125032 = weight(_text_:systems in 662) [ClassicSimilarity], result of:
          0.023125032 = score(doc=662,freq=4.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.19207339 = fieldWeight in 662, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03125 = fieldNorm(doc=662)
      0.33333334 = coord(3/9)
    
    Abstract
    Purpose - The CIBER group at University College London are currently evaluating a distance education initiative funded by the Department of Health, providing in-service training to NHS staff via DiTV and satellite to PC systems. This paper aims to provide the context for the project by outlining a short history of distance education, describing the media used in providing remote education, and reviewing the research literature on achievement, attitude, barriers to learning, and learner characteristics. Design/methodology/approach - Literature review, with particular, although not exclusive, emphasis on health. Findings - The literature shows little difference in achievement between distance and traditional learners, although using a variety of media, both to deliver pedagogic material and to facilitate communication, does seem to enhance learning. Similarly, attitudinal studies appear to show that the greater the number of channels offered, the more positive students are about their experiences. With regard to barriers to completing courses, the main problems appear to be family or work obligations. Research limitations/implications - The research that this review contextualizes examines "on-demand" showing of filmed lectures via a DiTV system. The literature on DiTV applications research, however, is dominated by studies of simultaneous viewing by on-site and remote students, rather than "on-demand" viewing. Practical implications - Current research being carried out by the authors should enhance the findings accrued by the literature, by exploring the impact of "on-demand" video material delivered by DiTV - something no previous research appears to have examined. Originality/value - Discusses different electronic systems and their exploitation for distance education, and cross-references these with several aspects evaluated in the literature (achievement, attitude, and barriers to take-up or success) to provide a holistic picture hitherto missing from the literature.
  9. Genereux, C.: Building connections : a review of the serials literature 2004 through 2005 (2007) 0.02
    0.022471227 = product of:
      0.06741368 = sum of:
        0.016802425 = weight(_text_:of in 2548) [ClassicSimilarity], result of:
          0.016802425 = score(doc=2548,freq=14.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.2742677 = fieldWeight in 2548, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2548)
        0.034687545 = weight(_text_:systems in 2548) [ClassicSimilarity], result of:
          0.034687545 = score(doc=2548,freq=4.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.28811008 = fieldWeight in 2548, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=2548)
        0.015923709 = product of:
          0.031847417 = sum of:
            0.031847417 = weight(_text_:22 in 2548) [ClassicSimilarity], result of:
              0.031847417 = score(doc=2548,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.23214069 = fieldWeight in 2548, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2548)
          0.5 = coord(1/2)
      0.33333334 = coord(3/9)
    
    Abstract
    This review of 2004 and 2005 serials literature covers the themes of cost, management, and access. Interwoven through the serials literature of these two years are the importance of collaboration, communication, and linkages between scholars, publishers, subscription agents and other intermediaries, and librarians. The emphasis in the literature is on electronic serials and their impact on publishing, libraries, and vendors. In response to the crisis of escalating journal prices and libraries' dissatisfaction with the Big Deal licensing agreements, Open Access journals and publishing models were promoted. Libraries subscribed to or licensed increasing numbers of electronic serials. As a result, libraries sought ways to better manage licensing and subscription data (not handled by traditional integrated library systems) by implementing electronic resources management systems. In order to provide users with better, faster, and more current information on and access to electronic serials, libraries implemented tools and services to provide A-Z title lists, title by title coverage data, MARC records, and OpenURL link resolvers.
    Date
    10. 9.2000 17:38:22
  10. Chen, H.; Chau, M.: Web mining : machine learning for Web applications (2003) 0.02
    0.01981098 = product of:
      0.08914941 = sum of:
        0.07118686 = weight(_text_:applications in 4242) [ClassicSimilarity], result of:
          0.07118686 = score(doc=4242,freq=4.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.41273528 = fieldWeight in 4242, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.046875 = fieldNorm(doc=4242)
        0.017962547 = weight(_text_:of in 4242) [ClassicSimilarity], result of:
          0.017962547 = score(doc=4242,freq=16.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.2932045 = fieldWeight in 4242, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=4242)
      0.22222222 = coord(2/9)
    
    Abstract
    With more than two billion pages created by millions of Web page authors and organizations, the World Wide Web is a tremendously rich knowledge base. The knowledge comes not only from the content of the pages themselves, but also from the unique characteristics of the Web, such as its hyperlink structure and its diversity of content and languages. Analysis of these characteristics often reveals interesting patterns and new knowledge. Such knowledge can be used to improve users' efficiency and effectiveness in searching for information on the Web, and also for applications unrelated to the Web, such as support for decision making or business management. The Web's size and its unstructured and dynamic content, as well as its multilingual nature, make the extraction of useful knowledge a challenging research problem. Furthermore, the Web generates a large amount of data in other formats that contain valuable information. For example, information in Web server logs about user access patterns can be used for information personalization or improving Web page design.
    Source
    Annual review of information science and technology. 38(2004), S.289-330
  11. Denton, W.: Putting facets on the Web : an annotated bibliography (2003) 0.02
    0.018478356 = product of:
      0.05543507 = sum of:
        0.010910287 = weight(_text_:of in 2467) [ClassicSimilarity], result of:
          0.010910287 = score(doc=2467,freq=34.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.17808972 = fieldWeight in 2467, product of:
              5.8309517 = tf(freq=34.0), with freq of:
                34.0 = termFreq=34.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2467)
        0.020439833 = weight(_text_:systems in 2467) [ClassicSimilarity], result of:
          0.020439833 = score(doc=2467,freq=8.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.1697705 = fieldWeight in 2467, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2467)
        0.024084946 = weight(_text_:software in 2467) [ClassicSimilarity], result of:
          0.024084946 = score(doc=2467,freq=4.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.15496688 = fieldWeight in 2467, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2467)
      0.33333334 = coord(3/9)
    
    Abstract
    This is a classified, annotated bibliography about how to design faceted classification systems and make them usable on the World Wide Web. It is the first of three works I will be doing. The second, based on the material here and elsewhere, will discuss how to actually make the faceted system and put it online. The third will be a report of how I did just that, what worked, what didn't, and what I learned. Almost every article or book listed here begins with an explanation of what a faceted classification system is, so I won't (but see Steckel in Background below if you don't already know). They all agree that faceted systems are very appropriate for the web. Even pre-web articles (such as Duncan's in Background, below) assert that hypertext and facets will go together well. Combined, it is possible to take a set of documents and classify them or apply subject headings to describe what they are about, then build a navigational structure so that any user, no matter how he or she approaches the material, no matter what his or her goals, can move and search in a way that makes sense to them, but still get to the same useful results as someone else following a different path to the same goal. There is no one way that everyone will always use when looking for information. The more flexible the organization of the information, the more accommodating it is. Facets are more flexible for hypertext browsing than any enumerative or hierarchical system.
    Consider movie listings in newspapers. Most Canadian newspapers list movie showtimes in two large blocks, for the two major theatre chains. The listings are ordered by region (in large cities), then theatre, then movie, and finally by showtime. Anyone wondering where and when a particular movie is playing must scan the complete listings. Determining what movies are playing in the next half hour is very difficult. When movie listings went onto the web, most sites used a simple faceted organization, always with movie name and theatre, and perhaps with region or neighbourhood (thankfully, theatre chains were left out). They make it easy to pick a theatre and see what movies are playing there, or to pick a movie and see what theatres are showing it. To complete the system, the sites should allow users to browse by neighbourhood and showtime, and to order the results in any way they desire. Thus could people easily find answers to such questions as, "Where is the new James Bond movie playing?" "What's showing at the Roxy tonight?" "I'm going to be out in Little Finland this afternoon with three hours to kill starting at 2 ... is anything interesting playing?" A hypertext, faceted classification system makes more useful information more easily available to the user. Reading the books and articles below in chronological order will show a certain progression: suggestions that faceting and hypertext might work well, confidence that facets would work well if only someone would make such a system, and finally the beginning of serious work on actually designing, building, and testing faceted web sites. There is a solid basis of how to make faceted classifications (see Vickery in Recommended), but their application online is just starting. Work on XFML (see Van Dijck's work in Recommended), the Exchangeable Faceted Metadata Language, will make this easier. If it follows previous patterns, parts of the Internet community will embrace the idea and make open source software available for others to reuse. It will be particularly beneficial if professionals in both information studies and computer science can work together to build working systems, standards, and code. Each can benefit from the other's expertise in what can be a very complicated and technical area. One particularly nice thing about this area of research is that people interested in combining facets and the web often have web sites where they post their writings. A small sketch of this kind of faceted filtering follows at the end of this entry.
    This bibliography is not meant to be exhaustive, but unfortunately it is not as complete as I wanted. Some books and articles are not included, but they may be used in my future work. (These include two books and one article by B.C. Vickery: Faceted Classification Schemes (New Brunswick, NJ: Rutgers, 1966), Classification and Indexing in Science, 3rd ed. (London: Butterworths, 1975), and "Knowledge Representation: A Brief Review" (Journal of Documentation 42 no. 3 (September 1986): 145-159); and A.C. Foskett's "The Future of Faceted Classification" in The Future of Classification, edited by Rita Marcella and Arthur Maltby (Aldershot, England: Gower, 2000): 69-80.) Nevertheless, I hope this bibliography will be useful for those new to or familiar with faceted hypertext systems. Some very basic resources are listed, as well as some very advanced ones. Some example web sites are mentioned, but there is no detailed technical discussion of any software. The user interface to any web site is extremely important, and this is briefly mentioned in two or three places (for example the discussion of lawforwa.org (see Example Web Sites)). The larger question of how to display information graphically and with hypertext is outside the scope of this bibliography. There are five sections: Recommended, Background, Not Relevant, Example Web Sites, and Mailing Lists. Background material is either introductory, advanced, or of peripheral interest, and can be read after the Recommended resources if the reader wants to know more. The Not Relevant category contains articles that may appear in bibliographies but are not relevant for my purposes.
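    To make the movie-listings example above concrete, faceted browsing amounts to filtering records on any combination of independent facets, in any order. This is a minimal sketch with invented sample data; the field names movie, theatre, neighbourhood, and showtime and the helper browse are assumptions for illustration only.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Showing:
        movie: str
        theatre: str
        neighbourhood: str
        showtime: str  # "HH:MM", 24-hour, kept as text for simplicity

    # Invented sample data, purely for illustration.
    LISTINGS = [
        Showing("Die Another Day", "Roxy",      "Little Finland", "14:30"),
        Showing("Die Another Day", "Paramount", "Downtown",       "19:00"),
        Showing("Solaris",         "Roxy",      "Little Finland", "16:15"),
    ]

    def browse(**facets):
        """Keep only showings matching every supplied facet value; any facet
        may be omitted, so a user can start from movie, theatre, neighbourhood,
        or showtime and still reach the same records by a different path."""
        return [s for s in LISTINGS
                if all(getattr(s, field) == value for field, value in facets.items())]

    print(browse(movie="Die Another Day"))         # where is the Bond movie playing?
    print(browse(theatre="Roxy"))                  # what's showing at the Roxy?
    print(browse(neighbourhood="Little Finland"))  # what's on in Little Finland?
    ```

    Because no single facet order is privileged, the same records are reachable whether the user starts from a movie, a theatre, or a neighbourhood, which is the point of the faceted organization described above.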
  12. Hunter, J.: Collaborative semantic tagging and annotation systems (2009) 0.02
    0.018298382 = product of:
      0.082342714 = sum of:
        0.016935252 = weight(_text_:of in 7382) [ClassicSimilarity], result of:
          0.016935252 = score(doc=7382,freq=2.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.27643585 = fieldWeight in 7382, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.125 = fieldNorm(doc=7382)
        0.06540746 = weight(_text_:systems in 7382) [ClassicSimilarity], result of:
          0.06540746 = score(doc=7382,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.5432656 = fieldWeight in 7382, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.125 = fieldNorm(doc=7382)
      0.22222222 = coord(2/9)
    
    Source
    Annual review of information science and technology. 43(2009), S.xxx-xxx
  13. Liu, X.; Croft, W.B.: Statistical language modeling for information retrieval (2004) 0.02
    0.017583163 = product of:
      0.079124235 = sum of:
        0.059322387 = weight(_text_:applications in 4277) [ClassicSimilarity], result of:
          0.059322387 = score(doc=4277,freq=4.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.34394607 = fieldWeight in 4277, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4277)
        0.019801848 = weight(_text_:of in 4277) [ClassicSimilarity], result of:
          0.019801848 = score(doc=4277,freq=28.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.32322758 = fieldWeight in 4277, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4277)
      0.22222222 = coord(2/9)
    
    Abstract
    This chapter reviews research and applications in statistical language modeling for information retrieval (IR), which has emerged within the past several years as a new probabilistic framework for describing information retrieval processes. Generally speaking, statistical language modeling, or more simply language modeling (LM), involves estimating a probability distribution that captures statistical regularities of natural language use. Applied to information retrieval, language modeling refers to the problem of estimating the likelihood that a query and a document could have been generated by the same language model, given the language model of the document either with or without a language model of the query. The roots of statistical language modeling date to the beginning of the twentieth century, when Markov tried to model letter sequences in works of Russian literature (Manning & Schütze, 1999). Zipf (1929, 1932, 1949, 1965) studied the statistical properties of text and discovered that the frequency of words decays as a power function of each word's rank. However, it was Shannon's (1951) work that inspired later research in this area. In 1951, eager to explore the applications of his newly founded information theory to human language, Shannon used a prediction game involving n-grams to investigate the information content of English text. He evaluated n-gram models' performance by comparing their cross-entropy on texts with the true entropy estimated using predictions made by human subjects. For many years, statistical language models have been used primarily for automatic speech recognition. Since 1980, when the first significant language model was proposed (Rosenfeld, 2000), statistical language modeling has become a fundamental component of speech recognition, machine translation, and spelling correction.
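    As a rough sketch of the query-likelihood idea described above (not any specific model from the chapter), a document can be scored by the log-probability that its smoothed unigram language model assigns to the query. The Dirichlet prior mu and the toy data below are assumptions for illustration only.

    ```python
    import math
    from collections import Counter

    def query_likelihood(query, doc, collection, mu=2000.0):
        """Score log P(query | document) under a unigram document language model
        with Dirichlet smoothing: P(w|d) = (c(w,d) + mu * P(w|C)) / (|d| + mu)."""
        doc_counts = Counter(doc)
        coll_counts = Counter(collection)
        doc_len, coll_len = len(doc), len(collection)
        score = 0.0
        for w in query:
            p_coll = coll_counts[w] / coll_len           # background collection model
            p_doc = (doc_counts[w] + mu * p_coll) / (doc_len + mu)
            if p_doc == 0.0:                             # term unseen even in the collection
                return float("-inf")
            score += math.log(p_doc)
        return score

    # Toy example: rank two tiny "documents" for the query ["language", "model"].
    d1 = "a language model assigns probabilities to word sequences".split()
    d2 = "speech recognition and spelling correction systems".split()
    collection = d1 + d2
    query = ["language", "model"]
    print(query_likelihood(query, d1, collection))  # higher: query terms occur in d1
    print(query_likelihood(query, d2, collection))  # lower: survives only via smoothing
    ```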
    Source
    Annual review of information science and technology. 39(2005), S.3-32
  14. El-Sherbini, M.: Selected cataloging tools on the Internet (2003) 0.01
    0.0147717865 = product of:
      0.06647304 = sum of:
        0.011975031 = weight(_text_:of in 1997) [ClassicSimilarity], result of:
          0.011975031 = score(doc=1997,freq=4.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.19546966 = fieldWeight in 1997, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=1997)
        0.054498006 = weight(_text_:software in 1997) [ClassicSimilarity], result of:
          0.054498006 = score(doc=1997,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.35064998 = fieldWeight in 1997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0625 = fieldNorm(doc=1997)
      0.22222222 = coord(2/9)
    
    Abstract
    This bibliography contains selected cataloging tools on the Internet. It is divided into seven sections as follows: authority management and subject headings tools; cataloging tools by type of materials; dictionaries, encyclopedias, and place names; listservs and workshops; software and vendors; technical service professional organizations; and journals and newsletters. Resources are arranged in alphabetical order under each topic. Selected cataloging tools are annotated. There is some overlap since a given web site can cover many tools.
    Source
    Journal of Internet cataloging. 6(2003) no.2, S.35-90
  15. Enser, P.G.B.: Visual image retrieval (2008) 0.01
    0.013199662 = product of:
      0.05939848 = sum of:
        0.016935252 = weight(_text_:of in 3281) [ClassicSimilarity], result of:
          0.016935252 = score(doc=3281,freq=2.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.27643585 = fieldWeight in 3281, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.125 = fieldNorm(doc=3281)
        0.042463228 = product of:
          0.084926456 = sum of:
            0.084926456 = weight(_text_:22 in 3281) [ClassicSimilarity], result of:
              0.084926456 = score(doc=3281,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.61904186 = fieldWeight in 3281, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=3281)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Date
    22. 1.2012 13:01:26
    Source
    Annual review of information science and technology. 42(2008), S.3-42
  16. Morris, S.A.: Mapping research specialties (2008) 0.01
    0.013199662 = product of:
      0.05939848 = sum of:
        0.016935252 = weight(_text_:of in 3962) [ClassicSimilarity], result of:
          0.016935252 = score(doc=3962,freq=2.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.27643585 = fieldWeight in 3962, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.125 = fieldNorm(doc=3962)
        0.042463228 = product of:
          0.084926456 = sum of:
            0.084926456 = weight(_text_:22 in 3962) [ClassicSimilarity], result of:
              0.084926456 = score(doc=3962,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.61904186 = fieldWeight in 3962, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=3962)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Date
    13. 7.2008 9:30:22
    Source
    Annual review of information science and technology. 42(2008), S.xxx-xxx
  17. Fallis, D.: Social epistemology and information science (2006) 0.01
    0.013199662 = product of:
      0.05939848 = sum of:
        0.016935252 = weight(_text_:of in 4368) [ClassicSimilarity], result of:
          0.016935252 = score(doc=4368,freq=2.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.27643585 = fieldWeight in 4368, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.125 = fieldNorm(doc=4368)
        0.042463228 = product of:
          0.084926456 = sum of:
            0.084926456 = weight(_text_:22 in 4368) [ClassicSimilarity], result of:
              0.084926456 = score(doc=4368,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.61904186 = fieldWeight in 4368, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=4368)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Date
    13. 7.2008 19:22:28
    Source
    Annual review of information science and technology. 40(2006), S.xxx-xxx
  18. Nicolaisen, J.: Citation analysis (2007) 0.01
    0.013199662 = product of:
      0.05939848 = sum of:
        0.016935252 = weight(_text_:of in 6091) [ClassicSimilarity], result of:
          0.016935252 = score(doc=6091,freq=2.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.27643585 = fieldWeight in 6091, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.125 = fieldNorm(doc=6091)
        0.042463228 = product of:
          0.084926456 = sum of:
            0.084926456 = weight(_text_:22 in 6091) [ClassicSimilarity], result of:
              0.084926456 = score(doc=6091,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.61904186 = fieldWeight in 6091, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=6091)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Date
    13. 7.2008 19:53:22
    Source
    Annual review of information science and technology. 41(2007), S.xxx-xxx
  19. Vakkari, P.: Task-based information searching (2002) 0.01
    0.013174627 = product of:
      0.05928582 = sum of:
        0.016802425 = weight(_text_:of in 4288) [ClassicSimilarity], result of:
          0.016802425 = score(doc=4288,freq=14.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.2742677 = fieldWeight in 4288, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=4288)
        0.042483397 = weight(_text_:systems in 4288) [ClassicSimilarity], result of:
          0.042483397 = score(doc=4288,freq=6.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.35286134 = fieldWeight in 4288, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=4288)
      0.22222222 = coord(2/9)
    
    Abstract
    The rationale for using information systems is to find information that helps us in our daily activities, be they tasks or interests. Systems are expected to support us in searching for and identifying useful information. Although the activities and tasks performed by humans generate information needs and searching, they have attracted little attention in studies of information searching. Such studies have concentrated on search tasks rather than the activities that trigger them. It is obvious that our understanding of information searching is only partial if we are not able to connect aspects of searching to the related task. The expected contribution of information to the task is reflected in relevance assessments of the information items found, and in the search tactics and use of the system in general. Taking the task into account seems to be a necessary condition for understanding and explaining information searching, and, by extension, for effective systems design.
    Source
    Annual review of information science and technology. 37(2003), S.413-464
  20. Galloway, P.: Preservation of digital objects (2003) 0.01
    0.011643156 = product of:
      0.052394204 = sum of:
        0.018332949 = weight(_text_:of in 4275) [ClassicSimilarity], result of:
          0.018332949 = score(doc=4275,freq=24.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.2992506 = fieldWeight in 4275, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4275)
        0.034061253 = weight(_text_:software in 4275) [ClassicSimilarity], result of:
          0.034061253 = score(doc=4275,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.21915624 = fieldWeight in 4275, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4275)
      0.22222222 = coord(2/9)
    
    Abstract
    The preservation of digital objects (defined here as objects in digital form that require a computer to support their existence and display) is obviously an important practical issue for the information professions, with its importance growing daily as more information objects are produced in, or converted to, digital form. Yakel's (2001) review of the field provided a much-needed introduction. At the same time, the complexity of new digital objects continues to increase, challenging existing preservation efforts (Lee, Slattery, Lu, Tang, & McCrary, 2002). The field of information science itself is beginning to pay some reflexive attention to the creation of fragile and unpreservable digital objects. But these concerns often focus on the practical problems of short-term repurposing of digital objects rather than actual preservation, by which I mean the activity of carrying digital objects from one software generation to another, undertaken for purposes beyond the original reasons for creating the objects. For preservation in this sense to be possible, information science as a discipline needs to be active in the formulation of, and advocacy for, national information policies. Such policies will need to challenge the predominant cultural expectation of planned obsolescence for information resources, and cultural artifacts in general.
    Source
    Annual review of information science and technology. 38(2004), S.549-590

Languages

  • e 93
  • d 1

Types

  • a 89
  • b 8
  • m 3
  • el 2
  • s 1