Search (31 results, page 1 of 2)

  • theme_ss:"Literaturübersicht"
  1. Enser, P.G.B.: Visual image retrieval (2008) 0.03
    0.034590032 = product of:
      0.13836013 = sum of:
        0.13836013 = weight(_text_:22 in 3281) [ClassicSimilarity], result of:
          0.13836013 = score(doc=3281,freq=2.0), product of:
            0.2235069 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06382575 = queryNorm
            0.61904186 = fieldWeight in 3281, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=3281)
      0.25 = coord(1/4)
    
    Date
    22. 1.2012 13:01:26
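  The indented breakdowns shown for each hit are Lucene "explain" trees for the ClassicSimilarity (TF-IDF) ranking function. As a rough sketch, assuming the standard ClassicSimilarity definitions (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))), the displayed score of the first hit can be recombined from the values reported above; nothing here is recomputed from the index itself.

    import math

    # Sketch only: recombine the values reported in the explain tree for hit 1
    # (term "22" in doc 3281); all inputs are copied from the listing above.
    freq = 2.0                  # termFreq of "22" in the matching field
    doc_freq, max_docs = 3622, 44218
    query_norm = 0.06382575     # queryNorm reported by Lucene
    field_norm = 0.125          # fieldNorm(doc=3281)
    coord = 0.25                # coord(1/4): one of four query clauses matched

    tf = math.sqrt(freq)                             # ~1.4142135
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # ~3.5018296
    query_weight = idf * query_norm                  # ~0.2235069
    field_weight = tf * idf * field_norm             # ~0.61904186
    score = query_weight * field_weight * coord      # ~0.034590032 (the 0.03 shown)

    print(f"{score:.9f}")

  The same arithmetic, with each hit's own termFreq and fieldNorm substituted, accounts for the other scores in this listing.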
  2. Morris, S.A.: Mapping research specialties (2008) 0.03
    0.034590032 = product of:
      0.13836013 = sum of:
        0.13836013 = weight(_text_:22 in 3962) [ClassicSimilarity], result of:
          0.13836013 = score(doc=3962,freq=2.0), product of:
            0.2235069 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06382575 = queryNorm
            0.61904186 = fieldWeight in 3962, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=3962)
      0.25 = coord(1/4)
    
    Date
    13. 7.2008 9:30:22
  3. Fallis, D.: Social epistemology and information science (2006) 0.03
    0.034590032 = product of:
      0.13836013 = sum of:
        0.13836013 = weight(_text_:22 in 4368) [ClassicSimilarity], result of:
          0.13836013 = score(doc=4368,freq=2.0), product of:
            0.2235069 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06382575 = queryNorm
            0.61904186 = fieldWeight in 4368, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=4368)
      0.25 = coord(1/4)
    
    Date
    13. 7.2008 19:22:28
  4. Nicolaisen, J.: Citation analysis (2007) 0.03
    0.034590032 = product of:
      0.13836013 = sum of:
        0.13836013 = weight(_text_:22 in 6091) [ClassicSimilarity], result of:
          0.13836013 = score(doc=6091,freq=2.0), product of:
            0.2235069 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06382575 = queryNorm
            0.61904186 = fieldWeight in 6091, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=6091)
      0.25 = coord(1/4)
    
    Date
    13. 7.2008 19:53:22
  5. Metz, A.: Community service : a bibliography (1996) 0.03
    0.034590032 = product of:
      0.13836013 = sum of:
        0.13836013 = weight(_text_:22 in 5341) [ClassicSimilarity], result of:
          0.13836013 = score(doc=5341,freq=2.0), product of:
            0.2235069 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06382575 = queryNorm
            0.61904186 = fieldWeight in 5341, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=5341)
      0.25 = coord(1/4)
    
    Date
    17.10.1996 14:22:33
  6. Belkin, N.J.; Croft, W.B.: Retrieval techniques (1987) 0.03
    0.034590032 = product of:
      0.13836013 = sum of:
        0.13836013 = weight(_text_:22 in 334) [ClassicSimilarity], result of:
          0.13836013 = score(doc=334,freq=2.0), product of:
            0.2235069 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06382575 = queryNorm
            0.61904186 = fieldWeight in 334, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=334)
      0.25 = coord(1/4)
    
    Source
    Annual review of information science and technology. 22(1987), S.109-145
  7. Smith, L.C.: Artificial intelligence and information retrieval (1987) 0.03
    0.034590032 = product of:
      0.13836013 = sum of:
        0.13836013 = weight(_text_:22 in 335) [ClassicSimilarity], result of:
          0.13836013 = score(doc=335,freq=2.0), product of:
            0.2235069 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06382575 = queryNorm
            0.61904186 = fieldWeight in 335, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=335)
      0.25 = coord(1/4)
    
    Source
    Annual review of information science and technology. 22(1987), S.41-77
  8. Warner, A.J.: Natural language processing (1987) 0.03
    0.034590032 = product of:
      0.13836013 = sum of:
        0.13836013 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
          0.13836013 = score(doc=337,freq=2.0), product of:
            0.2235069 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06382575 = queryNorm
            0.61904186 = fieldWeight in 337, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=337)
      0.25 = coord(1/4)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  9. Grudin, J.: Human-computer interaction (2011) 0.03
    0.030266277 = product of:
      0.12106511 = sum of:
        0.12106511 = weight(_text_:22 in 1601) [ClassicSimilarity], result of:
          0.12106511 = score(doc=1601,freq=2.0), product of:
            0.2235069 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06382575 = queryNorm
            0.5416616 = fieldWeight in 1601, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.109375 = fieldNorm(doc=1601)
      0.25 = coord(1/4)
    
    Date
    27.12.2014 18:54:22
  10. Tramullas, J.: Temas y métodos de investigación en Ciencia de la Información, 2000-2019 : Revisión bibliográfica (2020) 0.03
    0.03025792 = product of:
      0.12103168 = sum of:
        0.12103168 = weight(_text_:fields in 5929) [ClassicSimilarity], result of:
          0.12103168 = score(doc=5929,freq=2.0), product of:
            0.31604284 = queryWeight, product of:
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.06382575 = queryNorm
            0.38295972 = fieldWeight in 5929, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5929)
      0.25 = coord(1/4)
    
    Abstract
    A systematic literature review is carried out, detailing the research topics and the methods and techniques used in information science in studies published between 2000 and 2019. The results obtained allow us to affirm that there is no consensus on the core topics of information science, as these evolve and change dynamically in relation to other disciplines, and with the dominant social and cultural contexts. With regard to the research methods and techniques, it can be stated that they have mostly been adopted from social sciences, with the addition of numerical methods, especially in the fields of bibliometric and scientometric research.
  11. Doctor, R.D.: Social equity and information technologies : moving toward information democracy (1992) 0.03
    0.02593536 = product of:
      0.10374144 = sum of:
        0.10374144 = weight(_text_:fields in 346) [ClassicSimilarity], result of:
          0.10374144 = score(doc=346,freq=2.0), product of:
            0.31604284 = queryWeight, product of:
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.06382575 = queryNorm
            0.32825118 = fieldWeight in 346, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.046875 = fieldNorm(doc=346)
      0.25 = coord(1/4)
    
    Abstract
     Explores the concept of information democracy, noting that it has roots in several fields: political science; sociology; social work; communication science; and library and information science; but that it is explicitly recognized only in library and information science. Focuses on the interplay between information technologies and society and on the theme of social equity and the distribution and use of information resources. When dealing with information democracy there is a focus on the information poor: a population that goes beyond the economically poor to include the aged, disabled, those in rural areas, and those in schools. Traces the historical origins leading to the concerns for social equity and information technologies. Notes that there is power associated with the control of information resources as well as with the control of economic and political resources. Looks at social equity and information technology in several broad areas.
  12. Chowdhury, G.G.: Natural language processing (2002) 0.03
    0.02593536 = product of:
      0.10374144 = sum of:
        0.10374144 = weight(_text_:fields in 4284) [ClassicSimilarity], result of:
          0.10374144 = score(doc=4284,freq=2.0), product of:
            0.31604284 = queryWeight, product of:
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.06382575 = queryNorm
            0.32825118 = fieldWeight in 4284, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.046875 = fieldNorm(doc=4284)
      0.25 = coord(1/4)
    
    Abstract
     Natural Language Processing (NLP) is an area of research and application that explores how computers can be used to understand and manipulate natural language text or speech to do useful things. NLP researchers aim to gather knowledge on how human beings understand and use language so that appropriate tools and techniques can be developed to make computer systems understand and manipulate natural languages to perform desired tasks. The foundations of NLP lie in a number of disciplines, namely, computer and information sciences, linguistics, mathematics, electrical and electronic engineering, artificial intelligence and robotics, and psychology. Applications of NLP include a number of fields of study, such as machine translation, natural language text processing and summarization, user interfaces, multilingual and cross-language information retrieval (CLIR), speech recognition, artificial intelligence, and expert systems. One important application area that is relatively new and has not been covered in previous ARIST chapters on NLP relates to the proliferation of the World Wide Web and digital libraries.
  13. Solomon, S.: Discovering information in context (2002) 0.03
    0.02593536 = product of:
      0.10374144 = sum of:
        0.10374144 = weight(_text_:fields in 4294) [ClassicSimilarity], result of:
          0.10374144 = score(doc=4294,freq=2.0), product of:
            0.31604284 = queryWeight, product of:
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.06382575 = queryNorm
            0.32825118 = fieldWeight in 4294, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.046875 = fieldNorm(doc=4294)
      0.25 = coord(1/4)
    
    Abstract
     This chapter has three purposes: to illuminate the ways in which people discover, shape, or create information as part of their lives and work; to consider how the resources and rules of people's situations facilitate or limit discovery of information; and to introduce the idea of a sociotechnical systems design science that is founded in part on understanding the discovery of information in context. In addressing these purposes the chapter focuses on both theoretical and research works in information studies and related fields that shed light on information as something that is embedded in the fabric of people's lives and work. Thus, the discovery of information view presented here characterizes information as being constructed through involvement in life's activities, problems, tasks, and social and technological structures, as opposed to being independent and context free. Given this process view, discovering information entails engagement, reflection, learning, and action: all the behaviors that research subjects often speak of as making sense, above and beyond the traditional focus of the information studies field, namely seeking without consideration of connections across time.
  14. Rader, H.B.: Library orientation and instruction - 1993 (1994) 0.02
    0.02161877 = product of:
      0.08647508 = sum of:
        0.08647508 = weight(_text_:22 in 209) [ClassicSimilarity], result of:
          0.08647508 = score(doc=209,freq=2.0), product of:
            0.2235069 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06382575 = queryNorm
            0.38690117 = fieldWeight in 209, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=209)
      0.25 = coord(1/4)
    
    Source
    Reference services review. 22(1994) no.4, S.81-
  15. Thelwall, M.; Vaughan, L.; Björneborn, L.: Webometrics (2004) 0.02
    0.0216128 = product of:
      0.0864512 = sum of:
        0.0864512 = weight(_text_:fields in 4279) [ClassicSimilarity], result of:
          0.0864512 = score(doc=4279,freq=2.0), product of:
            0.31604284 = queryWeight, product of:
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.06382575 = queryNorm
            0.27354267 = fieldWeight in 4279, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4279)
      0.25 = coord(1/4)
    
    Abstract
     Webometrics, the quantitative study of Web-related phenomena, emerged from the realization that methods originally designed for bibliometric analysis of scientific journal article citation patterns could be applied to the Web, with commercial search engines providing the raw data. Almind and Ingwersen (1997) defined the field and gave it its name. Other pioneers included Rodriguez Gairin (1997) and Aguillo (1998). Larson (1996) undertook exploratory link structure analysis, as did Rousseau (1997). Webometrics encompasses research from fields beyond information science such as communication studies, statistical physics, and computer science. In this review we concentrate on link analysis, but also cover other aspects of webometrics, including Web log file analysis. One theme that runs through this chapter is the messiness of Web data and the need for data cleansing heuristics. The uncontrolled Web creates numerous problems in the interpretation of results, for instance, from the automatic creation or replication of links. The loose connection between top-level domain specifications (e.g., com, edu, and org) and their actual content is also a frustrating problem. For example, many .com sites contain noncommercial content, although com is ostensibly the main commercial top-level domain. Indeed, a skeptical researcher could claim that obstacles of this kind are so great that all Web analyses lack value. As will be seen, one response to this view, a view shared by critics of evaluative bibliometrics, is to demonstrate that Web data correlate significantly with some non-Web data in order to prove that the Web data are not wholly random. A practical response has been to develop increasingly sophisticated data cleansing techniques and multiple data analysis methods.
  16. Börner, K.; Chen, C.; Boyack, K.W.: Visualizing knowledge domains (2002) 0.02
    0.021395579 = product of:
      0.085582316 = sum of:
        0.085582316 = weight(_text_:fields in 4286) [ClassicSimilarity], result of:
          0.085582316 = score(doc=4286,freq=4.0), product of:
            0.31604284 = queryWeight, product of:
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.06382575 = queryNorm
            0.2707934 = fieldWeight in 4286, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4286)
      0.25 = coord(1/4)
    
    Abstract
     This chapter reviews visualization techniques that can be used to map the ever-growing domain structure of scientific disciplines and to support information retrieval and classification. In contrast to the comprehensive surveys conducted in traditional fashion by Howard White and Katherine McCain (1997, 1998), this survey not only reviews emerging techniques in interactive data analysis and information visualization, but also depicts the bibliographical structure of the field itself. The chapter starts by reviewing the history of knowledge domain visualization. We then present a general process flow for the visualization of knowledge domains and explain commonly used techniques. In order to visualize the domain reviewed by this chapter, we introduce a bibliographic data set of considerable size, which includes articles from the citation analysis, bibliometrics, semantics, and visualization literatures. Using tutorial style, we then apply various algorithms to demonstrate the visualization effects produced by different approaches and compare the results. The domain visualizations reveal the relationships within and between the four fields that together constitute the focus of this chapter. We conclude with a general discussion of research possibilities. Painting a "big picture" of scientific knowledge has long been desirable for a variety of reasons. Traditional approaches are brute force: scholars must sort through mountains of literature to perceive the outlines of their field. Obviously, this is time-consuming, difficult to replicate, and entails subjective judgments. The task is enormously complex. Sifting through recently published documents to find those that will later be recognized as important is labor intensive. Traditional approaches struggle to keep up with the pace of information growth. In multidisciplinary fields of study it is especially difficult to maintain an overview of literature dynamics. Painting the big picture of an ever-evolving scientific discipline is akin to the situation described in the widely known Indian legend about the blind men and the elephant. As the story goes, six blind men were trying to find out what an elephant looked like. They touched different parts of the elephant and quickly jumped to their conclusions. The one touching the body said it must be like a wall; the one touching the tail said it was like a snake; the one touching the legs said it was like a tree trunk, and so forth. But science does not stand still; the steady stream of new scientific literature creates a continuously changing structure. The resulting disappearance, fusion, and emergence of research areas add another twist to the tale: it is as if the elephant is running and dynamically changing its shape. Domain visualization, an emerging field of study, is in a similar situation. Relevant literature is spread across disciplines that have traditionally had few connections. Researchers examining the domain from a particular discipline cannot possibly have an adequate understanding of the whole. As noted by White and McCain (1997), the new generation of information scientists is technically driven in its efforts to visualize scientific disciplines. However, limited progress has been made in terms of connecting pioneers' theories and practices with the potentialities of today's enabling technologies. If the difference between past and present generations lies in the power of available technologies, what they have in common is the ultimate goal: to reveal the development of scientific knowledge.
  17. Hsueh, D.C.: Recon road maps : retrospective conversion literature, 1980-1990 (1992) 0.02
    0.017295016 = product of:
      0.069180064 = sum of:
        0.069180064 = weight(_text_:22 in 2193) [ClassicSimilarity], result of:
          0.069180064 = score(doc=2193,freq=2.0), product of:
            0.2235069 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06382575 = queryNorm
            0.30952093 = fieldWeight in 2193, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=2193)
      0.25 = coord(1/4)
    
    Source
    Cataloging and classification quarterly. 14(1992) nos.3/4, S.5-22
  18. Gabbard, R.: Recent literature shows accelerated growth in hypermedia tools : an annotated bibliography (1994) 0.02
    0.017295016 = product of:
      0.069180064 = sum of:
        0.069180064 = weight(_text_:22 in 8460) [ClassicSimilarity], result of:
          0.069180064 = score(doc=8460,freq=2.0), product of:
            0.2235069 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06382575 = queryNorm
            0.30952093 = fieldWeight in 8460, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=8460)
      0.25 = coord(1/4)
    
    Source
    Reference services review. 22(1994) no.2, S.31-40
  19. Buckland, M.K.; Liu, Z.: History of information science (1995) 0.02
    0.017295016 = product of:
      0.069180064 = sum of:
        0.069180064 = weight(_text_:22 in 4226) [ClassicSimilarity], result of:
          0.069180064 = score(doc=4226,freq=2.0), product of:
            0.2235069 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06382575 = queryNorm
            0.30952093 = fieldWeight in 4226, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=4226)
      0.25 = coord(1/4)
    
    Date
    13. 6.1996 19:22:20
  20. Haas, S.W.: Natural language processing : toward large-scale, robust systems (1996) 0.02
    0.017295016 = product of:
      0.069180064 = sum of:
        0.069180064 = weight(_text_:22 in 7415) [ClassicSimilarity], result of:
          0.069180064 = score(doc=7415,freq=2.0), product of:
            0.2235069 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06382575 = queryNorm
            0.30952093 = fieldWeight in 7415, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=7415)
      0.25 = coord(1/4)
    
    Abstract
     State-of-the-art review of natural language processing updating an earlier review published in ARIST 22(1987). Discusses important developments that have allowed for significant advances in the field of natural language processing: materials and resources; knowledge based systems and statistical approaches; and a strong emphasis on evaluation. Reviews some natural language processing applications and common problems still awaiting solution. Considers closely related applications such as language generation and the generation phase of machine translation, which face the same problems as natural language processing. Covers natural language methodologies for information retrieval only briefly.