Search (8 results, page 1 of 1)

  • theme_ss:"Internet"
  • theme_ss:"Literaturübersicht"
  • type_ss:"a"
  1. Auer, N.J.: Bibliography on evaluating Internet resources (1998) 0.01
    0.012573673 = product of:
      0.03772102 = sum of:
        0.03772102 = weight(_text_:on in 3528) [ClassicSimilarity], result of:
          0.03772102 = score(doc=3528,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.3436586 = fieldWeight in 3528, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.078125 = fieldNorm(doc=3528)
      0.33333334 = coord(1/3)
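    The nested breakdown above is Lucene explain() output for ClassicSimilarity (TF-IDF) scoring: the contribution of the term "on" is queryWeight × fieldWeight, where queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm and tf = sqrt(freq), and the sum is then multiplied by the coord factor. A minimal sketch reproducing the displayed score for result 1 from those inputs (the function name is illustrative, not a Lucene API):

      import math

      # Sketch of Lucene ClassicSimilarity scoring; the numeric inputs are copied
      # from the explain block for doc 3528 above. Lucene derives idf as
      # 1 + ln(maxDocs / (docFreq + 1)); the rounded value shown is reused here.
      def classic_similarity_score(freq, idf, query_norm, field_norm, coord):
          tf = math.sqrt(freq)                  # tf(freq) = sqrt(term frequency)
          query_weight = idf * query_norm       # queryWeight = idf * queryNorm
          field_weight = tf * idf * field_norm  # fieldWeight = tf * idf * fieldNorm
          return query_weight * field_weight * coord

      score = classic_similarity_score(
          freq=4.0,               # termFreq of "on" in doc 3528
          idf=2.199415,           # idf(docFreq=13325, maxDocs=44218)
          query_norm=0.04990557,
          field_norm=0.078125,
          coord=1.0 / 3.0,        # coord(1/3): one of three query clauses matched
      )
      print(round(score, 9))      # ~0.012573673, matching the score shown above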
    
    Abstract
    Presents a bibliography on evaluating Internet resources in which titles are arranged under the following headings: Internet resources, print resources, and useful listservs
  2. Herring, S.C.: Computer-mediated communication on the Internet (2002) 0.01
    0.0124473 = product of:
      0.0373419 = sum of:
        0.0373419 = weight(_text_:on in 5323) [ClassicSimilarity], result of:
          0.0373419 = score(doc=5323,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.34020463 = fieldWeight in 5323, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.109375 = fieldNorm(doc=5323)
      0.33333334 = coord(1/3)
    
  3. Woodward, J.: Cataloging and classifying information resources on the Internet (1996) 0.01
    0.0075442037 = product of:
      0.02263261 = sum of:
        0.02263261 = weight(_text_:on in 7397) [ClassicSimilarity], result of:
          0.02263261 = score(doc=7397,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.20619515 = fieldWeight in 7397, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=7397)
      0.33333334 = coord(1/3)
    
    Abstract
    State-of-the-art review exploring the problem of bibliographic citations to resources that exist only in electronic form, where the cited items may no longer be locatable at the URL indicated. Notes that the Internet is currently in a state of near chaos in terms of access and organization, while searching, usually performed with word-based search engines, is generally not adequate for the needs of most users. Reviews strategies used by librarians for cataloguing and classifying information resources on the Internet. Techniques covered include automatic classification projects and classified subject trees, such as the BUBL Subject Tree, CyberDewey, and the WWW Virtual Library. Considers OPAC-like library catalogues such as the UK's CATRIONA Project and OCLC's InterCat. Explores retrieval tools used with concept analysis and other non-traditional proposals, which include some library expertise, usually the use of one of the major library classifications. Pays particular attention to the UDC.
  4. Sugimoto, C.R.; Work, S.; Larivière, V.; Haustein, S.: Scholarly use of social media and altmetrics : A review of the literature (2017) 0.01
    0.0075442037 = product of:
      0.02263261 = sum of:
        0.02263261 = weight(_text_:on in 3781) [ClassicSimilarity], result of:
          0.02263261 = score(doc=3781,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.20619515 = fieldWeight in 3781, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=3781)
      0.33333334 = coord(1/3)
    
    Abstract
    Social media has become integrated into the fabric of the scholarly communication system in fundamental ways, principally through scholarly use of social media platforms and the promotion of new indicators on the basis of interactions with these platforms. Research and scholarship in this area has accelerated since the coining of, and subsequent advocacy for, altmetrics, that is, research indicators based on social media activity. This review provides an extensive account of the state of the art in both scholarly use of social media and altmetrics. The review consists of two main parts: the first examines the use of social media in academia, reviewing the various functions these platforms have in the scholarly communication process and the factors that affect this use. The second part reviews empirical studies of altmetrics, discussing the various interpretations of altmetrics, data collection and methodological limitations, and differences according to platform. The review ends with a critical discussion of the implications of this transformation in the scholarly communication system.
  5. El-Sherbini, M.: Selected cataloging tools on the Internet (2003) 0.01
    0.007112743 = product of:
      0.021338228 = sum of:
        0.021338228 = weight(_text_:on in 1997) [ClassicSimilarity], result of:
          0.021338228 = score(doc=1997,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.19440265 = fieldWeight in 1997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0625 = fieldNorm(doc=1997)
      0.33333334 = coord(1/3)
    
  6. Yang, K.: Information retrieval on the Web (2004) 0.01
    0.007112743 = product of:
      0.021338228 = sum of:
        0.021338228 = weight(_text_:on in 4278) [ClassicSimilarity], result of:
          0.021338228 = score(doc=4278,freq=8.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.19440265 = fieldWeight in 4278, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=4278)
      0.33333334 = coord(1/3)
    
    Abstract
    How do we find information on the Web? Although information on the Web is distributed and decentralized, the Web can be viewed as a single, virtual document collection. In that regard, the fundamental questions and approaches of traditional information retrieval (IR) research (e.g., term weighting, query expansion) are likely to be relevant in Web document retrieval. Findings from traditional IR research, however, may not always be applicable in a Web setting. The Web document collection - massive in size and diverse in content, format, purpose, and quality - challenges the validity of previous research findings that are based on relatively small and homogeneous test collections. Moreover, some traditional IR approaches, although applicable in theory, may be impossible or impractical to implement in a Web setting. For instance, the size, distribution, and dynamic nature of Web information make it extremely difficult to construct a complete and up-to-date data representation of the kind required for a model IR system. To further complicate matters, information seeking on the Web is diverse in character and unpredictable in nature. Web searchers come from all walks of life and are motivated by many kinds of information needs. The wide range of experience, knowledge, motivation, and purpose means that searchers can express diverse types of information needs in a wide variety of ways with differing criteria for satisfying those needs. Conventional evaluation measures, such as precision and recall, may no longer be appropriate for Web IR, where a representative test collection is all but impossible to construct. Finding information on the Web creates many new challenges for, and exacerbates some old problems in, IR research. At the same time, the Web is rich in new types of information not present in most IR test collections. Hyperlinks, usage statistics, document markup tags, and collections of topic hierarchies such as Yahoo! (http://www.yahoo.com) present an opportunity to leverage Web-specific document characteristics in novel ways that go beyond the term-based retrieval framework of traditional IR. Consequently, researchers in Web IR have reexamined the findings from traditional IR research.
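    To make the abstract's point about evaluation concrete: precision and recall are computed against the complete set of relevant documents in a test collection, so recall in particular presupposes exhaustive relevance judgements, which a Web-scale "collection" cannot provide. A small illustrative sketch with hypothetical document IDs:

      # Illustrative only: precision and recall over hypothetical document IDs.
      retrieved = {"d1", "d2", "d3", "d4"}      # documents the system returned
      relevant  = {"d2", "d3", "d7", "d9"}      # requires complete relevance judgements

      hits = retrieved & relevant
      precision = len(hits) / len(retrieved)    # 2 / 4 = 0.5
      recall    = len(hits) / len(relevant)     # 2 / 4 = 0.5; uncomputable in practice
                                                # if the full relevant set is unknowable
      print(precision, recall)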
  7. Braman, S.: Policy for the net and the Internet (1995) 0.01
    0.0053345575 = product of:
      0.016003672 = sum of:
        0.016003672 = weight(_text_:on in 4544) [ClassicSimilarity], result of:
          0.016003672 = score(doc=4544,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.14580199 = fieldWeight in 4544, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=4544)
      0.33333334 = coord(1/3)
    
    Abstract
    State-of-the-art review of the Net (the global telecommunications network as a whole) and the Internet, with particular reference to the development of a coherent policy for those using these telecommunications facilities. Policy issues discussed include: standards; intellectual property; encryption; rules for transborder data flow; and data privacy. Considers their implications for individuals as well as for government and commercial institutions. The review is limited to English language publications and explores specific issues that affect the structure of government, the economy and society, as well as those involved in the design of the net, and looks at comparative and international issues. Concludes that the development of policies for the net is made difficult by the many different bodies of law that apply, by the fact that the relevant technologies are new and rapidly changing, and because the net is global. Specific characteristics of the net require new thinking on a constitutional level, since information creation, processing, flows and use are constitutive forces in society.
  8. Thelwall, M.; Vaughan, L.; Björneborn, L.: Webometrics (2004) 0.00
    0.0044454644 = product of:
      0.013336393 = sum of:
        0.013336393 = weight(_text_:on in 4279) [ClassicSimilarity], result of:
          0.013336393 = score(doc=4279,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.121501654 = fieldWeight in 4279, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4279)
      0.33333334 = coord(1/3)
    
    Abstract
    Webometrics, the quantitative study of Web-related phenomena, emerged from the realization that methods originally designed for bibliometric analysis of scientific journal article citation patterns could be applied to the Web, with commercial search engines providing the raw data. Almind and Ingwersen (1997) defined the field and gave it its name. Other pioneers included Rodriguez Gairin (1997) and Aguillo (1998). Larson (1996) undertook exploratory link structure analysis, as did Rousseau (1997). Webometrics encompasses research from fields beyond information science such as communication studies, statistical physics, and computer science. In this review we concentrate on link analysis, but also cover other aspects of webometrics, including Web log file analysis. One theme that runs through this chapter is the messiness of Web data and the need for data cleansing heuristics. The uncontrolled Web creates numerous problems in the interpretation of results, for instance, from the automatic creation or replication of links. The loose connection between top-level domain specifications (e.g., com, edu, and org) and their actual content is also a frustrating problem. For example, many .com sites contain noncommercial content, although com is ostensibly the main commercial top-level domain. Indeed, a skeptical researcher could claim that obstacles of this kind are so great that all Web analyses lack value. As will be seen, one response to this view, a view shared by critics of evaluative bibliometrics, is to demonstrate that Web data correlate significantly with some non-Web data in order to prove that the Web data are not wholly random. A practical response has been to develop increasingly sophisticated data cleansing techniques and multiple data analysis methods.
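    The validation strategy mentioned in the abstract, showing that Web-derived counts correlate with an established non-Web indicator, can be sketched with invented numbers; none of the values below come from the review:

      # Toy correlation check; all numbers are invented for illustration.
      inlinks   = [120, 45, 300, 80, 15, 210]   # hypothetical inlink counts per unit
      citations = [95, 30, 260, 70, 10, 180]    # hypothetical citation counts for the same units

      def pearson(x, y):
          n = len(x)
          mx, my = sum(x) / n, sum(y) / n
          cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
          sx = sum((a - mx) ** 2 for a in x) ** 0.5
          sy = sum((b - my) ** 2 for b in y) ** 0.5
          return cov / (sx * sy)

      # A high positive r would suggest the Web counts are not wholly random noise.
      print(round(pearson(inlinks, citations), 3))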