Search (6 results, page 1 of 1)

  • theme_ss:"Internet"
  • theme_ss:"Literaturübersicht"
  1. Chowdhury, G.G.: The Internet and information retrieval research : a brief review (1999) 0.01
    0.013297176 = product of:
      0.026594352 = sum of:
        0.026594352 = product of:
          0.053188704 = sum of:
            0.053188704 = weight(_text_:research in 3424) [ClassicSimilarity], result of:
              0.053188704 = score(doc=3424,freq=4.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.35662293 = fieldWeight in 3424, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3424)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Internet and related information services attract increasing interest from information retrieval researchers. A survey of recent publications shows that frequent topics are the effectiveness of search engines, information validation and quality, user studies, design of user interfaces, data structures and metadata, classification and vocabulary-based aids, and indexing and search agents. Current research in these areas is briefly discussed, and the changing balance between CD-ROM sources and traditional online searching is noted as an important development.
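    The indented breakdown above each abstract is Lucene explain() output for the ClassicSimilarity (tf-idf) formula: tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), fieldWeight = tf * idf * fieldNorm, queryWeight = idf * queryNorm, with coord factors applied when only part of a boolean query matches. A minimal Python sketch reproducing the arithmetic of result 1 from the constants shown in its tree:

      import math

      # Constants copied from the explain() tree for doc 3424.
      doc_freq, max_docs = 6931, 44218
      freq = 4.0               # occurrences of "research" in the document
      field_norm = 0.0625      # stored length norm for the field
      query_norm = 0.05227703

      idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 2.8529835
      tf = math.sqrt(freq)                             # 2.0
      query_weight = idf * query_norm                  # 0.1491455
      field_weight = tf * idf * field_norm             # 0.35662293
      weight = query_weight * field_weight             # 0.053188704

      # The two coord(1/2) factors: one of two query clauses matched.
      score = weight * 0.5 * 0.5
      print(round(score, 9))                           # 0.013297176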
  2. Yang, K.: Information retrieval on the Web (2004) 0.01
    0.010512341 = product of:
      0.021024682 = sum of:
        0.021024682 = product of:
          0.042049363 = sum of:
            0.042049363 = weight(_text_:research in 4278) [ClassicSimilarity], result of:
              0.042049363 = score(doc=4278,freq=10.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.2819352 = fieldWeight in 4278, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4278)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    How do we find information on the Web? Although information on the Web is distributed and decentralized, the Web can be viewed as a single, virtual document collection. In that regard, the fundamental questions and approaches of traditional information retrieval (IR) research (e.g., term weighting, query expansion) are likely to be relevant in Web document retrieval. Findings from traditional IR research, however, may not always be applicable in a Web setting. The Web document collection - massive in size and diverse in content, format, purpose, and quality - challenges the validity of previous research findings that are based on relatively small and homogeneous test collections. Moreover, some traditional IR approaches, although applicable in theory, may be impossible or impractical to implement in a Web setting. For instance, the size, distribution, and dynamic nature of Web information make it extremely difficult to construct a complete and up-to-date data representation of the kind required for a model IR system. To further complicate matters, information seeking on the Web is diverse in character and unpredictable in nature. Web searchers come from all walks of life and are motivated by many kinds of information needs. The wide range of experience, knowledge, motivation, and purpose means that searchers can express diverse types of information needs in a wide variety of ways with differing criteria for satisfying those needs. Conventional evaluation measures, such as precision and recall, may no longer be appropriate for Web IR, where a representative test collection is all but impossible to construct. Finding information on the Web creates many new challenges for, and exacerbates some old problems in, IR research. At the same time, the Web is rich in new types of information not present in most IR test collections. Hyperlinks, usage statistics, document markup tags, and collections of topic hierarchies such as Yahoo! (http://www.yahoo.com) present an opportunity to leverage Web-specific document characteristics in novel ways that go beyond the term-based retrieval framework of traditional IR. Consequently, researchers in Web IR have reexamined the findings from traditional IR research.
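    A minimal sketch of the conventional evaluation measures the abstract refers to (the document IDs are hypothetical). Note that recall presupposes knowing the complete set of relevant documents, which is precisely what a Web-scale collection denies us:

      def precision_recall(retrieved: set, relevant: set) -> tuple:
          # Precision: fraction of retrieved documents that are relevant.
          # Recall: fraction of relevant documents that were retrieved.
          hits = len(retrieved & relevant)
          precision = hits / len(retrieved) if retrieved else 0.0
          recall = hits / len(relevant) if relevant else 0.0
          return precision, recall

      # Toy run: 3 of 5 retrieved are relevant; 6 relevant documents exist.
      p, r = precision_recall({"d1", "d2", "d3", "d4", "d5"},
                              {"d1", "d2", "d3", "d7", "d8", "d9"})
      print(p, r)  # 0.6 0.5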
  3. Sugimoto, C.R.; Work, S.; Larivière, V.; Haustein, S.: Scholarly use of social media and altmetrics : A review of the literature (2017) 0.01
    0.009972882 = product of:
      0.019945765 = sum of:
        0.019945765 = product of:
          0.03989153 = sum of:
            0.03989153 = weight(_text_:research in 3781) [ClassicSimilarity], result of:
              0.03989153 = score(doc=3781,freq=4.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.2674672 = fieldWeight in 3781, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3781)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Social media has become integrated into the fabric of the scholarly communication system in fundamental ways, principally through scholarly use of social media platforms and the promotion of new indicators based on interactions with these platforms. Research and scholarship in this area have accelerated since the coining of, and subsequent advocacy for, altmetrics, that is, research indicators based on social media activity. This review provides an extensive account of the state of the art in both scholarly use of social media and altmetrics. The review consists of two main parts: the first examines the use of social media in academia, reviewing the various functions these platforms serve in the scholarly communication process and the factors that affect this use. The second part reviews empirical studies of altmetrics, discussing the various interpretations of altmetrics, data collection and methodological limitations, and differences according to platform. The review ends with a critical discussion of the implications of this transformation of the scholarly communication system.
  4. Liu, L.-G.: The Internet and library and information services : a review, analysis, and annotated bibliography (1995) 0.01
    0.008227208 = product of:
      0.016454415 = sum of:
        0.016454415 = product of:
          0.03290883 = sum of:
            0.03290883 = weight(_text_:research in 4097) [ClassicSimilarity], result of:
              0.03290883 = score(doc=4097,freq=2.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.22064918 = fieldWeight in 4097, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4097)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Reviews the literature of the Internet and WWW since 1990, covering 446 references on the Internet and library and information services, with particular reference to issues such as: academic libraries and scholarly research; collection development and cooperation; community colleges and networks; electronic publishing; document delivery and interlibrary loans; global and international networking; government information; Internet training; legal, ethical, and security issues; OPACs; privatization and commercialization; public libraries; reference services; school libraries; special libraries; standards and protocols; and women, minorities, disability, and equality.
  5. Chen, H.; Chau, M.: Web mining : machine learning for Web applications (2003) 0.01
    0.007051893 = product of:
      0.014103786 = sum of:
        0.014103786 = product of:
          0.028207572 = sum of:
            0.028207572 = weight(_text_:research in 4242) [ClassicSimilarity], result of:
              0.028207572 = score(doc=4242,freq=2.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.18912788 = fieldWeight in 4242, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4242)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    With more than two billion pages created by millions of Web page authors and organizations, the World Wide Web is a tremendously rich knowledge base. The knowledge comes not only from the content of the pages themselves, but also from the unique characteristics of the Web, such as its hyperlink structure and its diversity of content and languages. Analysis of these characteristics often reveals interesting patterns and new knowledge. Such knowledge can be used to improve users' efficiency and effectiveness in searching for information on the Web, and also for applications unrelated to the Web, such as support for decision making or business management. The Web's size and its unstructured and dynamic content, as well as its multilingual nature, make the extraction of useful knowledge a challenging research problem. Furthermore, the Web generates a large amount of data in other formats that contain valuable information. For example, the information about user access patterns captured in Web server logs can be used for information personalization or for improving Web page design.
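    As an illustration of the log mining the abstract mentions, a minimal sketch that counts page accesses in Common Log Format entries (the sample lines are hypothetical):

      import re
      from collections import Counter

      # Matches host, the bracketed timestamp, and the requested path.
      LOG_LINE = re.compile(r'\S+ \S+ \S+ \[([^\]]+)\] "(?:GET|POST) (\S+)')

      def page_counts(lines):
          counts = Counter()
          for line in lines:
              m = LOG_LINE.match(line)
              if m:
                  counts[m.group(2)] += 1  # group(2) is the request path
          return counts

      sample = [
          '192.0.2.1 - - [10/Oct/2003:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326',
          '192.0.2.2 - - [10/Oct/2003:13:56:01 -0700] "GET /papers/mining.html HTTP/1.0" 200 512',
          '192.0.2.1 - - [10/Oct/2003:13:57:12 -0700] "GET /index.html HTTP/1.0" 200 2326',
      ]
      print(page_counts(sample).most_common(1))  # [('/index.html', 2)]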
  6. Thelwall, M.; Vaughan, L.; Björneborn, L.: Webometrics (2004) 0.01
    0.005876578 = product of:
      0.011753156 = sum of:
        0.011753156 = product of:
          0.023506312 = sum of:
            0.023506312 = weight(_text_:research in 4279) [ClassicSimilarity], result of:
              0.023506312 = score(doc=4279,freq=2.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.15760657 = fieldWeight in 4279, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4279)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Webometrics, the quantitative study of Web-related phenomena, emerged from the realization that methods originally designed for bibliometric analysis of scientific journal article citation patterns could be applied to the Web, with commercial search engines providing the raw data. Almind and Ingwersen (1997) defined the field and gave it its name. Other pioneers included Rodriguez Gairin (1997) and Aguillo (1998). Larson (1996) undertook exploratory link structure analysis, as did Rousseau (1997). Webometrics encompasses research from fields beyond information science, such as communication studies, statistical physics, and computer science. In this review we concentrate on link analysis, but also cover other aspects of webometrics, including Web log file analysis. One theme that runs through this chapter is the messiness of Web data and the need for data cleansing heuristics. The uncontrolled Web creates numerous problems in the interpretation of results, for instance, from the automatic creation or replication of links. The loose connection between top-level domain specifications (e.g., com, edu, and org) and their actual content is also a frustrating problem. For example, many .com sites contain noncommercial content, although com is ostensibly the main commercial top-level domain. Indeed, a skeptical researcher could claim that obstacles of this kind are so great that all Web analyses lack value. As will be seen, one response to this view, a view shared by critics of evaluative bibliometrics, is to demonstrate that Web data correlate significantly with some non-Web data, in order to prove that the Web data are not wholly random. A practical response has been to develop increasingly sophisticated data cleansing techniques and multiple data analysis methods.
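    The validation strategy described above (correlating Web data with non-Web data) is typically checked with a rank correlation. A minimal Spearman sketch on hypothetical counts for five sites, comparing inlinks against an offline research indicator:

      def rank(xs):
          # Assign average ranks, handling ties.
          order = sorted(range(len(xs)), key=lambda i: xs[i])
          ranks = [0.0] * len(xs)
          i = 0
          while i < len(order):
              j = i
              while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
                  j += 1
              for k in range(i, j + 1):
                  ranks[order[k]] = (i + j) / 2 + 1
              i = j + 1
          return ranks

      def spearman(x, y):
          # Pearson correlation computed on the ranks.
          rx, ry = rank(x), rank(y)
          mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
          cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
          sx = sum((a - mx) ** 2 for a in rx) ** 0.5
          sy = sum((b - my) ** 2 for b in ry) ** 0.5
          return cov / (sx * sy)

      inlinks  = [1200, 450, 3100, 80, 950]   # hypothetical link counts
      research = [310, 120, 540, 40, 260]     # hypothetical offline indicator
      print(round(spearman(inlinks, research), 3))  # 1.0: toy data are perfectly concordant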